Test Report: Docker_Linux 19711

f2dddbc2cec1d99a0bb3d71de73f46a47f499a62:2024-09-27:36389

Failed tests (1/342)

| Order | Failed test                  | Duration (s) |
|-------|------------------------------|--------------|
| 33    | TestAddons/parallel/Registry | 72.54        |
TestAddons/parallel/Registry (72.54s)

=== RUN   TestAddons/parallel/Registry
=== PAUSE TestAddons/parallel/Registry

=== CONT  TestAddons/parallel/Registry
addons_test.go:328: registry stabilized in 2.139536ms
addons_test.go:330: (dbg) TestAddons/parallel/Registry: waiting 6m0s for pods matching "actual-registry=true" in namespace "kube-system" ...
helpers_test.go:344: "registry-66c9cd494c-t6q8h" [4342257a-a438-4180-ab98-bcb513d0521a] Running
addons_test.go:330: (dbg) TestAddons/parallel/Registry: actual-registry=true healthy within 5.006037787s
addons_test.go:333: (dbg) TestAddons/parallel/Registry: waiting 10m0s for pods matching "registry-proxy=true" in namespace "kube-system" ...
helpers_test.go:344: "registry-proxy-ljn4r" [0fd59917-12f5-4b55-b4ed-fb31a0b82ca1] Running
addons_test.go:333: (dbg) TestAddons/parallel/Registry: registry-proxy=true healthy within 5.003798179s
addons_test.go:338: (dbg) Run:  kubectl --context addons-305811 delete po -l run=registry-test --now
addons_test.go:343: (dbg) Run:  kubectl --context addons-305811 run --rm registry-test --restart=Never --image=gcr.io/k8s-minikube/busybox -it -- sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local"
addons_test.go:343: (dbg) Non-zero exit: kubectl --context addons-305811 run --rm registry-test --restart=Never --image=gcr.io/k8s-minikube/busybox -it -- sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local": exit status 1 (1m0.07642233s)

-- stdout --
	pod "registry-test" deleted

-- /stdout --
** stderr ** 
	error: timed out waiting for the condition

** /stderr **
addons_test.go:345: failed to hit registry.kube-system.svc.cluster.local. args "kubectl --context addons-305811 run --rm registry-test --restart=Never --image=gcr.io/k8s-minikube/busybox -it -- sh -c \"wget --spider -S http://registry.kube-system.svc.cluster.local\"" failed: exit status 1
addons_test.go:349: expected curl response be "HTTP/1.1 200", but got *pod "registry-test" deleted
*
addons_test.go:357: (dbg) Run:  out/minikube-linux-amd64 -p addons-305811 ip
2024/09/27 00:28:02 [DEBUG] GET http://192.168.49.2:5000
addons_test.go:386: (dbg) Run:  out/minikube-linux-amd64 -p addons-305811 addons disable registry --alsologtostderr -v=1
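
The failing step above is the in-cluster connectivity probe: the test deletes any stale registry-test pod, then runs a one-shot busybox pod that must reach the registry Service through cluster DNS (registry.kube-system.svc.cluster.local) and see an "HTTP/1.1 200" response header. Here the wget never completed and kubectl gave up after 1m0s, even though the registry and registry-proxy pods had both been reported healthy. A minimal Go sketch of that probe, shelling out to kubectl the same way as the logged command (the probeRegistry helper is illustrative and not part of addons_test.go; -i stands in for the test's -it since a script has no TTY):

package main

import (
	"fmt"
	"log"
	"os/exec"
	"strings"
)

// probeRegistry mirrors the logged kubectl command: run a throwaway
// busybox pod that wget-probes the registry Service over cluster DNS.
// The helper name is illustrative; it is not part of the test suite.
func probeRegistry(kubeContext string) error {
	out, err := exec.Command("kubectl",
		"--context", kubeContext,
		"run", "--rm", "registry-test",
		"--restart=Never",
		"--image=gcr.io/k8s-minikube/busybox",
		"-i", "--",
		"sh", "-c",
		"wget --spider -S http://registry.kube-system.svc.cluster.local",
	).CombinedOutput()
	if err != nil {
		return fmt.Errorf("probe did not complete: %v\n%s", err, out)
	}
	// addons_test.go:349 checks for this header in the pod output.
	if !strings.Contains(string(out), "HTTP/1.1 200") {
		return fmt.Errorf("registry answered, but not with HTTP/1.1 200:\n%s", out)
	}
	return nil
}

func main() {
	if err := probeRegistry("addons-305811"); err != nil {
		log.Fatal(err)
	}
	fmt.Println("registry reachable via cluster DNS")
}

Because this probe travels Service DNS and kube-proxy, while the later debug GET to http://192.168.49.2:5000 hits the node IP directly, a timeout here with healthy pods usually points at cluster DNS or Service routing rather than the registry container itself.
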
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestAddons/parallel/Registry]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect addons-305811
helpers_test.go:235: (dbg) docker inspect addons-305811:

-- stdout --
	[
	    {
	        "Id": "b0fc4ba1a814db3175b24d76f019687a53c962f2054766b0fe2ff0e720ee2306",
	        "Created": "2024-09-27T00:15:06.295749015Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 542069,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2024-09-27T00:15:06.403841033Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:fba5f082b59effd6acfcb1eed3d3f86a23bd3a65463877f8197a730d49f52a09",
	        "ResolvConfPath": "/var/lib/docker/containers/b0fc4ba1a814db3175b24d76f019687a53c962f2054766b0fe2ff0e720ee2306/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/b0fc4ba1a814db3175b24d76f019687a53c962f2054766b0fe2ff0e720ee2306/hostname",
	        "HostsPath": "/var/lib/docker/containers/b0fc4ba1a814db3175b24d76f019687a53c962f2054766b0fe2ff0e720ee2306/hosts",
	        "LogPath": "/var/lib/docker/containers/b0fc4ba1a814db3175b24d76f019687a53c962f2054766b0fe2ff0e720ee2306/b0fc4ba1a814db3175b24d76f019687a53c962f2054766b0fe2ff0e720ee2306-json.log",
	        "Name": "/addons-305811",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "addons-305811:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {
	                    "max-size": "100m"
	                }
	            },
	            "NetworkMode": "addons-305811",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 4194304000,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 8388608000,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "LowerDir": "/var/lib/docker/overlay2/13e03c5de357f6713a33ab6d7d8c892e1ba4f9690bf7c0d644ca9839295c966c-init/diff:/var/lib/docker/overlay2/4dc0ad7fbb75d47f911966c6e6e6fb4593dffb8a20f85d19010fe87cbb979de1/diff",
	                "MergedDir": "/var/lib/docker/overlay2/13e03c5de357f6713a33ab6d7d8c892e1ba4f9690bf7c0d644ca9839295c966c/merged",
	                "UpperDir": "/var/lib/docker/overlay2/13e03c5de357f6713a33ab6d7d8c892e1ba4f9690bf7c0d644ca9839295c966c/diff",
	                "WorkDir": "/var/lib/docker/overlay2/13e03c5de357f6713a33ab6d7d8c892e1ba4f9690bf7c0d644ca9839295c966c/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "volume",
	                "Name": "addons-305811",
	                "Source": "/var/lib/docker/volumes/addons-305811/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            },
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            }
	        ],
	        "Config": {
	            "Hostname": "addons-305811",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727108449-19696@sha256:c662152d8855bc4c62a3b5786a68adf99e04794e7f8f374a3859703004ef1d21",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "addons-305811",
	                "name.minikube.sigs.k8s.io": "addons-305811",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "e6822ae25ddc572ca9d814933bb30630c108946a85ee45ccd12c230d1a12e353",
	            "SandboxKey": "/var/run/docker/netns/e6822ae25ddc",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33163"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33164"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33167"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33165"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33166"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "addons-305811": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.49.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "02:42:c0:a8:31:02",
	                    "DriverOpts": null,
	                    "NetworkID": "8bbc91b9dd6f7576471064010bc793765e4619e45aa40707687bf68ed9e49dc8",
	                    "EndpointID": "a85edd26b207167cfd2a8e0c6d4b80c68de439d545f06dfd4a8daeb5d05c495e",
	                    "Gateway": "192.168.49.1",
	                    "IPAddress": "192.168.49.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "addons-305811",
	                        "b0fc4ba1a814"
	                    ]
	                }
	            }
	        }
	    }
	]

-- /stdout --
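
For this failure, the useful part of the inspect dump is NetworkSettings: the minikube node sits at 192.168.49.2 on the dedicated addons-305811 bridge network, and every exposed guest port (22, 2376, 5000, 8443, 32443) is published on an ephemeral 127.0.0.1 host port, with the registry's 5000/tcp mapped to 33165. A small Go sketch for extracting one such mapping from a live container, using the same Go-template shape the harness applies to 22/tcp later in this log (the hostPort helper is illustrative, not harness code):

package main

import (
	"fmt"
	"log"
	"os/exec"
	"strings"
)

// hostPort asks the Docker CLI which 127.0.0.1 host port a container
// port is published on, via the same template shape the harness uses
// for 22/tcp. The helper itself is illustrative, not harness code.
func hostPort(container, containerPort string) (string, error) {
	tmpl := fmt.Sprintf(`{{(index (index .NetworkSettings.Ports %q) 0).HostPort}}`, containerPort)
	out, err := exec.Command("docker", "container", "inspect", "-f", tmpl, container).Output()
	if err != nil {
		return "", err
	}
	return strings.TrimSpace(string(out)), nil
}

func main() {
	// Against this run's container, "5000/tcp" should print 33165.
	port, err := hostPort("addons-305811", "5000/tcp")
	if err != nil {
		log.Fatal(err)
	}
	fmt.Println("registry published on 127.0.0.1:" + port)
}
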
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p addons-305811 -n addons-305811
helpers_test.go:244: <<< TestAddons/parallel/Registry FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestAddons/parallel/Registry]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p addons-305811 logs -n 25
helpers_test.go:252: TestAddons/parallel/Registry logs: 
-- stdout --
	
	==> Audit <==
	|---------|---------------------------------------------------------------------------------------------|------------------------|---------|---------|---------------------|---------------------|
	| Command |                                            Args                                             |        Profile         |  User   | Version |     Start Time      |      End Time       |
	|---------|---------------------------------------------------------------------------------------------|------------------------|---------|---------|---------------------|---------------------|
	| start   | --download-only -p                                                                          | download-docker-851602 | jenkins | v1.34.0 | 27 Sep 24 00:14 UTC |                     |
	|         | download-docker-851602                                                                      |                        |         |         |                     |                     |
	|         | --alsologtostderr                                                                           |                        |         |         |                     |                     |
	|         | --driver=docker                                                                             |                        |         |         |                     |                     |
	|         | --container-runtime=docker                                                                  |                        |         |         |                     |                     |
	| delete  | -p download-docker-851602                                                                   | download-docker-851602 | jenkins | v1.34.0 | 27 Sep 24 00:14 UTC | 27 Sep 24 00:14 UTC |
	| start   | --download-only -p                                                                          | binary-mirror-456680   | jenkins | v1.34.0 | 27 Sep 24 00:14 UTC |                     |
	|         | binary-mirror-456680                                                                        |                        |         |         |                     |                     |
	|         | --alsologtostderr                                                                           |                        |         |         |                     |                     |
	|         | --binary-mirror                                                                             |                        |         |         |                     |                     |
	|         | http://127.0.0.1:45515                                                                      |                        |         |         |                     |                     |
	|         | --driver=docker                                                                             |                        |         |         |                     |                     |
	|         | --container-runtime=docker                                                                  |                        |         |         |                     |                     |
	| delete  | -p binary-mirror-456680                                                                     | binary-mirror-456680   | jenkins | v1.34.0 | 27 Sep 24 00:14 UTC | 27 Sep 24 00:14 UTC |
	| addons  | disable dashboard -p                                                                        | addons-305811          | jenkins | v1.34.0 | 27 Sep 24 00:14 UTC |                     |
	|         | addons-305811                                                                               |                        |         |         |                     |                     |
	| addons  | enable dashboard -p                                                                         | addons-305811          | jenkins | v1.34.0 | 27 Sep 24 00:14 UTC |                     |
	|         | addons-305811                                                                               |                        |         |         |                     |                     |
	| start   | -p addons-305811 --wait=true                                                                | addons-305811          | jenkins | v1.34.0 | 27 Sep 24 00:14 UTC | 27 Sep 24 00:18 UTC |
	|         | --memory=4000 --alsologtostderr                                                             |                        |         |         |                     |                     |
	|         | --addons=registry                                                                           |                        |         |         |                     |                     |
	|         | --addons=metrics-server                                                                     |                        |         |         |                     |                     |
	|         | --addons=volumesnapshots                                                                    |                        |         |         |                     |                     |
	|         | --addons=csi-hostpath-driver                                                                |                        |         |         |                     |                     |
	|         | --addons=gcp-auth                                                                           |                        |         |         |                     |                     |
	|         | --addons=cloud-spanner                                                                      |                        |         |         |                     |                     |
	|         | --addons=inspektor-gadget                                                                   |                        |         |         |                     |                     |
	|         | --addons=storage-provisioner-rancher                                                        |                        |         |         |                     |                     |
	|         | --addons=nvidia-device-plugin                                                               |                        |         |         |                     |                     |
	|         | --addons=yakd --addons=volcano                                                              |                        |         |         |                     |                     |
	|         | --driver=docker                                                                             |                        |         |         |                     |                     |
	|         | --container-runtime=docker                                                                  |                        |         |         |                     |                     |
	|         | --addons=ingress                                                                            |                        |         |         |                     |                     |
	|         | --addons=ingress-dns                                                                        |                        |         |         |                     |                     |
	| addons  | addons-305811 addons disable                                                                | addons-305811          | jenkins | v1.34.0 | 27 Sep 24 00:18 UTC | 27 Sep 24 00:18 UTC |
	|         | volcano --alsologtostderr -v=1                                                              |                        |         |         |                     |                     |
	| addons  | disable cloud-spanner -p                                                                    | addons-305811          | jenkins | v1.34.0 | 27 Sep 24 00:26 UTC | 27 Sep 24 00:26 UTC |
	|         | addons-305811                                                                               |                        |         |         |                     |                     |
	| addons  | disable inspektor-gadget -p                                                                 | addons-305811          | jenkins | v1.34.0 | 27 Sep 24 00:26 UTC | 27 Sep 24 00:27 UTC |
	|         | addons-305811                                                                               |                        |         |         |                     |                     |
	| addons  | addons-305811 addons                                                                        | addons-305811          | jenkins | v1.34.0 | 27 Sep 24 00:26 UTC | 27 Sep 24 00:26 UTC |
	|         | disable metrics-server                                                                      |                        |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                        |         |         |                     |                     |
	| addons  | disable nvidia-device-plugin                                                                | addons-305811          | jenkins | v1.34.0 | 27 Sep 24 00:27 UTC | 27 Sep 24 00:27 UTC |
	|         | -p addons-305811                                                                            |                        |         |         |                     |                     |
	| ssh     | addons-305811 ssh curl -s                                                                   | addons-305811          | jenkins | v1.34.0 | 27 Sep 24 00:27 UTC | 27 Sep 24 00:27 UTC |
	|         | http://127.0.0.1/ -H 'Host:                                                                 |                        |         |         |                     |                     |
	|         | nginx.example.com'                                                                          |                        |         |         |                     |                     |
	| addons  | addons-305811 addons disable                                                                | addons-305811          | jenkins | v1.34.0 | 27 Sep 24 00:27 UTC | 27 Sep 24 00:27 UTC |
	|         | yakd --alsologtostderr -v=1                                                                 |                        |         |         |                     |                     |
	| ip      | addons-305811 ip                                                                            | addons-305811          | jenkins | v1.34.0 | 27 Sep 24 00:27 UTC | 27 Sep 24 00:27 UTC |
	| addons  | addons-305811 addons disable                                                                | addons-305811          | jenkins | v1.34.0 | 27 Sep 24 00:27 UTC | 27 Sep 24 00:27 UTC |
	|         | ingress-dns --alsologtostderr                                                               |                        |         |         |                     |                     |
	|         | -v=1                                                                                        |                        |         |         |                     |                     |
	| addons  | addons-305811 addons disable                                                                | addons-305811          | jenkins | v1.34.0 | 27 Sep 24 00:27 UTC | 27 Sep 24 00:27 UTC |
	|         | ingress --alsologtostderr -v=1                                                              |                        |         |         |                     |                     |
	| ssh     | addons-305811 ssh cat                                                                       | addons-305811          | jenkins | v1.34.0 | 27 Sep 24 00:27 UTC | 27 Sep 24 00:27 UTC |
	|         | /opt/local-path-provisioner/pvc-5e00d761-eca2-4989-8669-0ec284d57222_default_test-pvc/file1 |                        |         |         |                     |                     |
	| addons  | addons-305811 addons disable                                                                | addons-305811          | jenkins | v1.34.0 | 27 Sep 24 00:27 UTC | 27 Sep 24 00:27 UTC |
	|         | storage-provisioner-rancher                                                                 |                        |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                        |         |         |                     |                     |
	| addons  | enable headlamp                                                                             | addons-305811          | jenkins | v1.34.0 | 27 Sep 24 00:27 UTC | 27 Sep 24 00:27 UTC |
	|         | -p addons-305811                                                                            |                        |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                        |         |         |                     |                     |
	| addons  | addons-305811 addons disable                                                                | addons-305811          | jenkins | v1.34.0 | 27 Sep 24 00:27 UTC | 27 Sep 24 00:27 UTC |
	|         | headlamp --alsologtostderr                                                                  |                        |         |         |                     |                     |
	|         | -v=1                                                                                        |                        |         |         |                     |                     |
	| addons  | addons-305811 addons                                                                        | addons-305811          | jenkins | v1.34.0 | 27 Sep 24 00:27 UTC | 27 Sep 24 00:27 UTC |
	|         | disable csi-hostpath-driver                                                                 |                        |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                        |         |         |                     |                     |
	| addons  | addons-305811 addons                                                                        | addons-305811          | jenkins | v1.34.0 | 27 Sep 24 00:27 UTC | 27 Sep 24 00:27 UTC |
	|         | disable volumesnapshots                                                                     |                        |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                        |         |         |                     |                     |
	| ip      | addons-305811 ip                                                                            | addons-305811          | jenkins | v1.34.0 | 27 Sep 24 00:28 UTC | 27 Sep 24 00:28 UTC |
	| addons  | addons-305811 addons disable                                                                | addons-305811          | jenkins | v1.34.0 | 27 Sep 24 00:28 UTC | 27 Sep 24 00:28 UTC |
	|         | registry --alsologtostderr                                                                  |                        |         |         |                     |                     |
	|         | -v=1                                                                                        |                        |         |         |                     |                     |
	|---------|---------------------------------------------------------------------------------------------|------------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/09/27 00:14:42
	Running on machine: ubuntu-20-agent-14
	Binary: Built with gc go1.23.0 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0927 00:14:42.874879  541317 out.go:345] Setting OutFile to fd 1 ...
	I0927 00:14:42.875039  541317 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0927 00:14:42.875050  541317 out.go:358] Setting ErrFile to fd 2...
	I0927 00:14:42.875055  541317 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0927 00:14:42.875239  541317 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19711-533157/.minikube/bin
	I0927 00:14:42.875885  541317 out.go:352] Setting JSON to false
	I0927 00:14:42.876800  541317 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-14","uptime":7026,"bootTime":1727389057,"procs":170,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1069-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0927 00:14:42.876916  541317 start.go:139] virtualization: kvm guest
	I0927 00:14:42.879178  541317 out.go:177] * [addons-305811] minikube v1.34.0 on Ubuntu 20.04 (kvm/amd64)
	I0927 00:14:42.880402  541317 out.go:177]   - MINIKUBE_LOCATION=19711
	I0927 00:14:42.880467  541317 notify.go:220] Checking for updates...
	I0927 00:14:42.882818  541317 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0927 00:14:42.884166  541317 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19711-533157/kubeconfig
	I0927 00:14:42.885358  541317 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19711-533157/.minikube
	I0927 00:14:42.886804  541317 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0927 00:14:42.888482  541317 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0927 00:14:42.889930  541317 driver.go:394] Setting default libvirt URI to qemu:///system
	I0927 00:14:42.914031  541317 docker.go:123] docker version: linux-27.3.1:Docker Engine - Community
	I0927 00:14:42.914165  541317 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0927 00:14:42.962647  541317 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:26 OomKillDisable:true NGoroutines:45 SystemTime:2024-09-27 00:14:42.953373064 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1069-gcp OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33647931392 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-14 Labels:[] ExperimentalBuild:false ServerVersion:27.3.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:7f7fdf5fed64eb6a7caf99b3e12efcf9d60e311c Expected:7f7fdf5fed64eb6a7caf99b3e12efcf9d60e311c} RuncCommit:{ID:v1.1.14-0-g2c9f560 Expected:v1.1.14-0-g2c9f560} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:[WARNING: bridge-nf-call-iptables is disabled WARNING: bridge-nf-call-ip6tables is disabled] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.17.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.29.7] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I0927 00:14:42.962764  541317 docker.go:318] overlay module found
	I0927 00:14:42.964991  541317 out.go:177] * Using the docker driver based on user configuration
	I0927 00:14:42.966455  541317 start.go:297] selected driver: docker
	I0927 00:14:42.966477  541317 start.go:901] validating driver "docker" against <nil>
	I0927 00:14:42.966491  541317 start.go:912] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0927 00:14:42.967313  541317 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0927 00:14:43.016107  541317 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:26 OomKillDisable:true NGoroutines:45 SystemTime:2024-09-27 00:14:43.007187351 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1069-gcp OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33647931392 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-14 Labels:[] ExperimentalBuild:false ServerVersion:27.3.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:7f7fdf5fed64eb6a7caf99b3e12efcf9d60e311c Expected:7f7fdf5fed64eb6a7caf99b3e12efcf9d60e311c} RuncCommit:{ID:v1.1.14-0-g2c9f560 Expected:v1.1.14-0-g2c9f560} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:[WARNING: bridge-nf-call-iptables is disabled WARNING: bridge-nf-call-ip6tables is disabled] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.17.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.29.7] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I0927 00:14:43.016335  541317 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0927 00:14:43.016675  541317 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0927 00:14:43.018772  541317 out.go:177] * Using Docker driver with root privileges
	I0927 00:14:43.020326  541317 cni.go:84] Creating CNI manager for ""
	I0927 00:14:43.020408  541317 cni.go:158] "docker" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0927 00:14:43.020423  541317 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0927 00:14:43.020506  541317 start.go:340] cluster config:
	{Name:addons-305811 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727108449-19696@sha256:c662152d8855bc4c62a3b5786a68adf99e04794e7f8f374a3859703004ef1d21 Memory:4000 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:addons-305811 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0927 00:14:43.021975  541317 out.go:177] * Starting "addons-305811" primary control-plane node in "addons-305811" cluster
	I0927 00:14:43.023428  541317 cache.go:121] Beginning downloading kic base image for docker with docker
	I0927 00:14:43.025082  541317 out.go:177] * Pulling base image v0.0.45-1727108449-19696 ...
	I0927 00:14:43.026892  541317 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime docker
	I0927 00:14:43.026950  541317 preload.go:146] Found local preload: /home/jenkins/minikube-integration/19711-533157/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-docker-overlay2-amd64.tar.lz4
	I0927 00:14:43.026961  541317 cache.go:56] Caching tarball of preloaded images
	I0927 00:14:43.027041  541317 image.go:79] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727108449-19696@sha256:c662152d8855bc4c62a3b5786a68adf99e04794e7f8f374a3859703004ef1d21 in local docker daemon
	I0927 00:14:43.027062  541317 preload.go:172] Found /home/jenkins/minikube-integration/19711-533157/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I0927 00:14:43.027075  541317 cache.go:59] Finished verifying existence of preloaded tar for v1.31.1 on docker
	I0927 00:14:43.027437  541317 profile.go:143] Saving config to /home/jenkins/minikube-integration/19711-533157/.minikube/profiles/addons-305811/config.json ...
	I0927 00:14:43.027463  541317 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19711-533157/.minikube/profiles/addons-305811/config.json: {Name:mk13dc21de4947e0e409307495ba0f692ef49159 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0927 00:14:43.043300  541317 cache.go:149] Downloading gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727108449-19696@sha256:c662152d8855bc4c62a3b5786a68adf99e04794e7f8f374a3859703004ef1d21 to local cache
	I0927 00:14:43.043436  541317 image.go:63] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727108449-19696@sha256:c662152d8855bc4c62a3b5786a68adf99e04794e7f8f374a3859703004ef1d21 in local cache directory
	I0927 00:14:43.043459  541317 image.go:66] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727108449-19696@sha256:c662152d8855bc4c62a3b5786a68adf99e04794e7f8f374a3859703004ef1d21 in local cache directory, skipping pull
	I0927 00:14:43.043466  541317 image.go:135] gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727108449-19696@sha256:c662152d8855bc4c62a3b5786a68adf99e04794e7f8f374a3859703004ef1d21 exists in cache, skipping pull
	I0927 00:14:43.043474  541317 cache.go:152] successfully saved gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727108449-19696@sha256:c662152d8855bc4c62a3b5786a68adf99e04794e7f8f374a3859703004ef1d21 as a tarball
	I0927 00:14:43.043481  541317 cache.go:162] Loading gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727108449-19696@sha256:c662152d8855bc4c62a3b5786a68adf99e04794e7f8f374a3859703004ef1d21 from local cache
	I0927 00:14:54.833235  541317 cache.go:164] successfully loaded and using gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727108449-19696@sha256:c662152d8855bc4c62a3b5786a68adf99e04794e7f8f374a3859703004ef1d21 from cached tarball
	I0927 00:14:54.833286  541317 cache.go:194] Successfully downloaded all kic artifacts
	I0927 00:14:54.833346  541317 start.go:360] acquireMachinesLock for addons-305811: {Name:mkb8e8e2a6b461c4d6f042b83b27a14d3104a472 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0927 00:14:54.833478  541317 start.go:364] duration metric: took 102.903µs to acquireMachinesLock for "addons-305811"
	I0927 00:14:54.833507  541317 start.go:93] Provisioning new machine with config: &{Name:addons-305811 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727108449-19696@sha256:c662152d8855bc4c62a3b5786a68adf99e04794e7f8f374a3859703004ef1d21 Memory:4000 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:addons-305811 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0927 00:14:54.833614  541317 start.go:125] createHost starting for "" (driver="docker")
	I0927 00:14:54.835534  541317 out.go:235] * Creating docker container (CPUs=2, Memory=4000MB) ...
	I0927 00:14:54.835856  541317 start.go:159] libmachine.API.Create for "addons-305811" (driver="docker")
	I0927 00:14:54.835905  541317 client.go:168] LocalClient.Create starting
	I0927 00:14:54.836041  541317 main.go:141] libmachine: Creating CA: /home/jenkins/minikube-integration/19711-533157/.minikube/certs/ca.pem
	I0927 00:14:54.998339  541317 main.go:141] libmachine: Creating client certificate: /home/jenkins/minikube-integration/19711-533157/.minikube/certs/cert.pem
	I0927 00:14:55.132697  541317 cli_runner.go:164] Run: docker network inspect addons-305811 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W0927 00:14:55.148351  541317 cli_runner.go:211] docker network inspect addons-305811 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I0927 00:14:55.148431  541317 network_create.go:284] running [docker network inspect addons-305811] to gather additional debugging logs...
	I0927 00:14:55.148448  541317 cli_runner.go:164] Run: docker network inspect addons-305811
	W0927 00:14:55.164361  541317 cli_runner.go:211] docker network inspect addons-305811 returned with exit code 1
	I0927 00:14:55.164397  541317 network_create.go:287] error running [docker network inspect addons-305811]: docker network inspect addons-305811: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network addons-305811 not found
	I0927 00:14:55.164429  541317 network_create.go:289] output of [docker network inspect addons-305811]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network addons-305811 not found
	
	** /stderr **
	I0927 00:14:55.164607  541317 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0927 00:14:55.181113  541317 network.go:206] using free private subnet 192.168.49.0/24: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc0019a6b90}
	I0927 00:14:55.181166  541317 network_create.go:124] attempt to create docker network addons-305811 192.168.49.0/24 with gateway 192.168.49.1 and MTU of 1500 ...
	I0927 00:14:55.181217  541317 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.49.0/24 --gateway=192.168.49.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=addons-305811 addons-305811
	I0927 00:14:55.241203  541317 network_create.go:108] docker network addons-305811 192.168.49.0/24 created
	I0927 00:14:55.241241  541317 kic.go:121] calculated static IP "192.168.49.2" for the "addons-305811" container
	I0927 00:14:55.241342  541317 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I0927 00:14:55.257737  541317 cli_runner.go:164] Run: docker volume create addons-305811 --label name.minikube.sigs.k8s.io=addons-305811 --label created_by.minikube.sigs.k8s.io=true
	I0927 00:14:55.274468  541317 oci.go:103] Successfully created a docker volume addons-305811
	I0927 00:14:55.274592  541317 cli_runner.go:164] Run: docker run --rm --name addons-305811-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=addons-305811 --entrypoint /usr/bin/test -v addons-305811:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727108449-19696@sha256:c662152d8855bc4c62a3b5786a68adf99e04794e7f8f374a3859703004ef1d21 -d /var/lib
	I0927 00:15:02.247169  541317 cli_runner.go:217] Completed: docker run --rm --name addons-305811-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=addons-305811 --entrypoint /usr/bin/test -v addons-305811:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727108449-19696@sha256:c662152d8855bc4c62a3b5786a68adf99e04794e7f8f374a3859703004ef1d21 -d /var/lib: (6.972466624s)
	I0927 00:15:02.247203  541317 oci.go:107] Successfully prepared a docker volume addons-305811
	I0927 00:15:02.247220  541317 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime docker
	I0927 00:15:02.247243  541317 kic.go:194] Starting extracting preloaded images to volume ...
	I0927 00:15:02.247297  541317 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/19711-533157/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-docker-overlay2-amd64.tar.lz4:/preloaded.tar:ro -v addons-305811:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727108449-19696@sha256:c662152d8855bc4c62a3b5786a68adf99e04794e7f8f374a3859703004ef1d21 -I lz4 -xf /preloaded.tar -C /extractDir
	I0927 00:15:06.236595  541317 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/19711-533157/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-docker-overlay2-amd64.tar.lz4:/preloaded.tar:ro -v addons-305811:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727108449-19696@sha256:c662152d8855bc4c62a3b5786a68adf99e04794e7f8f374a3859703004ef1d21 -I lz4 -xf /preloaded.tar -C /extractDir: (3.989233332s)
	I0927 00:15:06.236631  541317 kic.go:203] duration metric: took 3.989384847s to extract preloaded images to volume ...
	W0927 00:15:06.236754  541317 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	I0927 00:15:06.236862  541317 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I0927 00:15:06.281080  541317 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname addons-305811 --name addons-305811 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=addons-305811 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=addons-305811 --network addons-305811 --ip 192.168.49.2 --volume addons-305811:/var --security-opt apparmor=unconfined --memory=4000mb --cpus=2 -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727108449-19696@sha256:c662152d8855bc4c62a3b5786a68adf99e04794e7f8f374a3859703004ef1d21
	I0927 00:15:06.568660  541317 cli_runner.go:164] Run: docker container inspect addons-305811 --format={{.State.Running}}
	I0927 00:15:06.586822  541317 cli_runner.go:164] Run: docker container inspect addons-305811 --format={{.State.Status}}
	I0927 00:15:06.604706  541317 cli_runner.go:164] Run: docker exec addons-305811 stat /var/lib/dpkg/alternatives/iptables
	I0927 00:15:06.646504  541317 oci.go:144] the created container "addons-305811" has a running status.
	I0927 00:15:06.646557  541317 kic.go:225] Creating ssh key for kic: /home/jenkins/minikube-integration/19711-533157/.minikube/machines/addons-305811/id_rsa...
	I0927 00:15:06.867351  541317 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/19711-533157/.minikube/machines/addons-305811/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I0927 00:15:06.890410  541317 cli_runner.go:164] Run: docker container inspect addons-305811 --format={{.State.Status}}
	I0927 00:15:06.913730  541317 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I0927 00:15:06.913751  541317 kic_runner.go:114] Args: [docker exec --privileged addons-305811 chown docker:docker /home/docker/.ssh/authorized_keys]
	I0927 00:15:06.961885  541317 cli_runner.go:164] Run: docker container inspect addons-305811 --format={{.State.Status}}
	I0927 00:15:06.979384  541317 machine.go:93] provisionDockerMachine start ...
	I0927 00:15:06.979479  541317 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-305811
	I0927 00:15:06.996901  541317 main.go:141] libmachine: Using SSH client type: native
	I0927 00:15:06.997145  541317 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 127.0.0.1 33163 <nil> <nil>}
	I0927 00:15:06.997163  541317 main.go:141] libmachine: About to run SSH command:
	hostname
	I0927 00:15:07.187663  541317 main.go:141] libmachine: SSH cmd err, output: <nil>: addons-305811
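The two steps above are the core of provisionDockerMachine: resolve the host port that Docker published for the container's 22/tcp, then drive commands over a native SSH client. A minimal standalone sketch of that pattern in Go (not minikube's actual code; the container name and key path are taken from this run):

// Sketch: find the mapped SSH port via docker inspect, then run `hostname`
// over SSH, as in the libmachine log lines above.
package main

import (
	"fmt"
	"os"
	"os/exec"
	"strings"

	"golang.org/x/crypto/ssh"
)

func main() {
	// Same Go template the log shows for looking up the 22/tcp host port.
	out, err := exec.Command("docker", "container", "inspect", "-f",
		`{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}`,
		"addons-305811").Output()
	if err != nil {
		panic(err)
	}
	port := strings.TrimSpace(string(out))

	key, err := os.ReadFile("/home/jenkins/minikube-integration/19711-533157/.minikube/machines/addons-305811/id_rsa")
	if err != nil {
		panic(err)
	}
	signer, err := ssh.ParsePrivateKey(key)
	if err != nil {
		panic(err)
	}
	cfg := &ssh.ClientConfig{
		User:            "docker",
		Auth:            []ssh.AuthMethod{ssh.PublicKeys(signer)},
		HostKeyCallback: ssh.InsecureIgnoreHostKey(), // acceptable for a local test container
	}
	client, err := ssh.Dial("tcp", "127.0.0.1:"+port, cfg)
	if err != nil {
		panic(err)
	}
	defer client.Close()

	sess, err := client.NewSession()
	if err != nil {
		panic(err)
	}
	defer sess.Close()
	hostname, err := sess.Output("hostname")
	if err != nil {
		panic(err)
	}
	fmt.Printf("hostname: %s", hostname)
}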
	
	I0927 00:15:07.187691  541317 ubuntu.go:169] provisioning hostname "addons-305811"
	I0927 00:15:07.187749  541317 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-305811
	I0927 00:15:07.207153  541317 main.go:141] libmachine: Using SSH client type: native
	I0927 00:15:07.207381  541317 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 127.0.0.1 33163 <nil> <nil>}
	I0927 00:15:07.207399  541317 main.go:141] libmachine: About to run SSH command:
	sudo hostname addons-305811 && echo "addons-305811" | sudo tee /etc/hostname
	I0927 00:15:07.339592  541317 main.go:141] libmachine: SSH cmd err, output: <nil>: addons-305811
	
	I0927 00:15:07.339677  541317 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-305811
	I0927 00:15:07.360474  541317 main.go:141] libmachine: Using SSH client type: native
	I0927 00:15:07.360678  541317 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 127.0.0.1 33163 <nil> <nil>}
	I0927 00:15:07.360695  541317 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\saddons-305811' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 addons-305811/g' /etc/hosts;
				else 
					echo '127.0.1.1 addons-305811' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0927 00:15:07.480680  541317 main.go:141] libmachine: SSH cmd err, output: <nil>: 
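The script above patches /etc/hosts idempotently: do nothing if a line already maps the hostname, rewrite an existing 127.0.1.1 entry if there is one, otherwise append. The same logic as a rough Go sketch (hostname and path from this run):

// Sketch of the idempotent /etc/hosts edit performed by the shell above.
package main

import (
	"os"
	"regexp"
)

func ensureHostsEntry(path, hostname string) error {
	data, err := os.ReadFile(path)
	if err != nil {
		return err
	}
	// Already mapped? (mirrors: grep -xq '.*\s<hostname>')
	if regexp.MustCompile(`(?m)\s`+regexp.QuoteMeta(hostname)+`$`).Match(data) {
		return nil
	}
	loopback := regexp.MustCompile(`(?m)^127\.0\.1\.1[ \t].*$`)
	entry := "127.0.1.1 " + hostname
	if loopback.Match(data) {
		data = loopback.ReplaceAll(data, []byte(entry)) // rewrite existing entry
	} else {
		data = append(data, []byte(entry+"\n")...) // append a new entry
	}
	return os.WriteFile(path, data, 0o644)
}

func main() {
	if err := ensureHostsEntry("/etc/hosts", "addons-305811"); err != nil {
		panic(err)
	}
}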
	I0927 00:15:07.480721  541317 ubuntu.go:175] set auth options {CertDir:/home/jenkins/minikube-integration/19711-533157/.minikube CaCertPath:/home/jenkins/minikube-integration/19711-533157/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19711-533157/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19711-533157/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19711-533157/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19711-533157/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19711-533157/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19711-533157/.minikube}
	I0927 00:15:07.480752  541317 ubuntu.go:177] setting up certificates
	I0927 00:15:07.480768  541317 provision.go:84] configureAuth start
	I0927 00:15:07.480831  541317 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" addons-305811
	I0927 00:15:07.497713  541317 provision.go:143] copyHostCerts
	I0927 00:15:07.497816  541317 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19711-533157/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19711-533157/.minikube/ca.pem (1078 bytes)
	I0927 00:15:07.498002  541317 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19711-533157/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19711-533157/.minikube/cert.pem (1123 bytes)
	I0927 00:15:07.498090  541317 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19711-533157/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19711-533157/.minikube/key.pem (1675 bytes)
	I0927 00:15:07.498181  541317 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19711-533157/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19711-533157/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19711-533157/.minikube/certs/ca-key.pem org=jenkins.addons-305811 san=[127.0.0.1 192.168.49.2 addons-305811 localhost minikube]
	I0927 00:15:07.549587  541317 provision.go:177] copyRemoteCerts
	I0927 00:15:07.549653  541317 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0927 00:15:07.549701  541317 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-305811
	I0927 00:15:07.566529  541317 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33163 SSHKeyPath:/home/jenkins/minikube-integration/19711-533157/.minikube/machines/addons-305811/id_rsa Username:docker}
	I0927 00:15:07.653088  541317 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19711-533157/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0927 00:15:07.675878  541317 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19711-533157/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0927 00:15:07.698657  541317 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19711-533157/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I0927 00:15:07.721818  541317 provision.go:87] duration metric: took 241.032675ms to configureAuth
	I0927 00:15:07.721857  541317 ubuntu.go:193] setting minikube options for container-runtime
	I0927 00:15:07.722028  541317 config.go:182] Loaded profile config "addons-305811": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I0927 00:15:07.722075  541317 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-305811
	I0927 00:15:07.739255  541317 main.go:141] libmachine: Using SSH client type: native
	I0927 00:15:07.739488  541317 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 127.0.0.1 33163 <nil> <nil>}
	I0927 00:15:07.739509  541317 main.go:141] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I0927 00:15:07.852757  541317 main.go:141] libmachine: SSH cmd err, output: <nil>: overlay
	
	I0927 00:15:07.852787  541317 ubuntu.go:71] root file system type: overlay
	I0927 00:15:07.852935  541317 provision.go:314] Updating docker unit: /lib/systemd/system/docker.service ...
	I0927 00:15:07.853009  541317 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-305811
	I0927 00:15:07.870666  541317 main.go:141] libmachine: Using SSH client type: native
	I0927 00:15:07.870893  541317 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 127.0.0.1 33163 <nil> <nil>}
	I0927 00:15:07.870984  541317 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %s "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	BindsTo=containerd.service
	After=network-online.target firewalld.service containerd.service
	Wants=network-online.target
	Requires=docker.socket
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I0927 00:15:07.995367  541317 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	BindsTo=containerd.service
	After=network-online.target firewalld.service containerd.service
	Wants=network-online.target
	Requires=docker.socket
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I0927 00:15:07.995444  541317 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-305811
	I0927 00:15:08.013112  541317 main.go:141] libmachine: Using SSH client type: native
	I0927 00:15:08.013312  541317 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 127.0.0.1 33163 <nil> <nil>}
	I0927 00:15:08.013331  541317 main.go:141] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I0927 00:15:08.692705  541317 main.go:141] libmachine: SSH cmd err, output: <nil>: --- /lib/systemd/system/docker.service	2024-09-20 11:39:29.000000000 +0000
	+++ /lib/systemd/system/docker.service.new	2024-09-27 00:15:07.990112941 +0000
	@@ -1,46 +1,49 @@
	 [Unit]
	 Description=Docker Application Container Engine
	 Documentation=https://docs.docker.com
	-After=network-online.target docker.socket firewalld.service containerd.service time-set.target
	-Wants=network-online.target containerd.service
	+BindsTo=containerd.service
	+After=network-online.target firewalld.service containerd.service
	+Wants=network-online.target
	 Requires=docker.socket
	+StartLimitBurst=3
	+StartLimitIntervalSec=60
	 
	 [Service]
	 Type=notify
	-# the default is not to use systemd for cgroups because the delegate issues still
	-# exists and systemd currently does not support the cgroup feature set required
	-# for containers run by docker
	-ExecStart=/usr/bin/dockerd -H fd:// --containerd=/run/containerd/containerd.sock
	-ExecReload=/bin/kill -s HUP $MAINPID
	-TimeoutStartSec=0
	-RestartSec=2
	-Restart=always
	+Restart=on-failure
	 
	-# Note that StartLimit* options were moved from "Service" to "Unit" in systemd 229.
	-# Both the old, and new location are accepted by systemd 229 and up, so using the old location
	-# to make them work for either version of systemd.
	-StartLimitBurst=3
	 
	-# Note that StartLimitInterval was renamed to StartLimitIntervalSec in systemd 230.
	-# Both the old, and new name are accepted by systemd 230 and up, so using the old name to make
	-# this option work for either version of systemd.
	-StartLimitInterval=60s
	+
	+# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	+# The base configuration already specifies an 'ExecStart=...' command. The first directive
	+# here is to clear out that command inherited from the base configuration. Without this,
	+# the command from the base configuration and the command specified here are treated as
	+# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	+# will catch this invalid input and refuse to start the service with an error like:
	+#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	+
	+# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	+# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	+ExecStart=
	+ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	+ExecReload=/bin/kill -s HUP $MAINPID
	 
	 # Having non-zero Limit*s causes performance problems due to accounting overhead
	 # in the kernel. We recommend using cgroups to do container-local accounting.
	+LimitNOFILE=infinity
	 LimitNPROC=infinity
	 LimitCORE=infinity
	 
	-# Comment TasksMax if your systemd version does not support it.
	-# Only systemd 226 and above support this option.
	+# Uncomment TasksMax if your systemd version supports it.
	+# Only systemd 226 and above support this version.
	 TasksMax=infinity
	+TimeoutStartSec=0
	 
	 # set delegate yes so that systemd does not reset the cgroups of docker containers
	 Delegate=yes
	 
	 # kill only the docker process, not all processes in the cgroup
	 KillMode=process
	-OOMScoreAdjust=-500
	 
	 [Install]
	 WantedBy=multi-user.target
	Synchronizing state of docker.service with SysV service script with /lib/systemd/systemd-sysv-install.
	Executing: /lib/systemd/systemd-sysv-install enable docker
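The sequence above is the usual safe-update idiom for a systemd unit: write the rendered file to docker.service.new, and only when it differs from the installed unit, move it into place and daemon-reload/enable/restart. The unit itself is rendered from a Go text/template; a simplified, illustrative version of such a template (field names invented for this sketch, not minikube's provisioner types) is:

// Sketch: render a docker.service-style unit from a text/template.
package main

import (
	"os"
	"text/template"
)

const unit = `[Unit]
Description=Docker Application Container Engine
After=network-online.target firewalld.service containerd.service
Requires=docker.socket

[Service]
Type=notify
Restart=on-failure
# Clear the inherited ExecStart before setting our own; see the comment
# block in the unit above for why systemd requires this.
ExecStart=
ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:{{.DockerPort}} -H unix:///var/run/docker.sock{{range .ExtraArgs}} {{.}}{{end}}

[Install]
WantedBy=multi-user.target
`

func main() {
	t := template.Must(template.New("docker.service").Parse(unit))
	err := t.Execute(os.Stdout, struct {
		DockerPort int
		ExtraArgs  []string
	}{
		DockerPort: 2376,
		ExtraArgs:  []string{"--tlsverify", "--insecure-registry 10.96.0.0/12"},
	})
	if err != nil {
		panic(err)
	}
}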
	
	I0927 00:15:08.692747  541317 machine.go:96] duration metric: took 1.713335936s to provisionDockerMachine
	I0927 00:15:08.692762  541317 client.go:171] duration metric: took 13.856847778s to LocalClient.Create
	I0927 00:15:08.692786  541317 start.go:167] duration metric: took 13.856934956s to libmachine.API.Create "addons-305811"
	I0927 00:15:08.692800  541317 start.go:293] postStartSetup for "addons-305811" (driver="docker")
	I0927 00:15:08.692814  541317 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0927 00:15:08.692888  541317 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0927 00:15:08.692938  541317 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-305811
	I0927 00:15:08.710050  541317 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33163 SSHKeyPath:/home/jenkins/minikube-integration/19711-533157/.minikube/machines/addons-305811/id_rsa Username:docker}
	I0927 00:15:08.797311  541317 ssh_runner.go:195] Run: cat /etc/os-release
	I0927 00:15:08.800330  541317 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I0927 00:15:08.800362  541317 main.go:141] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
	I0927 00:15:08.800370  541317 main.go:141] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
	I0927 00:15:08.800377  541317 info.go:137] Remote host: Ubuntu 22.04.5 LTS
	I0927 00:15:08.800390  541317 filesync.go:126] Scanning /home/jenkins/minikube-integration/19711-533157/.minikube/addons for local assets ...
	I0927 00:15:08.800451  541317 filesync.go:126] Scanning /home/jenkins/minikube-integration/19711-533157/.minikube/files for local assets ...
	I0927 00:15:08.800476  541317 start.go:296] duration metric: took 107.670076ms for postStartSetup
	I0927 00:15:08.800743  541317 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" addons-305811
	I0927 00:15:08.817059  541317 profile.go:143] Saving config to /home/jenkins/minikube-integration/19711-533157/.minikube/profiles/addons-305811/config.json ...
	I0927 00:15:08.817362  541317 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0927 00:15:08.817416  541317 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-305811
	I0927 00:15:08.834425  541317 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33163 SSHKeyPath:/home/jenkins/minikube-integration/19711-533157/.minikube/machines/addons-305811/id_rsa Username:docker}
	I0927 00:15:08.917036  541317 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0927 00:15:08.921474  541317 start.go:128] duration metric: took 14.087836953s to createHost
	I0927 00:15:08.921504  541317 start.go:83] releasing machines lock for "addons-305811", held for 14.088011829s
	I0927 00:15:08.921583  541317 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" addons-305811
	I0927 00:15:08.939627  541317 ssh_runner.go:195] Run: cat /version.json
	I0927 00:15:08.939692  541317 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-305811
	I0927 00:15:08.939704  541317 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0927 00:15:08.939766  541317 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-305811
	I0927 00:15:08.956987  541317 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33163 SSHKeyPath:/home/jenkins/minikube-integration/19711-533157/.minikube/machines/addons-305811/id_rsa Username:docker}
	I0927 00:15:08.957182  541317 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33163 SSHKeyPath:/home/jenkins/minikube-integration/19711-533157/.minikube/machines/addons-305811/id_rsa Username:docker}
	I0927 00:15:09.036230  541317 ssh_runner.go:195] Run: systemctl --version
	I0927 00:15:09.109174  541317 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I0927 00:15:09.113623  541317 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *loopback.conf* -not -name *.mk_disabled -exec sh -c "grep -q loopback {} && ( grep -q name {} || sudo sed -i '/"type": "loopback"/i \ \ \ \ "name": "loopback",' {} ) && sudo sed -i 's|"cniVersion": ".*"|"cniVersion": "1.0.0"|g' {}" ;
	I0927 00:15:09.137561  541317 cni.go:230] loopback cni configuration patched: "/etc/cni/net.d/*loopback.conf*" found
	I0927 00:15:09.137656  541317 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0927 00:15:09.164423  541317 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist, /etc/cni/net.d/100-crio-bridge.conf] bridge cni config(s)
	I0927 00:15:09.164450  541317 start.go:495] detecting cgroup driver to use...
	I0927 00:15:09.164483  541317 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I0927 00:15:09.164598  541317 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0927 00:15:09.179488  541317 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.10"|' /etc/containerd/config.toml"
	I0927 00:15:09.188769  541317 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0927 00:15:09.197899  541317 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I0927 00:15:09.197968  541317 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I0927 00:15:09.207357  541317 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0927 00:15:09.216703  541317 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0927 00:15:09.226398  541317 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0927 00:15:09.236225  541317 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0927 00:15:09.245176  541317 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I0927 00:15:09.254280  541317 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I0927 00:15:09.263474  541317 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
	I0927 00:15:09.272720  541317 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0927 00:15:09.280412  541317 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0927 00:15:09.288326  541317 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0927 00:15:09.367383  541317 ssh_runner.go:195] Run: sudo systemctl restart containerd
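The run of sed edits above normalizes /etc/containerd/config.toml for the "cgroupfs" driver detected on the host: SystemdCgroup is forced to false and the legacy runc shims are rewritten to io.containerd.runc.v2. A compact Go sketch of those three cgroup-related edits (illustrative only; the logged flow applies several more rules and then daemon-reloads and restarts containerd):

// Sketch of the sed-style config.toml edits above as one Go pass.
package main

import (
	"os"
	"regexp"
)

func main() {
	const path = "/etc/containerd/config.toml"
	data, err := os.ReadFile(path)
	if err != nil {
		panic(err)
	}
	// Mirrors the sed rules logged above.
	rules := []struct{ re, to string }{
		{`(?m)^([ \t]*)SystemdCgroup = .*$`, `${1}SystemdCgroup = false`},
		{`"io.containerd.runtime.v1.linux"`, `"io.containerd.runc.v2"`},
		{`"io.containerd.runc.v1"`, `"io.containerd.runc.v2"`},
	}
	for _, r := range rules {
		data = regexp.MustCompile(r.re).ReplaceAll(data, []byte(r.to))
	}
	if err := os.WriteFile(path, data, 0o644); err != nil {
		panic(err)
	}
	// Still needed afterwards, as in the log:
	// systemctl daemon-reload && systemctl restart containerd.
}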
	I0927 00:15:09.453885  541317 start.go:495] detecting cgroup driver to use...
	I0927 00:15:09.453939  541317 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I0927 00:15:09.453994  541317 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I0927 00:15:09.465497  541317 cruntime.go:279] skipping containerd shutdown because we are bound to it
	I0927 00:15:09.465557  541317 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0927 00:15:09.476506  541317 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0927 00:15:09.491983  541317 ssh_runner.go:195] Run: which cri-dockerd
	I0927 00:15:09.495298  541317 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I0927 00:15:09.503981  541317 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (190 bytes)
	I0927 00:15:09.530251  541317 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I0927 00:15:09.628943  541317 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I0927 00:15:09.721580  541317 docker.go:574] configuring docker to use "cgroupfs" as cgroup driver...
	I0927 00:15:09.721729  541317 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (130 bytes)
	I0927 00:15:09.739372  541317 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0927 00:15:09.822180  541317 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0927 00:15:10.081820  541317 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.socket
	I0927 00:15:10.092750  541317 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0927 00:15:10.103786  541317 ssh_runner.go:195] Run: sudo systemctl unmask cri-docker.socket
	I0927 00:15:10.192212  541317 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I0927 00:15:10.275471  541317 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0927 00:15:10.351786  541317 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.socket
	I0927 00:15:10.364378  541317 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0927 00:15:10.374760  541317 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0927 00:15:10.447567  541317 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.service
	I0927 00:15:10.509398  541317 start.go:542] Will wait 60s for socket path /var/run/cri-dockerd.sock
	I0927 00:15:10.509476  541317 ssh_runner.go:195] Run: stat /var/run/cri-dockerd.sock
	I0927 00:15:10.513456  541317 start.go:563] Will wait 60s for crictl version
	I0927 00:15:10.513534  541317 ssh_runner.go:195] Run: which crictl
	I0927 00:15:10.516854  541317 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0927 00:15:10.550364  541317 start.go:579] Version:  0.1.0
	RuntimeName:  docker
	RuntimeVersion:  27.3.1
	RuntimeApiVersion:  v1
	I0927 00:15:10.550433  541317 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0927 00:15:10.575221  541317 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0927 00:15:10.601621  541317 out.go:235] * Preparing Kubernetes v1.31.1 on Docker 27.3.1 ...
	I0927 00:15:10.601722  541317 cli_runner.go:164] Run: docker network inspect addons-305811 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
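The --format template above makes docker network inspect print a small JSON object (name, subnet, gateway, MTU, container IPs) that the caller can decode directly. A trimmed sketch of that round trip, keeping only the fields this log actually consumes:

// Sketch: query the cluster network's subnet/gateway via a JSON-emitting
// --format template, then decode it. Struct fields are illustrative.
package main

import (
	"encoding/json"
	"fmt"
	"os/exec"
)

func main() {
	format := `{"Name":"{{.Name}}","Subnet":"{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway":"{{range .IPAM.Config}}{{.Gateway}}{{end}}"}`
	out, err := exec.Command("docker", "network", "inspect", "addons-305811", "--format", format).Output()
	if err != nil {
		panic(err)
	}
	var nw struct{ Name, Subnet, Gateway string }
	if err := json.Unmarshal(out, &nw); err != nil {
		panic(err)
	}
	fmt.Printf("%s: subnet=%s gateway=%s\n", nw.Name, nw.Subnet, nw.Gateway)
}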
	I0927 00:15:10.618312  541317 ssh_runner.go:195] Run: grep 192.168.49.1	host.minikube.internal$ /etc/hosts
	I0927 00:15:10.621916  541317 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.49.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0927 00:15:10.632420  541317 kubeadm.go:883] updating cluster {Name:addons-305811 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727108449-19696@sha256:c662152d8855bc4c62a3b5786a68adf99e04794e7f8f374a3859703004ef1d21 Memory:4000 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:addons-305811 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0927 00:15:10.632578  541317 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime docker
	I0927 00:15:10.632643  541317 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0927 00:15:10.652338  541317 docker.go:685] Got preloaded images: -- stdout --
	registry.k8s.io/kube-apiserver:v1.31.1
	registry.k8s.io/kube-scheduler:v1.31.1
	registry.k8s.io/kube-controller-manager:v1.31.1
	registry.k8s.io/kube-proxy:v1.31.1
	registry.k8s.io/coredns/coredns:v1.11.3
	registry.k8s.io/etcd:3.5.15-0
	registry.k8s.io/pause:3.10
	gcr.io/k8s-minikube/storage-provisioner:v5
	
	-- /stdout --
	I0927 00:15:10.652360  541317 docker.go:615] Images already preloaded, skipping extraction
	I0927 00:15:10.652410  541317 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0927 00:15:10.671876  541317 docker.go:685] Got preloaded images: -- stdout --
	registry.k8s.io/kube-apiserver:v1.31.1
	registry.k8s.io/kube-controller-manager:v1.31.1
	registry.k8s.io/kube-scheduler:v1.31.1
	registry.k8s.io/kube-proxy:v1.31.1
	registry.k8s.io/coredns/coredns:v1.11.3
	registry.k8s.io/etcd:3.5.15-0
	registry.k8s.io/pause:3.10
	gcr.io/k8s-minikube/storage-provisioner:v5
	
	-- /stdout --
	I0927 00:15:10.671902  541317 cache_images.go:84] Images are preloaded, skipping loading
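docker images --format {{.Repository}}:{{.Tag}} is run twice above to confirm the preload tarball already supplied the control-plane images, so cache extraction can be skipped. A sketch of that check (expected list abridged from the -- stdout -- block above):

// Sketch: list tagged images and verify the preloaded set is present.
package main

import (
	"fmt"
	"os/exec"
	"strings"
)

func main() {
	out, err := exec.Command("docker", "images", "--format", "{{.Repository}}:{{.Tag}}").Output()
	if err != nil {
		panic(err)
	}
	have := map[string]bool{}
	for _, img := range strings.Fields(string(out)) {
		have[img] = true
	}
	expected := []string{
		"registry.k8s.io/kube-apiserver:v1.31.1",
		"registry.k8s.io/etcd:3.5.15-0",
		"registry.k8s.io/pause:3.10",
		"gcr.io/k8s-minikube/storage-provisioner:v5",
	}
	for _, img := range expected {
		if !have[img] {
			fmt.Println("missing, extraction needed:", img)
			return
		}
	}
	fmt.Println("Images already preloaded, skipping extraction")
}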
	I0927 00:15:10.671914  541317 kubeadm.go:934] updating node { 192.168.49.2 8443 v1.31.1 docker true true} ...
	I0927 00:15:10.672017  541317 kubeadm.go:946] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=addons-305811 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.49.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.1 ClusterName:addons-305811 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0927 00:15:10.672091  541317 ssh_runner.go:195] Run: docker info --format {{.CgroupDriver}}
	I0927 00:15:10.718306  541317 cni.go:84] Creating CNI manager for ""
	I0927 00:15:10.718346  541317 cni.go:158] "docker" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0927 00:15:10.718364  541317 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0927 00:15:10.718392  541317 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.49.2 APIServerPort:8443 KubernetesVersion:v1.31.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:addons-305811 NodeName:addons-305811 DNSDomain:cluster.local CRISocket:/var/run/cri-dockerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.49.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.49.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/cri-dockerd.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0927 00:15:10.718546  541317 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.49.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/cri-dockerd.sock
	  name: "addons-305811"
	  kubeletExtraArgs:
	    node-ip: 192.168.49.2
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.49.2"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.31.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/cri-dockerd.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
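The generated kubeadm config above is one multi-document YAML stream (InitConfiguration, ClusterConfiguration, KubeletConfiguration, KubeProxyConfiguration). One way to sanity-check such a file is to walk the documents with a streaming decoder; this sketch assumes the third-party gopkg.in/yaml.v3 package and the kubeadm.yaml path used later in the log:

// Sketch: enumerate the kinds in a multi-document kubeadm YAML stream.
package main

import (
	"fmt"
	"io"
	"os"

	"gopkg.in/yaml.v3"
)

func main() {
	f, err := os.Open("/var/tmp/minikube/kubeadm.yaml")
	if err != nil {
		panic(err)
	}
	defer f.Close()

	dec := yaml.NewDecoder(f)
	for {
		var doc map[string]interface{}
		if err := dec.Decode(&doc); err == io.EOF {
			break
		} else if err != nil {
			panic(err)
		}
		fmt.Printf("kind=%v apiVersion=%v\n", doc["kind"], doc["apiVersion"])
	}
}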
	
	I0927 00:15:10.718613  541317 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.1
	I0927 00:15:10.727392  541317 binaries.go:44] Found k8s binaries, skipping transfer
	I0927 00:15:10.727462  541317 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0927 00:15:10.736145  541317 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (312 bytes)
	I0927 00:15:10.753066  541317 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0927 00:15:10.769898  541317 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2155 bytes)
	I0927 00:15:10.788142  541317 ssh_runner.go:195] Run: grep 192.168.49.2	control-plane.minikube.internal$ /etc/hosts
	I0927 00:15:10.791585  541317 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.49.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0927 00:15:10.801672  541317 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0927 00:15:10.885732  541317 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0927 00:15:10.899978  541317 certs.go:68] Setting up /home/jenkins/minikube-integration/19711-533157/.minikube/profiles/addons-305811 for IP: 192.168.49.2
	I0927 00:15:10.900000  541317 certs.go:194] generating shared ca certs ...
	I0927 00:15:10.900018  541317 certs.go:226] acquiring lock for ca certs: {Name:mkc81b3bb6b708f82be3877f9c1578c3a8a5359c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0927 00:15:10.900151  541317 certs.go:240] generating "minikubeCA" ca cert: /home/jenkins/minikube-integration/19711-533157/.minikube/ca.key
	I0927 00:15:10.991114  541317 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19711-533157/.minikube/ca.crt ...
	I0927 00:15:10.991148  541317 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19711-533157/.minikube/ca.crt: {Name:mk58b747beb33f905f27b904504ff71bf5ab6633 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0927 00:15:10.991349  541317 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19711-533157/.minikube/ca.key ...
	I0927 00:15:10.991368  541317 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19711-533157/.minikube/ca.key: {Name:mkde7cddebc6b0ee8713d23a6c6d186b75d84913 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0927 00:15:10.991473  541317 certs.go:240] generating "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19711-533157/.minikube/proxy-client-ca.key
	I0927 00:15:11.135894  541317 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19711-533157/.minikube/proxy-client-ca.crt ...
	I0927 00:15:11.135934  541317 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19711-533157/.minikube/proxy-client-ca.crt: {Name:mk8f70a6b358dfa5f84d61ce900d3d7de1e32299 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0927 00:15:11.136140  541317 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19711-533157/.minikube/proxy-client-ca.key ...
	I0927 00:15:11.136154  541317 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19711-533157/.minikube/proxy-client-ca.key: {Name:mk20507882def3c6664e1715f87527afeaf4b28e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0927 00:15:11.136275  541317 certs.go:256] generating profile certs ...
	I0927 00:15:11.136345  541317 certs.go:363] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/19711-533157/.minikube/profiles/addons-305811/client.key
	I0927 00:15:11.136360  541317 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19711-533157/.minikube/profiles/addons-305811/client.crt with IP's: []
	I0927 00:15:11.176358  541317 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19711-533157/.minikube/profiles/addons-305811/client.crt ...
	I0927 00:15:11.176389  541317 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19711-533157/.minikube/profiles/addons-305811/client.crt: {Name:mkff39f93fab70eaa7f8bf418505d3479a4f47fe Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0927 00:15:11.176588  541317 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19711-533157/.minikube/profiles/addons-305811/client.key ...
	I0927 00:15:11.176602  541317 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19711-533157/.minikube/profiles/addons-305811/client.key: {Name:mk8c301144e8f38434f17c5f0206d6f241657ccd Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0927 00:15:11.176696  541317 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/19711-533157/.minikube/profiles/addons-305811/apiserver.key.db14e583
	I0927 00:15:11.176717  541317 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19711-533157/.minikube/profiles/addons-305811/apiserver.crt.db14e583 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.49.2]
	I0927 00:15:11.272848  541317 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19711-533157/.minikube/profiles/addons-305811/apiserver.crt.db14e583 ...
	I0927 00:15:11.272889  541317 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19711-533157/.minikube/profiles/addons-305811/apiserver.crt.db14e583: {Name:mk2ca275f254f6f72b3bab500789d2c3340786eb Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0927 00:15:11.273092  541317 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19711-533157/.minikube/profiles/addons-305811/apiserver.key.db14e583 ...
	I0927 00:15:11.273109  541317 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19711-533157/.minikube/profiles/addons-305811/apiserver.key.db14e583: {Name:mk286acc13044fb0c6f747e485836cad32617da6 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0927 00:15:11.273231  541317 certs.go:381] copying /home/jenkins/minikube-integration/19711-533157/.minikube/profiles/addons-305811/apiserver.crt.db14e583 -> /home/jenkins/minikube-integration/19711-533157/.minikube/profiles/addons-305811/apiserver.crt
	I0927 00:15:11.273332  541317 certs.go:385] copying /home/jenkins/minikube-integration/19711-533157/.minikube/profiles/addons-305811/apiserver.key.db14e583 -> /home/jenkins/minikube-integration/19711-533157/.minikube/profiles/addons-305811/apiserver.key
	I0927 00:15:11.273402  541317 certs.go:363] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/19711-533157/.minikube/profiles/addons-305811/proxy-client.key
	I0927 00:15:11.273428  541317 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19711-533157/.minikube/profiles/addons-305811/proxy-client.crt with IP's: []
	I0927 00:15:11.522580  541317 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19711-533157/.minikube/profiles/addons-305811/proxy-client.crt ...
	I0927 00:15:11.522616  541317 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19711-533157/.minikube/profiles/addons-305811/proxy-client.crt: {Name:mkdb44ebcccf072bcf9a8d7983555e78eece4cc8 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0927 00:15:11.522810  541317 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19711-533157/.minikube/profiles/addons-305811/proxy-client.key ...
	I0927 00:15:11.522826  541317 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19711-533157/.minikube/profiles/addons-305811/proxy-client.key: {Name:mkd0913639c06e56bc005a2c042c388f24c0e0b8 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0927 00:15:11.523028  541317 certs.go:484] found cert: /home/jenkins/minikube-integration/19711-533157/.minikube/certs/ca-key.pem (1679 bytes)
	I0927 00:15:11.523075  541317 certs.go:484] found cert: /home/jenkins/minikube-integration/19711-533157/.minikube/certs/ca.pem (1078 bytes)
	I0927 00:15:11.523115  541317 certs.go:484] found cert: /home/jenkins/minikube-integration/19711-533157/.minikube/certs/cert.pem (1123 bytes)
	I0927 00:15:11.523151  541317 certs.go:484] found cert: /home/jenkins/minikube-integration/19711-533157/.minikube/certs/key.pem (1675 bytes)
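The certs.go/crypto.go steps above build a shared CA and then sign profile certs carrying the SANs listed in the log (10.96.0.1, 127.0.0.1, 10.0.0.1, 192.168.49.2). The essential crypto/x509 calls, as a self-contained sketch rather than minikube's actual implementation (error handling elided for brevity):

// Sketch: create a self-signed CA, then issue a server cert with the
// apiserver SANs from the log above.
package main

import (
	"crypto/rand"
	"crypto/rsa"
	"crypto/x509"
	"crypto/x509/pkix"
	"encoding/pem"
	"math/big"
	"net"
	"os"
	"time"
)

func main() {
	caKey, _ := rsa.GenerateKey(rand.Reader, 2048)
	caTmpl := &x509.Certificate{
		SerialNumber:          big.NewInt(1),
		Subject:               pkix.Name{CommonName: "minikubeCA"},
		NotBefore:             time.Now(),
		NotAfter:              time.Now().AddDate(10, 0, 0),
		IsCA:                  true,
		KeyUsage:              x509.KeyUsageCertSign | x509.KeyUsageDigitalSignature,
		BasicConstraintsValid: true,
	}
	caDER, _ := x509.CreateCertificate(rand.Reader, caTmpl, caTmpl, &caKey.PublicKey, caKey)
	caCert, _ := x509.ParseCertificate(caDER)

	leafKey, _ := rsa.GenerateKey(rand.Reader, 2048)
	leafTmpl := &x509.Certificate{
		SerialNumber: big.NewInt(2),
		Subject:      pkix.Name{CommonName: "minikube"},
		NotBefore:    time.Now(),
		NotAfter:     time.Now().AddDate(3, 0, 0),
		KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
		ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
		IPAddresses: []net.IP{
			net.ParseIP("10.96.0.1"), net.ParseIP("127.0.0.1"),
			net.ParseIP("10.0.0.1"), net.ParseIP("192.168.49.2"),
		},
	}
	leafDER, _ := x509.CreateCertificate(rand.Reader, leafTmpl, caCert, &leafKey.PublicKey, caKey)
	_ = pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: leafDER})
}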
	I0927 00:15:11.523807  541317 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19711-533157/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0927 00:15:11.547254  541317 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19711-533157/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0927 00:15:11.569424  541317 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19711-533157/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0927 00:15:11.592986  541317 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19711-533157/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0927 00:15:11.616521  541317 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19711-533157/.minikube/profiles/addons-305811/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1419 bytes)
	I0927 00:15:11.639097  541317 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19711-533157/.minikube/profiles/addons-305811/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0927 00:15:11.662284  541317 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19711-533157/.minikube/profiles/addons-305811/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0927 00:15:11.685102  541317 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19711-533157/.minikube/profiles/addons-305811/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0927 00:15:11.707890  541317 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19711-533157/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0927 00:15:11.732635  541317 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0927 00:15:11.749780  541317 ssh_runner.go:195] Run: openssl version
	I0927 00:15:11.754916  541317 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0927 00:15:11.764172  541317 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0927 00:15:11.767528  541317 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Sep 27 00:15 /usr/share/ca-certificates/minikubeCA.pem
	I0927 00:15:11.767595  541317 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0927 00:15:11.773941  541317 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
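The b5213941.0 link above follows OpenSSL's hashed-directory convention: TLS clients locate a CA in /etc/ssl/certs by a symlink named <subject-hash>.0. A sketch mirroring the two commands above (openssl x509 -hash for the hash, then the symlink):

// Sketch: install a CA into the hashed trust directory the way the
// openssl/ln commands above do.
package main

import (
	"fmt"
	"os"
	"os/exec"
	"strings"
)

func main() {
	const pemPath = "/usr/share/ca-certificates/minikubeCA.pem"
	out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", pemPath).Output()
	if err != nil {
		panic(err)
	}
	hash := strings.TrimSpace(string(out)) // e.g. b5213941, as in the log
	link := fmt.Sprintf("/etc/ssl/certs/%s.0", hash)
	if _, err := os.Lstat(link); os.IsNotExist(err) {
		if err := os.Symlink(pemPath, link); err != nil {
			panic(err)
		}
	}
}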
	I0927 00:15:11.782806  541317 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0927 00:15:11.785965  541317 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0927 00:15:11.786010  541317 kubeadm.go:392] StartCluster: {Name:addons-305811 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727108449-19696@sha256:c662152d8855bc4c62a3b5786a68adf99e04794e7f8f374a3859703004ef1d21 Memory:4000 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:addons-305811 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0927 00:15:11.786122  541317 ssh_runner.go:195] Run: docker ps --filter status=paused --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I0927 00:15:11.803660  541317 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0927 00:15:11.811930  541317 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0927 00:15:11.820192  541317 kubeadm.go:214] ignoring SystemVerification for kubeadm because of docker driver
	I0927 00:15:11.820273  541317 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0927 00:15:11.828478  541317 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0927 00:15:11.828499  541317 kubeadm.go:157] found existing configuration files:
	
	I0927 00:15:11.828545  541317 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0927 00:15:11.836617  541317 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0927 00:15:11.836671  541317 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0927 00:15:11.844589  541317 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0927 00:15:11.852470  541317 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0927 00:15:11.852526  541317 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0927 00:15:11.860342  541317 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0927 00:15:11.868650  541317 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0927 00:15:11.868711  541317 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0927 00:15:11.876651  541317 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0927 00:15:11.884346  541317 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0927 00:15:11.884418  541317 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
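The four grep/rm pairs above are stale-config cleanup: each kubeconfig under /etc/kubernetes is kept only if it already points at the expected control-plane endpoint, and is removed otherwise before kubeadm init runs. The same sweep as a Go sketch:

// Sketch: remove kubeconfigs that do not reference the expected endpoint.
package main

import (
	"bytes"
	"fmt"
	"os"
)

func main() {
	const endpoint = "https://control-plane.minikube.internal:8443"
	files := []string{
		"/etc/kubernetes/admin.conf",
		"/etc/kubernetes/kubelet.conf",
		"/etc/kubernetes/controller-manager.conf",
		"/etc/kubernetes/scheduler.conf",
	}
	for _, f := range files {
		data, err := os.ReadFile(f)
		if err == nil && bytes.Contains(data, []byte(endpoint)) {
			continue // config already points at the right endpoint
		}
		// Missing or stale: remove with rm -f semantics.
		if err := os.Remove(f); err == nil {
			fmt.Println("removed stale config:", f)
		}
	}
}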
	I0927 00:15:11.892420  541317 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I0927 00:15:11.926539  541317 kubeadm.go:310] W0927 00:15:11.925835    1918 common.go:101] your configuration file uses a deprecated API spec: "kubeadm.k8s.io/v1beta3" (kind: "ClusterConfiguration"). Please use 'kubeadm config migrate --old-config old.yaml --new-config new.yaml', which will write the new, similar spec using a newer API version.
	I0927 00:15:11.927040  541317 kubeadm.go:310] W0927 00:15:11.926476    1918 common.go:101] your configuration file uses a deprecated API spec: "kubeadm.k8s.io/v1beta3" (kind: "InitConfiguration"). Please use 'kubeadm config migrate --old-config old.yaml --new-config new.yaml', which will write the new, similar spec using a newer API version.
	I0927 00:15:11.947882  541317 kubeadm.go:310] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1069-gcp\n", err: exit status 1
	I0927 00:15:12.002021  541317 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0927 00:15:21.638277  541317 kubeadm.go:310] [init] Using Kubernetes version: v1.31.1
	I0927 00:15:21.638332  541317 kubeadm.go:310] [preflight] Running pre-flight checks
	I0927 00:15:21.638399  541317 kubeadm.go:310] [preflight] The system verification failed. Printing the output from the verification:
	I0927 00:15:21.638446  541317 kubeadm.go:310] KERNEL_VERSION: 5.15.0-1069-gcp
	I0927 00:15:21.638483  541317 kubeadm.go:310] OS: Linux
	I0927 00:15:21.638522  541317 kubeadm.go:310] CGROUPS_CPU: enabled
	I0927 00:15:21.638575  541317 kubeadm.go:310] CGROUPS_CPUACCT: enabled
	I0927 00:15:21.638621  541317 kubeadm.go:310] CGROUPS_CPUSET: enabled
	I0927 00:15:21.638778  541317 kubeadm.go:310] CGROUPS_DEVICES: enabled
	I0927 00:15:21.638896  541317 kubeadm.go:310] CGROUPS_FREEZER: enabled
	I0927 00:15:21.639023  541317 kubeadm.go:310] CGROUPS_MEMORY: enabled
	I0927 00:15:21.639126  541317 kubeadm.go:310] CGROUPS_PIDS: enabled
	I0927 00:15:21.639196  541317 kubeadm.go:310] CGROUPS_HUGETLB: enabled
	I0927 00:15:21.639270  541317 kubeadm.go:310] CGROUPS_BLKIO: enabled
	I0927 00:15:21.639370  541317 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0927 00:15:21.639519  541317 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0927 00:15:21.639647  541317 kubeadm.go:310] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I0927 00:15:21.639733  541317 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0927 00:15:21.642354  541317 out.go:235]   - Generating certificates and keys ...
	I0927 00:15:21.642488  541317 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0927 00:15:21.642608  541317 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0927 00:15:21.642695  541317 kubeadm.go:310] [certs] Generating "apiserver-kubelet-client" certificate and key
	I0927 00:15:21.642771  541317 kubeadm.go:310] [certs] Generating "front-proxy-ca" certificate and key
	I0927 00:15:21.642833  541317 kubeadm.go:310] [certs] Generating "front-proxy-client" certificate and key
	I0927 00:15:21.642887  541317 kubeadm.go:310] [certs] Generating "etcd/ca" certificate and key
	I0927 00:15:21.642930  541317 kubeadm.go:310] [certs] Generating "etcd/server" certificate and key
	I0927 00:15:21.643025  541317 kubeadm.go:310] [certs] etcd/server serving cert is signed for DNS names [addons-305811 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
	I0927 00:15:21.643069  541317 kubeadm.go:310] [certs] Generating "etcd/peer" certificate and key
	I0927 00:15:21.643170  541317 kubeadm.go:310] [certs] etcd/peer serving cert is signed for DNS names [addons-305811 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
	I0927 00:15:21.643262  541317 kubeadm.go:310] [certs] Generating "etcd/healthcheck-client" certificate and key
	I0927 00:15:21.643360  541317 kubeadm.go:310] [certs] Generating "apiserver-etcd-client" certificate and key
	I0927 00:15:21.643405  541317 kubeadm.go:310] [certs] Generating "sa" key and public key
	I0927 00:15:21.643454  541317 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0927 00:15:21.643499  541317 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0927 00:15:21.643566  541317 kubeadm.go:310] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I0927 00:15:21.643698  541317 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0927 00:15:21.643788  541317 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0927 00:15:21.643837  541317 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0927 00:15:21.643911  541317 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0927 00:15:21.643991  541317 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0927 00:15:21.645278  541317 out.go:235]   - Booting up control plane ...
	I0927 00:15:21.645378  541317 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0927 00:15:21.645449  541317 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0927 00:15:21.645528  541317 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0927 00:15:21.645618  541317 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0927 00:15:21.645704  541317 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0927 00:15:21.645749  541317 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0927 00:15:21.645857  541317 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I0927 00:15:21.645941  541317 kubeadm.go:310] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I0927 00:15:21.645988  541317 kubeadm.go:310] [kubelet-check] The kubelet is healthy after 1.001767541s
	I0927 00:15:21.646049  541317 kubeadm.go:310] [api-check] Waiting for a healthy API server. This can take up to 4m0s
	I0927 00:15:21.646098  541317 kubeadm.go:310] [api-check] The API server is healthy after 5.001439361s
	I0927 00:15:21.646254  541317 kubeadm.go:310] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0927 00:15:21.646397  541317 kubeadm.go:310] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0927 00:15:21.646467  541317 kubeadm.go:310] [upload-certs] Skipping phase. Please see --upload-certs
	I0927 00:15:21.646642  541317 kubeadm.go:310] [mark-control-plane] Marking the node addons-305811 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0927 00:15:21.646729  541317 kubeadm.go:310] [bootstrap-token] Using token: 5vycpi.1lutxmqgxzlgar3f
	I0927 00:15:21.648298  541317 out.go:235]   - Configuring RBAC rules ...
	I0927 00:15:21.648440  541317 kubeadm.go:310] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0927 00:15:21.648541  541317 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0927 00:15:21.648732  541317 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0927 00:15:21.648912  541317 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0927 00:15:21.649054  541317 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0927 00:15:21.649138  541317 kubeadm.go:310] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0927 00:15:21.649288  541317 kubeadm.go:310] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0927 00:15:21.649335  541317 kubeadm.go:310] [addons] Applied essential addon: CoreDNS
	I0927 00:15:21.649374  541317 kubeadm.go:310] [addons] Applied essential addon: kube-proxy
	I0927 00:15:21.649380  541317 kubeadm.go:310] 
	I0927 00:15:21.649471  541317 kubeadm.go:310] Your Kubernetes control-plane has initialized successfully!
	I0927 00:15:21.649484  541317 kubeadm.go:310] 
	I0927 00:15:21.649591  541317 kubeadm.go:310] To start using your cluster, you need to run the following as a regular user:
	I0927 00:15:21.649602  541317 kubeadm.go:310] 
	I0927 00:15:21.649638  541317 kubeadm.go:310]   mkdir -p $HOME/.kube
	I0927 00:15:21.649714  541317 kubeadm.go:310]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0927 00:15:21.649771  541317 kubeadm.go:310]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0927 00:15:21.649778  541317 kubeadm.go:310] 
	I0927 00:15:21.649837  541317 kubeadm.go:310] Alternatively, if you are the root user, you can run:
	I0927 00:15:21.649851  541317 kubeadm.go:310] 
	I0927 00:15:21.649926  541317 kubeadm.go:310]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0927 00:15:21.649940  541317 kubeadm.go:310] 
	I0927 00:15:21.650013  541317 kubeadm.go:310] You should now deploy a pod network to the cluster.
	I0927 00:15:21.650103  541317 kubeadm.go:310] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0927 00:15:21.650158  541317 kubeadm.go:310]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0927 00:15:21.650164  541317 kubeadm.go:310] 
	I0927 00:15:21.650296  541317 kubeadm.go:310] You can now join any number of control-plane nodes by copying certificate authorities
	I0927 00:15:21.650443  541317 kubeadm.go:310] and service account keys on each node and then running the following as root:
	I0927 00:15:21.650460  541317 kubeadm.go:310] 
	I0927 00:15:21.650631  541317 kubeadm.go:310]   kubeadm join control-plane.minikube.internal:8443 --token 5vycpi.1lutxmqgxzlgar3f \
	I0927 00:15:21.650788  541317 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:ac657e52bff6dca96e59ceffca02c195587919be0fe004213304b18f9fdc454d \
	I0927 00:15:21.650826  541317 kubeadm.go:310] 	--control-plane 
	I0927 00:15:21.650836  541317 kubeadm.go:310] 
	I0927 00:15:21.650934  541317 kubeadm.go:310] Then you can join any number of worker nodes by running the following on each as root:
	I0927 00:15:21.650952  541317 kubeadm.go:310] 
	I0927 00:15:21.651030  541317 kubeadm.go:310] kubeadm join control-plane.minikube.internal:8443 --token 5vycpi.1lutxmqgxzlgar3f \
	I0927 00:15:21.651174  541317 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:ac657e52bff6dca96e59ceffca02c195587919be0fe004213304b18f9fdc454d 
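The --discovery-token-ca-cert-hash printed in the join commands above is the hex-encoded SHA-256 of the cluster CA certificate's Subject Public Key Info, which lets a joining node pin the CA it discovers via the bootstrap token. A minimal Go sketch of how such a hash is derived (this is not minikube code; the ca.crt path is an assumption based on the certificateDir logged above):

	package main

	import (
		"crypto/sha256"
		"crypto/x509"
		"encoding/hex"
		"encoding/pem"
		"fmt"
		"os"
	)

	func main() {
		// Assumed location; kubeadm above uses certificateDir /var/lib/minikube/certs.
		pemBytes, err := os.ReadFile("/var/lib/minikube/certs/ca.crt")
		if err != nil {
			panic(err)
		}
		block, _ := pem.Decode(pemBytes)
		if block == nil {
			panic("no PEM block found")
		}
		cert, err := x509.ParseCertificate(block.Bytes)
		if err != nil {
			panic(err)
		}
		// kubeadm hashes the Subject Public Key Info, not the whole certificate.
		sum := sha256.Sum256(cert.RawSubjectPublicKeyInfo)
		fmt.Println("sha256:" + hex.EncodeToString(sum[:]))
	}
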
	I0927 00:15:21.651211  541317 cni.go:84] Creating CNI manager for ""
	I0927 00:15:21.651251  541317 cni.go:158] "docker" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0927 00:15:21.652796  541317 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0927 00:15:21.654031  541317 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0927 00:15:21.663297  541317 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
	I0927 00:15:21.681417  541317 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0927 00:15:21.681529  541317 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0927 00:15:21.681529  541317 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes addons-305811 minikube.k8s.io/updated_at=2024_09_27T00_15_21_0700 minikube.k8s.io/version=v1.34.0 minikube.k8s.io/commit=eee16a295c071ed5a0e96cbbc00bcd13b2654625 minikube.k8s.io/name=addons-305811 minikube.k8s.io/primary=true
	I0927 00:15:21.759710  541317 ops.go:34] apiserver oom_adj: -16
	I0927 00:15:21.759857  541317 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0927 00:15:22.260437  541317 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0927 00:15:22.760043  541317 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0927 00:15:23.260517  541317 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0927 00:15:23.760061  541317 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0927 00:15:24.260702  541317 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0927 00:15:24.760800  541317 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0927 00:15:25.260207  541317 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0927 00:15:25.760385  541317 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0927 00:15:26.260349  541317 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0927 00:15:26.375448  541317 kubeadm.go:1113] duration metric: took 4.693987598s to wait for elevateKubeSystemPrivileges
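The ten "get sa default" invocations above, spaced roughly 500ms apart, are a poll-until-ready loop: the minikube-rbac cluster-admin binding cannot be created until the default ServiceAccount exists, so minikube polls and then reports the total wait (4.69s here) as elevateKubeSystemPrivileges. A simplified Go sketch of the pattern (the real code runs kubectl over SSH inside the node; the timeout and interval here are assumptions):

	package main

	import (
		"fmt"
		"os/exec"
		"time"
	)

	func main() {
		deadline := time.Now().Add(2 * time.Minute) // assumed overall timeout
		for time.Now().Before(deadline) {
			cmd := exec.Command("kubectl", "get", "sa", "default",
				"--kubeconfig", "/var/lib/minikube/kubeconfig")
			if cmd.Run() == nil {
				fmt.Println("default ServiceAccount exists; safe to bind RBAC")
				return
			}
			time.Sleep(500 * time.Millisecond) // matches the cadence in the log
		}
		fmt.Println("timed out waiting for default ServiceAccount")
	}
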
	I0927 00:15:26.375488  541317 kubeadm.go:394] duration metric: took 14.58948217s to StartCluster
	I0927 00:15:26.375508  541317 settings.go:142] acquiring lock: {Name:mk8b12532e2edf4c5c47d5e287e09556e5b5a1af Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0927 00:15:26.375611  541317 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/19711-533157/kubeconfig
	I0927 00:15:26.375984  541317 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19711-533157/kubeconfig: {Name:mk960a2d635e486d4b21ea2b220ee956331e5ba8 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0927 00:15:26.376163  541317 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0927 00:15:26.376180  541317 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0927 00:15:26.376311  541317 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:true csi-hostpath-driver:true dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:true gvisor:false headlamp:false inaccel:false ingress:true ingress-dns:true inspektor-gadget:true istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:true nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:true registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:true volcano:true volumesnapshots:true yakd:true]
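The toEnable map above drives everything that follows: each addon set to true gets its own "Setting addon" goroutine, which is why the subsequent log lines interleave out of order. A toy Go sketch of that fan-out (illustrative only; minikube's actual addons.go is more involved):

	package main

	import (
		"fmt"
		"sync"
	)

	func main() {
		toEnable := map[string]bool{
			"registry": true, "ingress": true, "metrics-server": true,
			"ambassador": false, // disabled addons are skipped entirely
		}
		var wg sync.WaitGroup
		for name, enabled := range toEnable {
			if !enabled {
				continue
			}
			wg.Add(1)
			go func(n string) {
				defer wg.Done()
				// the real code installs manifests and verifies the addon here
				fmt.Printf("Setting addon %s=true\n", n)
			}(name)
		}
		wg.Wait()
	}
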
	I0927 00:15:26.376414  541317 config.go:182] Loaded profile config "addons-305811": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I0927 00:15:26.376432  541317 addons.go:69] Setting cloud-spanner=true in profile "addons-305811"
	I0927 00:15:26.376449  541317 addons.go:234] Setting addon cloud-spanner=true in "addons-305811"
	I0927 00:15:26.376417  541317 addons.go:69] Setting yakd=true in profile "addons-305811"
	I0927 00:15:26.376487  541317 host.go:66] Checking if "addons-305811" exists ...
	I0927 00:15:26.376491  541317 addons.go:234] Setting addon yakd=true in "addons-305811"
	I0927 00:15:26.376520  541317 host.go:66] Checking if "addons-305811" exists ...
	I0927 00:15:26.377049  541317 cli_runner.go:164] Run: docker container inspect addons-305811 --format={{.State.Status}}
	I0927 00:15:26.377071  541317 addons.go:69] Setting csi-hostpath-driver=true in profile "addons-305811"
	I0927 00:15:26.377106  541317 addons.go:234] Setting addon csi-hostpath-driver=true in "addons-305811"
	I0927 00:15:26.377129  541317 host.go:66] Checking if "addons-305811" exists ...
	I0927 00:15:26.377145  541317 addons.go:69] Setting default-storageclass=true in profile "addons-305811"
	I0927 00:15:26.377175  541317 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "addons-305811"
	I0927 00:15:26.377234  541317 addons.go:69] Setting storage-provisioner=true in profile "addons-305811"
	I0927 00:15:26.377259  541317 addons.go:234] Setting addon storage-provisioner=true in "addons-305811"
	I0927 00:15:26.377289  541317 host.go:66] Checking if "addons-305811" exists ...
	I0927 00:15:26.377483  541317 cli_runner.go:164] Run: docker container inspect addons-305811 --format={{.State.Status}}
	I0927 00:15:26.377494  541317 addons.go:69] Setting storage-provisioner-rancher=true in profile "addons-305811"
	I0927 00:15:26.377515  541317 addons_storage_classes.go:33] enableOrDisableStorageClasses storage-provisioner-rancher=true on "addons-305811"
	I0927 00:15:26.377569  541317 cli_runner.go:164] Run: docker container inspect addons-305811 --format={{.State.Status}}
	I0927 00:15:26.377728  541317 cli_runner.go:164] Run: docker container inspect addons-305811 --format={{.State.Status}}
	I0927 00:15:26.377767  541317 cli_runner.go:164] Run: docker container inspect addons-305811 --format={{.State.Status}}
	I0927 00:15:26.377762  541317 addons.go:69] Setting nvidia-device-plugin=true in profile "addons-305811"
	I0927 00:15:26.377787  541317 addons.go:234] Setting addon nvidia-device-plugin=true in "addons-305811"
	I0927 00:15:26.377056  541317 cli_runner.go:164] Run: docker container inspect addons-305811 --format={{.State.Status}}
	I0927 00:15:26.377818  541317 host.go:66] Checking if "addons-305811" exists ...
	I0927 00:15:26.377944  541317 addons.go:69] Setting registry=true in profile "addons-305811"
	I0927 00:15:26.377964  541317 addons.go:234] Setting addon registry=true in "addons-305811"
	I0927 00:15:26.377989  541317 host.go:66] Checking if "addons-305811" exists ...
	I0927 00:15:26.378158  541317 addons.go:69] Setting volumesnapshots=true in profile "addons-305811"
	I0927 00:15:26.378203  541317 addons.go:234] Setting addon volumesnapshots=true in "addons-305811"
	I0927 00:15:26.378245  541317 host.go:66] Checking if "addons-305811" exists ...
	I0927 00:15:26.378279  541317 cli_runner.go:164] Run: docker container inspect addons-305811 --format={{.State.Status}}
	I0927 00:15:26.378467  541317 cli_runner.go:164] Run: docker container inspect addons-305811 --format={{.State.Status}}
	I0927 00:15:26.378788  541317 cli_runner.go:164] Run: docker container inspect addons-305811 --format={{.State.Status}}
	I0927 00:15:26.377132  541317 addons.go:69] Setting metrics-server=true in profile "addons-305811"
	I0927 00:15:26.378897  541317 addons.go:234] Setting addon metrics-server=true in "addons-305811"
	I0927 00:15:26.379092  541317 host.go:66] Checking if "addons-305811" exists ...
	I0927 00:15:26.379315  541317 addons.go:69] Setting inspektor-gadget=true in profile "addons-305811"
	I0927 00:15:26.379348  541317 addons.go:234] Setting addon inspektor-gadget=true in "addons-305811"
	I0927 00:15:26.379382  541317 host.go:66] Checking if "addons-305811" exists ...
	I0927 00:15:26.379816  541317 out.go:177] * Verifying Kubernetes components...
	I0927 00:15:26.377478  541317 addons.go:69] Setting volcano=true in profile "addons-305811"
	I0927 00:15:26.380107  541317 addons.go:234] Setting addon volcano=true in "addons-305811"
	I0927 00:15:26.380146  541317 host.go:66] Checking if "addons-305811" exists ...
	I0927 00:15:26.380641  541317 cli_runner.go:164] Run: docker container inspect addons-305811 --format={{.State.Status}}
	I0927 00:15:26.380785  541317 addons.go:69] Setting ingress=true in profile "addons-305811"
	I0927 00:15:26.380823  541317 addons.go:234] Setting addon ingress=true in "addons-305811"
	I0927 00:15:26.379823  541317 cli_runner.go:164] Run: docker container inspect addons-305811 --format={{.State.Status}}
	I0927 00:15:26.380949  541317 addons.go:69] Setting ingress-dns=true in profile "addons-305811"
	I0927 00:15:26.380973  541317 addons.go:234] Setting addon ingress-dns=true in "addons-305811"
	I0927 00:15:26.381004  541317 host.go:66] Checking if "addons-305811" exists ...
	I0927 00:15:26.381234  541317 cli_runner.go:164] Run: docker container inspect addons-305811 --format={{.State.Status}}
	I0927 00:15:26.380903  541317 host.go:66] Checking if "addons-305811" exists ...
	I0927 00:15:26.381915  541317 cli_runner.go:164] Run: docker container inspect addons-305811 --format={{.State.Status}}
	I0927 00:15:26.383704  541317 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0927 00:15:26.380916  541317 addons.go:69] Setting gcp-auth=true in profile "addons-305811"
	I0927 00:15:26.385493  541317 mustload.go:65] Loading cluster: addons-305811
	I0927 00:15:26.385713  541317 config.go:182] Loaded profile config "addons-305811": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I0927 00:15:26.385978  541317 cli_runner.go:164] Run: docker container inspect addons-305811 --format={{.State.Status}}
	I0927 00:15:26.404356  541317 out.go:177]   - Using image gcr.io/cloud-spanner-emulator/emulator:1.5.24
	I0927 00:15:26.406960  541317 addons.go:431] installing /etc/kubernetes/addons/deployment.yaml
	I0927 00:15:26.406985  541317 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/deployment.yaml (1004 bytes)
	I0927 00:15:26.407042  541317 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-305811
	I0927 00:15:26.417258  541317 addons.go:234] Setting addon storage-provisioner-rancher=true in "addons-305811"
	I0927 00:15:26.417299  541317 host.go:66] Checking if "addons-305811" exists ...
	I0927 00:15:26.417613  541317 cli_runner.go:164] Run: docker container inspect addons-305811 --format={{.State.Status}}
	I0927 00:15:26.418074  541317 cli_runner.go:164] Run: docker container inspect addons-305811 --format={{.State.Status}}
	I0927 00:15:26.421795  541317 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0927 00:15:26.423104  541317 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0927 00:15:26.423126  541317 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0927 00:15:26.423189  541317 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-305811
	I0927 00:15:26.426124  541317 out.go:177]   - Using image registry.k8s.io/metrics-server/metrics-server:v0.7.2
	I0927 00:15:26.426155  541317 out.go:177]   - Using image ghcr.io/inspektor-gadget/inspektor-gadget:v0.32.0
	I0927 00:15:26.427913  541317 addons.go:431] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0927 00:15:26.427942  541317 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0927 00:15:26.428006  541317 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-305811
	I0927 00:15:26.428194  541317 addons.go:431] installing /etc/kubernetes/addons/ig-namespace.yaml
	I0927 00:15:26.428326  541317 ssh_runner.go:362] scp inspektor-gadget/ig-namespace.yaml --> /etc/kubernetes/addons/ig-namespace.yaml (55 bytes)
	I0927 00:15:26.428385  541317 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-305811
	I0927 00:15:26.445068  541317 out.go:177]   - Using image docker.io/marcnuri/yakd:0.0.5
	I0927 00:15:26.445504  541317 out.go:177]   - Using image gcr.io/k8s-minikube/kube-registry-proxy:0.0.7
	I0927 00:15:26.446346  541317 addons.go:431] installing /etc/kubernetes/addons/yakd-ns.yaml
	I0927 00:15:26.447385  541317 ssh_runner.go:362] scp yakd/yakd-ns.yaml --> /etc/kubernetes/addons/yakd-ns.yaml (171 bytes)
	I0927 00:15:26.447454  541317 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-305811
	I0927 00:15:26.449020  541317 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-resizer:v1.6.0
	I0927 00:15:26.450364  541317 out.go:177]   - Using image docker.io/registry:2.8.3
	I0927 00:15:26.450477  541317 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-snapshotter:v6.1.0
	I0927 00:15:26.451766  541317 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-provisioner:v3.3.0
	I0927 00:15:26.451989  541317 addons.go:431] installing /etc/kubernetes/addons/registry-rc.yaml
	I0927 00:15:26.452005  541317 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-rc.yaml (860 bytes)
	I0927 00:15:26.452050  541317 addons.go:234] Setting addon default-storageclass=true in "addons-305811"
	I0927 00:15:26.452069  541317 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-305811
	I0927 00:15:26.452091  541317 host.go:66] Checking if "addons-305811" exists ...
	I0927 00:15:26.452632  541317 cli_runner.go:164] Run: docker container inspect addons-305811 --format={{.State.Status}}
	I0927 00:15:26.455109  541317 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-attacher:v4.0.0
	I0927 00:15:26.458331  541317 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-external-health-monitor-controller:v0.7.0
	I0927 00:15:26.459716  541317 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-node-driver-registrar:v2.6.0
	I0927 00:15:26.460825  541317 out.go:177]   - Using image registry.k8s.io/sig-storage/hostpathplugin:v1.9.0
	I0927 00:15:26.466370  541317 out.go:177]   - Using image registry.k8s.io/sig-storage/livenessprobe:v2.8.0
	I0927 00:15:26.468107  541317 addons.go:431] installing /etc/kubernetes/addons/rbac-external-attacher.yaml
	I0927 00:15:26.468139  541317 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-attacher.yaml --> /etc/kubernetes/addons/rbac-external-attacher.yaml (3073 bytes)
	I0927 00:15:26.468211  541317 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-305811
	I0927 00:15:26.476252  541317 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33163 SSHKeyPath:/home/jenkins/minikube-integration/19711-533157/.minikube/machines/addons-305811/id_rsa Username:docker}
	I0927 00:15:26.480628  541317 out.go:177]   - Using image docker.io/rancher/local-path-provisioner:v0.0.22
	I0927 00:15:26.481855  541317 out.go:177]   - Using image docker.io/busybox:stable
	I0927 00:15:26.483099  541317 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner-rancher.yaml
	I0927 00:15:26.483119  541317 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner-rancher.yaml (3113 bytes)
	I0927 00:15:26.483122  541317 out.go:177]   - Using image docker.io/volcanosh/vc-webhook-manager:v1.10.0
	I0927 00:15:26.483168  541317 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-305811
	I0927 00:15:26.485277  541317 out.go:177]   - Using image docker.io/volcanosh/vc-controller-manager:v1.10.0
	I0927 00:15:26.485905  541317 host.go:66] Checking if "addons-305811" exists ...
	I0927 00:15:26.487565  541317 out.go:177]   - Using image docker.io/volcanosh/vc-scheduler:v1.10.0
	I0927 00:15:26.489015  541317 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33163 SSHKeyPath:/home/jenkins/minikube-integration/19711-533157/.minikube/machines/addons-305811/id_rsa Username:docker}
	I0927 00:15:26.490552  541317 addons.go:431] installing /etc/kubernetes/addons/volcano-deployment.yaml
	I0927 00:15:26.490574  541317 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/volcano-deployment.yaml (471825 bytes)
	I0927 00:15:26.490627  541317 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-305811
	I0927 00:15:26.490853  541317 out.go:177]   - Using image gcr.io/k8s-minikube/minikube-ingress-dns:0.0.3
	I0927 00:15:26.492636  541317 out.go:177]   - Using image registry.k8s.io/sig-storage/snapshot-controller:v6.1.0
	I0927 00:15:26.492731  541317 addons.go:431] installing /etc/kubernetes/addons/ingress-dns-pod.yaml
	I0927 00:15:26.492756  541317 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-dns-pod.yaml (2442 bytes)
	I0927 00:15:26.492815  541317 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-305811
	I0927 00:15:26.493635  541317 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml
	I0927 00:15:26.493661  541317 ssh_runner.go:362] scp volumesnapshots/csi-hostpath-snapshotclass.yaml --> /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml (934 bytes)
	I0927 00:15:26.493714  541317 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-305811
	I0927 00:15:26.494681  541317 out.go:177]   - Using image nvcr.io/nvidia/k8s-device-plugin:v0.16.2
	I0927 00:15:26.496279  541317 addons.go:431] installing /etc/kubernetes/addons/nvidia-device-plugin.yaml
	I0927 00:15:26.496300  541317 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/nvidia-device-plugin.yaml (1966 bytes)
	I0927 00:15:26.496360  541317 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-305811
	I0927 00:15:26.496527  541317 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33163 SSHKeyPath:/home/jenkins/minikube-integration/19711-533157/.minikube/machines/addons-305811/id_rsa Username:docker}
	I0927 00:15:26.502216  541317 out.go:177]   - Using image registry.k8s.io/ingress-nginx/controller:v1.11.2
	I0927 00:15:26.506257  541317 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33163 SSHKeyPath:/home/jenkins/minikube-integration/19711-533157/.minikube/machines/addons-305811/id_rsa Username:docker}
	I0927 00:15:26.506349  541317 out.go:177]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.4.3
	I0927 00:15:26.507982  541317 out.go:177]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.4.3
	I0927 00:15:26.509663  541317 addons.go:431] installing /etc/kubernetes/addons/ingress-deploy.yaml
	I0927 00:15:26.509688  541317 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-deploy.yaml (16078 bytes)
	I0927 00:15:26.509755  541317 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-305811
	I0927 00:15:26.509998  541317 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33163 SSHKeyPath:/home/jenkins/minikube-integration/19711-533157/.minikube/machines/addons-305811/id_rsa Username:docker}
	I0927 00:15:26.525223  541317 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33163 SSHKeyPath:/home/jenkins/minikube-integration/19711-533157/.minikube/machines/addons-305811/id_rsa Username:docker}
	I0927 00:15:26.526896  541317 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33163 SSHKeyPath:/home/jenkins/minikube-integration/19711-533157/.minikube/machines/addons-305811/id_rsa Username:docker}
	I0927 00:15:26.528256  541317 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33163 SSHKeyPath:/home/jenkins/minikube-integration/19711-533157/.minikube/machines/addons-305811/id_rsa Username:docker}
	I0927 00:15:26.538665  541317 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33163 SSHKeyPath:/home/jenkins/minikube-integration/19711-533157/.minikube/machines/addons-305811/id_rsa Username:docker}
	I0927 00:15:26.538660  541317 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33163 SSHKeyPath:/home/jenkins/minikube-integration/19711-533157/.minikube/machines/addons-305811/id_rsa Username:docker}
	I0927 00:15:26.543307  541317 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33163 SSHKeyPath:/home/jenkins/minikube-integration/19711-533157/.minikube/machines/addons-305811/id_rsa Username:docker}
	I0927 00:15:26.549531  541317 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33163 SSHKeyPath:/home/jenkins/minikube-integration/19711-533157/.minikube/machines/addons-305811/id_rsa Username:docker}
	I0927 00:15:26.556451  541317 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I0927 00:15:26.556479  541317 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0927 00:15:26.556541  541317 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-305811
	I0927 00:15:26.563342  541317 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33163 SSHKeyPath:/home/jenkins/minikube-integration/19711-533157/.minikube/machines/addons-305811/id_rsa Username:docker}
	I0927 00:15:26.576434  541317 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33163 SSHKeyPath:/home/jenkins/minikube-integration/19711-533157/.minikube/machines/addons-305811/id_rsa Username:docker}
	W0927 00:15:26.624040  541317 sshutil.go:64] dial failure (will retry): ssh: handshake failed: EOF
	I0927 00:15:26.624084  541317 retry.go:31] will retry after 358.209753ms: ssh: handshake failed: EOF
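The handshake failure above is treated as transient: retry.go waits a jittered delay (358ms here) and dials again, since the node's sshd can be briefly overwhelmed while a dozen addon installers open connections at once. A generic Go sketch of that retry shape (the exact backoff policy in minikube's retry package may differ; the attempt count and base delay are assumptions):

	package main

	import (
		"errors"
		"fmt"
		"math/rand"
		"time"
	)

	func retry(attempts int, base time.Duration, op func() error) error {
		var err error
		for i := 0; i < attempts; i++ {
			if err = op(); err == nil {
				return nil
			}
			d := base + time.Duration(rand.Int63n(int64(base))) // jitter in [base, 2*base)
			fmt.Printf("will retry after %v: %v\n", d, err)
			time.Sleep(d)
		}
		return err
	}

	func main() {
		calls := 0
		_ = retry(3, 300*time.Millisecond, func() error {
			calls++
			if calls < 3 {
				return errors.New("ssh: handshake failed: EOF")
			}
			return nil
		})
	}
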
	I0927 00:15:26.927333  541317 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0927 00:15:26.927463  541317 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.49.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
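The sed pipeline above rewrites the Corefile held in CoreDNS's ConfigMap so that host.minikube.internal resolves to the host-side gateway. Reconstructing from the two sed expressions, the edited server block should end up containing a fragment along these lines (a reconstruction, not a dump from the cluster; "..." stands for the untouched plugins in between):

	log
	errors
	...
	hosts {
	   192.168.49.1 host.minikube.internal
	   fallthrough
	}
	forward . /etc/resolv.conf

The fallthrough directive is the important part: any name other than host.minikube.internal falls through the hosts plugin to the normal forward path.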
	I0927 00:15:26.928966  541317 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/deployment.yaml
	I0927 00:15:26.938366  541317 addons.go:431] installing /etc/kubernetes/addons/ig-serviceaccount.yaml
	I0927 00:15:26.938469  541317 ssh_runner.go:362] scp inspektor-gadget/ig-serviceaccount.yaml --> /etc/kubernetes/addons/ig-serviceaccount.yaml (80 bytes)
	I0927 00:15:26.945395  541317 addons.go:431] installing /etc/kubernetes/addons/rbac-hostpath.yaml
	I0927 00:15:26.945425  541317 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-hostpath.yaml --> /etc/kubernetes/addons/rbac-hostpath.yaml (4266 bytes)
	I0927 00:15:27.022733  541317 addons.go:431] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0927 00:15:27.022769  541317 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1907 bytes)
	I0927 00:15:27.123739  541317 addons.go:431] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml
	I0927 00:15:27.123776  541317 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshotclasses.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml (6471 bytes)
	I0927 00:15:27.130643  541317 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner-rancher.yaml
	I0927 00:15:27.139459  541317 addons.go:431] installing /etc/kubernetes/addons/yakd-sa.yaml
	I0927 00:15:27.139546  541317 ssh_runner.go:362] scp yakd/yakd-sa.yaml --> /etc/kubernetes/addons/yakd-sa.yaml (247 bytes)
	I0927 00:15:27.141259  541317 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0927 00:15:27.142502  541317 addons.go:431] installing /etc/kubernetes/addons/ig-role.yaml
	I0927 00:15:27.142554  541317 ssh_runner.go:362] scp inspektor-gadget/ig-role.yaml --> /etc/kubernetes/addons/ig-role.yaml (210 bytes)
	I0927 00:15:27.224562  541317 addons.go:431] installing /etc/kubernetes/addons/registry-svc.yaml
	I0927 00:15:27.224655  541317 ssh_runner.go:362] scp registry/registry-svc.yaml --> /etc/kubernetes/addons/registry-svc.yaml (398 bytes)
	I0927 00:15:27.427298  541317 addons.go:431] installing /etc/kubernetes/addons/yakd-crb.yaml
	I0927 00:15:27.427394  541317 ssh_runner.go:362] scp yakd/yakd-crb.yaml --> /etc/kubernetes/addons/yakd-crb.yaml (422 bytes)
	I0927 00:15:27.431709  541317 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/volcano-deployment.yaml
	I0927 00:15:27.436468  541317 addons.go:431] installing /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml
	I0927 00:15:27.436565  541317 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-health-monitor-controller.yaml --> /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml (3038 bytes)
	I0927 00:15:27.439278  541317 addons.go:431] installing /etc/kubernetes/addons/ig-rolebinding.yaml
	I0927 00:15:27.439369  541317 ssh_runner.go:362] scp inspektor-gadget/ig-rolebinding.yaml --> /etc/kubernetes/addons/ig-rolebinding.yaml (244 bytes)
	I0927 00:15:27.441399  541317 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/nvidia-device-plugin.yaml
	I0927 00:15:27.441775  541317 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml
	I0927 00:15:27.522633  541317 addons.go:431] installing /etc/kubernetes/addons/registry-proxy.yaml
	I0927 00:15:27.522667  541317 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-proxy.yaml (947 bytes)
	I0927 00:15:27.622386  541317 addons.go:431] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml
	I0927 00:15:27.622499  541317 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshotcontents.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml (23126 bytes)
	I0927 00:15:27.629583  541317 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml
	I0927 00:15:27.643205  541317 addons.go:431] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0927 00:15:27.643291  541317 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0927 00:15:27.736913  541317 addons.go:431] installing /etc/kubernetes/addons/yakd-svc.yaml
	I0927 00:15:27.737004  541317 ssh_runner.go:362] scp yakd/yakd-svc.yaml --> /etc/kubernetes/addons/yakd-svc.yaml (412 bytes)
	I0927 00:15:27.838159  541317 addons.go:431] installing /etc/kubernetes/addons/ig-clusterrole.yaml
	I0927 00:15:27.838248  541317 ssh_runner.go:362] scp inspektor-gadget/ig-clusterrole.yaml --> /etc/kubernetes/addons/ig-clusterrole.yaml (1485 bytes)
	I0927 00:15:27.838894  541317 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0927 00:15:27.839863  541317 addons.go:431] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml
	I0927 00:15:27.839924  541317 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshots.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml (19582 bytes)
	I0927 00:15:27.921254  541317 addons.go:431] installing /etc/kubernetes/addons/rbac-external-provisioner.yaml
	I0927 00:15:27.921310  541317 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-provisioner.yaml --> /etc/kubernetes/addons/rbac-external-provisioner.yaml (4442 bytes)
	I0927 00:15:27.934422  541317 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml
	I0927 00:15:28.327083  541317 addons.go:431] installing /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml
	I0927 00:15:28.327110  541317 ssh_runner.go:362] scp volumesnapshots/rbac-volume-snapshot-controller.yaml --> /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml (3545 bytes)
	I0927 00:15:28.425000  541317 addons.go:431] installing /etc/kubernetes/addons/ig-clusterrolebinding.yaml
	I0927 00:15:28.425034  541317 ssh_runner.go:362] scp inspektor-gadget/ig-clusterrolebinding.yaml --> /etc/kubernetes/addons/ig-clusterrolebinding.yaml (274 bytes)
	I0927 00:15:28.433076  541317 addons.go:431] installing /etc/kubernetes/addons/rbac-external-resizer.yaml
	I0927 00:15:28.433112  541317 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-resizer.yaml --> /etc/kubernetes/addons/rbac-external-resizer.yaml (2943 bytes)
	I0927 00:15:28.440341  541317 addons.go:431] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0927 00:15:28.440423  541317 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0927 00:15:28.534338  541317 addons.go:431] installing /etc/kubernetes/addons/yakd-dp.yaml
	I0927 00:15:28.534418  541317 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/yakd-dp.yaml (2017 bytes)
	I0927 00:15:28.637479  541317 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0927 00:15:28.640269  541317 addons.go:431] installing /etc/kubernetes/addons/ig-crd.yaml
	I0927 00:15:28.640361  541317 ssh_runner.go:362] scp inspektor-gadget/ig-crd.yaml --> /etc/kubernetes/addons/ig-crd.yaml (5216 bytes)
	I0927 00:15:29.041482  541317 addons.go:431] installing /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I0927 00:15:29.041515  541317 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml (1475 bytes)
	I0927 00:15:29.123462  541317 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/yakd-ns.yaml -f /etc/kubernetes/addons/yakd-sa.yaml -f /etc/kubernetes/addons/yakd-crb.yaml -f /etc/kubernetes/addons/yakd-svc.yaml -f /etc/kubernetes/addons/yakd-dp.yaml
	I0927 00:15:29.125714  541317 addons.go:431] installing /etc/kubernetes/addons/rbac-external-snapshotter.yaml
	I0927 00:15:29.125804  541317 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-snapshotter.yaml --> /etc/kubernetes/addons/rbac-external-snapshotter.yaml (3149 bytes)
	I0927 00:15:29.623915  541317 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-attacher.yaml
	I0927 00:15:29.624012  541317 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-attacher.yaml (2143 bytes)
	I0927 00:15:29.630769  541317 addons.go:431] installing /etc/kubernetes/addons/ig-daemonset.yaml
	I0927 00:15:29.630800  541317 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-daemonset.yaml (7735 bytes)
	I0927 00:15:29.638836  541317 ssh_runner.go:235] Completed: sudo systemctl start kubelet: (2.71145474s)
	I0927 00:15:29.639871  541317 node_ready.go:35] waiting up to 6m0s for node "addons-305811" to be "Ready" ...
	I0927 00:15:29.640154  541317 ssh_runner.go:235] Completed: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.49.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -": (2.712668815s)
	I0927 00:15:29.640180  541317 start.go:971] {"host.minikube.internal": 192.168.49.1} host record injected into CoreDNS's ConfigMap
	I0927 00:15:29.641314  541317 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/deployment.yaml: (2.712319996s)
	I0927 00:15:29.643890  541317 node_ready.go:49] node "addons-305811" has status "Ready":"True"
	I0927 00:15:29.643917  541317 node_ready.go:38] duration metric: took 4.002354ms for node "addons-305811" to be "Ready" ...
	I0927 00:15:29.643928  541317 pod_ready.go:36] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0927 00:15:29.725955  541317 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I0927 00:15:29.736885  541317 pod_ready.go:79] waiting up to 6m0s for pod "coredns-7c65d6cfc9-6xq2h" in "kube-system" namespace to be "Ready" ...
	I0927 00:15:30.030866  541317 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml
	I0927 00:15:30.030904  541317 ssh_runner.go:362] scp csi-hostpath-driver/deploy/csi-hostpath-driverinfo.yaml --> /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml (1274 bytes)
	I0927 00:15:30.037873  541317 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/ig-namespace.yaml -f /etc/kubernetes/addons/ig-serviceaccount.yaml -f /etc/kubernetes/addons/ig-role.yaml -f /etc/kubernetes/addons/ig-rolebinding.yaml -f /etc/kubernetes/addons/ig-clusterrole.yaml -f /etc/kubernetes/addons/ig-clusterrolebinding.yaml -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-daemonset.yaml
	I0927 00:15:30.141850  541317 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-plugin.yaml
	I0927 00:15:30.141890  541317 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-plugin.yaml (8201 bytes)
	I0927 00:15:30.221945  541317 kapi.go:214] "coredns" deployment in "kube-system" namespace and "addons-305811" context rescaled to 1 replicas
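The rescale above trims the default two coredns replicas down to one, which is plenty for a single-node cluster and is why two coredns pods were being waited on just before this. A client-go sketch of such a rescale (minikube's kapi.go may implement it differently; this is an illustration against the kubeconfig path from the log):

	package main

	import (
		"context"

		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/client-go/kubernetes"
		"k8s.io/client-go/tools/clientcmd"
	)

	func main() {
		cfg, err := clientcmd.BuildConfigFromFlags("", "/var/lib/minikube/kubeconfig")
		if err != nil {
			panic(err)
		}
		cs, err := kubernetes.NewForConfig(cfg)
		if err != nil {
			panic(err)
		}
		ctx := context.Background()
		scale, err := cs.AppsV1().Deployments("kube-system").GetScale(ctx, "coredns", metav1.GetOptions{})
		if err != nil {
			panic(err)
		}
		scale.Spec.Replicas = 1 // one DNS replica suffices on a single node
		if _, err := cs.AppsV1().Deployments("kube-system").UpdateScale(ctx, "coredns", scale, metav1.UpdateOptions{}); err != nil {
			panic(err)
		}
	}
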
	I0927 00:15:30.823698  541317 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-resizer.yaml
	I0927 00:15:30.823731  541317 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-resizer.yaml (2191 bytes)
	I0927 00:15:31.433043  541317 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
	I0927 00:15:31.433078  541317 ssh_runner.go:362] scp csi-hostpath-driver/deploy/csi-hostpath-storageclass.yaml --> /etc/kubernetes/addons/csi-hostpath-storageclass.yaml (846 bytes)
	I0927 00:15:31.743859  541317 pod_ready.go:103] pod "coredns-7c65d6cfc9-6xq2h" in "kube-system" namespace has status "Ready":"False"
	I0927 00:15:31.829124  541317 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
	I0927 00:15:31.922493  541317 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner-rancher.yaml: (4.791806475s)
	I0927 00:15:31.922899  541317 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (4.781570209s)
	I0927 00:15:32.839150  541317 pod_ready.go:93] pod "coredns-7c65d6cfc9-6xq2h" in "kube-system" namespace has status "Ready":"True"
	I0927 00:15:32.839246  541317 pod_ready.go:82] duration metric: took 3.10232007s for pod "coredns-7c65d6cfc9-6xq2h" in "kube-system" namespace to be "Ready" ...
	I0927 00:15:32.839274  541317 pod_ready.go:79] waiting up to 6m0s for pod "coredns-7c65d6cfc9-xsg85" in "kube-system" namespace to be "Ready" ...
	I0927 00:15:32.928304  541317 pod_ready.go:93] pod "coredns-7c65d6cfc9-xsg85" in "kube-system" namespace has status "Ready":"True"
	I0927 00:15:32.928634  541317 pod_ready.go:82] duration metric: took 89.316308ms for pod "coredns-7c65d6cfc9-xsg85" in "kube-system" namespace to be "Ready" ...
	I0927 00:15:32.928696  541317 pod_ready.go:79] waiting up to 6m0s for pod "etcd-addons-305811" in "kube-system" namespace to be "Ready" ...
	I0927 00:15:32.934708  541317 pod_ready.go:93] pod "etcd-addons-305811" in "kube-system" namespace has status "Ready":"True"
	I0927 00:15:32.934737  541317 pod_ready.go:82] duration metric: took 6.017771ms for pod "etcd-addons-305811" in "kube-system" namespace to be "Ready" ...
	I0927 00:15:32.934751  541317 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-addons-305811" in "kube-system" namespace to be "Ready" ...
	I0927 00:15:32.941401  541317 pod_ready.go:93] pod "kube-apiserver-addons-305811" in "kube-system" namespace has status "Ready":"True"
	I0927 00:15:32.941436  541317 pod_ready.go:82] duration metric: took 6.676485ms for pod "kube-apiserver-addons-305811" in "kube-system" namespace to be "Ready" ...
	I0927 00:15:32.941450  541317 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-addons-305811" in "kube-system" namespace to be "Ready" ...
	I0927 00:15:33.528786  541317 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_application_credentials.json (162 bytes)
	I0927 00:15:33.528968  541317 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-305811
	I0927 00:15:33.554321  541317 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33163 SSHKeyPath:/home/jenkins/minikube-integration/19711-533157/.minikube/machines/addons-305811/id_rsa Username:docker}
	I0927 00:15:34.028370  541317 pod_ready.go:93] pod "kube-controller-manager-addons-305811" in "kube-system" namespace has status "Ready":"True"
	I0927 00:15:34.028400  541317 pod_ready.go:82] duration metric: took 1.086940969s for pod "kube-controller-manager-addons-305811" in "kube-system" namespace to be "Ready" ...
	I0927 00:15:34.028413  541317 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-mndlk" in "kube-system" namespace to be "Ready" ...
	I0927 00:15:34.037828  541317 pod_ready.go:93] pod "kube-proxy-mndlk" in "kube-system" namespace has status "Ready":"True"
	I0927 00:15:34.037924  541317 pod_ready.go:82] duration metric: took 9.498559ms for pod "kube-proxy-mndlk" in "kube-system" namespace to be "Ready" ...
	I0927 00:15:34.037952  541317 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-addons-305811" in "kube-system" namespace to be "Ready" ...
	I0927 00:15:34.128572  541317 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_cloud_project (12 bytes)
	I0927 00:15:34.346415  541317 pod_ready.go:93] pod "kube-scheduler-addons-305811" in "kube-system" namespace has status "Ready":"True"
	I0927 00:15:34.346439  541317 pod_ready.go:82] duration metric: took 308.469097ms for pod "kube-scheduler-addons-305811" in "kube-system" namespace to be "Ready" ...
	I0927 00:15:34.346450  541317 pod_ready.go:79] waiting up to 6m0s for pod "nvidia-device-plugin-daemonset-ffk56" in "kube-system" namespace to be "Ready" ...
	I0927 00:15:34.541430  541317 addons.go:234] Setting addon gcp-auth=true in "addons-305811"
	I0927 00:15:34.541566  541317 host.go:66] Checking if "addons-305811" exists ...
	I0927 00:15:34.542341  541317 cli_runner.go:164] Run: docker container inspect addons-305811 --format={{.State.Status}}
	I0927 00:15:34.561507  541317 ssh_runner.go:195] Run: cat /var/lib/minikube/google_application_credentials.json
	I0927 00:15:34.561567  541317 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-305811
	I0927 00:15:34.578589  541317 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33163 SSHKeyPath:/home/jenkins/minikube-integration/19711-533157/.minikube/machines/addons-305811/id_rsa Username:docker}
	I0927 00:15:36.436075  541317 pod_ready.go:103] pod "nvidia-device-plugin-daemonset-ffk56" in "kube-system" namespace has status "Ready":"False"
	I0927 00:15:38.938193  541317 pod_ready.go:103] pod "nvidia-device-plugin-daemonset-ffk56" in "kube-system" namespace has status "Ready":"False"
	I0927 00:15:39.539509  541317 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/volcano-deployment.yaml: (12.10770372s)
	I0927 00:15:39.539592  541317 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml: (12.097757454s)
	I0927 00:15:39.539641  541317 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/nvidia-device-plugin.yaml: (12.098158903s)
	I0927 00:15:39.539797  541317 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml: (11.910099967s)
	I0927 00:15:39.539821  541317 addons.go:475] Verifying addon ingress=true in "addons-305811"
	I0927 00:15:39.540057  541317 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (11.701104313s)
	I0927 00:15:39.540351  541317 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml: (11.605894527s)
	I0927 00:15:39.540371  541317 addons.go:475] Verifying addon registry=true in "addons-305811"
	I0927 00:15:39.541010  541317 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (9.815007256s)
	W0927 00:15:39.541049  541317 addons.go:457] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
	stdout:
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
	serviceaccount/snapshot-controller created
	clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
	clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
	role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	deployment.apps/snapshot-controller created
	
	stderr:
	error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
	ensure CRDs are installed first
	I0927 00:15:39.541069  541317 retry.go:31] will retry after 162.421801ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
	stdout:
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
	serviceaccount/snapshot-controller created
	clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
	clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
	role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	deployment.apps/snapshot-controller created
	
	stderr:
	error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
	ensure CRDs are installed first
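The two failures above are a CRD establishment race rather than a bad manifest: the batched `kubectl apply` creates the volumesnapshotclasses CRD and a VolumeSnapshotClass object in the same invocation, and the API server has not registered the new kind by the time the custom resource arrives ("no matches for kind ... ensure CRDs are installed first"). minikube handles this by retrying, and shortly below by re-applying with `--force`. A minimal sketch of the conventional fix, splitting the apply and waiting for the CRD to be Established first (file paths from the log; the 60s timeout is an assumption):

	kubectl apply -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml
	# block until the API server has registered the new kind
	kubectl wait --for=condition=Established --timeout=60s \
	  crd/volumesnapshotclasses.snapshot.storage.k8s.io
	kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml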
	I0927 00:15:39.541154  541317 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/ig-namespace.yaml -f /etc/kubernetes/addons/ig-serviceaccount.yaml -f /etc/kubernetes/addons/ig-role.yaml -f /etc/kubernetes/addons/ig-rolebinding.yaml -f /etc/kubernetes/addons/ig-clusterrole.yaml -f /etc/kubernetes/addons/ig-clusterrolebinding.yaml -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-daemonset.yaml: (9.503246255s)
	I0927 00:15:39.541421  541317 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/yakd-ns.yaml -f /etc/kubernetes/addons/yakd-sa.yaml -f /etc/kubernetes/addons/yakd-crb.yaml -f /etc/kubernetes/addons/yakd-svc.yaml -f /etc/kubernetes/addons/yakd-dp.yaml: (10.417408092s)
	I0927 00:15:39.541561  541317 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (10.903233359s)
	I0927 00:15:39.541604  541317 addons.go:475] Verifying addon metrics-server=true in "addons-305811"
	I0927 00:15:39.542851  541317 out.go:177] * Verifying registry addon...
	I0927 00:15:39.542944  541317 out.go:177] * Verifying ingress addon...
	I0927 00:15:39.543608  541317 out.go:177] * To access YAKD - Kubernetes Dashboard, wait for Pod to be ready and run the following command:
	
		minikube -p addons-305811 service yakd-dashboard -n yakd-dashboard
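The service command above resolves yakd-dashboard to a reachable endpoint once its Pod is Ready; with the Docker driver it is often handier to print the URL instead of opening a browser (a usage sketch using minikube's standard --url flag):

	minikube -p addons-305811 service yakd-dashboard -n yakd-dashboard --url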
	
	I0927 00:15:39.545630  541317 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=registry" in ns "kube-system" ...
	I0927 00:15:39.546885  541317 kapi.go:75] Waiting for pod with label "app.kubernetes.io/name=ingress-nginx" in ns "ingress-nginx" ...
	I0927 00:15:39.628994  541317 kapi.go:86] Found 2 Pods for label selector kubernetes.io/minikube-addons=registry
	I0927 00:15:39.629079  541317 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0927 00:15:39.631046  541317 kapi.go:86] Found 3 Pods for label selector app.kubernetes.io/name=ingress-nginx
	I0927 00:15:39.631076  541317 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0927 00:15:39.704190  541317 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I0927 00:15:40.125668  541317 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0927 00:15:40.129770  541317 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0927 00:15:40.341081  541317 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml: (8.511773272s)
	I0927 00:15:40.341129  541317 addons.go:475] Verifying addon csi-hostpath-driver=true in "addons-305811"
	I0927 00:15:40.341232  541317 ssh_runner.go:235] Completed: cat /var/lib/minikube/google_application_credentials.json: (5.779692297s)
	I0927 00:15:40.344281  541317 out.go:177] * Verifying csi-hostpath-driver addon...
	I0927 00:15:40.344286  541317 out.go:177]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.4.3
	I0927 00:15:40.345558  541317 out.go:177]   - Using image gcr.io/k8s-minikube/gcp-auth-webhook:v0.1.2
	I0927 00:15:40.346658  541317 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=csi-hostpath-driver" in ns "kube-system" ...
	I0927 00:15:40.346696  541317 addons.go:431] installing /etc/kubernetes/addons/gcp-auth-ns.yaml
	I0927 00:15:40.346715  541317 ssh_runner.go:362] scp gcp-auth/gcp-auth-ns.yaml --> /etc/kubernetes/addons/gcp-auth-ns.yaml (700 bytes)
	I0927 00:15:40.430872  541317 kapi.go:86] Found 3 Pods for label selector kubernetes.io/minikube-addons=csi-hostpath-driver
	I0927 00:15:40.430904  541317 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0927 00:15:40.528969  541317 addons.go:431] installing /etc/kubernetes/addons/gcp-auth-service.yaml
	I0927 00:15:40.528995  541317 ssh_runner.go:362] scp gcp-auth/gcp-auth-service.yaml --> /etc/kubernetes/addons/gcp-auth-service.yaml (788 bytes)
	I0927 00:15:40.550160  541317 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0927 00:15:40.553106  541317 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0927 00:15:40.624322  541317 addons.go:431] installing /etc/kubernetes/addons/gcp-auth-webhook.yaml
	I0927 00:15:40.624356  541317 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/gcp-auth-webhook.yaml (5421 bytes)
	I0927 00:15:40.724415  541317 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/gcp-auth-ns.yaml -f /etc/kubernetes/addons/gcp-auth-service.yaml -f /etc/kubernetes/addons/gcp-auth-webhook.yaml
	I0927 00:15:40.926241  541317 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0927 00:15:41.126269  541317 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0927 00:15:41.126477  541317 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0927 00:15:41.353149  541317 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0927 00:15:41.423545  541317 pod_ready.go:103] pod "nvidia-device-plugin-daemonset-ffk56" in "kube-system" namespace has status "Ready":"False"
	I0927 00:15:41.551072  541317 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0927 00:15:41.651880  541317 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0927 00:15:41.852838  541317 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0927 00:15:42.054545  541317 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0927 00:15:42.055193  541317 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0927 00:15:42.246366  541317 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (2.542094165s)
	I0927 00:15:42.246470  541317 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/gcp-auth-ns.yaml -f /etc/kubernetes/addons/gcp-auth-service.yaml -f /etc/kubernetes/addons/gcp-auth-webhook.yaml: (1.522011267s)
	I0927 00:15:42.248818  541317 addons.go:475] Verifying addon gcp-auth=true in "addons-305811"
	I0927 00:15:42.250641  541317 out.go:177] * Verifying gcp-auth addon...
	I0927 00:15:42.252760  541317 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=gcp-auth" in ns "gcp-auth" ...
	I0927 00:15:42.255615  541317 kapi.go:86] Found 0 Pods for label selector kubernetes.io/minikube-addons=gcp-auth
	I0927 00:15:42.357819  541317 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0927 00:15:42.549774  541317 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0927 00:15:42.551316  541317 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0927 00:15:42.851321  541317 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0927 00:15:43.050271  541317 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0927 00:15:43.050617  541317 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0927 00:15:43.351212  541317 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0927 00:15:43.550046  541317 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0927 00:15:43.550446  541317 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0927 00:15:43.850387  541317 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0927 00:15:43.851694  541317 pod_ready.go:103] pod "nvidia-device-plugin-daemonset-ffk56" in "kube-system" namespace has status "Ready":"False"
	I0927 00:15:44.052551  541317 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0927 00:15:44.052971  541317 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0927 00:15:44.350393  541317 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0927 00:15:44.549898  541317 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0927 00:15:44.551626  541317 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0927 00:15:44.851095  541317 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0927 00:15:45.054148  541317 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0927 00:15:45.054604  541317 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0927 00:15:45.359133  541317 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0927 00:15:45.549897  541317 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0927 00:15:45.550355  541317 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0927 00:15:45.851546  541317 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0927 00:15:45.852963  541317 pod_ready.go:103] pod "nvidia-device-plugin-daemonset-ffk56" in "kube-system" namespace has status "Ready":"False"
	I0927 00:15:46.049238  541317 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0927 00:15:46.050318  541317 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0927 00:15:46.351621  541317 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0927 00:15:46.549538  541317 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0927 00:15:46.550585  541317 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0927 00:15:46.859700  541317 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0927 00:15:47.049591  541317 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0927 00:15:47.050572  541317 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0927 00:15:47.351569  541317 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0927 00:15:47.549911  541317 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0927 00:15:47.550411  541317 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0927 00:15:47.851364  541317 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0927 00:15:48.049723  541317 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0927 00:15:48.050187  541317 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0927 00:15:48.351305  541317 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0927 00:15:48.352889  541317 pod_ready.go:103] pod "nvidia-device-plugin-daemonset-ffk56" in "kube-system" namespace has status "Ready":"False"
	I0927 00:15:48.550667  541317 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0927 00:15:48.551010  541317 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0927 00:15:48.851616  541317 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0927 00:15:49.048732  541317 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0927 00:15:49.050668  541317 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0927 00:15:49.357676  541317 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0927 00:15:49.549741  541317 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0927 00:15:49.550148  541317 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0927 00:15:49.851348  541317 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0927 00:15:50.049317  541317 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0927 00:15:50.051097  541317 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0927 00:15:50.351993  541317 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0927 00:15:50.353425  541317 pod_ready.go:103] pod "nvidia-device-plugin-daemonset-ffk56" in "kube-system" namespace has status "Ready":"False"
	I0927 00:15:50.550108  541317 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0927 00:15:50.550758  541317 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0927 00:15:50.851221  541317 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0927 00:15:51.050110  541317 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0927 00:15:51.051266  541317 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0927 00:15:51.350877  541317 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0927 00:15:51.550726  541317 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0927 00:15:51.552114  541317 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0927 00:15:51.851592  541317 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0927 00:15:52.049777  541317 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0927 00:15:52.051136  541317 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0927 00:15:52.351018  541317 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0927 00:15:52.551032  541317 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0927 00:15:52.551274  541317 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0927 00:15:52.851570  541317 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0927 00:15:52.853226  541317 pod_ready.go:103] pod "nvidia-device-plugin-daemonset-ffk56" in "kube-system" namespace has status "Ready":"False"
	I0927 00:15:53.049810  541317 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0927 00:15:53.050705  541317 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0927 00:15:53.351064  541317 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0927 00:15:53.550570  541317 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0927 00:15:53.551153  541317 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0927 00:15:53.851494  541317 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0927 00:15:54.050151  541317 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0927 00:15:54.050580  541317 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0927 00:15:54.351167  541317 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0927 00:15:54.550676  541317 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0927 00:15:54.550899  541317 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0927 00:15:54.851154  541317 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0927 00:15:55.049884  541317 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0927 00:15:55.050503  541317 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0927 00:15:55.351088  541317 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0927 00:15:55.352059  541317 pod_ready.go:103] pod "nvidia-device-plugin-daemonset-ffk56" in "kube-system" namespace has status "Ready":"False"
	I0927 00:15:55.550077  541317 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0927 00:15:55.550420  541317 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0927 00:15:55.850779  541317 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0927 00:15:56.051589  541317 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0927 00:15:56.051885  541317 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0927 00:15:56.351256  541317 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0927 00:15:56.352258  541317 pod_ready.go:93] pod "nvidia-device-plugin-daemonset-ffk56" in "kube-system" namespace has status "Ready":"True"
	I0927 00:15:56.352282  541317 pod_ready.go:82] duration metric: took 22.005824701s for pod "nvidia-device-plugin-daemonset-ffk56" in "kube-system" namespace to be "Ready" ...
	I0927 00:15:56.352289  541317 pod_ready.go:39] duration metric: took 26.708349147s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0927 00:15:56.352314  541317 api_server.go:52] waiting for apiserver process to appear ...
	I0927 00:15:56.352372  541317 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0927 00:15:56.368849  541317 api_server.go:72] duration metric: took 29.992618305s to wait for apiserver process to appear ...
	I0927 00:15:56.368878  541317 api_server.go:88] waiting for apiserver healthz status ...
	I0927 00:15:56.368905  541317 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I0927 00:15:56.373196  541317 api_server.go:279] https://192.168.49.2:8443/healthz returned 200:
	ok
	I0927 00:15:56.374129  541317 api_server.go:141] control plane version: v1.31.1
	I0927 00:15:56.374155  541317 api_server.go:131] duration metric: took 5.268541ms to wait for apiserver health ...
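The healthz probe above can be reproduced by hand; `kubectl get --raw` reuses the kubeconfig's client certificate, which a bare curl against https://192.168.49.2:8443/healthz would otherwise have to supply (context name taken from this run):

	kubectl --context addons-305811 get --raw /healthz
	# ok
	kubectl --context addons-305811 version    # server reports v1.31.1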
	I0927 00:15:56.374166  541317 system_pods.go:43] waiting for kube-system pods to appear ...
	I0927 00:15:56.382229  541317 system_pods.go:59] 17 kube-system pods found
	I0927 00:15:56.382289  541317 system_pods.go:61] "coredns-7c65d6cfc9-xsg85" [cc6ecde0-a1f3-4de9-9a9e-ab49546f6d47] Running
	I0927 00:15:56.382300  541317 system_pods.go:61] "csi-hostpath-attacher-0" [6aceffc9-ae54-4f9c-9ab6-88c45b6ed352] Pending / Ready:ContainersNotReady (containers with unready status: [csi-attacher]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-attacher])
	I0927 00:15:56.382306  541317 system_pods.go:61] "csi-hostpath-resizer-0" [95b0b782-1860-4f43-8264-b11b363f90af] Pending / Ready:ContainersNotReady (containers with unready status: [csi-resizer]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-resizer])
	I0927 00:15:56.382314  541317 system_pods.go:61] "csi-hostpathplugin-tnmlb" [f5036697-f552-4527-a157-9db2dee8910d] Pending / Ready:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter])
	I0927 00:15:56.382321  541317 system_pods.go:61] "etcd-addons-305811" [401e4e74-6446-43da-b179-e1423e5391c2] Running
	I0927 00:15:56.382326  541317 system_pods.go:61] "kube-apiserver-addons-305811" [47eeb04c-2424-43b1-9827-91461fd6a1f6] Running
	I0927 00:15:56.382332  541317 system_pods.go:61] "kube-controller-manager-addons-305811" [ff28d494-fc4c-4dcd-8455-44b40749ad8a] Running
	I0927 00:15:56.382341  541317 system_pods.go:61] "kube-ingress-dns-minikube" [0e262871-23c0-43f1-b381-9bf4b6bcca66] Running
	I0927 00:15:56.382351  541317 system_pods.go:61] "kube-proxy-mndlk" [8e92c803-4d49-4816-9954-31ef0780ebd9] Running
	I0927 00:15:56.382357  541317 system_pods.go:61] "kube-scheduler-addons-305811" [9fc20ba2-7c28-4d58-9fe2-0942dc1f387d] Running
	I0927 00:15:56.382367  541317 system_pods.go:61] "metrics-server-84c5f94fbc-cczph" [1150d697-b11c-4f84-9466-d293436bf484] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0927 00:15:56.382373  541317 system_pods.go:61] "nvidia-device-plugin-daemonset-ffk56" [b40c20e3-9fd2-41f6-9116-036b5138f4d1] Running
	I0927 00:15:56.382379  541317 system_pods.go:61] "registry-66c9cd494c-t6q8h" [4342257a-a438-4180-ab98-bcb513d0521a] Running
	I0927 00:15:56.382391  541317 system_pods.go:61] "registry-proxy-ljn4r" [0fd59917-12f5-4b55-b4ed-fb31a0b82ca1] Pending / Ready:ContainersNotReady (containers with unready status: [registry-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-proxy])
	I0927 00:15:56.382400  541317 system_pods.go:61] "snapshot-controller-56fcc65765-5x7m6" [9c5c71ea-d826-4aeb-a56d-116204ffb507] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I0927 00:15:56.382410  541317 system_pods.go:61] "snapshot-controller-56fcc65765-qth4p" [82ff93bb-6e90-4ad1-8dc5-995f83e68be2] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I0927 00:15:56.382418  541317 system_pods.go:61] "storage-provisioner" [d9c69eee-5c0d-401c-bfc6-c1817614b7a0] Running
	I0927 00:15:56.382428  541317 system_pods.go:74] duration metric: took 8.253484ms to wait for pod list to return data ...
	I0927 00:15:56.382440  541317 default_sa.go:34] waiting for default service account to be created ...
	I0927 00:15:56.385429  541317 default_sa.go:45] found service account: "default"
	I0927 00:15:56.385456  541317 default_sa.go:55] duration metric: took 3.005332ms for default service account to be created ...
	I0927 00:15:56.385479  541317 system_pods.go:116] waiting for k8s-apps to be running ...
	I0927 00:15:56.394501  541317 system_pods.go:86] 17 kube-system pods found
	I0927 00:15:56.394533  541317 system_pods.go:89] "coredns-7c65d6cfc9-xsg85" [cc6ecde0-a1f3-4de9-9a9e-ab49546f6d47] Running
	I0927 00:15:56.394544  541317 system_pods.go:89] "csi-hostpath-attacher-0" [6aceffc9-ae54-4f9c-9ab6-88c45b6ed352] Pending / Ready:ContainersNotReady (containers with unready status: [csi-attacher]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-attacher])
	I0927 00:15:56.394550  541317 system_pods.go:89] "csi-hostpath-resizer-0" [95b0b782-1860-4f43-8264-b11b363f90af] Pending / Ready:ContainersNotReady (containers with unready status: [csi-resizer]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-resizer])
	I0927 00:15:56.394558  541317 system_pods.go:89] "csi-hostpathplugin-tnmlb" [f5036697-f552-4527-a157-9db2dee8910d] Pending / Ready:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter])
	I0927 00:15:56.394565  541317 system_pods.go:89] "etcd-addons-305811" [401e4e74-6446-43da-b179-e1423e5391c2] Running
	I0927 00:15:56.394571  541317 system_pods.go:89] "kube-apiserver-addons-305811" [47eeb04c-2424-43b1-9827-91461fd6a1f6] Running
	I0927 00:15:56.394577  541317 system_pods.go:89] "kube-controller-manager-addons-305811" [ff28d494-fc4c-4dcd-8455-44b40749ad8a] Running
	I0927 00:15:56.394587  541317 system_pods.go:89] "kube-ingress-dns-minikube" [0e262871-23c0-43f1-b381-9bf4b6bcca66] Running
	I0927 00:15:56.394593  541317 system_pods.go:89] "kube-proxy-mndlk" [8e92c803-4d49-4816-9954-31ef0780ebd9] Running
	I0927 00:15:56.394608  541317 system_pods.go:89] "kube-scheduler-addons-305811" [9fc20ba2-7c28-4d58-9fe2-0942dc1f387d] Running
	I0927 00:15:56.394616  541317 system_pods.go:89] "metrics-server-84c5f94fbc-cczph" [1150d697-b11c-4f84-9466-d293436bf484] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0927 00:15:56.394623  541317 system_pods.go:89] "nvidia-device-plugin-daemonset-ffk56" [b40c20e3-9fd2-41f6-9116-036b5138f4d1] Running
	I0927 00:15:56.394629  541317 system_pods.go:89] "registry-66c9cd494c-t6q8h" [4342257a-a438-4180-ab98-bcb513d0521a] Running
	I0927 00:15:56.394637  541317 system_pods.go:89] "registry-proxy-ljn4r" [0fd59917-12f5-4b55-b4ed-fb31a0b82ca1] Pending / Ready:ContainersNotReady (containers with unready status: [registry-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-proxy])
	I0927 00:15:56.394652  541317 system_pods.go:89] "snapshot-controller-56fcc65765-5x7m6" [9c5c71ea-d826-4aeb-a56d-116204ffb507] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I0927 00:15:56.394660  541317 system_pods.go:89] "snapshot-controller-56fcc65765-qth4p" [82ff93bb-6e90-4ad1-8dc5-995f83e68be2] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I0927 00:15:56.394669  541317 system_pods.go:89] "storage-provisioner" [d9c69eee-5c0d-401c-bfc6-c1817614b7a0] Running
	I0927 00:15:56.394679  541317 system_pods.go:126] duration metric: took 9.191743ms to wait for k8s-apps to be running ...
	I0927 00:15:56.394693  541317 system_svc.go:44] waiting for kubelet service to be running ....
	I0927 00:15:56.394748  541317 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0927 00:15:56.426236  541317 system_svc.go:56] duration metric: took 31.532029ms WaitForService to wait for kubelet
	I0927 00:15:56.426270  541317 kubeadm.go:582] duration metric: took 30.050043226s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0927 00:15:56.426294  541317 node_conditions.go:102] verifying NodePressure condition ...
	I0927 00:15:56.429791  541317 node_conditions.go:122] node storage ephemeral capacity is 304681132Ki
	I0927 00:15:56.429823  541317 node_conditions.go:123] node cpu capacity is 8
	I0927 00:15:56.429892  541317 node_conditions.go:105] duration metric: took 3.540814ms to run NodePressure ...
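The NodePressure figures (8 CPUs, 304681132Ki ephemeral storage) come straight from the node object and can be inspected directly; a sketch, with the node name taken from the cluster profile:

	kubectl --context addons-305811 get node addons-305811 -o jsonpath='{.status.capacity}'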
	I0927 00:15:56.429931  541317 start.go:241] waiting for startup goroutines ...
	I0927 00:15:56.550302  541317 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0927 00:15:56.550780  541317 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0927 00:15:56.857841  541317 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0927 00:15:57.049519  541317 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0927 00:15:57.050122  541317 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0927 00:15:57.359050  541317 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0927 00:15:57.550910  541317 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0927 00:15:57.551248  541317 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0927 00:15:57.852100  541317 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0927 00:15:58.049850  541317 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0927 00:15:58.050673  541317 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0927 00:15:58.351791  541317 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0927 00:15:58.551141  541317 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0927 00:15:58.551181  541317 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0927 00:15:58.851499  541317 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0927 00:15:59.050449  541317 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0927 00:15:59.050827  541317 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0927 00:15:59.351645  541317 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0927 00:15:59.593065  541317 kapi.go:107] duration metric: took 20.047430433s to wait for kubernetes.io/minikube-addons=registry ...
	I0927 00:15:59.593109  541317 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0927 00:15:59.852028  541317 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0927 00:16:00.052782  541317 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0927 00:16:00.352254  541317 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0927 00:16:00.551100  541317 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0927 00:16:00.852258  541317 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0927 00:16:01.052336  541317 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0927 00:16:01.426994  541317 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0927 00:16:01.551929  541317 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0927 00:16:01.851884  541317 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0927 00:16:02.051028  541317 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0927 00:16:02.351331  541317 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0927 00:16:02.551347  541317 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0927 00:16:02.851346  541317 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0927 00:16:03.052474  541317 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0927 00:16:03.352656  541317 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0927 00:16:03.551592  541317 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0927 00:16:03.851262  541317 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0927 00:16:04.050419  541317 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0927 00:16:04.351400  541317 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0927 00:16:04.553981  541317 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0927 00:16:04.851592  541317 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0927 00:16:05.052188  541317 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0927 00:16:05.351723  541317 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0927 00:16:05.551617  541317 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0927 00:16:05.850947  541317 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0927 00:16:06.065396  541317 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0927 00:16:06.352151  541317 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0927 00:16:06.552018  541317 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0927 00:16:06.851228  541317 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0927 00:16:07.051641  541317 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0927 00:16:07.358656  541317 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0927 00:16:07.552022  541317 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0927 00:16:07.851793  541317 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0927 00:16:08.051820  541317 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0927 00:16:08.350955  541317 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0927 00:16:08.551314  541317 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0927 00:16:08.851614  541317 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0927 00:16:09.051203  541317 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0927 00:16:09.351446  541317 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0927 00:16:09.622404  541317 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0927 00:16:09.859220  541317 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0927 00:16:10.052171  541317 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0927 00:16:10.352478  541317 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0927 00:16:10.551605  541317 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0927 00:16:10.858251  541317 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0927 00:16:11.051346  541317 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0927 00:16:11.351478  541317 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0927 00:16:11.551946  541317 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0927 00:16:11.852084  541317 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0927 00:16:12.051321  541317 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0927 00:16:12.358449  541317 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0927 00:16:12.551754  541317 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0927 00:16:12.858388  541317 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0927 00:16:13.051484  541317 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0927 00:16:13.358459  541317 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0927 00:16:13.551830  541317 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0927 00:16:13.859803  541317 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0927 00:16:14.053248  541317 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0927 00:16:14.351485  541317 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0927 00:16:14.552010  541317 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0927 00:16:14.866063  541317 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0927 00:16:15.051809  541317 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0927 00:16:15.351754  541317 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0927 00:16:15.551901  541317 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0927 00:16:15.851697  541317 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0927 00:16:16.051445  541317 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0927 00:16:16.352315  541317 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0927 00:16:16.551757  541317 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0927 00:16:16.875664  541317 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0927 00:16:17.052006  541317 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0927 00:16:17.352131  541317 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0927 00:16:17.552334  541317 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0927 00:16:17.851591  541317 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0927 00:16:18.050959  541317 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0927 00:16:18.352439  541317 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0927 00:16:18.551057  541317 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0927 00:16:18.851557  541317 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0927 00:16:19.051292  541317 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0927 00:16:19.351099  541317 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0927 00:16:19.551689  541317 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0927 00:16:19.887583  541317 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0927 00:16:20.052070  541317 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0927 00:16:20.351970  541317 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0927 00:16:20.552266  541317 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0927 00:16:20.852172  541317 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0927 00:16:21.052853  541317 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0927 00:16:21.351964  541317 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0927 00:16:21.552308  541317 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0927 00:16:21.851837  541317 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0927 00:16:22.051822  541317 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0927 00:16:22.436797  541317 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0927 00:16:22.642611  541317 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0927 00:16:22.851304  541317 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0927 00:16:23.051595  541317 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0927 00:16:23.358465  541317 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0927 00:16:23.551651  541317 kapi.go:107] duration metric: took 44.004764673s to wait for app.kubernetes.io/name=ingress-nginx ...
	I0927 00:16:23.850553  541317 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0927 00:16:24.360002  541317 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0927 00:16:24.851934  541317 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0927 00:16:25.351347  541317 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0927 00:16:25.851174  541317 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0927 00:16:26.351880  541317 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0927 00:16:26.850956  541317 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0927 00:16:27.351925  541317 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0927 00:16:27.858205  541317 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0927 00:16:28.351837  541317 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0927 00:16:28.850718  541317 kapi.go:107] duration metric: took 48.504057382s to wait for kubernetes.io/minikube-addons=csi-hostpath-driver ...
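Each kapi.go loop above is a label-selector poll that exits once the matching Pods report Ready (20.0s for registry, 44.0s for ingress-nginx, 48.5s for csi-hostpath-driver). Outside minikube's helper the same gate is commonly written with `kubectl wait`; a rough equivalent, not minikube's actual mechanism (selector from the log, timeout assumed):

	kubectl --context addons-305811 -n kube-system wait pod \
	  --for=condition=Ready -l kubernetes.io/minikube-addons=csi-hostpath-driver \
	  --timeout=300s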
	I0927 00:17:04.756444  541317 kapi.go:86] Found 1 Pods for label selector kubernetes.io/minikube-addons=gcp-auth
	I0927 00:17:04.756469  541317 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0927 00:17:05.256743  541317 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0927 00:17:05.755687  541317 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0927 00:17:06.256685  541317 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0927 00:17:06.756692  541317 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0927 00:17:07.255675  541317 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0927 00:17:07.757062  541317 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0927 00:17:08.255929  541317 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0927 00:17:08.756058  541317 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0927 00:17:09.255888  541317 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0927 00:17:09.756252  541317 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0927 00:17:10.256508  541317 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0927 00:17:10.756537  541317 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0927 00:17:11.256528  541317 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0927 00:17:11.756649  541317 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0927 00:17:12.257501  541317 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0927 00:17:12.756771  541317 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0927 00:17:13.256303  541317 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0927 00:17:13.756181  541317 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0927 00:17:14.256356  541317 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0927 00:17:14.756450  541317 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0927 00:17:15.256873  541317 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0927 00:17:15.755966  541317 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0927 00:17:16.256395  541317 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0927 00:17:16.756481  541317 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0927 00:17:17.256751  541317 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0927 00:17:17.758194  541317 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0927 00:17:18.256854  541317 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0927 00:17:18.756540  541317 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0927 00:17:19.256314  541317 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	[... identical "waiting for pod kubernetes.io/minikube-addons=gcp-auth" poll lines, logged every ~0.5s from 00:17:19.7 through 00:18:10.2, elided ...]
	I0927 00:18:10.757872  541317 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0927 00:18:11.256159  541317 kapi.go:107] duration metric: took 2m29.00339634s to wait for kubernetes.io/minikube-addons=gcp-auth ...
	I0927 00:18:11.257818  541317 out.go:177] * Your GCP credentials will now be mounted into every pod created in the addons-305811 cluster.
	I0927 00:18:11.258999  541317 out.go:177] * If you don't want your credentials mounted into a specific pod, add a label with the `gcp-auth-skip-secret` key to your pod configuration.
	I0927 00:18:11.260308  541317 out.go:177] * If you want existing pods to be mounted with credentials, either recreate them or rerun addons enable with --refresh.
	I0927 00:18:11.261611  541317 out.go:177] * Enabled addons: cloud-spanner, storage-provisioner, storage-provisioner-rancher, volcano, ingress-dns, nvidia-device-plugin, inspektor-gadget, metrics-server, yakd, default-storageclass, volumesnapshots, registry, ingress, csi-hostpath-driver, gcp-auth
	I0927 00:18:11.262833  541317 addons.go:510] duration metric: took 2m44.886527354s for enable addons: enabled=[cloud-spanner storage-provisioner storage-provisioner-rancher volcano ingress-dns nvidia-device-plugin inspektor-gadget metrics-server yakd default-storageclass volumesnapshots registry ingress csi-hostpath-driver gcp-auth]
	I0927 00:18:11.262883  541317 start.go:246] waiting for cluster config update ...
	I0927 00:18:11.262910  541317 start.go:255] writing updated cluster config ...
	I0927 00:18:11.263199  541317 ssh_runner.go:195] Run: rm -f paused
	I0927 00:18:11.313746  541317 start.go:600] kubectl: 1.31.1, cluster: 1.31.1 (minor skew: 0)
	I0927 00:18:11.315484  541317 out.go:177] * Done! kubectl is now configured to use "addons-305811" cluster and "default" namespace by default
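	
	The two hints in the gcp-auth output above are directly actionable. A minimal sketch of both, assuming the addons-305811 profile is still running; the pod name "demo", the nginx image, and the label value "true" are hypothetical placeholders, while the gcp-auth-skip-secret label key and the --refresh flag come straight from the messages above:
	
	  # Opt a single pod out of credential mounting (label key from the log; value assumed):
	  kubectl --context addons-305811 run demo --image=nginx --labels="gcp-auth-skip-secret=true"
	
	  # Re-mount credentials into pods that already exist, as the log suggests:
	  minikube -p addons-305811 addons enable gcp-auth --refresh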
	
	
	==> Docker <==
	Sep 27 00:27:43 addons-305811 dockerd[1337]: time="2024-09-27T00:27:43.091879406Z" level=info msg="Attempting next endpoint for pull after error: Head \"https://gcr.io/v2/k8s-minikube/busybox/manifests/latest\": unauthorized: authentication failed" spanID=fc0e86eb9fa03c1b traceID=1fae840ba99d2c33206fed3b04fb8455
	Sep 27 00:27:43 addons-305811 dockerd[1337]: time="2024-09-27T00:27:43.093528504Z" level=error msg="Handler for POST /v1.43/images/create returned error: Head \"https://gcr.io/v2/k8s-minikube/busybox/manifests/latest\": unauthorized: authentication failed" spanID=fc0e86eb9fa03c1b traceID=1fae840ba99d2c33206fed3b04fb8455
	Sep 27 00:27:43 addons-305811 dockerd[1337]: time="2024-09-27T00:27:43.111113862Z" level=info msg="ignoring event" container=38a77f18d7f1d6c653e9d0c83d865da4a1697bc035afb8611f8fee94eaab9b82 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Sep 27 00:27:49 addons-305811 dockerd[1337]: time="2024-09-27T00:27:49.784119716Z" level=info msg="ignoring event" container=ae73a35cb16613fd6c2dbe3d5798292b051636b73866ed12a7eb200052aaef8a module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Sep 27 00:27:49 addons-305811 dockerd[1337]: time="2024-09-27T00:27:49.909908872Z" level=info msg="ignoring event" container=6a2300ae4ab85a062c547a928914220b2ed4bc56fe8f57334539d29fbe31e8ab module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Sep 27 00:27:51 addons-305811 dockerd[1337]: time="2024-09-27T00:27:51.422326028Z" level=info msg="ignoring event" container=01568ab3cb1d596b2d4442581ac6597d76bde665af788625a85d6d3fdcf859ba module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Sep 27 00:27:51 addons-305811 dockerd[1337]: time="2024-09-27T00:27:51.521866729Z" level=info msg="ignoring event" container=1d2ea970913b031a8351c2a41f24d9243890ffec74763c3153d25cdb526a3d34 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Sep 27 00:27:51 addons-305811 dockerd[1337]: time="2024-09-27T00:27:51.522678160Z" level=info msg="ignoring event" container=a94027cc549285c8c203089d2d40f997128bed6bf0b370f349b3f8f5884533e2 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Sep 27 00:27:51 addons-305811 dockerd[1337]: time="2024-09-27T00:27:51.527006279Z" level=info msg="ignoring event" container=88b78ed8122a13d08665d185414bbbc1b9e7ad6df97514e005177ac4b67890a8 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Sep 27 00:27:51 addons-305811 dockerd[1337]: time="2024-09-27T00:27:51.533572524Z" level=info msg="ignoring event" container=4baf93c4886c2d274c829a14024eb30fc5bbf24c6e794069c2df1f9d6568d7cf module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Sep 27 00:27:51 addons-305811 dockerd[1337]: time="2024-09-27T00:27:51.535973152Z" level=info msg="ignoring event" container=ff7a3c0846dcd783bf4f36851b90e4223bd43a7a99346dc14995351db2b1ebfc module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Sep 27 00:27:51 addons-305811 dockerd[1337]: time="2024-09-27T00:27:51.539139272Z" level=info msg="ignoring event" container=87c16eac7466793206cc9d872e9993ac719a92100ca9d16813efed1fdc8fdb23 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Sep 27 00:27:51 addons-305811 dockerd[1337]: time="2024-09-27T00:27:51.543585865Z" level=info msg="ignoring event" container=1f1f89952c42ae165dd9119090a4ccfba5ff9d2f0f4c0bb30c83da4efca8bc5f module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Sep 27 00:27:51 addons-305811 dockerd[1337]: time="2024-09-27T00:27:51.830039828Z" level=info msg="ignoring event" container=669383b6f594ab6746e57d7e865bc10db69037774504d8013168680f0fafa4f9 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Sep 27 00:27:51 addons-305811 dockerd[1337]: time="2024-09-27T00:27:51.882572242Z" level=info msg="ignoring event" container=fd672b478c1f55ffedbe24b890e7706d688e4bf4bb59ceb480823ce68665380e module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Sep 27 00:27:51 addons-305811 dockerd[1337]: time="2024-09-27T00:27:51.933486575Z" level=info msg="ignoring event" container=896cdad0b0561fe9c1bbe019b3281fb2c5aa2d97989f8ae08c9d74b180841c39 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Sep 27 00:27:57 addons-305811 dockerd[1337]: time="2024-09-27T00:27:57.860898023Z" level=info msg="ignoring event" container=395c83bd8477d8a5c328205a92c39ae655ee52ae44870b35eb49859d0bdae4c6 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Sep 27 00:27:57 addons-305811 dockerd[1337]: time="2024-09-27T00:27:57.861194038Z" level=info msg="ignoring event" container=33d3a52f94f9e6118a0621443bd1afb78a31eb5213ac2c2500cf78329a547aca module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Sep 27 00:27:58 addons-305811 dockerd[1337]: time="2024-09-27T00:27:58.049465306Z" level=info msg="ignoring event" container=1074a2e6c4c19f1dbe14c176114ef4004abca8b77cfa638e7683fd09beee0de0 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Sep 27 00:27:58 addons-305811 dockerd[1337]: time="2024-09-27T00:27:58.076089259Z" level=info msg="ignoring event" container=5e484d96b85f41f4ce46daaa2ab590bae95a0b128b6f216da843f04c8463f7ff module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Sep 27 00:28:02 addons-305811 dockerd[1337]: time="2024-09-27T00:28:02.491582491Z" level=info msg="ignoring event" container=d73a19eb9dd98a5fef9046caf899c7aa43323554f8b22c0c9d5e1932799d3a42 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Sep 27 00:28:02 addons-305811 dockerd[1337]: time="2024-09-27T00:28:02.968309246Z" level=info msg="ignoring event" container=0e1265bac22311a4cfc95dd200420ac264d53f97791ef81154582b8dac4eaf23 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Sep 27 00:28:03 addons-305811 dockerd[1337]: time="2024-09-27T00:28:03.034442235Z" level=info msg="ignoring event" container=7ac1f7270abb14ba062978ccce74cfce81c4783b19231e4e36f8016eae0cf557 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Sep 27 00:28:03 addons-305811 dockerd[1337]: time="2024-09-27T00:28:03.172336932Z" level=info msg="ignoring event" container=d46051a4d490316f98e939bd9e984bd6866477d7a1277e6629fbfb3f0da69d6f module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Sep 27 00:28:03 addons-305811 dockerd[1337]: time="2024-09-27T00:28:03.233982021Z" level=info msg="ignoring event" container=1c81b50f85f67369346e887aeaf20c4f860ad68c764a7b374bba52420ce30aef module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
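	
	The "unauthorized: authentication failed" errors for gcr.io/k8s-minikube/busybox at the top of this Docker log are the pulls that kept failing. A minimal way to reproduce the pull directly on the node, assuming the profile is still up (the :latest tag matches the manifest URL in the error message):
	
	  minikube -p addons-305811 ssh -- docker pull gcr.io/k8s-minikube/busybox:latest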
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                          CREATED              STATE               NAME                      ATTEMPT             POD ID              POD
	79ef6861f328b       ghcr.io/headlamp-k8s/headlamp@sha256:8825bb13459c64dcf9503d836b94b49c97dc831aff7c325a6eed68961388cf9c          46 seconds ago       Running             headlamp                  0                   c037500368495       headlamp-7b5c95b59d-s946n
	5f904eb6803dc       kicbase/echo-server@sha256:127ac38a2bb9537b7f252addff209ea6801edcac8a92c8b1104dacd66a583ed6                    54 seconds ago       Running             hello-world-app           0                   c0f45c25bd227       hello-world-app-55bf9c44b4-dchgd
	7c9f0ba1fe7fa       nginx@sha256:a5127daff3d6f4606be3100a252419bfa84fd6ee5cd74d0feaca1a5068f97dcf                                  About a minute ago   Running             nginx                     0                   8457f14f2d781       nginx
	d6d4c049b150a       gcr.io/k8s-minikube/gcp-auth-webhook@sha256:e6c5b3bc32072ea370d34c27836efd11b3519d25bd444c2a8efc339cff0e20fb   9 minutes ago        Running             gcp-auth                  0                   cbf96edd347d3       gcp-auth-89d5ffd79-64cjx
	5b2808a0c4715       6e38f40d628db                                                                                                  12 minutes ago       Running             storage-provisioner       0                   f982b4331d592       storage-provisioner
	18e57aef5df7f       c69fa2e9cbf5f                                                                                                  12 minutes ago       Running             coredns                   0                   d6b2e41827736       coredns-7c65d6cfc9-xsg85
	4d267a6638bc2       60c005f310ff3                                                                                                  12 minutes ago       Running             kube-proxy                0                   a199f77fe3b5f       kube-proxy-mndlk
	a0ef58e2e4518       2e96e5913fc06                                                                                                  12 minutes ago       Running             etcd                      0                   18d0e7f609bc5       etcd-addons-305811
	08ecd2eaa1900       9aa1fad941575                                                                                                  12 minutes ago       Running             kube-scheduler            0                   49b345210a871       kube-scheduler-addons-305811
	df3206f160abf       6bab7719df100                                                                                                  12 minutes ago       Running             kube-apiserver            0                   8ad8f60cef0e8       kube-apiserver-addons-305811
	d86b3c85aec0a       175ffd71cce3d                                                                                                  12 minutes ago       Running             kube-controller-manager   0                   4f556efc0dabd       kube-controller-manager-addons-305811
	
	
	==> coredns [18e57aef5df7] <==
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 591cf328cccc12bc490481273e738df59329c62c0b729d94e8b61db9961c2fa5f046dd37f1cf888b953814040d180f52594972691cd6ff41be96639138a43908
	CoreDNS-1.11.3
	linux/amd64, go1.21.11, a6338e9
	[INFO] Reloading
	[INFO] plugin/reload: Running configuration SHA512 = 05e3eaddc414b2d71a69b2e2bc6f2681fc1f4d04bcdd3acc1a41457bb7db518208b95ddfc4c9fffedc59c25a8faf458be1af4915a4a3c0d6777cb7a346bc5d86
	[INFO] Reloading complete
	[INFO] 127.0.0.1:45991 - 55774 "HINFO IN 8122863217496893449.7465899274160903413. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.009405739s
	[INFO] 10.244.0.25:42425 - 20975 "AAAA IN storage.googleapis.com.gcp-auth.svc.cluster.local. udp 78 false 1232" NXDOMAIN qr,aa,rd 160 0.000334898s
	[INFO] 10.244.0.25:53820 - 27437 "A IN storage.googleapis.com.gcp-auth.svc.cluster.local. udp 78 false 1232" NXDOMAIN qr,aa,rd 160 0.000477989s
	[INFO] 10.244.0.25:54476 - 32860 "A IN storage.googleapis.com.svc.cluster.local. udp 69 false 1232" NXDOMAIN qr,aa,rd 151 0.000197555s
	[INFO] 10.244.0.25:55192 - 49231 "AAAA IN storage.googleapis.com.svc.cluster.local. udp 69 false 1232" NXDOMAIN qr,aa,rd 151 0.000274559s
	[INFO] 10.244.0.25:58333 - 19851 "AAAA IN storage.googleapis.com.cluster.local. udp 65 false 1232" NXDOMAIN qr,aa,rd 147 0.000146174s
	[INFO] 10.244.0.25:55177 - 37399 "A IN storage.googleapis.com.cluster.local. udp 65 false 1232" NXDOMAIN qr,aa,rd 147 0.000123313s
	[INFO] 10.244.0.25:50679 - 42971 "A IN storage.googleapis.com.us-east4-a.c.k8s-minikube.internal. udp 86 false 1232" NXDOMAIN qr,rd,ra 75 0.008207578s
	[INFO] 10.244.0.25:48242 - 5181 "AAAA IN storage.googleapis.com.us-east4-a.c.k8s-minikube.internal. udp 86 false 1232" NXDOMAIN qr,rd,ra 75 0.009986008s
	[INFO] 10.244.0.25:49871 - 57474 "A IN storage.googleapis.com.c.k8s-minikube.internal. udp 75 false 1232" NXDOMAIN qr,rd,ra 64 0.007922835s
	[INFO] 10.244.0.25:46356 - 6015 "AAAA IN storage.googleapis.com.c.k8s-minikube.internal. udp 75 false 1232" NXDOMAIN qr,rd,ra 64 0.008171777s
	[INFO] 10.244.0.25:60736 - 35943 "A IN storage.googleapis.com.google.internal. udp 67 false 1232" NXDOMAIN qr,rd,ra 56 0.005392845s
	[INFO] 10.244.0.25:50860 - 18804 "AAAA IN storage.googleapis.com.google.internal. udp 67 false 1232" NXDOMAIN qr,rd,ra 56 0.006990306s
	[INFO] 10.244.0.25:36579 - 58761 "AAAA IN storage.googleapis.com. udp 51 false 1232" NOERROR qr,rd,ra 240 0.000810325s
	[INFO] 10.244.0.25:57633 - 26607 "A IN storage.googleapis.com. udp 51 false 1232" NOERROR qr,rd,ra 458 0.000947794s
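	
	The NXDOMAIN run above is the pod resolver walking its search path (gcp-auth.svc.cluster.local, svc.cluster.local, cluster.local, then the GCE internal domains) before the bare storage.googleapis.com query finally returns NOERROR. A sketch that reproduces the walk from inside the cluster; the throwaway pod name and the busybox:1.36 tag are assumptions:
	
	  kubectl --context addons-305811 run dns-probe --rm -it --restart=Never --image=busybox:1.36 -- nslookup storage.googleapis.com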
	
	
	==> describe nodes <==
	Name:               addons-305811
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=addons-305811
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=eee16a295c071ed5a0e96cbbc00bcd13b2654625
	                    minikube.k8s.io/name=addons-305811
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_09_27T00_15_21_0700
	                    minikube.k8s.io/version=v1.34.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	                    topology.hostpath.csi/node=addons-305811
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/cri-dockerd.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Fri, 27 Sep 2024 00:15:18 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  addons-305811
	  AcquireTime:     <unset>
	  RenewTime:       Fri, 27 Sep 2024 00:27:57 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Fri, 27 Sep 2024 00:27:25 +0000   Fri, 27 Sep 2024 00:15:16 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Fri, 27 Sep 2024 00:27:25 +0000   Fri, 27 Sep 2024 00:15:16 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Fri, 27 Sep 2024 00:27:25 +0000   Fri, 27 Sep 2024 00:15:16 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Fri, 27 Sep 2024 00:27:25 +0000   Fri, 27 Sep 2024 00:15:18 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.49.2
	  Hostname:    addons-305811
	Capacity:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32859308Ki
	  pods:               110
	Allocatable:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32859308Ki
	  pods:               110
	System Info:
	  Machine ID:                 b50c9db7800a4cfcb45c554586efdf8d
	  System UUID:                687f7c57-8fda-4353-86ac-f1aff31ee784
	  Boot ID:                    3a8dbeac-cc10-412c-b9bd-194fdac9ca10
	  Kernel Version:             5.15.0-1069-gcp
	  OS Image:                   Ubuntu 22.04.5 LTS
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  docker://27.3.1
	  Kubelet Version:            v1.31.1
	  Kube-Proxy Version:         v1.31.1
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (12 in total)
	  Namespace                   Name                                     CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                     ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         9m15s
	  default                     hello-world-app-55bf9c44b4-dchgd         0 (0%)        0 (0%)      0 (0%)           0 (0%)         57s
	  default                     nginx                                    0 (0%)        0 (0%)      0 (0%)           0 (0%)         66s
	  gcp-auth                    gcp-auth-89d5ffd79-64cjx                 0 (0%)        0 (0%)      0 (0%)           0 (0%)         11m
	  headlamp                    headlamp-7b5c95b59d-s946n                0 (0%)        0 (0%)      0 (0%)           0 (0%)         50s
	  kube-system                 coredns-7c65d6cfc9-xsg85                 100m (1%)     0 (0%)      70Mi (0%)        170Mi (0%)     12m
	  kube-system                 etcd-addons-305811                       100m (1%)     0 (0%)      100Mi (0%)       0 (0%)         12m
	  kube-system                 kube-apiserver-addons-305811             250m (3%)     0 (0%)      0 (0%)           0 (0%)         12m
	  kube-system                 kube-controller-manager-addons-305811    200m (2%)     0 (0%)      0 (0%)           0 (0%)         12m
	  kube-system                 kube-proxy-mndlk                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         12m
	  kube-system                 kube-scheduler-addons-305811             100m (1%)     0 (0%)      0 (0%)           0 (0%)         12m
	  kube-system                 storage-provisioner                      0 (0%)        0 (0%)      0 (0%)           0 (0%)         12m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (9%)   0 (0%)
	  memory             170Mi (0%)  170Mi (0%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type     Reason                   Age   From             Message
	  ----     ------                   ----  ----             -------
	  Normal   Starting                 12m   kube-proxy       
	  Normal   Starting                 12m   kubelet          Starting kubelet.
	  Warning  CgroupV1                 12m   kubelet          Cgroup v1 support is in maintenance mode, please migrate to Cgroup v2.
	  Normal   NodeAllocatableEnforced  12m   kubelet          Updated Node Allocatable limit across pods
	  Normal   NodeHasSufficientMemory  12m   kubelet          Node addons-305811 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    12m   kubelet          Node addons-305811 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     12m   kubelet          Node addons-305811 status is now: NodeHasSufficientPID
	  Normal   RegisteredNode           12m   node-controller  Node addons-305811 event: Registered Node addons-305811 in Controller
	
	
	==> dmesg <==
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff 5a c7 4d ae 54 7f 08 06
	[  +1.304624] IPv4: martian source 10.244.0.1 from 10.244.0.16, on dev eth0
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff 1e 49 c1 d5 4d d2 08 06
	[  +1.531711] IPv4: martian source 10.244.0.1 from 10.244.0.17, on dev eth0
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff 8e 1d 3a 62 73 c9 08 06
	[  +8.578090] IPv4: martian source 10.244.0.1 from 10.244.0.21, on dev eth0
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff e6 08 e7 57 e7 da 08 06
	[  +2.205296] IPv4: martian source 10.244.0.1 from 10.244.0.19, on dev eth0
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff 4a 8d f1 5f 08 95 08 06
	[  +0.271135] IPv4: martian source 10.244.0.1 from 10.244.0.18, on dev eth0
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff ba 96 82 9b 58 f6 08 06
	[  +0.679861] IPv4: martian source 10.244.0.1 from 10.244.0.20, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff 2e 05 2f 7f 47 33 08 06
	[ +17.534447] IPv4: martian source 10.244.0.1 from 10.244.0.22, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff 42 39 13 4e ef 37 08 06
	[Sep27 00:17] IPv4: martian source 10.244.0.1 from 10.244.0.23, on dev eth0
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff 76 32 e2 e4 24 da 08 06
	[  +0.103342] IPv4: martian source 10.244.0.1 from 10.244.0.24, on dev eth0
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff 52 fc 2e 5a cf 2f 08 06
	[Sep27 00:18] IPv4: martian source 10.244.0.1 from 10.244.0.25, on dev eth0
	[  +0.000008] ll header: 00000000: ff ff ff ff ff ff 9e c6 a6 f0 ee 6e 08 06
	[  +0.000519] IPv4: martian source 10.244.0.25 from 10.244.0.2, on dev eth0
	[  +0.000004] ll header: 00000000: ff ff ff ff ff ff f6 d9 d1 1c 77 ec 08 06
	[Sep27 00:27] IPv4: martian source 10.244.0.28 from 10.244.0.21, on dev eth0
	[  +0.000005] ll header: 00000000: ff ff ff ff ff ff e6 08 e7 57 e7 da 08 06
	
	
	==> etcd [a0ef58e2e451] <==
	{"level":"info","ts":"2024-09-27T00:15:16.849050Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"fa54960ea34d58be","local-member-id":"aec36adc501070cc","cluster-version":"3.5"}
	{"level":"info","ts":"2024-09-27T00:15:16.849186Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2024-09-27T00:15:16.849223Z","caller":"etcdserver/server.go:2653","msg":"cluster version is updated","cluster-version":"3.5"}
	{"level":"info","ts":"2024-09-27T00:15:16.849303Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2024-09-27T00:15:16.849564Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2024-09-27T00:15:16.850190Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.49.2:2379"}
	{"level":"info","ts":"2024-09-27T00:15:16.850789Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2024-09-27T00:16:22.242841Z","caller":"traceutil/trace.go:171","msg":"trace[782403444] transaction","detail":"{read_only:false; response_revision:1161; number_of_response:1; }","duration":"113.073459ms","start":"2024-09-27T00:16:22.129750Z","end":"2024-09-27T00:16:22.242824Z","steps":["trace[782403444] 'process raft request'  (duration: 112.954601ms)"],"step_count":1}
	{"level":"warn","ts":"2024-09-27T00:17:49.066019Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"143.842957ms","expected-duration":"100ms","prefix":"read-only range ","request":"limit:1 serializable:true keys_only:true ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2024-09-27T00:17:49.066134Z","caller":"traceutil/trace.go:171","msg":"trace[1285405886] range","detail":"{range_begin:; range_end:; response_count:0; response_revision:1413; }","duration":"143.977287ms","start":"2024-09-27T00:17:48.922137Z","end":"2024-09-27T00:17:49.066114Z","steps":["trace[1285405886] 'range keys from in-memory index tree'  (duration: 143.828491ms)"],"step_count":1}
	{"level":"info","ts":"2024-09-27T00:17:49.066146Z","caller":"traceutil/trace.go:171","msg":"trace[1916399814] transaction","detail":"{read_only:false; response_revision:1414; number_of_response:1; }","duration":"145.366275ms","start":"2024-09-27T00:17:48.920762Z","end":"2024-09-27T00:17:49.066128Z","steps":["trace[1916399814] 'process raft request'  (duration: 83.553309ms)","trace[1916399814] 'compare'  (duration: 61.656442ms)"],"step_count":2}
	{"level":"info","ts":"2024-09-27T00:17:49.066247Z","caller":"traceutil/trace.go:171","msg":"trace[1825703020] transaction","detail":"{read_only:false; response_revision:1415; number_of_response:1; }","duration":"140.265902ms","start":"2024-09-27T00:17:48.925969Z","end":"2024-09-27T00:17:49.066235Z","steps":["trace[1825703020] 'process raft request'  (duration: 140.110458ms)"],"step_count":1}
	{"level":"warn","ts":"2024-09-27T00:17:49.066312Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"140.277937ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/health\" ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2024-09-27T00:17:49.066253Z","caller":"traceutil/trace.go:171","msg":"trace[1134986830] linearizableReadLoop","detail":"{readStateIndex:1458; appliedIndex:1456; }","duration":"140.20169ms","start":"2024-09-27T00:17:48.926026Z","end":"2024-09-27T00:17:49.066228Z","steps":["trace[1134986830] 'read index received'  (duration: 78.368294ms)","trace[1134986830] 'applied index is now lower than readState.Index'  (duration: 61.83098ms)"],"step_count":2}
	{"level":"info","ts":"2024-09-27T00:17:49.066338Z","caller":"traceutil/trace.go:171","msg":"trace[1257221601] range","detail":"{range_begin:/registry/health; range_end:; response_count:0; response_revision:1415; }","duration":"140.313792ms","start":"2024-09-27T00:17:48.926017Z","end":"2024-09-27T00:17:49.066331Z","steps":["trace[1257221601] 'agreement among raft nodes before linearized reading'  (duration: 140.257474ms)"],"step_count":1}
	{"level":"info","ts":"2024-09-27T00:18:32.024413Z","caller":"traceutil/trace.go:171","msg":"trace[918661633] linearizableReadLoop","detail":"{readStateIndex:1596; appliedIndex:1595; }","duration":"132.376598ms","start":"2024-09-27T00:18:31.892010Z","end":"2024-09-27T00:18:32.024387Z","steps":["trace[918661633] 'read index received'  (duration: 73.647393ms)","trace[918661633] 'applied index is now lower than readState.Index'  (duration: 58.728308ms)"],"step_count":2}
	{"level":"info","ts":"2024-09-27T00:18:32.024425Z","caller":"traceutil/trace.go:171","msg":"trace[2130996785] transaction","detail":"{read_only:false; response_revision:1539; number_of_response:1; }","duration":"165.204331ms","start":"2024-09-27T00:18:31.859197Z","end":"2024-09-27T00:18:32.024402Z","steps":["trace[2130996785] 'process raft request'  (duration: 106.452332ms)","trace[2130996785] 'compare'  (duration: 58.611531ms)"],"step_count":2}
	{"level":"warn","ts":"2024-09-27T00:18:32.024573Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"132.144046ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/health\" ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2024-09-27T00:18:32.024600Z","caller":"traceutil/trace.go:171","msg":"trace[181380870] range","detail":"{range_begin:/registry/health; range_end:; response_count:0; response_revision:1539; }","duration":"132.170333ms","start":"2024-09-27T00:18:31.892421Z","end":"2024-09-27T00:18:32.024591Z","steps":["trace[181380870] 'agreement among raft nodes before linearized reading'  (duration: 132.124507ms)"],"step_count":1}
	{"level":"warn","ts":"2024-09-27T00:18:32.024512Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"132.482353ms","expected-duration":"100ms","prefix":"read-only range ","request":"limit:1 keys_only:true ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2024-09-27T00:18:32.024642Z","caller":"traceutil/trace.go:171","msg":"trace[1899664917] range","detail":"{range_begin:; range_end:; response_count:0; response_revision:1539; }","duration":"132.625048ms","start":"2024-09-27T00:18:31.892000Z","end":"2024-09-27T00:18:32.024626Z","steps":["trace[1899664917] 'agreement among raft nodes before linearized reading'  (duration: 132.46346ms)"],"step_count":1}
	{"level":"info","ts":"2024-09-27T00:25:16.867104Z","caller":"mvcc/index.go:214","msg":"compact tree index","revision":1855}
	{"level":"info","ts":"2024-09-27T00:25:16.891984Z","caller":"mvcc/kvstore_compaction.go:69","msg":"finished scheduled compaction","compact-revision":1855,"took":"24.300696ms","hash":1653756623,"current-db-size-bytes":8945664,"current-db-size":"8.9 MB","current-db-size-in-use-bytes":4960256,"current-db-size-in-use":"5.0 MB"}
	{"level":"info","ts":"2024-09-27T00:25:16.892038Z","caller":"mvcc/hash.go:137","msg":"storing new hash","hash":1653756623,"revision":1855,"compact-revision":-1}
	{"level":"info","ts":"2024-09-27T00:27:17.802431Z","caller":"traceutil/trace.go:171","msg":"trace[1690332082] transaction","detail":"{read_only:false; response_revision:2773; number_of_response:1; }","duration":"121.96025ms","start":"2024-09-27T00:27:17.680450Z","end":"2024-09-27T00:27:17.802410Z","steps":["trace[1690332082] 'process raft request'  (duration: 60.327824ms)","trace[1690332082] 'compare'  (duration: 61.507791ms)"],"step_count":2}
	
	
	==> gcp-auth [d6d4c049b150] <==
	2024/09/27 00:18:49 Ready to write response ...
	2024/09/27 00:18:50 Ready to marshal response ...
	2024/09/27 00:18:50 Ready to write response ...
	2024/09/27 00:26:58 Ready to marshal response ...
	2024/09/27 00:26:58 Ready to write response ...
	2024/09/27 00:27:02 Ready to marshal response ...
	2024/09/27 00:27:02 Ready to write response ...
	2024/09/27 00:27:04 Ready to marshal response ...
	2024/09/27 00:27:04 Ready to write response ...
	2024/09/27 00:27:04 Ready to marshal response ...
	2024/09/27 00:27:04 Ready to write response ...
	2024/09/27 00:27:07 Ready to marshal response ...
	2024/09/27 00:27:07 Ready to write response ...
	2024/09/27 00:27:12 Ready to marshal response ...
	2024/09/27 00:27:12 Ready to write response ...
	2024/09/27 00:27:14 Ready to marshal response ...
	2024/09/27 00:27:14 Ready to write response ...
	2024/09/27 00:27:14 Ready to marshal response ...
	2024/09/27 00:27:14 Ready to write response ...
	2024/09/27 00:27:14 Ready to marshal response ...
	2024/09/27 00:27:14 Ready to write response ...
	2024/09/27 00:27:19 Ready to marshal response ...
	2024/09/27 00:27:19 Ready to write response ...
	2024/09/27 00:27:41 Ready to marshal response ...
	2024/09/27 00:27:41 Ready to write response ...
	
	
	==> kernel <==
	 00:28:04 up  2:10,  0 users,  load average: 0.66, 0.39, 1.27
	Linux addons-305811 5.15.0-1069-gcp #77~20.04.1-Ubuntu SMP Sun Sep 1 19:39:16 UTC 2024 x86_64 x86_64 x86_64 GNU/Linux
	PRETTY_NAME="Ubuntu 22.04.5 LTS"
	
	
	==> kube-apiserver [df3206f160ab] <==
	W0927 00:18:41.145606       1 cacher.go:171] Terminating all watchers from cacher jobs.batch.volcano.sh
	W0927 00:18:41.527129       1 cacher.go:171] Terminating all watchers from cacher jobflows.flow.volcano.sh
	W0927 00:18:41.847525       1 cacher.go:171] Terminating all watchers from cacher jobtemplates.flow.volcano.sh
	I0927 00:26:57.741395       1 handler.go:286] Adding GroupVersion gadget.kinvolk.io v1alpha1 to ResourceManager
	I0927 00:26:58.233953       1 controller.go:615] quota admission added evaluator for: ingresses.networking.k8s.io
	I0927 00:26:58.449863       1 alloc.go:330] "allocated clusterIPs" service="default/nginx" clusterIPs={"IPv4":"10.100.89.76"}
	W0927 00:26:59.043444       1 cacher.go:171] Terminating all watchers from cacher traces.gadget.kinvolk.io
	I0927 00:27:02.381418       1 controller.go:129] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Nothing (removed from the queue).
	I0927 00:27:08.031533       1 alloc.go:330] "allocated clusterIPs" service="default/hello-world-app" clusterIPs={"IPv4":"10.106.228.236"}
	I0927 00:27:14.419887       1 alloc.go:330] "allocated clusterIPs" service="headlamp/headlamp" clusterIPs={"IPv4":"10.97.123.21"}
	I0927 00:27:26.295849       1 controller.go:615] quota admission added evaluator for: volumesnapshots.snapshot.storage.k8s.io
	E0927 00:27:28.407430       1 authentication.go:73] "Unable to authenticate the request" err="[invalid bearer token, serviceaccounts \"local-path-provisioner-service-account\" not found]"
	I0927 00:27:57.613553       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0927 00:27:57.613607       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I0927 00:27:57.626802       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0927 00:27:57.626852       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I0927 00:27:57.627419       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0927 00:27:57.627454       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I0927 00:27:57.643220       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0927 00:27:57.643581       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I0927 00:27:57.737312       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0927 00:27:57.737364       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	W0927 00:27:58.627736       1 cacher.go:171] Terminating all watchers from cacher volumesnapshotclasses.snapshot.storage.k8s.io
	W0927 00:27:58.738319       1 cacher.go:171] Terminating all watchers from cacher volumesnapshotcontents.snapshot.storage.k8s.io
	W0927 00:27:58.747137       1 cacher.go:171] Terminating all watchers from cacher volumesnapshots.snapshot.storage.k8s.io
	
	
	==> kube-controller-manager [d86b3c85aec0] <==
	W0927 00:27:41.008394       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0927 00:27:41.008441       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	I0927 00:27:51.291744       1 stateful_set.go:466] "StatefulSet has been deleted" logger="statefulset-controller" key="kube-system/csi-hostpath-attacher"
	I0927 00:27:51.343595       1 stateful_set.go:466] "StatefulSet has been deleted" logger="statefulset-controller" key="kube-system/csi-hostpath-resizer"
	I0927 00:27:51.739279       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="addons-305811"
	I0927 00:27:57.825571       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/snapshot-controller-56fcc65765" duration="11.362µs"
	E0927 00:27:58.629200       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	E0927 00:27:58.739669       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	E0927 00:27:58.748429       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W0927 00:27:59.760063       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0927 00:27:59.760112       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W0927 00:28:00.086639       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0927 00:28:00.086681       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W0927 00:28:00.170027       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0927 00:28:00.170074       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	I0927 00:28:00.530911       1 namespace_controller.go:187] "Namespace has been deleted" logger="namespace-controller" namespace="local-path-storage"
	W0927 00:28:01.532514       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0927 00:28:01.532558       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W0927 00:28:02.735053       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0927 00:28:02.735099       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	I0927 00:28:02.933237       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/registry-66c9cd494c" duration="9.518µs"
	W0927 00:28:03.267504       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0927 00:28:03.267549       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W0927 00:28:04.066983       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0927 00:28:04.067050       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	
	
	==> kube-proxy [4d267a6638bc] <==
	I0927 00:15:26.450696       1 server_linux.go:66] "Using iptables proxy"
	I0927 00:15:26.738989       1 server.go:677] "Successfully retrieved node IP(s)" IPs=["192.168.49.2"]
	E0927 00:15:26.739074       1 server.go:234] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I0927 00:15:27.029176       1 server.go:243] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I0927 00:15:27.029293       1 server_linux.go:169] "Using iptables Proxier"
	I0927 00:15:27.039446       1 proxier.go:255] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I0927 00:15:27.039900       1 server.go:483] "Version info" version="v1.31.1"
	I0927 00:15:27.039925       1 server.go:485] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0927 00:15:27.045414       1 config.go:199] "Starting service config controller"
	I0927 00:15:27.045484       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0927 00:15:27.045519       1 config.go:105] "Starting endpoint slice config controller"
	I0927 00:15:27.045523       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0927 00:15:27.045565       1 config.go:328] "Starting node config controller"
	I0927 00:15:27.045585       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0927 00:15:27.146705       1 shared_informer.go:320] Caches are synced for endpoint slice config
	I0927 00:15:27.146778       1 shared_informer.go:320] Caches are synced for service config
	I0927 00:15:27.227839       1 shared_informer.go:320] Caches are synced for node config
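	
	The "Kube-proxy configuration may be incomplete or incorrect" warning near the top of this log names its own remedy. Shown below as a bare kube-proxy flag purely to illustrate what the message is asking for, not as how minikube actually wires the component:
	
	  kube-proxy --nodeport-addresses primary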
	
	
	==> kube-scheduler [08ecd2eaa190] <==
	W0927 00:15:18.545104       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E0927 00:15:18.545117       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W0927 00:15:18.545182       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E0927 00:15:18.545208       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W0927 00:15:18.545280       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E0927 00:15:18.545310       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0927 00:15:18.545345       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	E0927 00:15:18.545382       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0927 00:15:18.545559       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E0927 00:15:18.545585       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0927 00:15:19.410291       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E0927 00:15:19.410333       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError"
	W0927 00:15:19.419761       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E0927 00:15:19.419811       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0927 00:15:19.428290       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E0927 00:15:19.428335       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W0927 00:15:19.491913       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E0927 00:15:19.491975       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0927 00:15:19.529342       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E0927 00:15:19.529388       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0927 00:15:19.607677       1 reflector.go:561] runtime/asm_amd64.s:1695: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E0927 00:15:19.607726       1 reflector.go:158] "Unhandled Error" err="runtime/asm_amd64.s:1695: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError"
	W0927 00:15:19.617143       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E0927 00:15:19.617184       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError"
	I0927 00:15:22.542920       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kubelet <==
	Sep 27 00:27:58 addons-305811 kubelet[2437]: E0927 00:27:58.684577    2437 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = Unknown desc = Error response from daemon: No such container: 33d3a52f94f9e6118a0621443bd1afb78a31eb5213ac2c2500cf78329a547aca" containerID="33d3a52f94f9e6118a0621443bd1afb78a31eb5213ac2c2500cf78329a547aca"
	Sep 27 00:27:58 addons-305811 kubelet[2437]: I0927 00:27:58.684616    2437 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"docker","ID":"33d3a52f94f9e6118a0621443bd1afb78a31eb5213ac2c2500cf78329a547aca"} err="failed to get container status \"33d3a52f94f9e6118a0621443bd1afb78a31eb5213ac2c2500cf78329a547aca\": rpc error: code = Unknown desc = Error response from daemon: No such container: 33d3a52f94f9e6118a0621443bd1afb78a31eb5213ac2c2500cf78329a547aca"
	Sep 27 00:27:59 addons-305811 kubelet[2437]: I0927 00:27:59.043662    2437 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="82ff93bb-6e90-4ad1-8dc5-995f83e68be2" path="/var/lib/kubelet/pods/82ff93bb-6e90-4ad1-8dc5-995f83e68be2/volumes"
	Sep 27 00:27:59 addons-305811 kubelet[2437]: I0927 00:27:59.043991    2437 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9c5c71ea-d826-4aeb-a56d-116204ffb507" path="/var/lib/kubelet/pods/9c5c71ea-d826-4aeb-a56d-116204ffb507/volumes"
	Sep 27 00:28:02 addons-305811 kubelet[2437]: I0927 00:28:02.658025    2437 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"gcp-creds\" (UniqueName: \"kubernetes.io/host-path/dab9ce36-3057-4f92-8c1c-f9fa470c11a5-gcp-creds\") pod \"dab9ce36-3057-4f92-8c1c-f9fa470c11a5\" (UID: \"dab9ce36-3057-4f92-8c1c-f9fa470c11a5\") "
	Sep 27 00:28:02 addons-305811 kubelet[2437]: I0927 00:28:02.658089    2437 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-w5z7s\" (UniqueName: \"kubernetes.io/projected/dab9ce36-3057-4f92-8c1c-f9fa470c11a5-kube-api-access-w5z7s\") pod \"dab9ce36-3057-4f92-8c1c-f9fa470c11a5\" (UID: \"dab9ce36-3057-4f92-8c1c-f9fa470c11a5\") "
	Sep 27 00:28:02 addons-305811 kubelet[2437]: I0927 00:28:02.658147    2437 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/dab9ce36-3057-4f92-8c1c-f9fa470c11a5-gcp-creds" (OuterVolumeSpecName: "gcp-creds") pod "dab9ce36-3057-4f92-8c1c-f9fa470c11a5" (UID: "dab9ce36-3057-4f92-8c1c-f9fa470c11a5"). InnerVolumeSpecName "gcp-creds". PluginName "kubernetes.io/host-path", VolumeGidValue ""
	Sep 27 00:28:02 addons-305811 kubelet[2437]: I0927 00:28:02.659934    2437 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/dab9ce36-3057-4f92-8c1c-f9fa470c11a5-kube-api-access-w5z7s" (OuterVolumeSpecName: "kube-api-access-w5z7s") pod "dab9ce36-3057-4f92-8c1c-f9fa470c11a5" (UID: "dab9ce36-3057-4f92-8c1c-f9fa470c11a5"). InnerVolumeSpecName "kube-api-access-w5z7s". PluginName "kubernetes.io/projected", VolumeGidValue ""
	Sep 27 00:28:02 addons-305811 kubelet[2437]: I0927 00:28:02.759261    2437 reconciler_common.go:288] "Volume detached for volume \"gcp-creds\" (UniqueName: \"kubernetes.io/host-path/dab9ce36-3057-4f92-8c1c-f9fa470c11a5-gcp-creds\") on node \"addons-305811\" DevicePath \"\""
	Sep 27 00:28:02 addons-305811 kubelet[2437]: I0927 00:28:02.759297    2437 reconciler_common.go:288] "Volume detached for volume \"kube-api-access-w5z7s\" (UniqueName: \"kubernetes.io/projected/dab9ce36-3057-4f92-8c1c-f9fa470c11a5-kube-api-access-w5z7s\") on node \"addons-305811\" DevicePath \"\""
	Sep 27 00:28:03 addons-305811 kubelet[2437]: I0927 00:28:03.049068    2437 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="dab9ce36-3057-4f92-8c1c-f9fa470c11a5" path="/var/lib/kubelet/pods/dab9ce36-3057-4f92-8c1c-f9fa470c11a5/volumes"
	Sep 27 00:28:03 addons-305811 kubelet[2437]: I0927 00:28:03.362911    2437 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-tdh4r\" (UniqueName: \"kubernetes.io/projected/4342257a-a438-4180-ab98-bcb513d0521a-kube-api-access-tdh4r\") pod \"4342257a-a438-4180-ab98-bcb513d0521a\" (UID: \"4342257a-a438-4180-ab98-bcb513d0521a\") "
	Sep 27 00:28:03 addons-305811 kubelet[2437]: I0927 00:28:03.362976    2437 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-v6bfc\" (UniqueName: \"kubernetes.io/projected/0fd59917-12f5-4b55-b4ed-fb31a0b82ca1-kube-api-access-v6bfc\") pod \"0fd59917-12f5-4b55-b4ed-fb31a0b82ca1\" (UID: \"0fd59917-12f5-4b55-b4ed-fb31a0b82ca1\") "
	Sep 27 00:28:03 addons-305811 kubelet[2437]: I0927 00:28:03.364928    2437 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0fd59917-12f5-4b55-b4ed-fb31a0b82ca1-kube-api-access-v6bfc" (OuterVolumeSpecName: "kube-api-access-v6bfc") pod "0fd59917-12f5-4b55-b4ed-fb31a0b82ca1" (UID: "0fd59917-12f5-4b55-b4ed-fb31a0b82ca1"). InnerVolumeSpecName "kube-api-access-v6bfc". PluginName "kubernetes.io/projected", VolumeGidValue ""
	Sep 27 00:28:03 addons-305811 kubelet[2437]: I0927 00:28:03.365012    2437 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4342257a-a438-4180-ab98-bcb513d0521a-kube-api-access-tdh4r" (OuterVolumeSpecName: "kube-api-access-tdh4r") pod "4342257a-a438-4180-ab98-bcb513d0521a" (UID: "4342257a-a438-4180-ab98-bcb513d0521a"). InnerVolumeSpecName "kube-api-access-tdh4r". PluginName "kubernetes.io/projected", VolumeGidValue ""
	Sep 27 00:28:03 addons-305811 kubelet[2437]: I0927 00:28:03.463759    2437 reconciler_common.go:288] "Volume detached for volume \"kube-api-access-v6bfc\" (UniqueName: \"kubernetes.io/projected/0fd59917-12f5-4b55-b4ed-fb31a0b82ca1-kube-api-access-v6bfc\") on node \"addons-305811\" DevicePath \"\""
	Sep 27 00:28:03 addons-305811 kubelet[2437]: I0927 00:28:03.463811    2437 reconciler_common.go:288] "Volume detached for volume \"kube-api-access-tdh4r\" (UniqueName: \"kubernetes.io/projected/4342257a-a438-4180-ab98-bcb513d0521a-kube-api-access-tdh4r\") on node \"addons-305811\" DevicePath \"\""
	Sep 27 00:28:03 addons-305811 kubelet[2437]: I0927 00:28:03.721484    2437 scope.go:117] "RemoveContainer" containerID="0e1265bac22311a4cfc95dd200420ac264d53f97791ef81154582b8dac4eaf23"
	Sep 27 00:28:03 addons-305811 kubelet[2437]: I0927 00:28:03.739133    2437 scope.go:117] "RemoveContainer" containerID="0e1265bac22311a4cfc95dd200420ac264d53f97791ef81154582b8dac4eaf23"
	Sep 27 00:28:03 addons-305811 kubelet[2437]: E0927 00:28:03.740115    2437 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = Unknown desc = Error response from daemon: No such container: 0e1265bac22311a4cfc95dd200420ac264d53f97791ef81154582b8dac4eaf23" containerID="0e1265bac22311a4cfc95dd200420ac264d53f97791ef81154582b8dac4eaf23"
	Sep 27 00:28:03 addons-305811 kubelet[2437]: I0927 00:28:03.740166    2437 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"docker","ID":"0e1265bac22311a4cfc95dd200420ac264d53f97791ef81154582b8dac4eaf23"} err="failed to get container status \"0e1265bac22311a4cfc95dd200420ac264d53f97791ef81154582b8dac4eaf23\": rpc error: code = Unknown desc = Error response from daemon: No such container: 0e1265bac22311a4cfc95dd200420ac264d53f97791ef81154582b8dac4eaf23"
	Sep 27 00:28:03 addons-305811 kubelet[2437]: I0927 00:28:03.740248    2437 scope.go:117] "RemoveContainer" containerID="7ac1f7270abb14ba062978ccce74cfce81c4783b19231e4e36f8016eae0cf557"
	Sep 27 00:28:03 addons-305811 kubelet[2437]: I0927 00:28:03.761014    2437 scope.go:117] "RemoveContainer" containerID="7ac1f7270abb14ba062978ccce74cfce81c4783b19231e4e36f8016eae0cf557"
	Sep 27 00:28:03 addons-305811 kubelet[2437]: E0927 00:28:03.762054    2437 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = Unknown desc = Error response from daemon: No such container: 7ac1f7270abb14ba062978ccce74cfce81c4783b19231e4e36f8016eae0cf557" containerID="7ac1f7270abb14ba062978ccce74cfce81c4783b19231e4e36f8016eae0cf557"
	Sep 27 00:28:03 addons-305811 kubelet[2437]: I0927 00:28:03.762111    2437 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"docker","ID":"7ac1f7270abb14ba062978ccce74cfce81c4783b19231e4e36f8016eae0cf557"} err="failed to get container status \"7ac1f7270abb14ba062978ccce74cfce81c4783b19231e4e36f8016eae0cf557\": rpc error: code = Unknown desc = Error response from daemon: No such container: 7ac1f7270abb14ba062978ccce74cfce81c4783b19231e4e36f8016eae0cf557"
	
	
	==> storage-provisioner [5b2808a0c471] <==
	I0927 00:15:35.230053       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I0927 00:15:35.242455       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I0927 00:15:35.242539       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I0927 00:15:35.328280       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I0927 00:15:35.328639       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_addons-305811_59c1314c-8f98-4b2e-8d08-846e56086057!
	I0927 00:15:35.331742       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"d7212221-b7e5-4ab5-9b0a-a78bfd6b7224", APIVersion:"v1", ResourceVersion:"603", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' addons-305811_59c1314c-8f98-4b2e-8d08-846e56086057 became leader
	I0927 00:15:35.429632       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_addons-305811_59c1314c-8f98-4b2e-8d08-846e56086057!
	

                                                
                                                
-- /stdout --
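Two notes on the post-mortem logs above. The kube-scheduler "forbidden" warnings look like ordinary startup-ordering noise: the scheduler's informers begin listing resources before its RBAC bindings are visible, and the later "Caches are synced" line shows it recovered. The kubelet "No such container" errors are a similar teardown race: the container was already deleted when the second status lookup ran. If either needs confirming by hand, two quick checks (a diagnostic sketch using standard kubectl against this profile; not part of the recorded test run):

	# Verify the scheduler's effective RBAC via impersonation
	kubectl --context addons-305811 auth can-i list pods --as=system:kube-scheduler
	kubectl --context addons-305811 auth can-i list csistoragecapacities.storage.k8s.io --as=system:kube-scheduler
	# Inspect the storage-provisioner leader-election record named in its log
	kubectl --context addons-305811 -n kube-system get endpoints k8s.io-minikube-hostpath -o yaml
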
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p addons-305811 -n addons-305811
helpers_test.go:261: (dbg) Run:  kubectl --context addons-305811 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:272: non-running pods: busybox
helpers_test.go:274: ======> post-mortem[TestAddons/parallel/Registry]: describe non-running pods <======
helpers_test.go:277: (dbg) Run:  kubectl --context addons-305811 describe pod busybox
helpers_test.go:282: (dbg) kubectl --context addons-305811 describe pod busybox:

                                                
                                                
-- stdout --
	Name:             busybox
	Namespace:        default
	Priority:         0
	Service Account:  default
	Node:             addons-305811/192.168.49.2
	Start Time:       Fri, 27 Sep 2024 00:18:49 +0000
	Labels:           integration-test=busybox
	Annotations:      <none>
	Status:           Pending
	IP:               10.244.0.27
	IPs:
	  IP:  10.244.0.27
	Containers:
	  busybox:
	    Container ID:  
	    Image:         gcr.io/k8s-minikube/busybox:1.28.4-glibc
	    Image ID:      
	    Port:          <none>
	    Host Port:     <none>
	    Command:
	      sleep
	      3600
	    State:          Waiting
	      Reason:       ImagePullBackOff
	    Ready:          False
	    Restart Count:  0
	    Environment:
	      GOOGLE_APPLICATION_CREDENTIALS:  /google-app-creds.json
	      PROJECT_ID:                      this_is_fake
	      GCP_PROJECT:                     this_is_fake
	      GCLOUD_PROJECT:                  this_is_fake
	      GOOGLE_CLOUD_PROJECT:            this_is_fake
	      CLOUDSDK_CORE_PROJECT:           this_is_fake
	    Mounts:
	      /google-app-creds.json from gcp-creds (ro)
	      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-kfhm9 (ro)
	Conditions:
	  Type                        Status
	  PodReadyToStartContainers   True 
	  Initialized                 True 
	  Ready                       False 
	  ContainersReady             False 
	  PodScheduled                True 
	Volumes:
	  kube-api-access-kfhm9:
	    Type:                    Projected (a volume that contains injected data from multiple sources)
	    TokenExpirationSeconds:  3607
	    ConfigMapName:           kube-root-ca.crt
	    ConfigMapOptional:       <nil>
	    DownwardAPI:             true
	  gcp-creds:
	    Type:          HostPath (bare host directory volume)
	    Path:          /var/lib/minikube/google_application_credentials.json
	    HostPathType:  File
	QoS Class:         BestEffort
	Node-Selectors:    <none>
	Tolerations:       node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
	                   node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
	Events:
	  Type     Reason     Age                    From               Message
	  ----     ------     ----                   ----               -------
	  Normal   Scheduled  9m15s                  default-scheduler  Successfully assigned default/busybox to addons-305811
	  Normal   Pulling    7m48s (x4 over 9m14s)  kubelet            Pulling image "gcr.io/k8s-minikube/busybox:1.28.4-glibc"
	  Warning  Failed     7m48s (x4 over 9m14s)  kubelet            Failed to pull image "gcr.io/k8s-minikube/busybox:1.28.4-glibc": Error response from daemon: Head "https://gcr.io/v2/k8s-minikube/busybox/manifests/1.28.4-glibc": unauthorized: authentication failed
	  Warning  Failed     7m48s (x4 over 9m14s)  kubelet            Error: ErrImagePull
	  Warning  Failed     7m22s (x6 over 9m14s)  kubelet            Error: ImagePullBackOff
	  Normal   BackOff    4m2s (x20 over 9m14s)  kubelet            Back-off pulling image "gcr.io/k8s-minikube/busybox:1.28.4-glibc"

                                                
                                                
-- /stdout --
helpers_test.go:285: <<< TestAddons/parallel/Registry FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
--- FAIL: TestAddons/parallel/Registry (72.54s)
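The busybox events above record the likely root cause captured by this post-mortem: pulls from gcr.io fail with "unauthorized: authentication failed", so the pod never leaves ImagePullBackOff. A minimal reproduction on the node, independent of the test harness (a sketch that assumes the addons-305811 cluster is still running; the tag is the one the stuck pod references):

	# Attempt the failing pull inside the minikube node itself
	out/minikube-linux-amd64 -p addons-305811 ssh -- docker pull gcr.io/k8s-minikube/busybox:1.28.4-glibc
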

                                                
                                    

Test pass (321/342)

Order passed test Duration
3 TestDownloadOnly/v1.20.0/json-events 5
4 TestDownloadOnly/v1.20.0/preload-exists 0
8 TestDownloadOnly/v1.20.0/LogsDuration 0.06
9 TestDownloadOnly/v1.20.0/DeleteAll 0.21
10 TestDownloadOnly/v1.20.0/DeleteAlwaysSucceeds 0.13
12 TestDownloadOnly/v1.31.1/json-events 4.93
13 TestDownloadOnly/v1.31.1/preload-exists 0
17 TestDownloadOnly/v1.31.1/LogsDuration 0.07
18 TestDownloadOnly/v1.31.1/DeleteAll 0.21
19 TestDownloadOnly/v1.31.1/DeleteAlwaysSucceeds 0.14
20 TestDownloadOnlyKic 1
21 TestBinaryMirror 0.76
22 TestOffline 79.59
25 TestAddons/PreSetup/EnablingAddonOnNonExistingCluster 0.06
26 TestAddons/PreSetup/DisablingAddonOnNonExistingCluster 0.06
27 TestAddons/Setup 208.49
29 TestAddons/serial/Volcano 38.4
31 TestAddons/serial/GCPAuth/Namespaces 0.12
34 TestAddons/parallel/Ingress 20.18
35 TestAddons/parallel/InspektorGadget 10.73
36 TestAddons/parallel/MetricsServer 6.02
38 TestAddons/parallel/CSI 40.05
39 TestAddons/parallel/Headlamp 10.87
40 TestAddons/parallel/CloudSpanner 5.5
41 TestAddons/parallel/LocalPath 50.93
42 TestAddons/parallel/NvidiaDevicePlugin 6.39
43 TestAddons/parallel/Yakd 10.83
44 TestAddons/StoppedEnableDisable 5.89
45 TestCertOptions 26.97
46 TestCertExpiration 230.07
47 TestDockerFlags 28.86
48 TestForceSystemdFlag 28.05
49 TestForceSystemdEnv 28.64
51 TestKVMDriverInstallOrUpdate 1.21
55 TestErrorSpam/setup 23.5
56 TestErrorSpam/start 0.56
57 TestErrorSpam/status 0.84
58 TestErrorSpam/pause 1.14
59 TestErrorSpam/unpause 1.35
60 TestErrorSpam/stop 1.92
63 TestFunctional/serial/CopySyncFile 0
64 TestFunctional/serial/StartWithProxy 37
65 TestFunctional/serial/AuditLog 0
66 TestFunctional/serial/SoftStart 33.33
67 TestFunctional/serial/KubeContext 0.05
68 TestFunctional/serial/KubectlGetPods 0.06
71 TestFunctional/serial/CacheCmd/cache/add_remote 2.13
72 TestFunctional/serial/CacheCmd/cache/add_local 0.69
73 TestFunctional/serial/CacheCmd/cache/CacheDelete 0.05
74 TestFunctional/serial/CacheCmd/cache/list 0.05
75 TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node 0.27
76 TestFunctional/serial/CacheCmd/cache/cache_reload 1.23
77 TestFunctional/serial/CacheCmd/cache/delete 0.1
78 TestFunctional/serial/MinikubeKubectlCmd 0.11
79 TestFunctional/serial/MinikubeKubectlCmdDirectly 0.1
80 TestFunctional/serial/ExtraConfig 39.97
81 TestFunctional/serial/ComponentHealth 0.07
82 TestFunctional/serial/LogsCmd 0.97
83 TestFunctional/serial/LogsFileCmd 0.98
84 TestFunctional/serial/InvalidService 4.07
86 TestFunctional/parallel/ConfigCmd 0.43
87 TestFunctional/parallel/DashboardCmd 14.54
88 TestFunctional/parallel/DryRun 0.36
89 TestFunctional/parallel/InternationalLanguage 0.15
90 TestFunctional/parallel/StatusCmd 0.91
94 TestFunctional/parallel/ServiceCmdConnect 10.76
95 TestFunctional/parallel/AddonsCmd 0.14
96 TestFunctional/parallel/PersistentVolumeClaim 28.21
98 TestFunctional/parallel/SSHCmd 0.59
99 TestFunctional/parallel/CpCmd 1.6
100 TestFunctional/parallel/MySQL 26.36
101 TestFunctional/parallel/FileSync 0.28
102 TestFunctional/parallel/CertSync 1.63
106 TestFunctional/parallel/NodeLabels 0.07
108 TestFunctional/parallel/NonActiveRuntimeDisabled 0.25
110 TestFunctional/parallel/License 0.21
111 TestFunctional/parallel/ProfileCmd/profile_not_create 0.47
113 TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel 0.45
114 TestFunctional/parallel/ProfileCmd/profile_list 0.42
115 TestFunctional/parallel/TunnelCmd/serial/StartTunnel 0
117 TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup 14.25
118 TestFunctional/parallel/ProfileCmd/profile_json_output 0.39
119 TestFunctional/parallel/ServiceCmd/DeployApp 13.16
120 TestFunctional/parallel/MountCmd/any-port 7.75
121 TestFunctional/parallel/ServiceCmd/List 0.47
122 TestFunctional/parallel/ServiceCmd/JSONOutput 0.51
123 TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP 0.06
124 TestFunctional/parallel/TunnelCmd/serial/AccessDirect 0
128 TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel 0.11
129 TestFunctional/parallel/ServiceCmd/HTTPS 0.34
130 TestFunctional/parallel/ServiceCmd/Format 0.34
131 TestFunctional/parallel/ServiceCmd/URL 0.36
132 TestFunctional/parallel/DockerEnv/bash 0.87
133 TestFunctional/parallel/Version/short 0.06
134 TestFunctional/parallel/Version/components 0.68
135 TestFunctional/parallel/ImageCommands/ImageListShort 0.23
136 TestFunctional/parallel/ImageCommands/ImageListTable 0.22
137 TestFunctional/parallel/ImageCommands/ImageListJson 0.21
138 TestFunctional/parallel/ImageCommands/ImageListYaml 0.22
139 TestFunctional/parallel/ImageCommands/ImageBuild 3.87
140 TestFunctional/parallel/ImageCommands/Setup 0.6
141 TestFunctional/parallel/ImageCommands/ImageLoadDaemon 0.98
142 TestFunctional/parallel/UpdateContextCmd/no_changes 0.14
143 TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster 0.14
144 TestFunctional/parallel/UpdateContextCmd/no_clusters 0.15
145 TestFunctional/parallel/ImageCommands/ImageReloadDaemon 0.83
146 TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon 1.02
147 TestFunctional/parallel/MountCmd/specific-port 2.14
148 TestFunctional/parallel/ImageCommands/ImageSaveToFile 0.48
149 TestFunctional/parallel/ImageCommands/ImageRemove 0.55
150 TestFunctional/parallel/ImageCommands/ImageLoadFromFile 0.82
151 TestFunctional/parallel/MountCmd/VerifyCleanup 1.59
152 TestFunctional/parallel/ImageCommands/ImageSaveDaemon 0.36
153 TestFunctional/delete_echo-server_images 0.04
154 TestFunctional/delete_my-image_image 0.02
155 TestFunctional/delete_minikube_cached_images 0.02
159 TestMultiControlPlane/serial/StartCluster 100.27
160 TestMultiControlPlane/serial/DeployApp 4.58
161 TestMultiControlPlane/serial/PingHostFromPods 1.1
162 TestMultiControlPlane/serial/AddWorkerNode 19.96
163 TestMultiControlPlane/serial/NodeLabels 0.07
164 TestMultiControlPlane/serial/HAppyAfterClusterStart 0.84
165 TestMultiControlPlane/serial/CopyFile 15.37
166 TestMultiControlPlane/serial/StopSecondaryNode 11.43
167 TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop 0.66
168 TestMultiControlPlane/serial/RestartSecondaryNode 38.57
169 TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart 0.87
170 TestMultiControlPlane/serial/RestartClusterKeepsNodes 204.81
171 TestMultiControlPlane/serial/DeleteSecondaryNode 9.3
172 TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete 0.64
173 TestMultiControlPlane/serial/StopCluster 32.56
174 TestMultiControlPlane/serial/RestartCluster 77.72
175 TestMultiControlPlane/serial/DegradedAfterClusterRestart 0.64
176 TestMultiControlPlane/serial/AddSecondaryNode 37.69
177 TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd 0.83
180 TestImageBuild/serial/Setup 24.85
181 TestImageBuild/serial/NormalBuild 1.3
182 TestImageBuild/serial/BuildWithBuildArg 0.8
183 TestImageBuild/serial/BuildWithDockerIgnore 0.62
184 TestImageBuild/serial/BuildWithSpecifiedDockerfile 0.61
188 TestJSONOutput/start/Command 65.99
189 TestJSONOutput/start/Audit 0
191 TestJSONOutput/start/parallel/DistinctCurrentSteps 0
192 TestJSONOutput/start/parallel/IncreasingCurrentSteps 0
194 TestJSONOutput/pause/Command 0.53
195 TestJSONOutput/pause/Audit 0
197 TestJSONOutput/pause/parallel/DistinctCurrentSteps 0
198 TestJSONOutput/pause/parallel/IncreasingCurrentSteps 0
200 TestJSONOutput/unpause/Command 0.43
201 TestJSONOutput/unpause/Audit 0
203 TestJSONOutput/unpause/parallel/DistinctCurrentSteps 0
204 TestJSONOutput/unpause/parallel/IncreasingCurrentSteps 0
206 TestJSONOutput/stop/Command 10.89
207 TestJSONOutput/stop/Audit 0
209 TestJSONOutput/stop/parallel/DistinctCurrentSteps 0
210 TestJSONOutput/stop/parallel/IncreasingCurrentSteps 0
211 TestErrorJSONOutput 0.2
213 TestKicCustomNetwork/create_custom_network 26.33
214 TestKicCustomNetwork/use_default_bridge_network 23.92
215 TestKicExistingNetwork 25.21
216 TestKicCustomSubnet 23.82
217 TestKicStaticIP 26.66
218 TestMainNoArgs 0.05
219 TestMinikubeProfile 48.38
222 TestMountStart/serial/StartWithMountFirst 6.57
223 TestMountStart/serial/VerifyMountFirst 0.24
224 TestMountStart/serial/StartWithMountSecond 9.29
225 TestMountStart/serial/VerifyMountSecond 0.24
226 TestMountStart/serial/DeleteFirst 1.45
227 TestMountStart/serial/VerifyMountPostDelete 0.24
228 TestMountStart/serial/Stop 1.17
229 TestMountStart/serial/RestartStopped 7.72
230 TestMountStart/serial/VerifyMountPostStop 0.23
233 TestMultiNode/serial/FreshStart2Nodes 60.81
234 TestMultiNode/serial/DeployApp2Nodes 42.35
235 TestMultiNode/serial/PingHostFrom2Pods 0.73
236 TestMultiNode/serial/AddNode 15.62
237 TestMultiNode/serial/MultiNodeLabels 0.06
238 TestMultiNode/serial/ProfileList 0.59
239 TestMultiNode/serial/CopyFile 8.7
240 TestMultiNode/serial/StopNode 2.07
241 TestMultiNode/serial/StartAfterStop 9.51
242 TestMultiNode/serial/RestartKeepsNodes 93.6
243 TestMultiNode/serial/DeleteNode 5.18
244 TestMultiNode/serial/StopMultiNode 21.33
245 TestMultiNode/serial/RestartMultiNode 51.7
246 TestMultiNode/serial/ValidateNameConflict 26.79
251 TestPreload 91.65
253 TestScheduledStopUnix 96.77
254 TestSkaffold 100.1
256 TestInsufficientStorage 12.53
257 TestRunningBinaryUpgrade 78.61
259 TestKubernetesUpgrade 345.74
260 TestMissingContainerUpgrade 149.27
261 TestStoppedBinaryUpgrade/Setup 0.49
262 TestStoppedBinaryUpgrade/Upgrade 112.4
263 TestStoppedBinaryUpgrade/MinikubeLogs 1.63
272 TestPause/serial/Start 89.73
274 TestNoKubernetes/serial/StartNoK8sWithVersion 0.08
275 TestNoKubernetes/serial/StartWithK8s 23.41
287 TestNoKubernetes/serial/StartWithStopK8s 7.09
288 TestNoKubernetes/serial/Start 8.93
289 TestNoKubernetes/serial/VerifyK8sNotRunning 0.27
290 TestNoKubernetes/serial/ProfileList 3.46
291 TestNoKubernetes/serial/Stop 1.23
292 TestNoKubernetes/serial/StartNoArgs 6.76
293 TestNoKubernetes/serial/VerifyK8sNotRunningSecond 0.24
294 TestPause/serial/SecondStartNoReconfiguration 31.25
295 TestPause/serial/Pause 0.65
296 TestPause/serial/VerifyStatus 0.33
297 TestPause/serial/Unpause 0.46
298 TestPause/serial/PauseAgain 0.59
299 TestPause/serial/DeletePaused 2.2
300 TestPause/serial/VerifyDeletedResources 16.35
302 TestStartStop/group/old-k8s-version/serial/FirstStart 130.4
304 TestStartStop/group/default-k8s-diff-port/serial/FirstStart 40.84
305 TestStartStop/group/default-k8s-diff-port/serial/DeployApp 7.25
306 TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive 0.82
307 TestStartStop/group/default-k8s-diff-port/serial/Stop 10.83
308 TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop 0.25
309 TestStartStop/group/default-k8s-diff-port/serial/SecondStart 263.82
311 TestStartStop/group/embed-certs/serial/FirstStart 71.95
312 TestStartStop/group/old-k8s-version/serial/DeployApp 9.39
313 TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive 0.78
314 TestStartStop/group/old-k8s-version/serial/Stop 10.77
315 TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop 0.17
316 TestStartStop/group/old-k8s-version/serial/SecondStart 141.27
317 TestStartStop/group/embed-certs/serial/DeployApp 8.25
318 TestStartStop/group/embed-certs/serial/EnableAddonWhileActive 0.76
319 TestStartStop/group/embed-certs/serial/Stop 10.74
320 TestStartStop/group/embed-certs/serial/EnableAddonAfterStop 0.18
321 TestStartStop/group/embed-certs/serial/SecondStart 263.83
323 TestStartStop/group/no-preload/serial/FirstStart 76.51
324 TestStartStop/group/no-preload/serial/DeployApp 8.24
325 TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop 6.01
326 TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop 5.07
327 TestStartStop/group/no-preload/serial/EnableAddonWhileActive 0.89
328 TestStartStop/group/no-preload/serial/Stop 11.59
329 TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages 0.19
330 TestStartStop/group/old-k8s-version/serial/Pause 2.29
332 TestStartStop/group/newest-cni/serial/FirstStart 27.58
333 TestStartStop/group/no-preload/serial/EnableAddonAfterStop 0.19
334 TestStartStop/group/no-preload/serial/SecondStart 264.3
335 TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop 6.01
336 TestStartStop/group/newest-cni/serial/DeployApp 0
337 TestStartStop/group/newest-cni/serial/EnableAddonWhileActive 0.88
338 TestStartStop/group/newest-cni/serial/Stop 11.02
339 TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop 5.08
340 TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages 0.21
341 TestStartStop/group/default-k8s-diff-port/serial/Pause 2.55
342 TestStartStop/group/newest-cni/serial/EnableAddonAfterStop 0.18
343 TestStartStop/group/newest-cni/serial/SecondStart 16.67
344 TestNetworkPlugins/group/auto/Start 37.64
345 TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop 0
346 TestStartStop/group/newest-cni/serial/AddonExistsAfterStop 0
347 TestStartStop/group/newest-cni/serial/VerifyKubernetesImages 0.21
348 TestStartStop/group/newest-cni/serial/Pause 2.59
349 TestNetworkPlugins/group/calico/Start 56.59
350 TestNetworkPlugins/group/auto/KubeletFlags 0.26
351 TestNetworkPlugins/group/auto/NetCatPod 10.22
352 TestNetworkPlugins/group/auto/DNS 0.13
353 TestNetworkPlugins/group/auto/Localhost 0.11
354 TestNetworkPlugins/group/auto/HairPin 0.14
355 TestNetworkPlugins/group/custom-flannel/Start 43.66
356 TestNetworkPlugins/group/calico/ControllerPod 6.01
357 TestNetworkPlugins/group/calico/KubeletFlags 0.26
358 TestNetworkPlugins/group/calico/NetCatPod 9.18
359 TestNetworkPlugins/group/calico/DNS 0.18
360 TestNetworkPlugins/group/calico/Localhost 0.13
361 TestNetworkPlugins/group/calico/HairPin 0.11
362 TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop 6.01
363 TestStartStop/group/embed-certs/serial/AddonExistsAfterStop 5.09
364 TestNetworkPlugins/group/false/Start 42.04
365 TestNetworkPlugins/group/custom-flannel/KubeletFlags 0.27
366 TestNetworkPlugins/group/custom-flannel/NetCatPod 11.23
367 TestStartStop/group/embed-certs/serial/VerifyKubernetesImages 0.27
368 TestStartStop/group/embed-certs/serial/Pause 2.97
369 TestNetworkPlugins/group/kindnet/Start 61.44
370 TestNetworkPlugins/group/custom-flannel/DNS 0.13
371 TestNetworkPlugins/group/custom-flannel/Localhost 0.14
372 TestNetworkPlugins/group/custom-flannel/HairPin 0.13
373 TestNetworkPlugins/group/flannel/Start 46.77
374 TestNetworkPlugins/group/false/KubeletFlags 0.29
375 TestNetworkPlugins/group/false/NetCatPod 9.23
376 TestNetworkPlugins/group/false/DNS 0.2
377 TestNetworkPlugins/group/false/Localhost 0.16
378 TestNetworkPlugins/group/false/HairPin 0.17
379 TestNetworkPlugins/group/kindnet/ControllerPod 6.01
380 TestNetworkPlugins/group/enable-default-cni/Start 66.92
381 TestNetworkPlugins/group/kindnet/KubeletFlags 0.33
382 TestNetworkPlugins/group/kindnet/NetCatPod 9.74
383 TestNetworkPlugins/group/flannel/ControllerPod 6.01
384 TestNetworkPlugins/group/kindnet/DNS 0.18
385 TestNetworkPlugins/group/kindnet/Localhost 0.15
386 TestNetworkPlugins/group/flannel/KubeletFlags 0.3
387 TestNetworkPlugins/group/kindnet/HairPin 0.13
388 TestNetworkPlugins/group/flannel/NetCatPod 8.21
389 TestNetworkPlugins/group/flannel/DNS 0.15
390 TestNetworkPlugins/group/flannel/Localhost 0.13
391 TestNetworkPlugins/group/flannel/HairPin 0.14
392 TestNetworkPlugins/group/bridge/Start 67.49
393 TestStartStop/group/no-preload/serial/UserAppExistsAfterStop 6.01
394 TestNetworkPlugins/group/kubenet/Start 66.42
395 TestStartStop/group/no-preload/serial/AddonExistsAfterStop 5.09
396 TestStartStop/group/no-preload/serial/VerifyKubernetesImages 0.23
397 TestStartStop/group/no-preload/serial/Pause 2.77
398 TestNetworkPlugins/group/enable-default-cni/KubeletFlags 0.26
399 TestNetworkPlugins/group/enable-default-cni/NetCatPod 10.19
400 TestNetworkPlugins/group/enable-default-cni/DNS 0.13
401 TestNetworkPlugins/group/enable-default-cni/Localhost 0.11
402 TestNetworkPlugins/group/enable-default-cni/HairPin 0.11
403 TestNetworkPlugins/group/bridge/KubeletFlags 0.26
404 TestNetworkPlugins/group/bridge/NetCatPod 10.18
405 TestNetworkPlugins/group/kubenet/KubeletFlags 0.25
406 TestNetworkPlugins/group/kubenet/NetCatPod 10.17
407 TestNetworkPlugins/group/bridge/DNS 0.15
408 TestNetworkPlugins/group/bridge/Localhost 0.13
409 TestNetworkPlugins/group/bridge/HairPin 0.13
410 TestNetworkPlugins/group/kubenet/DNS 0.15
411 TestNetworkPlugins/group/kubenet/Localhost 0.12
412 TestNetworkPlugins/group/kubenet/HairPin 0.12
TestDownloadOnly/v1.20.0/json-events (5s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.20.0/json-events
aaa_download_only_test.go:81: (dbg) Run:  out/minikube-linux-amd64 start -o=json --download-only -p download-only-630011 --force --alsologtostderr --kubernetes-version=v1.20.0 --container-runtime=docker --driver=docker  --container-runtime=docker
aaa_download_only_test.go:81: (dbg) Done: out/minikube-linux-amd64 start -o=json --download-only -p download-only-630011 --force --alsologtostderr --kubernetes-version=v1.20.0 --container-runtime=docker --driver=docker  --container-runtime=docker: (5.003381995s)
--- PASS: TestDownloadOnly/v1.20.0/json-events (5.00s)

                                                
                                    
TestDownloadOnly/v1.20.0/preload-exists (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.20.0/preload-exists
I0927 00:14:34.998518  540034 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime docker
I0927 00:14:34.998640  540034 preload.go:146] Found local preload: /home/jenkins/minikube-integration/19711-533157/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-docker-overlay2-amd64.tar.lz4
--- PASS: TestDownloadOnly/v1.20.0/preload-exists (0.00s)
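preload-exists asserts only that the preload tarball is already on disk from the earlier download step. The same check can be made by hand against the path logged above (an illustrative one-liner, not part of the test):

	ls -lh /home/jenkins/minikube-integration/19711-533157/.minikube/cache/preloaded-tarball/
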

                                                
                                    
TestDownloadOnly/v1.20.0/LogsDuration (0.06s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.20.0/LogsDuration
aaa_download_only_test.go:184: (dbg) Run:  out/minikube-linux-amd64 logs -p download-only-630011
aaa_download_only_test.go:184: (dbg) Non-zero exit: out/minikube-linux-amd64 logs -p download-only-630011: exit status 85 (63.636791ms)

                                                
                                                
-- stdout --
	
	==> Audit <==
	|---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	| Command |              Args              |       Profile        |  User   | Version |     Start Time      | End Time |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	| start   | -o=json --download-only        | download-only-630011 | jenkins | v1.34.0 | 27 Sep 24 00:14 UTC |          |
	|         | -p download-only-630011        |                      |         |         |                     |          |
	|         | --force --alsologtostderr      |                      |         |         |                     |          |
	|         | --kubernetes-version=v1.20.0   |                      |         |         |                     |          |
	|         | --container-runtime=docker     |                      |         |         |                     |          |
	|         | --driver=docker                |                      |         |         |                     |          |
	|         | --container-runtime=docker     |                      |         |         |                     |          |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	
	
	==> Last Start <==
	Log file created at: 2024/09/27 00:14:30
	Running on machine: ubuntu-20-agent-14
	Binary: Built with gc go1.23.0 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0927 00:14:30.036028  540046 out.go:345] Setting OutFile to fd 1 ...
	I0927 00:14:30.036357  540046 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0927 00:14:30.036369  540046 out.go:358] Setting ErrFile to fd 2...
	I0927 00:14:30.036373  540046 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0927 00:14:30.036564  540046 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19711-533157/.minikube/bin
	W0927 00:14:30.036688  540046 root.go:314] Error reading config file at /home/jenkins/minikube-integration/19711-533157/.minikube/config/config.json: open /home/jenkins/minikube-integration/19711-533157/.minikube/config/config.json: no such file or directory
	I0927 00:14:30.037253  540046 out.go:352] Setting JSON to true
	I0927 00:14:30.038180  540046 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-14","uptime":7013,"bootTime":1727389057,"procs":172,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1069-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0927 00:14:30.038299  540046 start.go:139] virtualization: kvm guest
	I0927 00:14:30.041114  540046 out.go:97] [download-only-630011] minikube v1.34.0 on Ubuntu 20.04 (kvm/amd64)
	W0927 00:14:30.041235  540046 preload.go:293] Failed to list preload files: open /home/jenkins/minikube-integration/19711-533157/.minikube/cache/preloaded-tarball: no such file or directory
	I0927 00:14:30.041271  540046 notify.go:220] Checking for updates...
	I0927 00:14:30.042819  540046 out.go:169] MINIKUBE_LOCATION=19711
	I0927 00:14:30.044399  540046 out.go:169] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0927 00:14:30.045941  540046 out.go:169] KUBECONFIG=/home/jenkins/minikube-integration/19711-533157/kubeconfig
	I0927 00:14:30.047463  540046 out.go:169] MINIKUBE_HOME=/home/jenkins/minikube-integration/19711-533157/.minikube
	I0927 00:14:30.048868  540046 out.go:169] MINIKUBE_BIN=out/minikube-linux-amd64
	W0927 00:14:30.051160  540046 out.go:321] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I0927 00:14:30.051382  540046 driver.go:394] Setting default libvirt URI to qemu:///system
	I0927 00:14:30.075947  540046 docker.go:123] docker version: linux-27.3.1:Docker Engine - Community
	I0927 00:14:30.076059  540046 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0927 00:14:30.123915  540046 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:26 OomKillDisable:true NGoroutines:53 SystemTime:2024-09-27 00:14:30.114282131 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1069-gcp OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33647931392 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-14 Labels:[] ExperimentalBuild:false ServerVersion:27.3.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:7f7fdf5fed64eb6a7caf99b3e12efcf9d60e311c Expected:7f7fdf5fed64eb6a7caf99b3e12efcf9d60e311c} RuncCommit:{ID:v1.1.14-0-g2c9f560 Expected:v1.1.14-0-g2c9f560} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:[WARNING: bridge-nf-call-iptables is disabled WARNING: bridge-nf-call-ip6tables is disabled] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.17.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.29.7] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I0927 00:14:30.124030  540046 docker.go:318] overlay module found
	I0927 00:14:30.125951  540046 out.go:97] Using the docker driver based on user configuration
	I0927 00:14:30.125979  540046 start.go:297] selected driver: docker
	I0927 00:14:30.125986  540046 start.go:901] validating driver "docker" against <nil>
	I0927 00:14:30.126097  540046 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0927 00:14:30.171020  540046 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:26 OomKillDisable:true NGoroutines:53 SystemTime:2024-09-27 00:14:30.161913019 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1069-gcp OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33647931392 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-14 Labels:[] ExperimentalBuild:false ServerVersion:27.3.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:7f7fdf5fed64eb6a7caf99b3e12efcf9d60e311c Expected:7f7fdf5fed64eb6a7caf99b3e12efcf9d60e311c} RuncCommit:{ID:v1.1.14-0-g2c9f560 Expected:v1.1.14-0-g2c9f560} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:[WARNING: bridge-nf-call-iptables is disabled WARNING: bridge-nf-call-ip6tables is disabled] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.17.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.29.7] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I0927 00:14:30.171230  540046 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0927 00:14:30.171792  540046 start_flags.go:393] Using suggested 8000MB memory alloc based on sys=32089MB, container=32089MB
	I0927 00:14:30.171944  540046 start_flags.go:929] Wait components to verify : map[apiserver:true system_pods:true]
	I0927 00:14:30.174056  540046 out.go:169] Using Docker driver with root privileges
	
	
	* The control-plane node download-only-630011 host does not exist
	  To start a cluster, run: "minikube start -p download-only-630011"

                                                
                                                
-- /stdout --
aaa_download_only_test.go:185: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.20.0/LogsDuration (0.06s)
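Note that this counts as a pass despite the non-zero exit: on a download-only profile no control-plane node was ever created, so "minikube logs" is expected to fail (see the stdout tail above). Reproducing by hand while the profile still exists (illustrative sketch):

	out/minikube-linux-amd64 logs -p download-only-630011; echo "exit status: $?"
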

                                                
                                    
TestDownloadOnly/v1.20.0/DeleteAll (0.21s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.20.0/DeleteAll
aaa_download_only_test.go:197: (dbg) Run:  out/minikube-linux-amd64 delete --all
--- PASS: TestDownloadOnly/v1.20.0/DeleteAll (0.21s)

                                                
                                    
TestDownloadOnly/v1.20.0/DeleteAlwaysSucceeds (0.13s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.20.0/DeleteAlwaysSucceeds
aaa_download_only_test.go:208: (dbg) Run:  out/minikube-linux-amd64 delete -p download-only-630011
--- PASS: TestDownloadOnly/v1.20.0/DeleteAlwaysSucceeds (0.13s)

                                                
                                    
TestDownloadOnly/v1.31.1/json-events (4.93s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.31.1/json-events
aaa_download_only_test.go:81: (dbg) Run:  out/minikube-linux-amd64 start -o=json --download-only -p download-only-281768 --force --alsologtostderr --kubernetes-version=v1.31.1 --container-runtime=docker --driver=docker  --container-runtime=docker
aaa_download_only_test.go:81: (dbg) Done: out/minikube-linux-amd64 start -o=json --download-only -p download-only-281768 --force --alsologtostderr --kubernetes-version=v1.31.1 --container-runtime=docker --driver=docker  --container-runtime=docker: (4.92542395s)
--- PASS: TestDownloadOnly/v1.31.1/json-events (4.93s)

                                                
                                    
TestDownloadOnly/v1.31.1/preload-exists (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.31.1/preload-exists
I0927 00:14:40.327127  540034 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime docker
I0927 00:14:40.327173  540034 preload.go:146] Found local preload: /home/jenkins/minikube-integration/19711-533157/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-docker-overlay2-amd64.tar.lz4
--- PASS: TestDownloadOnly/v1.31.1/preload-exists (0.00s)

                                                
                                    
TestDownloadOnly/v1.31.1/LogsDuration (0.07s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.31.1/LogsDuration
aaa_download_only_test.go:184: (dbg) Run:  out/minikube-linux-amd64 logs -p download-only-281768
aaa_download_only_test.go:184: (dbg) Non-zero exit: out/minikube-linux-amd64 logs -p download-only-281768: exit status 85 (67.254396ms)

                                                
                                                
-- stdout --
	
	==> Audit <==
	|---------|--------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| Command |              Args              |       Profile        |  User   | Version |     Start Time      |      End Time       |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| start   | -o=json --download-only        | download-only-630011 | jenkins | v1.34.0 | 27 Sep 24 00:14 UTC |                     |
	|         | -p download-only-630011        |                      |         |         |                     |                     |
	|         | --force --alsologtostderr      |                      |         |         |                     |                     |
	|         | --kubernetes-version=v1.20.0   |                      |         |         |                     |                     |
	|         | --container-runtime=docker     |                      |         |         |                     |                     |
	|         | --driver=docker                |                      |         |         |                     |                     |
	|         | --container-runtime=docker     |                      |         |         |                     |                     |
	| delete  | --all                          | minikube             | jenkins | v1.34.0 | 27 Sep 24 00:14 UTC | 27 Sep 24 00:14 UTC |
	| delete  | -p download-only-630011        | download-only-630011 | jenkins | v1.34.0 | 27 Sep 24 00:14 UTC | 27 Sep 24 00:14 UTC |
	| start   | -o=json --download-only        | download-only-281768 | jenkins | v1.34.0 | 27 Sep 24 00:14 UTC |                     |
	|         | -p download-only-281768        |                      |         |         |                     |                     |
	|         | --force --alsologtostderr      |                      |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.1   |                      |         |         |                     |                     |
	|         | --container-runtime=docker     |                      |         |         |                     |                     |
	|         | --driver=docker                |                      |         |         |                     |                     |
	|         | --container-runtime=docker     |                      |         |         |                     |                     |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/09/27 00:14:35
	Running on machine: ubuntu-20-agent-14
	Binary: Built with gc go1.23.0 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0927 00:14:35.442663  540391 out.go:345] Setting OutFile to fd 1 ...
	I0927 00:14:35.442936  540391 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0927 00:14:35.442945  540391 out.go:358] Setting ErrFile to fd 2...
	I0927 00:14:35.442950  540391 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0927 00:14:35.443138  540391 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19711-533157/.minikube/bin
	I0927 00:14:35.443753  540391 out.go:352] Setting JSON to true
	I0927 00:14:35.444676  540391 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-14","uptime":7018,"bootTime":1727389057,"procs":170,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1069-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0927 00:14:35.444782  540391 start.go:139] virtualization: kvm guest
	I0927 00:14:35.446965  540391 out.go:97] [download-only-281768] minikube v1.34.0 on Ubuntu 20.04 (kvm/amd64)
	I0927 00:14:35.447164  540391 notify.go:220] Checking for updates...
	I0927 00:14:35.448791  540391 out.go:169] MINIKUBE_LOCATION=19711
	I0927 00:14:35.450939  540391 out.go:169] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0927 00:14:35.452410  540391 out.go:169] KUBECONFIG=/home/jenkins/minikube-integration/19711-533157/kubeconfig
	I0927 00:14:35.453933  540391 out.go:169] MINIKUBE_HOME=/home/jenkins/minikube-integration/19711-533157/.minikube
	I0927 00:14:35.455038  540391 out.go:169] MINIKUBE_BIN=out/minikube-linux-amd64
	W0927 00:14:35.457343  540391 out.go:321] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I0927 00:14:35.457557  540391 driver.go:394] Setting default libvirt URI to qemu:///system
	I0927 00:14:35.479609  540391 docker.go:123] docker version: linux-27.3.1:Docker Engine - Community
	I0927 00:14:35.479712  540391 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0927 00:14:35.525948  540391 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:26 OomKillDisable:true NGoroutines:46 SystemTime:2024-09-27 00:14:35.516932131 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1069-gcp OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33647931392 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-14 Labels:[] ExperimentalBuild:false ServerVersion:27.3.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:7f7fdf5fed64eb6a7caf99b3e12efcf9d60e311c Expected:7f7fdf5fed64eb6a7caf99b3e12efcf9d60e311c} RuncCommit:{ID:v1.1.14-0-g2c9f560 Expected:v1.1.14-0-g2c9f560} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:[WARNING: bridge-nf-call-iptables is disabled WARNING: bridge-nf-call-ip6tables is disabled] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.17.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.29.7] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I0927 00:14:35.526083  540391 docker.go:318] overlay module found
	I0927 00:14:35.527953  540391 out.go:97] Using the docker driver based on user configuration
	I0927 00:14:35.527987  540391 start.go:297] selected driver: docker
	I0927 00:14:35.527996  540391 start.go:901] validating driver "docker" against <nil>
	I0927 00:14:35.528087  540391 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0927 00:14:35.571579  540391 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:26 OomKillDisable:true NGoroutines:46 SystemTime:2024-09-27 00:14:35.562545812 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1069-gcp OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33647931392 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-14 Labels:[] ExperimentalBuild:false ServerVersion:27.3.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:7f7fdf5fed64eb6a7caf99b3e12efcf9d60e311c Expected:7f7fdf5fed64eb6a7caf99b3e12efcf9d60e311c} RuncCommit:{ID:v1.1.14-0-g2c9f560 Expected:v1.1.14-0-g2c9f560} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:[WARNING: bridge-nf-call-iptables is disabled WARNING: bridge-nf-call-ip6tables is disabled] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.17.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.29.7] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I0927 00:14:35.571795  540391 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0927 00:14:35.572330  540391 start_flags.go:393] Using suggested 8000MB memory alloc based on sys=32089MB, container=32089MB
	I0927 00:14:35.572522  540391 start_flags.go:929] Wait components to verify : map[apiserver:true system_pods:true]
	I0927 00:14:35.574201  540391 out.go:169] Using Docker driver with root privileges
	
	
	* The control-plane node download-only-281768 host does not exist
	  To start a cluster, run: "minikube start -p download-only-281768"

-- /stdout --
aaa_download_only_test.go:185: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.31.1/LogsDuration (0.07s)
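The "start_flags.go:393] Using suggested 8000MB memory alloc based on sys=32089MB, container=32089MB" line above is minikube picking a default --memory from what the host (and any container limit) can spare. The following is only a toy Go illustration of that kind of heuristic; the one-quarter fraction and the 2048/8000MB bounds are assumptions chosen to reproduce this log line, not minikube's actual constants from start_flags.go.

// Toy illustration of a memory-suggestion heuristic like the one behind the
// "Using suggested 8000MB memory alloc" log line. Fraction and bounds are
// assumptions for illustration only.
package main

import "fmt"

func suggestMemoryMB(sysMB, containerMB int) int {
	avail := sysMB
	if containerMB > 0 && containerMB < avail {
		avail = containerMB // a container limit caps what can be used
	}
	suggested := avail / 4 // assume: offer roughly a quarter of available memory
	const floorMB, ceilMB = 2048, 8000
	if suggested < floorMB {
		suggested = floorMB
	}
	if suggested > ceilMB {
		suggested = ceilMB
	}
	return suggested
}

func main() {
	fmt.Println(suggestMemoryMB(32089, 32089)) // prints 8000, matching the log
}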

TestDownloadOnly/v1.31.1/DeleteAll (0.21s)

=== RUN   TestDownloadOnly/v1.31.1/DeleteAll
aaa_download_only_test.go:197: (dbg) Run:  out/minikube-linux-amd64 delete --all
--- PASS: TestDownloadOnly/v1.31.1/DeleteAll (0.21s)

TestDownloadOnly/v1.31.1/DeleteAlwaysSucceeds (0.14s)

=== RUN   TestDownloadOnly/v1.31.1/DeleteAlwaysSucceeds
aaa_download_only_test.go:208: (dbg) Run:  out/minikube-linux-amd64 delete -p download-only-281768
--- PASS: TestDownloadOnly/v1.31.1/DeleteAlwaysSucceeds (0.14s)

TestDownloadOnlyKic (1s)

=== RUN   TestDownloadOnlyKic
aaa_download_only_test.go:232: (dbg) Run:  out/minikube-linux-amd64 start --download-only -p download-docker-851602 --alsologtostderr --driver=docker  --container-runtime=docker
helpers_test.go:175: Cleaning up "download-docker-851602" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p download-docker-851602
--- PASS: TestDownloadOnlyKic (1.00s)

TestBinaryMirror (0.76s)

=== RUN   TestBinaryMirror
I0927 00:14:42.018998  540034 binary.go:74] Not caching binary, using https://dl.k8s.io/release/v1.31.1/bin/linux/amd64/kubectl?checksum=file:https://dl.k8s.io/release/v1.31.1/bin/linux/amd64/kubectl.sha256
aaa_download_only_test.go:314: (dbg) Run:  out/minikube-linux-amd64 start --download-only -p binary-mirror-456680 --alsologtostderr --binary-mirror http://127.0.0.1:45515 --driver=docker  --container-runtime=docker
helpers_test.go:175: Cleaning up "binary-mirror-456680" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p binary-mirror-456680
--- PASS: TestBinaryMirror (0.76s)
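The binary.go:74 line above hands the kubectl download to a go-getter-style URL: the "?checksum=file:<url>" query string tells hashicorp/go-getter (which minikube's downloader wraps; its client struct is dumped verbatim in the TestKVMDriverInstallOrUpdate entry further down) to fetch the published .sha256 file and verify the binary against it. A minimal sketch, assuming go-getter v1 and an arbitrary destination path:

// Download a file and verify it against a published .sha256, the way the
// "Not caching binary" log line above does. The "?checksum=file:<url>" query
// is go-getter's convention; /tmp/kubectl is an arbitrary destination.
package main

import (
	"log"

	getter "github.com/hashicorp/go-getter"
)

func main() {
	src := "https://dl.k8s.io/release/v1.31.1/bin/linux/amd64/kubectl" +
		"?checksum=file:https://dl.k8s.io/release/v1.31.1/bin/linux/amd64/kubectl.sha256"
	if err := getter.GetFile("/tmp/kubectl", src); err != nil {
		log.Fatal(err) // a checksum mismatch or HTTP error surfaces here
	}
}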

TestOffline (79.59s)

=== RUN   TestOffline
=== PAUSE TestOffline

=== CONT  TestOffline
aab_offline_test.go:55: (dbg) Run:  out/minikube-linux-amd64 start -p offline-docker-274624 --alsologtostderr -v=1 --memory=2048 --wait=true --driver=docker  --container-runtime=docker
aab_offline_test.go:55: (dbg) Done: out/minikube-linux-amd64 start -p offline-docker-274624 --alsologtostderr -v=1 --memory=2048 --wait=true --driver=docker  --container-runtime=docker: (1m17.339211723s)
helpers_test.go:175: Cleaning up "offline-docker-274624" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p offline-docker-274624
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p offline-docker-274624: (2.249756033s)
--- PASS: TestOffline (79.59s)

TestAddons/PreSetup/EnablingAddonOnNonExistingCluster (0.06s)

=== RUN   TestAddons/PreSetup/EnablingAddonOnNonExistingCluster
=== PAUSE TestAddons/PreSetup/EnablingAddonOnNonExistingCluster

=== CONT  TestAddons/PreSetup/EnablingAddonOnNonExistingCluster
addons_test.go:975: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p addons-305811
addons_test.go:975: (dbg) Non-zero exit: out/minikube-linux-amd64 addons enable dashboard -p addons-305811: exit status 85 (55.520354ms)

-- stdout --
	* Profile "addons-305811" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p addons-305811"

-- /stdout --
--- PASS: TestAddons/PreSetup/EnablingAddonOnNonExistingCluster (0.06s)

TestAddons/PreSetup/DisablingAddonOnNonExistingCluster (0.06s)

=== RUN   TestAddons/PreSetup/DisablingAddonOnNonExistingCluster
=== PAUSE TestAddons/PreSetup/DisablingAddonOnNonExistingCluster

=== CONT  TestAddons/PreSetup/DisablingAddonOnNonExistingCluster
addons_test.go:986: (dbg) Run:  out/minikube-linux-amd64 addons disable dashboard -p addons-305811
addons_test.go:986: (dbg) Non-zero exit: out/minikube-linux-amd64 addons disable dashboard -p addons-305811: exit status 85 (55.482639ms)

-- stdout --
	* Profile "addons-305811" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p addons-305811"

-- /stdout --
--- PASS: TestAddons/PreSetup/DisablingAddonOnNonExistingCluster (0.06s)

TestAddons/Setup (208.49s)

=== RUN   TestAddons/Setup
addons_test.go:107: (dbg) Run:  out/minikube-linux-amd64 start -p addons-305811 --wait=true --memory=4000 --alsologtostderr --addons=registry --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=storage-provisioner-rancher --addons=nvidia-device-plugin --addons=yakd --addons=volcano --driver=docker  --container-runtime=docker --addons=ingress --addons=ingress-dns
addons_test.go:107: (dbg) Done: out/minikube-linux-amd64 start -p addons-305811 --wait=true --memory=4000 --alsologtostderr --addons=registry --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=storage-provisioner-rancher --addons=nvidia-device-plugin --addons=yakd --addons=volcano --driver=docker  --container-runtime=docker --addons=ingress --addons=ingress-dns: (3m28.492898655s)
--- PASS: TestAddons/Setup (208.49s)

TestAddons/serial/Volcano (38.4s)

=== RUN   TestAddons/serial/Volcano
addons_test.go:851: volcano-controller stabilized in 11.714409ms
addons_test.go:835: volcano-scheduler stabilized in 11.838224ms
addons_test.go:843: volcano-admission stabilized in 11.898747ms
addons_test.go:857: (dbg) TestAddons/serial/Volcano: waiting 6m0s for pods matching "app=volcano-scheduler" in namespace "volcano-system" ...
helpers_test.go:344: "volcano-scheduler-6c9778cbdf-6vwx7" [401c3914-9ea9-4004-bc5f-9d1817a12658] Running
addons_test.go:857: (dbg) TestAddons/serial/Volcano: app=volcano-scheduler healthy within 5.003398252s
addons_test.go:861: (dbg) TestAddons/serial/Volcano: waiting 6m0s for pods matching "app=volcano-admission" in namespace "volcano-system" ...
helpers_test.go:344: "volcano-admission-5874dfdd79-cx24m" [2d782873-e3c8-4010-bb4c-afcc89dfc201] Running
addons_test.go:861: (dbg) TestAddons/serial/Volcano: app=volcano-admission healthy within 5.003552436s
addons_test.go:865: (dbg) TestAddons/serial/Volcano: waiting 6m0s for pods matching "app=volcano-controller" in namespace "volcano-system" ...
helpers_test.go:344: "volcano-controllers-789ffc5785-d7zpf" [589967f5-ec6d-4aa8-a963-fd6b998f0cbb] Running
addons_test.go:865: (dbg) TestAddons/serial/Volcano: app=volcano-controller healthy within 5.003613731s
addons_test.go:870: (dbg) Run:  kubectl --context addons-305811 delete -n volcano-system job volcano-admission-init
addons_test.go:876: (dbg) Run:  kubectl --context addons-305811 create -f testdata/vcjob.yaml
addons_test.go:884: (dbg) Run:  kubectl --context addons-305811 get vcjob -n my-volcano
addons_test.go:902: (dbg) TestAddons/serial/Volcano: waiting 3m0s for pods matching "volcano.sh/job-name=test-job" in namespace "my-volcano" ...
helpers_test.go:344: "test-job-nginx-0" [e2f50d77-32f0-4a61-9796-8b33ecd2a2a0] Pending
helpers_test.go:344: "test-job-nginx-0" [e2f50d77-32f0-4a61-9796-8b33ecd2a2a0] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
helpers_test.go:344: "test-job-nginx-0" [e2f50d77-32f0-4a61-9796-8b33ecd2a2a0] Running
addons_test.go:902: (dbg) TestAddons/serial/Volcano: volcano.sh/job-name=test-job healthy within 12.003653554s
addons_test.go:906: (dbg) Run:  out/minikube-linux-amd64 -p addons-305811 addons disable volcano --alsologtostderr -v=1
addons_test.go:906: (dbg) Done: out/minikube-linux-amd64 -p addons-305811 addons disable volcano --alsologtostderr -v=1: (11.073309204s)
--- PASS: TestAddons/serial/Volcano (38.40s)

TestAddons/serial/GCPAuth/Namespaces (0.12s)

=== RUN   TestAddons/serial/GCPAuth/Namespaces
addons_test.go:594: (dbg) Run:  kubectl --context addons-305811 create ns new-namespace
addons_test.go:608: (dbg) Run:  kubectl --context addons-305811 get secret gcp-auth -n new-namespace
--- PASS: TestAddons/serial/GCPAuth/Namespaces (0.12s)

TestAddons/parallel/Ingress (20.18s)

=== RUN   TestAddons/parallel/Ingress
=== PAUSE TestAddons/parallel/Ingress

=== CONT  TestAddons/parallel/Ingress
addons_test.go:205: (dbg) Run:  kubectl --context addons-305811 wait --for=condition=ready --namespace=ingress-nginx pod --selector=app.kubernetes.io/component=controller --timeout=90s
addons_test.go:230: (dbg) Run:  kubectl --context addons-305811 replace --force -f testdata/nginx-ingress-v1.yaml
addons_test.go:243: (dbg) Run:  kubectl --context addons-305811 replace --force -f testdata/nginx-pod-svc.yaml
addons_test.go:248: (dbg) TestAddons/parallel/Ingress: waiting 8m0s for pods matching "run=nginx" in namespace "default" ...
helpers_test.go:344: "nginx" [078b914e-e7de-4a18-9ee4-7e1bdc8f92f2] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
helpers_test.go:344: "nginx" [078b914e-e7de-4a18-9ee4-7e1bdc8f92f2] Running
addons_test.go:248: (dbg) TestAddons/parallel/Ingress: run=nginx healthy within 9.003192138s
I0927 00:27:07.461149  540034 kapi.go:150] Service nginx in namespace default found.
addons_test.go:260: (dbg) Run:  out/minikube-linux-amd64 -p addons-305811 ssh "curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'"
addons_test.go:284: (dbg) Run:  kubectl --context addons-305811 replace --force -f testdata/ingress-dns-example-v1.yaml
addons_test.go:289: (dbg) Run:  out/minikube-linux-amd64 -p addons-305811 ip
addons_test.go:295: (dbg) Run:  nslookup hello-john.test 192.168.49.2
addons_test.go:304: (dbg) Run:  out/minikube-linux-amd64 -p addons-305811 addons disable ingress-dns --alsologtostderr -v=1
addons_test.go:304: (dbg) Done: out/minikube-linux-amd64 -p addons-305811 addons disable ingress-dns --alsologtostderr -v=1: (1.464927444s)
addons_test.go:309: (dbg) Run:  out/minikube-linux-amd64 -p addons-305811 addons disable ingress --alsologtostderr -v=1
addons_test.go:309: (dbg) Done: out/minikube-linux-amd64 -p addons-305811 addons disable ingress --alsologtostderr -v=1: (8.169800505s)
--- PASS: TestAddons/parallel/Ingress (20.18s)
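The "nslookup hello-john.test 192.168.49.2" step at addons_test.go:295 resolves an Ingress hostname against the ingress-dns addon, which serves DNS on the minikube node IP. The same probe in Go, pointing a custom resolver at that address (the IP comes from the log; DNS on port 53 is an assumption):

// Query the ingress-dns addon directly, as `nslookup hello-john.test
// 192.168.49.2` does above. Standard library only.
package main

import (
	"context"
	"fmt"
	"log"
	"net"
)

func main() {
	r := &net.Resolver{
		PreferGo: true,
		Dial: func(ctx context.Context, network, _ string) (net.Conn, error) {
			var d net.Dialer
			// Ignore the default resolver address; ask the minikube node instead.
			return d.DialContext(ctx, network, "192.168.49.2:53")
		},
	}
	addrs, err := r.LookupHost(context.Background(), "hello-john.test")
	if err != nil {
		log.Fatal(err)
	}
	fmt.Println(addrs) // expected to resolve to the cluster's ingress IP
}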

TestAddons/parallel/InspektorGadget (10.73s)

=== RUN   TestAddons/parallel/InspektorGadget
=== PAUSE TestAddons/parallel/InspektorGadget

=== CONT  TestAddons/parallel/InspektorGadget
addons_test.go:786: (dbg) TestAddons/parallel/InspektorGadget: waiting 8m0s for pods matching "k8s-app=gadget" in namespace "gadget" ...
helpers_test.go:344: "gadget-bfr2s" [eb00d29b-36b2-4faa-8854-37964de36b22] Running / Ready:ContainersNotReady (containers with unready status: [gadget]) / ContainersReady:ContainersNotReady (containers with unready status: [gadget])
addons_test.go:786: (dbg) TestAddons/parallel/InspektorGadget: k8s-app=gadget healthy within 5.003969662s
addons_test.go:789: (dbg) Run:  out/minikube-linux-amd64 addons disable inspektor-gadget -p addons-305811
addons_test.go:789: (dbg) Done: out/minikube-linux-amd64 addons disable inspektor-gadget -p addons-305811: (5.726518724s)
--- PASS: TestAddons/parallel/InspektorGadget (10.73s)

TestAddons/parallel/MetricsServer (6.02s)

=== RUN   TestAddons/parallel/MetricsServer
=== PAUSE TestAddons/parallel/MetricsServer

=== CONT  TestAddons/parallel/MetricsServer
addons_test.go:405: metrics-server stabilized in 2.154229ms
addons_test.go:407: (dbg) TestAddons/parallel/MetricsServer: waiting 6m0s for pods matching "k8s-app=metrics-server" in namespace "kube-system" ...
helpers_test.go:344: "metrics-server-84c5f94fbc-cczph" [1150d697-b11c-4f84-9466-d293436bf484] Running
addons_test.go:407: (dbg) TestAddons/parallel/MetricsServer: k8s-app=metrics-server healthy within 5.004759018s
addons_test.go:413: (dbg) Run:  kubectl --context addons-305811 top pods -n kube-system
addons_test.go:430: (dbg) Run:  out/minikube-linux-amd64 -p addons-305811 addons disable metrics-server --alsologtostderr -v=1
--- PASS: TestAddons/parallel/MetricsServer (6.02s)

TestAddons/parallel/CSI (40.05s)

=== RUN   TestAddons/parallel/CSI
=== PAUSE TestAddons/parallel/CSI

=== CONT  TestAddons/parallel/CSI
I0927 00:27:17.917708  540034 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=csi-hostpath-driver" in ns "kube-system" ...
I0927 00:27:17.922454  540034 kapi.go:86] Found 3 Pods for label selector kubernetes.io/minikube-addons=csi-hostpath-driver
I0927 00:27:17.922485  540034 kapi.go:107] duration metric: took 4.784914ms to wait for kubernetes.io/minikube-addons=csi-hostpath-driver ...
addons_test.go:505: csi-hostpath-driver pods stabilized in 4.796549ms
addons_test.go:508: (dbg) Run:  kubectl --context addons-305811 create -f testdata/csi-hostpath-driver/pvc.yaml
addons_test.go:513: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pvc "hpvc" in namespace "default" ...
helpers_test.go:394: (dbg) Run:  kubectl --context addons-305811 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-305811 get pvc hpvc -o jsonpath={.status.phase} -n default
addons_test.go:518: (dbg) Run:  kubectl --context addons-305811 create -f testdata/csi-hostpath-driver/pv-pod.yaml
addons_test.go:523: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pods matching "app=task-pv-pod" in namespace "default" ...
helpers_test.go:344: "task-pv-pod" [55ea9d56-d756-406e-8daa-969532dac861] Pending
helpers_test.go:344: "task-pv-pod" [55ea9d56-d756-406e-8daa-969532dac861] Running
addons_test.go:523: (dbg) TestAddons/parallel/CSI: app=task-pv-pod healthy within 7.004277917s
addons_test.go:528: (dbg) Run:  kubectl --context addons-305811 create -f testdata/csi-hostpath-driver/snapshot.yaml
addons_test.go:533: (dbg) TestAddons/parallel/CSI: waiting 6m0s for volume snapshot "new-snapshot-demo" in namespace "default" ...
helpers_test.go:419: (dbg) Run:  kubectl --context addons-305811 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
helpers_test.go:419: (dbg) Run:  kubectl --context addons-305811 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
addons_test.go:538: (dbg) Run:  kubectl --context addons-305811 delete pod task-pv-pod
addons_test.go:544: (dbg) Run:  kubectl --context addons-305811 delete pvc hpvc
addons_test.go:550: (dbg) Run:  kubectl --context addons-305811 create -f testdata/csi-hostpath-driver/pvc-restore.yaml
addons_test.go:555: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pvc "hpvc-restore" in namespace "default" ...
helpers_test.go:394: (dbg) Run:  kubectl --context addons-305811 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-305811 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-305811 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-305811 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-305811 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-305811 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-305811 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-305811 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-305811 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-305811 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-305811 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-305811 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-305811 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-305811 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
addons_test.go:560: (dbg) Run:  kubectl --context addons-305811 create -f testdata/csi-hostpath-driver/pv-pod-restore.yaml
addons_test.go:565: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pods matching "app=task-pv-pod-restore" in namespace "default" ...
helpers_test.go:344: "task-pv-pod-restore" [acdb6106-788f-4fbf-ac56-42a84553dfe4] Pending
helpers_test.go:344: "task-pv-pod-restore" [acdb6106-788f-4fbf-ac56-42a84553dfe4] Pending / Ready:ContainersNotReady (containers with unready status: [task-pv-container]) / ContainersReady:ContainersNotReady (containers with unready status: [task-pv-container])
helpers_test.go:344: "task-pv-pod-restore" [acdb6106-788f-4fbf-ac56-42a84553dfe4] Running
addons_test.go:565: (dbg) TestAddons/parallel/CSI: app=task-pv-pod-restore healthy within 8.003821637s
addons_test.go:570: (dbg) Run:  kubectl --context addons-305811 delete pod task-pv-pod-restore
addons_test.go:574: (dbg) Run:  kubectl --context addons-305811 delete pvc hpvc-restore
addons_test.go:578: (dbg) Run:  kubectl --context addons-305811 delete volumesnapshot new-snapshot-demo
addons_test.go:582: (dbg) Run:  out/minikube-linux-amd64 -p addons-305811 addons disable csi-hostpath-driver --alsologtostderr -v=1
addons_test.go:582: (dbg) Done: out/minikube-linux-amd64 -p addons-305811 addons disable csi-hostpath-driver --alsologtostderr -v=1: (6.452767708s)
addons_test.go:586: (dbg) Run:  out/minikube-linux-amd64 -p addons-305811 addons disable volumesnapshots --alsologtostderr -v=1
--- PASS: TestAddons/parallel/CSI (40.05s)
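The run of identical helpers_test.go:394 lines above is a poll loop: the helper re-reads the claim's {.status.phase} until the snapshot-restored "hpvc-restore" PVC reports Bound, which is why that claim takes many more attempts than "hpvc". A minimal stand-in for the loop, with the context and claim names taken from the log and an assumed 2s interval and 6m timeout:

// Poll a PVC's phase until it is Bound, like the repeated helpers_test.go:394
// kubectl calls above. Interval and timeout are assumptions.
package main

import (
	"fmt"
	"os/exec"
	"strings"
	"time"
)

func waitPVCBound(kctx, ns, name string, timeout time.Duration) error {
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		out, err := exec.Command("kubectl", "--context", kctx, "get", "pvc", name,
			"-n", ns, "-o", "jsonpath={.status.phase}").Output()
		if err == nil && strings.TrimSpace(string(out)) == "Bound" {
			return nil
		}
		time.Sleep(2 * time.Second)
	}
	return fmt.Errorf("pvc %s/%s not Bound after %s", ns, name, timeout)
}

func main() {
	if err := waitPVCBound("addons-305811", "default", "hpvc-restore", 6*time.Minute); err != nil {
		fmt.Println(err)
	}
}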

TestAddons/parallel/Headlamp (10.87s)

=== RUN   TestAddons/parallel/Headlamp
=== PAUSE TestAddons/parallel/Headlamp

=== CONT  TestAddons/parallel/Headlamp
addons_test.go:768: (dbg) Run:  out/minikube-linux-amd64 addons enable headlamp -p addons-305811 --alsologtostderr -v=1
addons_test.go:773: (dbg) TestAddons/parallel/Headlamp: waiting 8m0s for pods matching "app.kubernetes.io/name=headlamp" in namespace "headlamp" ...
helpers_test.go:344: "headlamp-7b5c95b59d-s946n" [15e297d7-4bbd-4f40-8fa9-44a9cfcd6e23] Pending / Ready:ContainersNotReady (containers with unready status: [headlamp]) / ContainersReady:ContainersNotReady (containers with unready status: [headlamp])
helpers_test.go:344: "headlamp-7b5c95b59d-s946n" [15e297d7-4bbd-4f40-8fa9-44a9cfcd6e23] Running
addons_test.go:773: (dbg) TestAddons/parallel/Headlamp: app.kubernetes.io/name=headlamp healthy within 10.003868306s
addons_test.go:777: (dbg) Run:  out/minikube-linux-amd64 -p addons-305811 addons disable headlamp --alsologtostderr -v=1
--- PASS: TestAddons/parallel/Headlamp (10.87s)

TestAddons/parallel/CloudSpanner (5.5s)

=== RUN   TestAddons/parallel/CloudSpanner
=== PAUSE TestAddons/parallel/CloudSpanner

=== CONT  TestAddons/parallel/CloudSpanner
addons_test.go:805: (dbg) TestAddons/parallel/CloudSpanner: waiting 6m0s for pods matching "app=cloud-spanner-emulator" in namespace "default" ...
helpers_test.go:344: "cloud-spanner-emulator-5b584cc74-kqlc4" [d79684da-2ecf-47ec-b2cd-214f65729541] Running
addons_test.go:805: (dbg) TestAddons/parallel/CloudSpanner: app=cloud-spanner-emulator healthy within 5.003601773s
addons_test.go:808: (dbg) Run:  out/minikube-linux-amd64 addons disable cloud-spanner -p addons-305811
--- PASS: TestAddons/parallel/CloudSpanner (5.50s)

TestAddons/parallel/LocalPath (50.93s)

=== RUN   TestAddons/parallel/LocalPath
=== PAUSE TestAddons/parallel/LocalPath

=== CONT  TestAddons/parallel/LocalPath
addons_test.go:920: (dbg) Run:  kubectl --context addons-305811 apply -f testdata/storage-provisioner-rancher/pvc.yaml
addons_test.go:926: (dbg) Run:  kubectl --context addons-305811 apply -f testdata/storage-provisioner-rancher/pod.yaml
addons_test.go:930: (dbg) TestAddons/parallel/LocalPath: waiting 5m0s for pvc "test-pvc" in namespace "default" ...
helpers_test.go:394: (dbg) Run:  kubectl --context addons-305811 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-305811 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-305811 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-305811 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-305811 get pvc test-pvc -o jsonpath={.status.phase} -n default
addons_test.go:933: (dbg) TestAddons/parallel/LocalPath: waiting 3m0s for pods matching "run=test-local-path" in namespace "default" ...
helpers_test.go:344: "test-local-path" [c3504e2b-72b5-43c1-a521-fb6ac12fb5b5] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:344: "test-local-path" [c3504e2b-72b5-43c1-a521-fb6ac12fb5b5] Pending / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
helpers_test.go:344: "test-local-path" [c3504e2b-72b5-43c1-a521-fb6ac12fb5b5] Succeeded / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
addons_test.go:933: (dbg) TestAddons/parallel/LocalPath: run=test-local-path healthy within 3.003712348s
addons_test.go:938: (dbg) Run:  kubectl --context addons-305811 get pvc test-pvc -o=json
addons_test.go:947: (dbg) Run:  out/minikube-linux-amd64 -p addons-305811 ssh "cat /opt/local-path-provisioner/pvc-5e00d761-eca2-4989-8669-0ec284d57222_default_test-pvc/file1"
addons_test.go:959: (dbg) Run:  kubectl --context addons-305811 delete pod test-local-path
addons_test.go:963: (dbg) Run:  kubectl --context addons-305811 delete pvc test-pvc
addons_test.go:967: (dbg) Run:  out/minikube-linux-amd64 -p addons-305811 addons disable storage-provisioner-rancher --alsologtostderr -v=1
addons_test.go:967: (dbg) Done: out/minikube-linux-amd64 -p addons-305811 addons disable storage-provisioner-rancher --alsologtostderr -v=1: (43.092534881s)
--- PASS: TestAddons/parallel/LocalPath (50.93s)

TestAddons/parallel/NvidiaDevicePlugin (6.39s)

=== RUN   TestAddons/parallel/NvidiaDevicePlugin
=== PAUSE TestAddons/parallel/NvidiaDevicePlugin

=== CONT  TestAddons/parallel/NvidiaDevicePlugin
addons_test.go:999: (dbg) TestAddons/parallel/NvidiaDevicePlugin: waiting 6m0s for pods matching "name=nvidia-device-plugin-ds" in namespace "kube-system" ...
helpers_test.go:344: "nvidia-device-plugin-daemonset-ffk56" [b40c20e3-9fd2-41f6-9116-036b5138f4d1] Running
addons_test.go:999: (dbg) TestAddons/parallel/NvidiaDevicePlugin: name=nvidia-device-plugin-ds healthy within 6.003784345s
addons_test.go:1002: (dbg) Run:  out/minikube-linux-amd64 addons disable nvidia-device-plugin -p addons-305811
--- PASS: TestAddons/parallel/NvidiaDevicePlugin (6.39s)

TestAddons/parallel/Yakd (10.83s)

=== RUN   TestAddons/parallel/Yakd
=== PAUSE TestAddons/parallel/Yakd

=== CONT  TestAddons/parallel/Yakd
addons_test.go:1010: (dbg) TestAddons/parallel/Yakd: waiting 2m0s for pods matching "app.kubernetes.io/name=yakd-dashboard" in namespace "yakd-dashboard" ...
helpers_test.go:344: "yakd-dashboard-67d98fc6b-xf87p" [4e05e4b5-a4c9-443d-90fc-d06c3dbe559f] Running
addons_test.go:1010: (dbg) TestAddons/parallel/Yakd: app.kubernetes.io/name=yakd-dashboard healthy within 5.00430745s
addons_test.go:1014: (dbg) Run:  out/minikube-linux-amd64 -p addons-305811 addons disable yakd --alsologtostderr -v=1
addons_test.go:1014: (dbg) Done: out/minikube-linux-amd64 -p addons-305811 addons disable yakd --alsologtostderr -v=1: (5.820493116s)
--- PASS: TestAddons/parallel/Yakd (10.83s)

TestAddons/StoppedEnableDisable (5.89s)

=== RUN   TestAddons/StoppedEnableDisable
addons_test.go:170: (dbg) Run:  out/minikube-linux-amd64 stop -p addons-305811
addons_test.go:170: (dbg) Done: out/minikube-linux-amd64 stop -p addons-305811: (5.640188858s)
addons_test.go:174: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p addons-305811
addons_test.go:178: (dbg) Run:  out/minikube-linux-amd64 addons disable dashboard -p addons-305811
addons_test.go:183: (dbg) Run:  out/minikube-linux-amd64 addons disable gvisor -p addons-305811
--- PASS: TestAddons/StoppedEnableDisable (5.89s)

TestCertOptions (26.97s)

=== RUN   TestCertOptions
=== PAUSE TestCertOptions

=== CONT  TestCertOptions
cert_options_test.go:49: (dbg) Run:  out/minikube-linux-amd64 start -p cert-options-078588 --memory=2048 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=docker  --container-runtime=docker
E0927 01:00:45.853825  540034 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19711-533157/.minikube/profiles/functional-860089/client.crt: no such file or directory" logger="UnhandledError"
cert_options_test.go:49: (dbg) Done: out/minikube-linux-amd64 start -p cert-options-078588 --memory=2048 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=docker  --container-runtime=docker: (24.388487891s)
cert_options_test.go:60: (dbg) Run:  out/minikube-linux-amd64 -p cert-options-078588 ssh "openssl x509 -text -noout -in /var/lib/minikube/certs/apiserver.crt"
cert_options_test.go:88: (dbg) Run:  kubectl --context cert-options-078588 config view
cert_options_test.go:100: (dbg) Run:  out/minikube-linux-amd64 ssh -p cert-options-078588 -- "sudo cat /etc/kubernetes/admin.conf"
helpers_test.go:175: Cleaning up "cert-options-078588" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p cert-options-078588
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p cert-options-078588: (2.010289917s)
--- PASS: TestCertOptions (26.97s)
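cert_options_test.go:49 starts the cluster with extra --apiserver-ips and --apiserver-names, and the openssl step then dumps /var/lib/minikube/certs/apiserver.crt so the test can assert those values landed in the certificate's subject alternative names. The same assertion written directly against the PEM with Go's standard library (the local file name is assumed):

// Check that the extra SANs passed via --apiserver-ips / --apiserver-names
// ended up in the apiserver certificate, as the `openssl x509 -text` step
// above verifies. Assumes apiserver.crt has been copied locally.
package main

import (
	"crypto/x509"
	"encoding/pem"
	"fmt"
	"log"
	"os"
)

func main() {
	pemBytes, err := os.ReadFile("apiserver.crt")
	if err != nil {
		log.Fatal(err)
	}
	block, _ := pem.Decode(pemBytes)
	if block == nil {
		log.Fatal("no PEM block found")
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		log.Fatal(err)
	}
	fmt.Println(cert.DNSNames)    // should include localhost and www.google.com
	fmt.Println(cert.IPAddresses) // should include 127.0.0.1 and 192.168.15.15
}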

TestCertExpiration (230.07s)

=== RUN   TestCertExpiration
=== PAUSE TestCertExpiration

=== CONT  TestCertExpiration
cert_options_test.go:123: (dbg) Run:  out/minikube-linux-amd64 start -p cert-expiration-504299 --memory=2048 --cert-expiration=3m --driver=docker  --container-runtime=docker
cert_options_test.go:123: (dbg) Done: out/minikube-linux-amd64 start -p cert-expiration-504299 --memory=2048 --cert-expiration=3m --driver=docker  --container-runtime=docker: (27.984174953s)
cert_options_test.go:131: (dbg) Run:  out/minikube-linux-amd64 start -p cert-expiration-504299 --memory=2048 --cert-expiration=8760h --driver=docker  --container-runtime=docker
cert_options_test.go:131: (dbg) Done: out/minikube-linux-amd64 start -p cert-expiration-504299 --memory=2048 --cert-expiration=8760h --driver=docker  --container-runtime=docker: (19.875363433s)
helpers_test.go:175: Cleaning up "cert-expiration-504299" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p cert-expiration-504299
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p cert-expiration-504299: (2.206424641s)
--- PASS: TestCertExpiration (230.07s)

TestDockerFlags (28.86s)

=== RUN   TestDockerFlags
=== PAUSE TestDockerFlags

=== CONT  TestDockerFlags
docker_test.go:51: (dbg) Run:  out/minikube-linux-amd64 start -p docker-flags-032125 --cache-images=false --memory=2048 --install-addons=false --wait=false --docker-env=FOO=BAR --docker-env=BAZ=BAT --docker-opt=debug --docker-opt=icc=true --alsologtostderr -v=5 --driver=docker  --container-runtime=docker
docker_test.go:51: (dbg) Done: out/minikube-linux-amd64 start -p docker-flags-032125 --cache-images=false --memory=2048 --install-addons=false --wait=false --docker-env=FOO=BAR --docker-env=BAZ=BAT --docker-opt=debug --docker-opt=icc=true --alsologtostderr -v=5 --driver=docker  --container-runtime=docker: (26.134219772s)
docker_test.go:56: (dbg) Run:  out/minikube-linux-amd64 -p docker-flags-032125 ssh "sudo systemctl show docker --property=Environment --no-pager"
docker_test.go:67: (dbg) Run:  out/minikube-linux-amd64 -p docker-flags-032125 ssh "sudo systemctl show docker --property=ExecStart --no-pager"
helpers_test.go:175: Cleaning up "docker-flags-032125" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p docker-flags-032125
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p docker-flags-032125: (2.078199737s)
--- PASS: TestDockerFlags (28.86s)

TestForceSystemdFlag (28.05s)

=== RUN   TestForceSystemdFlag
=== PAUSE TestForceSystemdFlag

=== CONT  TestForceSystemdFlag
docker_test.go:91: (dbg) Run:  out/minikube-linux-amd64 start -p force-systemd-flag-200306 --memory=2048 --force-systemd --alsologtostderr -v=5 --driver=docker  --container-runtime=docker
docker_test.go:91: (dbg) Done: out/minikube-linux-amd64 start -p force-systemd-flag-200306 --memory=2048 --force-systemd --alsologtostderr -v=5 --driver=docker  --container-runtime=docker: (25.53763593s)
docker_test.go:110: (dbg) Run:  out/minikube-linux-amd64 -p force-systemd-flag-200306 ssh "docker info --format {{.CgroupDriver}}"
helpers_test.go:175: Cleaning up "force-systemd-flag-200306" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p force-systemd-flag-200306
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p force-systemd-flag-200306: (2.209632725s)
--- PASS: TestForceSystemdFlag (28.05s)
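docker_test.go:110 verifies --force-systemd by asking Docker which cgroup driver it is actually using. The equivalent probe outside the harness, run against a local daemon rather than over "minikube ssh", expecting "systemd" when the flag took effect:

// Ask Docker for its cgroup driver, as docker_test.go:110 does via
// `docker info --format {{.CgroupDriver}}`. The real test runs this inside
// the minikube node over SSH; here it runs against the local daemon.
package main

import (
	"fmt"
	"log"
	"os/exec"
	"strings"
)

func main() {
	out, err := exec.Command("docker", "info", "--format", "{{.CgroupDriver}}").Output()
	if err != nil {
		log.Fatal(err)
	}
	driver := strings.TrimSpace(string(out))
	if driver != "systemd" {
		log.Fatalf("expected cgroup driver systemd, got %q", driver)
	}
	fmt.Println("cgroup driver:", driver)
}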

TestForceSystemdEnv (28.64s)

=== RUN   TestForceSystemdEnv
=== PAUSE TestForceSystemdEnv

=== CONT  TestForceSystemdEnv
docker_test.go:155: (dbg) Run:  out/minikube-linux-amd64 start -p force-systemd-env-826834 --memory=2048 --alsologtostderr -v=5 --driver=docker  --container-runtime=docker
docker_test.go:155: (dbg) Done: out/minikube-linux-amd64 start -p force-systemd-env-826834 --memory=2048 --alsologtostderr -v=5 --driver=docker  --container-runtime=docker: (26.174761941s)
docker_test.go:110: (dbg) Run:  out/minikube-linux-amd64 -p force-systemd-env-826834 ssh "docker info --format {{.CgroupDriver}}"
helpers_test.go:175: Cleaning up "force-systemd-env-826834" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p force-systemd-env-826834
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p force-systemd-env-826834: (2.151739452s)
--- PASS: TestForceSystemdEnv (28.64s)

TestKVMDriverInstallOrUpdate (1.21s)

=== RUN   TestKVMDriverInstallOrUpdate
=== PAUSE TestKVMDriverInstallOrUpdate

=== CONT  TestKVMDriverInstallOrUpdate
I0927 00:59:41.518378  540034 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
I0927 00:59:41.518517  540034 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/workspace/Docker_Linux_integration/testdata/kvm2-driver-without-version:/home/jenkins/workspace/Docker_Linux_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
W0927 00:59:41.556464  540034 install.go:62] docker-machine-driver-kvm2: exit status 1
W0927 00:59:41.556817  540034 out.go:174] [unset outFile]: * Downloading driver docker-machine-driver-kvm2:
I0927 00:59:41.556891  540034 download.go:107] Downloading: https://github.com/kubernetes/minikube/releases/download/v1.3.0/docker-machine-driver-kvm2-amd64?checksum=file:https://github.com/kubernetes/minikube/releases/download/v1.3.0/docker-machine-driver-kvm2-amd64.sha256 -> /tmp/TestKVMDriverInstallOrUpdate1083956233/001/docker-machine-driver-kvm2
	I0927 00:59:41.662763  540034 driver.go:46] failed to download arch specific driver: getter: &{Ctx:context.Background Src:https://github.com/kubernetes/minikube/releases/download/v1.3.0/docker-machine-driver-kvm2-amd64?checksum=file:https://github.com/kubernetes/minikube/releases/download/v1.3.0/docker-machine-driver-kvm2-amd64.sha256 Dst:/tmp/TestKVMDriverInstallOrUpdate1083956233/001/docker-machine-driver-kvm2.download Pwd: Mode:2 Umask:---------- Detectors:[0x466f640 0x466f640 0x466f640 0x466f640 0x466f640 0x466f640 0x466f640] Decompressors:map[bz2:0xc000791710 gz:0xc000791718 tar:0xc0007916c0 tar.bz2:0xc0007916d0 tar.gz:0xc0007916e0 tar.xz:0xc0007916f0 tar.zst:0xc000791700 tbz2:0xc0007916d0 tgz:0xc0007916e0 txz:0xc0007916f0 tzst:0xc000791700 xz:0xc000791720 zip:0xc000791730 zst:0xc000791728] Getters:map[file:0xc001b806f0 http:0xc000632be0 https:0xc000632c30] Dir:false ProgressListener:<nil> Insecure:false DisableSymlinks:false Options:[]}: invalid checksum: Error downloading checksum file: bad response code: 404. trying to get the common version
I0927 00:59:41.662823  540034 download.go:107] Downloading: https://github.com/kubernetes/minikube/releases/download/v1.3.0/docker-machine-driver-kvm2?checksum=file:https://github.com/kubernetes/minikube/releases/download/v1.3.0/docker-machine-driver-kvm2.sha256 -> /tmp/TestKVMDriverInstallOrUpdate1083956233/001/docker-machine-driver-kvm2
I0927 00:59:42.222854  540034 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
I0927 00:59:42.222944  540034 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/workspace/Docker_Linux_integration/testdata/kvm2-driver-older-version:/home/jenkins/workspace/Docker_Linux_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
I0927 00:59:42.253978  540034 install.go:137] /home/jenkins/workspace/Docker_Linux_integration/testdata/kvm2-driver-older-version/docker-machine-driver-kvm2 version is 1.1.1
W0927 00:59:42.254015  540034 install.go:62] docker-machine-driver-kvm2: docker-machine-driver-kvm2 is version 1.1.1, want 1.3.0
W0927 00:59:42.254081  540034 out.go:174] [unset outFile]: * Downloading driver docker-machine-driver-kvm2:
I0927 00:59:42.254110  540034 download.go:107] Downloading: https://github.com/kubernetes/minikube/releases/download/v1.3.0/docker-machine-driver-kvm2-amd64?checksum=file:https://github.com/kubernetes/minikube/releases/download/v1.3.0/docker-machine-driver-kvm2-amd64.sha256 -> /tmp/TestKVMDriverInstallOrUpdate1083956233/002/docker-machine-driver-kvm2
	I0927 00:59:42.277145  540034 driver.go:46] failed to download arch specific driver: getter: &{Ctx:context.Background Src:https://github.com/kubernetes/minikube/releases/download/v1.3.0/docker-machine-driver-kvm2-amd64?checksum=file:https://github.com/kubernetes/minikube/releases/download/v1.3.0/docker-machine-driver-kvm2-amd64.sha256 Dst:/tmp/TestKVMDriverInstallOrUpdate1083956233/002/docker-machine-driver-kvm2.download Pwd: Mode:2 Umask:---------- Detectors:[0x466f640 0x466f640 0x466f640 0x466f640 0x466f640 0x466f640 0x466f640] Decompressors:map[bz2:0xc000791710 gz:0xc000791718 tar:0xc0007916c0 tar.bz2:0xc0007916d0 tar.gz:0xc0007916e0 tar.xz:0xc0007916f0 tar.zst:0xc000791700 tbz2:0xc0007916d0 tgz:0xc0007916e0 txz:0xc0007916f0 tzst:0xc000791700 xz:0xc000791720 zip:0xc000791730 zst:0xc000791728] Getters:map[file:0xc001b81ae0 http:0xc00074abe0 https:0xc00074ac30] Dir:false ProgressListener:<nil> Insecure:false DisableSymlinks:false Options:[]}: invalid checksum: Error downloading checksum file: bad response code: 404. trying to get the common version
I0927 00:59:42.277189  540034 download.go:107] Downloading: https://github.com/kubernetes/minikube/releases/download/v1.3.0/docker-machine-driver-kvm2?checksum=file:https://github.com/kubernetes/minikube/releases/download/v1.3.0/docker-machine-driver-kvm2.sha256 -> /tmp/TestKVMDriverInstallOrUpdate1083956233/002/docker-machine-driver-kvm2
--- PASS: TestKVMDriverInstallOrUpdate (1.21s)
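The interleaved W/I lines above show the driver updater's fallback: it first requests the arch-suffixed release asset (docker-machine-driver-kvm2-amd64), and when the v1.3.0 checksum file comes back 404 (that release predates per-arch assets) it retries the unsuffixed common name. A sketch of that retry under the same "?checksum=file:" convention; the helper names here are made up:

// Sketch of the fallback visible in the log: try the arch-specific release
// asset first; on failure (here, a 404 on the .sha256 file) retry the
// unsuffixed "common" asset. URLs and version are from the log; uses
// hashicorp/go-getter, as the getter:&{...} dump above indicates.
package main

import (
	"fmt"
	"log"

	getter "github.com/hashicorp/go-getter"
)

func fetchVerified(url, dst string) error {
	// "?checksum=file:<url>" tells go-getter to download and check the .sha256.
	return getter.GetFile(dst, url+"?checksum=file:"+url+".sha256")
}

func downloadKVM2Driver(version, arch, dst string) error {
	base := "https://github.com/kubernetes/minikube/releases/download/" + version
	if err := fetchVerified(base+"/docker-machine-driver-kvm2-"+arch, dst); err == nil {
		return nil
	}
	// Arch-specific asset (or its checksum) missing: fall back to the common name.
	return fetchVerified(base+"/docker-machine-driver-kvm2", dst)
}

func main() {
	if err := downloadKVM2Driver("v1.3.0", "amd64", "/tmp/docker-machine-driver-kvm2"); err != nil {
		log.Fatal(err)
	}
	fmt.Println("driver downloaded")
}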

TestErrorSpam/setup (23.5s)

=== RUN   TestErrorSpam/setup
error_spam_test.go:81: (dbg) Run:  out/minikube-linux-amd64 start -p nospam-058553 -n=1 --memory=2250 --wait=false --log_dir=/tmp/nospam-058553 --driver=docker  --container-runtime=docker
error_spam_test.go:81: (dbg) Done: out/minikube-linux-amd64 start -p nospam-058553 -n=1 --memory=2250 --wait=false --log_dir=/tmp/nospam-058553 --driver=docker  --container-runtime=docker: (23.501076376s)
--- PASS: TestErrorSpam/setup (23.50s)

TestErrorSpam/start (0.56s)

=== RUN   TestErrorSpam/start
error_spam_test.go:216: Cleaning up 1 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-058553 --log_dir /tmp/nospam-058553 start --dry-run
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-058553 --log_dir /tmp/nospam-058553 start --dry-run
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-amd64 -p nospam-058553 --log_dir /tmp/nospam-058553 start --dry-run
--- PASS: TestErrorSpam/start (0.56s)

TestErrorSpam/status (0.84s)

=== RUN   TestErrorSpam/status
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-058553 --log_dir /tmp/nospam-058553 status
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-058553 --log_dir /tmp/nospam-058553 status
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-amd64 -p nospam-058553 --log_dir /tmp/nospam-058553 status
--- PASS: TestErrorSpam/status (0.84s)

TestErrorSpam/pause (1.14s)

=== RUN   TestErrorSpam/pause
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-058553 --log_dir /tmp/nospam-058553 pause
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-058553 --log_dir /tmp/nospam-058553 pause
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-amd64 -p nospam-058553 --log_dir /tmp/nospam-058553 pause
--- PASS: TestErrorSpam/pause (1.14s)

TestErrorSpam/unpause (1.35s)

=== RUN   TestErrorSpam/unpause
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-058553 --log_dir /tmp/nospam-058553 unpause
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-058553 --log_dir /tmp/nospam-058553 unpause
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-amd64 -p nospam-058553 --log_dir /tmp/nospam-058553 unpause
--- PASS: TestErrorSpam/unpause (1.35s)

TestErrorSpam/stop (1.92s)

=== RUN   TestErrorSpam/stop
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-058553 --log_dir /tmp/nospam-058553 stop
error_spam_test.go:159: (dbg) Done: out/minikube-linux-amd64 -p nospam-058553 --log_dir /tmp/nospam-058553 stop: (1.740474294s)
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-058553 --log_dir /tmp/nospam-058553 stop
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-amd64 -p nospam-058553 --log_dir /tmp/nospam-058553 stop
--- PASS: TestErrorSpam/stop (1.92s)

TestFunctional/serial/CopySyncFile (0s)

=== RUN   TestFunctional/serial/CopySyncFile
functional_test.go:1855: local sync path: /home/jenkins/minikube-integration/19711-533157/.minikube/files/etc/test/nested/copy/540034/hosts
--- PASS: TestFunctional/serial/CopySyncFile (0.00s)

TestFunctional/serial/StartWithProxy (37s)

=== RUN   TestFunctional/serial/StartWithProxy
functional_test.go:2234: (dbg) Run:  out/minikube-linux-amd64 start -p functional-860089 --memory=4000 --apiserver-port=8441 --wait=all --driver=docker  --container-runtime=docker
functional_test.go:2234: (dbg) Done: out/minikube-linux-amd64 start -p functional-860089 --memory=4000 --apiserver-port=8441 --wait=all --driver=docker  --container-runtime=docker: (36.999073151s)
--- PASS: TestFunctional/serial/StartWithProxy (37.00s)

TestFunctional/serial/AuditLog (0s)

=== RUN   TestFunctional/serial/AuditLog
--- PASS: TestFunctional/serial/AuditLog (0.00s)

TestFunctional/serial/SoftStart (33.33s)

=== RUN   TestFunctional/serial/SoftStart
I0927 00:29:21.203548  540034 config.go:182] Loaded profile config "functional-860089": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.31.1
functional_test.go:659: (dbg) Run:  out/minikube-linux-amd64 start -p functional-860089 --alsologtostderr -v=8
functional_test.go:659: (dbg) Done: out/minikube-linux-amd64 start -p functional-860089 --alsologtostderr -v=8: (33.327192147s)
functional_test.go:663: soft start took 33.329018273s for "functional-860089" cluster.
I0927 00:29:54.531197  540034 config.go:182] Loaded profile config "functional-860089": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.31.1
--- PASS: TestFunctional/serial/SoftStart (33.33s)

TestFunctional/serial/KubeContext (0.05s)

=== RUN   TestFunctional/serial/KubeContext
functional_test.go:681: (dbg) Run:  kubectl config current-context
--- PASS: TestFunctional/serial/KubeContext (0.05s)

TestFunctional/serial/KubectlGetPods (0.06s)

=== RUN   TestFunctional/serial/KubectlGetPods
functional_test.go:696: (dbg) Run:  kubectl --context functional-860089 get po -A
--- PASS: TestFunctional/serial/KubectlGetPods (0.06s)

TestFunctional/serial/CacheCmd/cache/add_remote (2.13s)

=== RUN   TestFunctional/serial/CacheCmd/cache/add_remote
functional_test.go:1049: (dbg) Run:  out/minikube-linux-amd64 -p functional-860089 cache add registry.k8s.io/pause:3.1
functional_test.go:1049: (dbg) Run:  out/minikube-linux-amd64 -p functional-860089 cache add registry.k8s.io/pause:3.3
functional_test.go:1049: (dbg) Run:  out/minikube-linux-amd64 -p functional-860089 cache add registry.k8s.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/add_remote (2.13s)

TestFunctional/serial/CacheCmd/cache/add_local (0.69s)

=== RUN   TestFunctional/serial/CacheCmd/cache/add_local
functional_test.go:1077: (dbg) Run:  docker build -t minikube-local-cache-test:functional-860089 /tmp/TestFunctionalserialCacheCmdcacheadd_local2806168076/001
functional_test.go:1089: (dbg) Run:  out/minikube-linux-amd64 -p functional-860089 cache add minikube-local-cache-test:functional-860089
functional_test.go:1094: (dbg) Run:  out/minikube-linux-amd64 -p functional-860089 cache delete minikube-local-cache-test:functional-860089
functional_test.go:1083: (dbg) Run:  docker rmi minikube-local-cache-test:functional-860089
--- PASS: TestFunctional/serial/CacheCmd/cache/add_local (0.69s)

TestFunctional/serial/CacheCmd/cache/CacheDelete (0.05s)

=== RUN   TestFunctional/serial/CacheCmd/cache/CacheDelete
functional_test.go:1102: (dbg) Run:  out/minikube-linux-amd64 cache delete registry.k8s.io/pause:3.3
--- PASS: TestFunctional/serial/CacheCmd/cache/CacheDelete (0.05s)

TestFunctional/serial/CacheCmd/cache/list (0.05s)

=== RUN   TestFunctional/serial/CacheCmd/cache/list
functional_test.go:1110: (dbg) Run:  out/minikube-linux-amd64 cache list
--- PASS: TestFunctional/serial/CacheCmd/cache/list (0.05s)

TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node (0.27s)

=== RUN   TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node
functional_test.go:1124: (dbg) Run:  out/minikube-linux-amd64 -p functional-860089 ssh sudo crictl images
--- PASS: TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node (0.27s)

TestFunctional/serial/CacheCmd/cache/cache_reload (1.23s)

=== RUN   TestFunctional/serial/CacheCmd/cache/cache_reload
functional_test.go:1147: (dbg) Run:  out/minikube-linux-amd64 -p functional-860089 ssh sudo docker rmi registry.k8s.io/pause:latest
functional_test.go:1153: (dbg) Run:  out/minikube-linux-amd64 -p functional-860089 ssh sudo crictl inspecti registry.k8s.io/pause:latest
functional_test.go:1153: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-860089 ssh sudo crictl inspecti registry.k8s.io/pause:latest: exit status 1 (251.448595ms)

-- stdout --
	FATA[0000] no such image "registry.k8s.io/pause:latest" present 

-- /stdout --
** stderr ** 
	ssh: Process exited with status 1

** /stderr **
functional_test.go:1158: (dbg) Run:  out/minikube-linux-amd64 -p functional-860089 cache reload
functional_test.go:1163: (dbg) Run:  out/minikube-linux-amd64 -p functional-860089 ssh sudo crictl inspecti registry.k8s.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/cache_reload (1.23s)
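
Note: the cache round-trip above can be reproduced by hand. A minimal sketch, assuming the functional-860089 profile is running and pause:latest is already in minikube's local cache (plain minikube stands in for the binary under test):

	# remove the image from the node's runtime and confirm it is gone
	minikube -p functional-860089 ssh sudo docker rmi registry.k8s.io/pause:latest
	minikube -p functional-860089 ssh sudo crictl inspecti registry.k8s.io/pause:latest   # non-zero exit
	# push the cached images back into the node, then re-check
	minikube -p functional-860089 cache reload
	minikube -p functional-860089 ssh sudo crictl inspecti registry.k8s.io/pause:latest   # succeeds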

TestFunctional/serial/CacheCmd/cache/delete (0.1s)

=== RUN   TestFunctional/serial/CacheCmd/cache/delete
functional_test.go:1172: (dbg) Run:  out/minikube-linux-amd64 cache delete registry.k8s.io/pause:3.1
functional_test.go:1172: (dbg) Run:  out/minikube-linux-amd64 cache delete registry.k8s.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/delete (0.10s)

TestFunctional/serial/MinikubeKubectlCmd (0.11s)

=== RUN   TestFunctional/serial/MinikubeKubectlCmd
functional_test.go:716: (dbg) Run:  out/minikube-linux-amd64 -p functional-860089 kubectl -- --context functional-860089 get pods
--- PASS: TestFunctional/serial/MinikubeKubectlCmd (0.11s)

TestFunctional/serial/MinikubeKubectlCmdDirectly (0.1s)

=== RUN   TestFunctional/serial/MinikubeKubectlCmdDirectly
functional_test.go:741: (dbg) Run:  out/kubectl --context functional-860089 get pods
--- PASS: TestFunctional/serial/MinikubeKubectlCmdDirectly (0.10s)

TestFunctional/serial/ExtraConfig (39.97s)

=== RUN   TestFunctional/serial/ExtraConfig
functional_test.go:757: (dbg) Run:  out/minikube-linux-amd64 start -p functional-860089 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all
functional_test.go:757: (dbg) Done: out/minikube-linux-amd64 start -p functional-860089 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all: (39.972558177s)
functional_test.go:761: restart took 39.972728579s for "functional-860089" cluster.
I0927 00:30:39.329110  540034 config.go:182] Loaded profile config "functional-860089": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.31.1
--- PASS: TestFunctional/serial/ExtraConfig (39.97s)
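
Note: --extra-config passes per-component settings through to the Kubernetes components in component.key=value form. The restart exercised above, as a standalone sketch:

	# restart the existing cluster with an extra apiserver admission plugin,
	# waiting until all components report ready
	minikube start -p functional-860089 \
	  --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision \
	  --wait=all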

TestFunctional/serial/ComponentHealth (0.07s)

=== RUN   TestFunctional/serial/ComponentHealth
functional_test.go:810: (dbg) Run:  kubectl --context functional-860089 get po -l tier=control-plane -n kube-system -o=json
functional_test.go:825: etcd phase: Running
functional_test.go:835: etcd status: Ready
functional_test.go:825: kube-apiserver phase: Running
functional_test.go:835: kube-apiserver status: Ready
functional_test.go:825: kube-controller-manager phase: Running
functional_test.go:835: kube-controller-manager status: Ready
functional_test.go:825: kube-scheduler phase: Running
functional_test.go:835: kube-scheduler status: Ready
--- PASS: TestFunctional/serial/ComponentHealth (0.07s)
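
Note: the health check reads the control-plane pods' phase and Ready status. A rough kubectl-only equivalent (the jsonpath template is illustrative, not the test's code):

	kubectl --context functional-860089 get po -l tier=control-plane -n kube-system \
	  -o jsonpath='{range .items[*]}{.metadata.labels.component}{": "}{.status.phase}{"\n"}{end}'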

TestFunctional/serial/LogsCmd (0.97s)

=== RUN   TestFunctional/serial/LogsCmd
functional_test.go:1236: (dbg) Run:  out/minikube-linux-amd64 -p functional-860089 logs
--- PASS: TestFunctional/serial/LogsCmd (0.97s)

TestFunctional/serial/LogsFileCmd (0.98s)

=== RUN   TestFunctional/serial/LogsFileCmd
functional_test.go:1250: (dbg) Run:  out/minikube-linux-amd64 -p functional-860089 logs --file /tmp/TestFunctionalserialLogsFileCmd2350688638/001/logs.txt
--- PASS: TestFunctional/serial/LogsFileCmd (0.98s)

TestFunctional/serial/InvalidService (4.07s)

=== RUN   TestFunctional/serial/InvalidService
functional_test.go:2321: (dbg) Run:  kubectl --context functional-860089 apply -f testdata/invalidsvc.yaml
functional_test.go:2335: (dbg) Run:  out/minikube-linux-amd64 service invalid-svc -p functional-860089
functional_test.go:2335: (dbg) Non-zero exit: out/minikube-linux-amd64 service invalid-svc -p functional-860089: exit status 115 (311.486373ms)

-- stdout --
	|-----------|-------------|-------------|---------------------------|
	| NAMESPACE |    NAME     | TARGET PORT |            URL            |
	|-----------|-------------|-------------|---------------------------|
	| default   | invalid-svc |          80 | http://192.168.49.2:30805 |
	|-----------|-------------|-------------|---------------------------|
	
	

-- /stdout --
** stderr ** 
	X Exiting due to SVC_UNREACHABLE: service not available: no running pod for service invalid-svc found
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_service_96b204199e3191fa1740d4430b018a3c8028d52d_0.log                 │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
functional_test.go:2327: (dbg) Run:  kubectl --context functional-860089 delete -f testdata/invalidsvc.yaml
--- PASS: TestFunctional/serial/InvalidService (4.07s)
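
Note: exit status 115 (SVC_UNREACHABLE) is minikube's signal that the Service exists but selects no running pod. To observe it with the test's own manifest:

	kubectl --context functional-860089 apply -f testdata/invalidsvc.yaml
	minikube service invalid-svc -p functional-860089; echo "exit: $?"   # exit: 115
	kubectl --context functional-860089 delete -f testdata/invalidsvc.yaml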

TestFunctional/parallel/ConfigCmd (0.43s)

=== RUN   TestFunctional/parallel/ConfigCmd
=== PAUSE TestFunctional/parallel/ConfigCmd

=== CONT  TestFunctional/parallel/ConfigCmd
functional_test.go:1199: (dbg) Run:  out/minikube-linux-amd64 -p functional-860089 config unset cpus
functional_test.go:1199: (dbg) Run:  out/minikube-linux-amd64 -p functional-860089 config get cpus
functional_test.go:1199: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-860089 config get cpus: exit status 14 (59.688446ms)

** stderr ** 
	Error: specified key could not be found in config

** /stderr **
functional_test.go:1199: (dbg) Run:  out/minikube-linux-amd64 -p functional-860089 config set cpus 2
functional_test.go:1199: (dbg) Run:  out/minikube-linux-amd64 -p functional-860089 config get cpus
functional_test.go:1199: (dbg) Run:  out/minikube-linux-amd64 -p functional-860089 config unset cpus
functional_test.go:1199: (dbg) Run:  out/minikube-linux-amd64 -p functional-860089 config get cpus
functional_test.go:1199: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-860089 config get cpus: exit status 14 (107.601962ms)

** stderr ** 
	Error: specified key could not be found in config

** /stderr **
--- PASS: TestFunctional/parallel/ConfigCmd (0.43s)
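
Note: minikube config get exits with code 14 when the key is absent, which is what this test asserts. A minimal round-trip:

	minikube -p functional-860089 config unset cpus
	minikube -p functional-860089 config get cpus || echo "not set (exit $?)"   # exit 14
	minikube -p functional-860089 config set cpus 2
	minikube -p functional-860089 config get cpus                               # prints 2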

TestFunctional/parallel/DashboardCmd (14.54s)

=== RUN   TestFunctional/parallel/DashboardCmd
=== PAUSE TestFunctional/parallel/DashboardCmd

=== CONT  TestFunctional/parallel/DashboardCmd
functional_test.go:905: (dbg) daemon: [out/minikube-linux-amd64 dashboard --url --port 36195 -p functional-860089 --alsologtostderr -v=1]
functional_test.go:910: (dbg) stopping [out/minikube-linux-amd64 dashboard --url --port 36195 -p functional-860089 --alsologtostderr -v=1] ...
helpers_test.go:508: unable to kill pid 594096: os: process already finished
--- PASS: TestFunctional/parallel/DashboardCmd (14.54s)

TestFunctional/parallel/DryRun (0.36s)

=== RUN   TestFunctional/parallel/DryRun
=== PAUSE TestFunctional/parallel/DryRun

=== CONT  TestFunctional/parallel/DryRun
functional_test.go:974: (dbg) Run:  out/minikube-linux-amd64 start -p functional-860089 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=docker
functional_test.go:974: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p functional-860089 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=docker: exit status 23 (158.232676ms)

-- stdout --
	* [functional-860089] minikube v1.34.0 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=19711
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/19711-533157/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/19711-533157/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the docker driver based on existing profile
	
	

-- /stdout --
** stderr ** 
	I0927 00:31:01.457891  590997 out.go:345] Setting OutFile to fd 1 ...
	I0927 00:31:01.458030  590997 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0927 00:31:01.458040  590997 out.go:358] Setting ErrFile to fd 2...
	I0927 00:31:01.458046  590997 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0927 00:31:01.458252  590997 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19711-533157/.minikube/bin
	I0927 00:31:01.458847  590997 out.go:352] Setting JSON to false
	I0927 00:31:01.460424  590997 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-14","uptime":8004,"bootTime":1727389057,"procs":399,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1069-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0927 00:31:01.460549  590997 start.go:139] virtualization: kvm guest
	I0927 00:31:01.462919  590997 out.go:177] * [functional-860089] minikube v1.34.0 on Ubuntu 20.04 (kvm/amd64)
	I0927 00:31:01.464481  590997 notify.go:220] Checking for updates...
	I0927 00:31:01.464525  590997 out.go:177]   - MINIKUBE_LOCATION=19711
	I0927 00:31:01.465763  590997 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0927 00:31:01.466994  590997 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19711-533157/kubeconfig
	I0927 00:31:01.468321  590997 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19711-533157/.minikube
	I0927 00:31:01.469722  590997 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0927 00:31:01.471003  590997 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0927 00:31:01.472786  590997 config.go:182] Loaded profile config "functional-860089": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I0927 00:31:01.473261  590997 driver.go:394] Setting default libvirt URI to qemu:///system
	I0927 00:31:01.497302  590997 docker.go:123] docker version: linux-27.3.1:Docker Engine - Community
	I0927 00:31:01.497403  590997 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0927 00:31:01.556097  590997 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:2 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:33 OomKillDisable:true NGoroutines:53 SystemTime:2024-09-27 00:31:01.545308457 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1069-gcp OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:x
86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33647931392 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-14 Labels:[] ExperimentalBuild:false ServerVersion:27.3.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:7f7fdf5fed64eb6a7caf99b3e12efcf9d60e311c Expected:7f7fdf5fed64eb6a7caf99b3e12efcf9d60e311c} RuncCommit:{ID:v1.1.14-0-g2c9f560 Expected:v1.1.14-0-g2c9f560} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:[WARNING: br
idge-nf-call-iptables is disabled WARNING: bridge-nf-call-ip6tables is disabled] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.17.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.29.7] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I0927 00:31:01.556240  590997 docker.go:318] overlay module found
	I0927 00:31:01.558161  590997 out.go:177] * Using the docker driver based on existing profile
	I0927 00:31:01.559472  590997 start.go:297] selected driver: docker
	I0927 00:31:01.559490  590997 start.go:901] validating driver "docker" against &{Name:functional-860089 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727108449-19696@sha256:c662152d8855bc4c62a3b5786a68adf99e04794e7f8f374a3859703004ef1d21 Memory:4000 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:functional-860089 Namespace:default APIServerHAVIP: APIServerNa
me:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker Mou
ntIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0927 00:31:01.559756  590997 start.go:912] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0927 00:31:01.562136  590997 out.go:201] 
	W0927 00:31:01.563259  590997 out.go:270] X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	I0927 00:31:01.564372  590997 out.go:201] 

** /stderr **
functional_test.go:991: (dbg) Run:  out/minikube-linux-amd64 start -p functional-860089 --dry-run --alsologtostderr -v=1 --driver=docker  --container-runtime=docker
--- PASS: TestFunctional/parallel/DryRun (0.36s)
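
Note: --dry-run runs the full flag and driver validation without creating or mutating anything, so an undersized --memory fails fast with RSRC_INSUFFICIENT_REQ_MEMORY (exit 23; the usable minimum is 1800MB per the message above). Sketch:

	minikube start -p functional-860089 --dry-run --memory 250MB \
	  --driver=docker --container-runtime=docker; echo "exit: $?"   # exit: 23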

TestFunctional/parallel/InternationalLanguage (0.15s)

=== RUN   TestFunctional/parallel/InternationalLanguage
=== PAUSE TestFunctional/parallel/InternationalLanguage

=== CONT  TestFunctional/parallel/InternationalLanguage
functional_test.go:1020: (dbg) Run:  out/minikube-linux-amd64 start -p functional-860089 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=docker
functional_test.go:1020: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p functional-860089 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=docker: exit status 23 (150.861166ms)

-- stdout --
	* [functional-860089] minikube v1.34.0 sur Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=19711
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/19711-533157/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/19711-533157/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Utilisation du pilote docker basé sur le profil existant
	
	

-- /stdout --
** stderr ** 
	I0927 00:30:56.958908  589439 out.go:345] Setting OutFile to fd 1 ...
	I0927 00:30:56.959036  589439 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0927 00:30:56.959045  589439 out.go:358] Setting ErrFile to fd 2...
	I0927 00:30:56.959049  589439 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0927 00:30:56.959331  589439 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19711-533157/.minikube/bin
	I0927 00:30:56.959871  589439 out.go:352] Setting JSON to false
	I0927 00:30:56.961112  589439 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-14","uptime":8000,"bootTime":1727389057,"procs":384,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1069-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0927 00:30:56.961225  589439 start.go:139] virtualization: kvm guest
	I0927 00:30:56.963171  589439 out.go:177] * [functional-860089] minikube v1.34.0 sur Ubuntu 20.04 (kvm/amd64)
	I0927 00:30:56.964814  589439 out.go:177]   - MINIKUBE_LOCATION=19711
	I0927 00:30:56.964884  589439 notify.go:220] Checking for updates...
	I0927 00:30:56.967259  589439 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0927 00:30:56.968545  589439 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19711-533157/kubeconfig
	I0927 00:30:56.969710  589439 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19711-533157/.minikube
	I0927 00:30:56.972417  589439 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0927 00:30:56.973719  589439 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0927 00:30:56.975376  589439 config.go:182] Loaded profile config "functional-860089": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I0927 00:30:56.976047  589439 driver.go:394] Setting default libvirt URI to qemu:///system
	I0927 00:30:57.002128  589439 docker.go:123] docker version: linux-27.3.1:Docker Engine - Community
	I0927 00:30:57.002230  589439 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0927 00:30:57.051571  589439 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:2 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:33 OomKillDisable:true NGoroutines:53 SystemTime:2024-09-27 00:30:57.042136009 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1069-gcp OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:x
86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33647931392 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-14 Labels:[] ExperimentalBuild:false ServerVersion:27.3.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:7f7fdf5fed64eb6a7caf99b3e12efcf9d60e311c Expected:7f7fdf5fed64eb6a7caf99b3e12efcf9d60e311c} RuncCommit:{ID:v1.1.14-0-g2c9f560 Expected:v1.1.14-0-g2c9f560} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:[WARNING: br
idge-nf-call-iptables is disabled WARNING: bridge-nf-call-ip6tables is disabled] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.17.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.29.7] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I0927 00:30:57.051677  589439 docker.go:318] overlay module found
	I0927 00:30:57.053934  589439 out.go:177] * Utilisation du pilote docker basé sur le profil existant
	I0927 00:30:57.055347  589439 start.go:297] selected driver: docker
	I0927 00:30:57.055367  589439 start.go:901] validating driver "docker" against &{Name:functional-860089 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727108449-19696@sha256:c662152d8855bc4c62a3b5786a68adf99e04794e7f8f374a3859703004ef1d21 Memory:4000 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:functional-860089 Namespace:default APIServerHAVIP: APIServerNa
me:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker Mou
ntIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0927 00:30:57.055467  589439 start.go:912] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0927 00:30:57.057384  589439 out.go:201] 
	W0927 00:30:57.058591  589439 out.go:270] X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	I0927 00:30:57.059645  589439 out.go:201] 

** /stderr **
--- PASS: TestFunctional/parallel/InternationalLanguage (0.15s)

TestFunctional/parallel/StatusCmd (0.91s)

=== RUN   TestFunctional/parallel/StatusCmd
=== PAUSE TestFunctional/parallel/StatusCmd

=== CONT  TestFunctional/parallel/StatusCmd
functional_test.go:854: (dbg) Run:  out/minikube-linux-amd64 -p functional-860089 status
functional_test.go:860: (dbg) Run:  out/minikube-linux-amd64 -p functional-860089 status -f host:{{.Host}},kublet:{{.Kubelet}},apiserver:{{.APIServer}},kubeconfig:{{.Kubeconfig}}
functional_test.go:872: (dbg) Run:  out/minikube-linux-amd64 -p functional-860089 status -o json
--- PASS: TestFunctional/parallel/StatusCmd (0.91s)
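
Note: status accepts a Go template via -f; the fields exercised above are .Host, .Kubelet, .APIServer and .Kubeconfig (the logged command spells the kubelet label "kublet", a typo in the test's output format only, not in the field name). For example:

	minikube -p functional-860089 status -f 'host:{{.Host}},apiserver:{{.APIServer}}'
	minikube -p functional-860089 status -o json   # machine-readable variant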

TestFunctional/parallel/ServiceCmdConnect (10.76s)

=== RUN   TestFunctional/parallel/ServiceCmdConnect
=== PAUSE TestFunctional/parallel/ServiceCmdConnect

=== CONT  TestFunctional/parallel/ServiceCmdConnect
functional_test.go:1629: (dbg) Run:  kubectl --context functional-860089 create deployment hello-node-connect --image=registry.k8s.io/echoserver:1.8
functional_test.go:1635: (dbg) Run:  kubectl --context functional-860089 expose deployment hello-node-connect --type=NodePort --port=8080
functional_test.go:1640: (dbg) TestFunctional/parallel/ServiceCmdConnect: waiting 10m0s for pods matching "app=hello-node-connect" in namespace "default" ...
helpers_test.go:344: "hello-node-connect-67bdd5bbb4-2ck2f" [d2b66a0d-ebc5-43a2-a61e-15a8df27ebd1] Pending / Ready:ContainersNotReady (containers with unready status: [echoserver]) / ContainersReady:ContainersNotReady (containers with unready status: [echoserver])
helpers_test.go:344: "hello-node-connect-67bdd5bbb4-2ck2f" [d2b66a0d-ebc5-43a2-a61e-15a8df27ebd1] Running
functional_test.go:1640: (dbg) TestFunctional/parallel/ServiceCmdConnect: app=hello-node-connect healthy within 10.003446534s
functional_test.go:1649: (dbg) Run:  out/minikube-linux-amd64 -p functional-860089 service hello-node-connect --url
functional_test.go:1655: found endpoint for hello-node-connect: http://192.168.49.2:31725
functional_test.go:1675: http://192.168.49.2:31725: success! body:

Hostname: hello-node-connect-67bdd5bbb4-2ck2f

Pod Information:
	-no pod information available-

Server values:
	server_version=nginx: 1.13.3 - lua: 10008

Request Information:
	client_address=10.244.0.1
	method=GET
	real path=/
	query=
	request_version=1.1
	request_uri=http://192.168.49.2:8080/

Request Headers:
	accept-encoding=gzip
	host=192.168.49.2:31725
	user-agent=Go-http-client/1.1

Request Body:
	-no body in request-

--- PASS: TestFunctional/parallel/ServiceCmdConnect (10.76s)
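
Note: this is the standard NodePort round-trip: create a deployment, expose it, ask minikube for the node URL, and request it. A sketch (the curl step stands in for the test's Go HTTP client):

	kubectl --context functional-860089 create deployment hello-node-connect \
	  --image=registry.k8s.io/echoserver:1.8
	kubectl --context functional-860089 expose deployment hello-node-connect \
	  --type=NodePort --port=8080
	URL=$(minikube -p functional-860089 service hello-node-connect --url)
	curl -s "$URL"   # echoserver replies with hostname, headers and request info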

TestFunctional/parallel/AddonsCmd (0.14s)

=== RUN   TestFunctional/parallel/AddonsCmd
=== PAUSE TestFunctional/parallel/AddonsCmd

=== CONT  TestFunctional/parallel/AddonsCmd
functional_test.go:1690: (dbg) Run:  out/minikube-linux-amd64 -p functional-860089 addons list
functional_test.go:1702: (dbg) Run:  out/minikube-linux-amd64 -p functional-860089 addons list -o json
--- PASS: TestFunctional/parallel/AddonsCmd (0.14s)

TestFunctional/parallel/PersistentVolumeClaim (28.21s)

=== RUN   TestFunctional/parallel/PersistentVolumeClaim
=== PAUSE TestFunctional/parallel/PersistentVolumeClaim

=== CONT  TestFunctional/parallel/PersistentVolumeClaim
functional_test_pvc_test.go:44: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 4m0s for pods matching "integration-test=storage-provisioner" in namespace "kube-system" ...
helpers_test.go:344: "storage-provisioner" [9d672be8-c854-4691-b55f-a3c63b439c4f] Running
functional_test_pvc_test.go:44: (dbg) TestFunctional/parallel/PersistentVolumeClaim: integration-test=storage-provisioner healthy within 6.004144219s
functional_test_pvc_test.go:49: (dbg) Run:  kubectl --context functional-860089 get storageclass -o=json
functional_test_pvc_test.go:69: (dbg) Run:  kubectl --context functional-860089 apply -f testdata/storage-provisioner/pvc.yaml
functional_test_pvc_test.go:76: (dbg) Run:  kubectl --context functional-860089 get pvc myclaim -o=json
functional_test_pvc_test.go:125: (dbg) Run:  kubectl --context functional-860089 apply -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 3m0s for pods matching "test=storage-provisioner" in namespace "default" ...
helpers_test.go:344: "sp-pod" [7f368648-172b-41d5-8309-3e24299324cd] Pending
helpers_test.go:344: "sp-pod" [7f368648-172b-41d5-8309-3e24299324cd] Pending / Ready:ContainersNotReady (containers with unready status: [myfrontend]) / ContainersReady:ContainersNotReady (containers with unready status: [myfrontend])
helpers_test.go:344: "sp-pod" [7f368648-172b-41d5-8309-3e24299324cd] Running
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: test=storage-provisioner healthy within 11.003790025s
functional_test_pvc_test.go:100: (dbg) Run:  kubectl --context functional-860089 exec sp-pod -- touch /tmp/mount/foo
functional_test_pvc_test.go:106: (dbg) Run:  kubectl --context functional-860089 delete -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:106: (dbg) Done: kubectl --context functional-860089 delete -f testdata/storage-provisioner/pod.yaml: (1.350174761s)
functional_test_pvc_test.go:125: (dbg) Run:  kubectl --context functional-860089 apply -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 3m0s for pods matching "test=storage-provisioner" in namespace "default" ...
helpers_test.go:344: "sp-pod" [297fe533-9808-4335-8630-2f44f1f4d607] Pending / Ready:ContainersNotReady (containers with unready status: [myfrontend]) / ContainersReady:ContainersNotReady (containers with unready status: [myfrontend])
helpers_test.go:344: "sp-pod" [297fe533-9808-4335-8630-2f44f1f4d607] Running
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: test=storage-provisioner healthy within 9.003895327s
functional_test_pvc_test.go:114: (dbg) Run:  kubectl --context functional-860089 exec sp-pod -- ls /tmp/mount
--- PASS: TestFunctional/parallel/PersistentVolumeClaim (28.21s)
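
Note: the persistence check above boils down to: write a marker file into the mounted claim, delete the pod, schedule a fresh pod against the same PVC, and confirm the file survived. Using the test's own manifests:

	kubectl --context functional-860089 apply -f testdata/storage-provisioner/pvc.yaml
	kubectl --context functional-860089 apply -f testdata/storage-provisioner/pod.yaml
	kubectl --context functional-860089 exec sp-pod -- touch /tmp/mount/foo
	kubectl --context functional-860089 delete -f testdata/storage-provisioner/pod.yaml
	kubectl --context functional-860089 apply -f testdata/storage-provisioner/pod.yaml
	kubectl --context functional-860089 exec sp-pod -- ls /tmp/mount   # foo survives the pod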

TestFunctional/parallel/SSHCmd (0.59s)

=== RUN   TestFunctional/parallel/SSHCmd
=== PAUSE TestFunctional/parallel/SSHCmd

=== CONT  TestFunctional/parallel/SSHCmd
functional_test.go:1725: (dbg) Run:  out/minikube-linux-amd64 -p functional-860089 ssh "echo hello"
functional_test.go:1742: (dbg) Run:  out/minikube-linux-amd64 -p functional-860089 ssh "cat /etc/hostname"
--- PASS: TestFunctional/parallel/SSHCmd (0.59s)

TestFunctional/parallel/CpCmd (1.6s)

=== RUN   TestFunctional/parallel/CpCmd
=== PAUSE TestFunctional/parallel/CpCmd

=== CONT  TestFunctional/parallel/CpCmd
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p functional-860089 cp testdata/cp-test.txt /home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p functional-860089 ssh -n functional-860089 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p functional-860089 cp functional-860089:/home/docker/cp-test.txt /tmp/TestFunctionalparallelCpCmd4235838393/001/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p functional-860089 ssh -n functional-860089 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p functional-860089 cp testdata/cp-test.txt /tmp/does/not/exist/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p functional-860089 ssh -n functional-860089 "sudo cat /tmp/does/not/exist/cp-test.txt"
--- PASS: TestFunctional/parallel/CpCmd (1.60s)
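
Note: minikube cp copies in either direction; a bare path refers to the node, and profile:path pins the node explicitly. The three variants exercised above:

	# host -> node
	minikube -p functional-860089 cp testdata/cp-test.txt /home/docker/cp-test.txt
	# node -> host
	minikube -p functional-860089 cp functional-860089:/home/docker/cp-test.txt /tmp/cp-test.txt
	# host -> node, creating the intermediate directories
	minikube -p functional-860089 cp testdata/cp-test.txt /tmp/does/not/exist/cp-test.txt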

TestFunctional/parallel/MySQL (26.36s)

=== RUN   TestFunctional/parallel/MySQL
=== PAUSE TestFunctional/parallel/MySQL

=== CONT  TestFunctional/parallel/MySQL
functional_test.go:1793: (dbg) Run:  kubectl --context functional-860089 replace --force -f testdata/mysql.yaml
functional_test.go:1799: (dbg) TestFunctional/parallel/MySQL: waiting 10m0s for pods matching "app=mysql" in namespace "default" ...
helpers_test.go:344: "mysql-6cdb49bbb-8z5fm" [a0b60d98-52be-43e5-a586-f065178af5d8] Pending
helpers_test.go:344: "mysql-6cdb49bbb-8z5fm" [a0b60d98-52be-43e5-a586-f065178af5d8] Pending / Ready:ContainersNotReady (containers with unready status: [mysql]) / ContainersReady:ContainersNotReady (containers with unready status: [mysql])
helpers_test.go:344: "mysql-6cdb49bbb-8z5fm" [a0b60d98-52be-43e5-a586-f065178af5d8] Running
functional_test.go:1799: (dbg) TestFunctional/parallel/MySQL: app=mysql healthy within 22.004004976s
functional_test.go:1807: (dbg) Run:  kubectl --context functional-860089 exec mysql-6cdb49bbb-8z5fm -- mysql -ppassword -e "show databases;"
functional_test.go:1807: (dbg) Non-zero exit: kubectl --context functional-860089 exec mysql-6cdb49bbb-8z5fm -- mysql -ppassword -e "show databases;": exit status 1 (160.230592ms)

** stderr ** 
	mysql: [Warning] Using a password on the command line interface can be insecure.
	ERROR 1045 (28000): Access denied for user 'root'@'localhost' (using password: YES)
	command terminated with exit code 1

** /stderr **
I0927 00:31:30.137698  540034 retry.go:31] will retry after 994.918688ms: exit status 1
functional_test.go:1807: (dbg) Run:  kubectl --context functional-860089 exec mysql-6cdb49bbb-8z5fm -- mysql -ppassword -e "show databases;"
functional_test.go:1807: (dbg) Non-zero exit: kubectl --context functional-860089 exec mysql-6cdb49bbb-8z5fm -- mysql -ppassword -e "show databases;": exit status 1 (110.438438ms)

** stderr ** 
	mysql: [Warning] Using a password on the command line interface can be insecure.
	ERROR 2002 (HY000): Can't connect to local MySQL server through socket '/var/run/mysqld/mysqld.sock' (2)
	command terminated with exit code 1

** /stderr **
I0927 00:31:31.243966  540034 retry.go:31] will retry after 797.343314ms: exit status 1
functional_test.go:1807: (dbg) Run:  kubectl --context functional-860089 exec mysql-6cdb49bbb-8z5fm -- mysql -ppassword -e "show databases;"
functional_test.go:1807: (dbg) Non-zero exit: kubectl --context functional-860089 exec mysql-6cdb49bbb-8z5fm -- mysql -ppassword -e "show databases;": exit status 1 (111.250951ms)

** stderr ** 
	mysql: [Warning] Using a password on the command line interface can be insecure.
	ERROR 2002 (HY000): Can't connect to local MySQL server through socket '/var/run/mysqld/mysqld.sock' (2)
	command terminated with exit code 1

** /stderr **
I0927 00:31:32.153255  540034 retry.go:31] will retry after 1.860515661s: exit status 1
functional_test.go:1807: (dbg) Run:  kubectl --context functional-860089 exec mysql-6cdb49bbb-8z5fm -- mysql -ppassword -e "show databases;"
--- PASS: TestFunctional/parallel/MySQL (26.36s)
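
Note: the retries above simply poll until mysqld finishes initializing inside the pod. A shell equivalent with a fixed sleep instead of the harness's jittered backoff (pod name taken from the run above):

	until kubectl --context functional-860089 exec mysql-6cdb49bbb-8z5fm -- \
	      mysql -ppassword -e "show databases;"; do
	  sleep 2
	done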

TestFunctional/parallel/FileSync (0.28s)

=== RUN   TestFunctional/parallel/FileSync
=== PAUSE TestFunctional/parallel/FileSync

=== CONT  TestFunctional/parallel/FileSync
functional_test.go:1929: Checking for existence of /etc/test/nested/copy/540034/hosts within VM
functional_test.go:1931: (dbg) Run:  out/minikube-linux-amd64 -p functional-860089 ssh "sudo cat /etc/test/nested/copy/540034/hosts"
functional_test.go:1936: file sync test content: Test file for checking file sync process
--- PASS: TestFunctional/parallel/FileSync (0.28s)
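
Note: file sync mirrors everything under the profile's .minikube/files directory into the node at start, so the local .../files/etc/test/nested/copy/540034/hosts (see the CopySyncFile test earlier) appears in the node at /etc/test/nested/copy/540034/hosts. To check by hand:

	minikube -p functional-860089 ssh "sudo cat /etc/test/nested/copy/540034/hosts"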

TestFunctional/parallel/CertSync (1.63s)

=== RUN   TestFunctional/parallel/CertSync
=== PAUSE TestFunctional/parallel/CertSync

=== CONT  TestFunctional/parallel/CertSync
functional_test.go:1972: Checking for existence of /etc/ssl/certs/540034.pem within VM
functional_test.go:1973: (dbg) Run:  out/minikube-linux-amd64 -p functional-860089 ssh "sudo cat /etc/ssl/certs/540034.pem"
functional_test.go:1972: Checking for existence of /usr/share/ca-certificates/540034.pem within VM
functional_test.go:1973: (dbg) Run:  out/minikube-linux-amd64 -p functional-860089 ssh "sudo cat /usr/share/ca-certificates/540034.pem"
functional_test.go:1972: Checking for existence of /etc/ssl/certs/51391683.0 within VM
functional_test.go:1973: (dbg) Run:  out/minikube-linux-amd64 -p functional-860089 ssh "sudo cat /etc/ssl/certs/51391683.0"
functional_test.go:1999: Checking for existence of /etc/ssl/certs/5400342.pem within VM
functional_test.go:2000: (dbg) Run:  out/minikube-linux-amd64 -p functional-860089 ssh "sudo cat /etc/ssl/certs/5400342.pem"
functional_test.go:1999: Checking for existence of /usr/share/ca-certificates/5400342.pem within VM
functional_test.go:2000: (dbg) Run:  out/minikube-linux-amd64 -p functional-860089 ssh "sudo cat /usr/share/ca-certificates/5400342.pem"
functional_test.go:1999: Checking for existence of /etc/ssl/certs/3ec20f2e.0 within VM
functional_test.go:2000: (dbg) Run:  out/minikube-linux-amd64 -p functional-860089 ssh "sudo cat /etc/ssl/certs/3ec20f2e.0"
--- PASS: TestFunctional/parallel/CertSync (1.63s)

TestFunctional/parallel/NodeLabels (0.07s)

=== RUN   TestFunctional/parallel/NodeLabels
=== PAUSE TestFunctional/parallel/NodeLabels

=== CONT  TestFunctional/parallel/NodeLabels
functional_test.go:219: (dbg) Run:  kubectl --context functional-860089 get nodes --output=go-template "--template='{{range $k, $v := (index .items 0).metadata.labels}}{{$k}} {{end}}'"
--- PASS: TestFunctional/parallel/NodeLabels (0.07s)

TestFunctional/parallel/NonActiveRuntimeDisabled (0.25s)

=== RUN   TestFunctional/parallel/NonActiveRuntimeDisabled
=== PAUSE TestFunctional/parallel/NonActiveRuntimeDisabled

=== CONT  TestFunctional/parallel/NonActiveRuntimeDisabled
functional_test.go:2027: (dbg) Run:  out/minikube-linux-amd64 -p functional-860089 ssh "sudo systemctl is-active crio"
functional_test.go:2027: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-860089 ssh "sudo systemctl is-active crio": exit status 1 (248.809574ms)

-- stdout --
	inactive

-- /stdout --
** stderr ** 
	ssh: Process exited with status 3

** /stderr **
--- PASS: TestFunctional/parallel/NonActiveRuntimeDisabled (0.25s)

TestFunctional/parallel/License (0.21s)

=== RUN   TestFunctional/parallel/License
=== PAUSE TestFunctional/parallel/License

=== CONT  TestFunctional/parallel/License
functional_test.go:2288: (dbg) Run:  out/minikube-linux-amd64 license
--- PASS: TestFunctional/parallel/License (0.21s)

TestFunctional/parallel/ProfileCmd/profile_not_create (0.47s)

=== RUN   TestFunctional/parallel/ProfileCmd/profile_not_create
functional_test.go:1270: (dbg) Run:  out/minikube-linux-amd64 profile lis
functional_test.go:1275: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestFunctional/parallel/ProfileCmd/profile_not_create (0.47s)

TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel (0.45s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel
functional_test_tunnel_test.go:154: (dbg) daemon: [out/minikube-linux-amd64 -p functional-860089 tunnel --alsologtostderr]
functional_test_tunnel_test.go:154: (dbg) daemon: [out/minikube-linux-amd64 -p functional-860089 tunnel --alsologtostderr]
functional_test_tunnel_test.go:194: (dbg) stopping [out/minikube-linux-amd64 -p functional-860089 tunnel --alsologtostderr] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
functional_test_tunnel_test.go:194: (dbg) stopping [out/minikube-linux-amd64 -p functional-860089 tunnel --alsologtostderr] ...
helpers_test.go:508: unable to kill pid 588111: os: process already finished
helpers_test.go:502: unable to terminate pid 587745: os: process already finished
--- PASS: TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel (0.45s)

TestFunctional/parallel/ProfileCmd/profile_list (0.42s)

=== RUN   TestFunctional/parallel/ProfileCmd/profile_list
functional_test.go:1310: (dbg) Run:  out/minikube-linux-amd64 profile list
functional_test.go:1315: Took "350.17573ms" to run "out/minikube-linux-amd64 profile list"
functional_test.go:1324: (dbg) Run:  out/minikube-linux-amd64 profile list -l
functional_test.go:1329: Took "74.27288ms" to run "out/minikube-linux-amd64 profile list -l"
--- PASS: TestFunctional/parallel/ProfileCmd/profile_list (0.42s)

TestFunctional/parallel/TunnelCmd/serial/StartTunnel (0s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/StartTunnel
functional_test_tunnel_test.go:129: (dbg) daemon: [out/minikube-linux-amd64 -p functional-860089 tunnel --alsologtostderr]
--- PASS: TestFunctional/parallel/TunnelCmd/serial/StartTunnel (0.00s)

                                                
                                    
TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup (14.25s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup
functional_test_tunnel_test.go:212: (dbg) Run:  kubectl --context functional-860089 apply -f testdata/testsvc.yaml
functional_test_tunnel_test.go:216: (dbg) TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup: waiting 4m0s for pods matching "run=nginx-svc" in namespace "default" ...
helpers_test.go:344: "nginx-svc" [f9a2784a-32af-48cd-9b00-b22b66b6601f] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
helpers_test.go:344: "nginx-svc" [f9a2784a-32af-48cd-9b00-b22b66b6601f] Running
functional_test_tunnel_test.go:216: (dbg) TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup: run=nginx-svc healthy within 14.003484484s
I0927 00:31:00.330747  540034 kapi.go:150] Service nginx-svc in namespace default found.
--- PASS: TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup (14.25s)
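The setup step is plain kubectl; a minimal equivalent, assuming the same testdata/testsvc.yaml manifest (whose pod, per the log, carries the label run=nginx-svc):

    kubectl --context functional-860089 apply -f testdata/testsvc.yaml
    # wait for the nginx-svc pod to become Ready (the test allows up to 4m)
    kubectl --context functional-860089 -n default wait pod \
      -l run=nginx-svc --for=condition=Ready --timeout=240s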

                                                
                                    
TestFunctional/parallel/ProfileCmd/profile_json_output (0.39s)

                                                
                                                
=== RUN   TestFunctional/parallel/ProfileCmd/profile_json_output
functional_test.go:1361: (dbg) Run:  out/minikube-linux-amd64 profile list -o json
functional_test.go:1366: Took "328.504672ms" to run "out/minikube-linux-amd64 profile list -o json"
functional_test.go:1374: (dbg) Run:  out/minikube-linux-amd64 profile list -o json --light
functional_test.go:1379: Took "59.574887ms" to run "out/minikube-linux-amd64 profile list -o json --light"
--- PASS: TestFunctional/parallel/ProfileCmd/profile_json_output (0.39s)
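A sketch of querying the same JSON output by hand; the jq filter is my addition (not part of the test) and assumes jq is installed, and the .valid[].Name field names are from memory of minikube's schema, so treat them as approximate:

    # full JSON includes cluster status; --light skips probing and is faster
    out/minikube-linux-amd64 profile list -o json --light
    out/minikube-linux-amd64 profile list -o json | jq -r '.valid[].Name'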

                                                
                                    
TestFunctional/parallel/ServiceCmd/DeployApp (13.16s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/DeployApp
functional_test.go:1439: (dbg) Run:  kubectl --context functional-860089 create deployment hello-node --image=registry.k8s.io/echoserver:1.8
functional_test.go:1445: (dbg) Run:  kubectl --context functional-860089 expose deployment hello-node --type=NodePort --port=8080
functional_test.go:1450: (dbg) TestFunctional/parallel/ServiceCmd/DeployApp: waiting 10m0s for pods matching "app=hello-node" in namespace "default" ...
helpers_test.go:344: "hello-node-6b9f76b5c7-46kcb" [fb237cd6-5cbc-4a4b-942b-787ed2f2de0e] Pending / Ready:ContainersNotReady (containers with unready status: [echoserver]) / ContainersReady:ContainersNotReady (containers with unready status: [echoserver])
helpers_test.go:344: "hello-node-6b9f76b5c7-46kcb" [fb237cd6-5cbc-4a4b-942b-787ed2f2de0e] Running
functional_test.go:1450: (dbg) TestFunctional/parallel/ServiceCmd/DeployApp: app=hello-node healthy within 13.003517748s
--- PASS: TestFunctional/parallel/ServiceCmd/DeployApp (13.16s)
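The same deployment can be stood up outside the harness with the two commands the test runs, plus an explicit readiness wait in place of the harness's pod polling:

    kubectl --context functional-860089 create deployment hello-node \
      --image=registry.k8s.io/echoserver:1.8
    kubectl --context functional-860089 expose deployment hello-node \
      --type=NodePort --port=8080
    # block until the pod behind the service is Ready (the test waits up to 10m)
    kubectl --context functional-860089 wait pod -l app=hello-node \
      --for=condition=Ready --timeout=600s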

                                                
                                    
TestFunctional/parallel/MountCmd/any-port (7.75s)

                                                
                                                
=== RUN   TestFunctional/parallel/MountCmd/any-port
functional_test_mount_test.go:73: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-860089 /tmp/TestFunctionalparallelMountCmdany-port109963948/001:/mount-9p --alsologtostderr -v=1]
functional_test_mount_test.go:107: wrote "test-1727397057065342218" to /tmp/TestFunctionalparallelMountCmdany-port109963948/001/created-by-test
functional_test_mount_test.go:107: wrote "test-1727397057065342218" to /tmp/TestFunctionalparallelMountCmdany-port109963948/001/created-by-test-removed-by-pod
functional_test_mount_test.go:107: wrote "test-1727397057065342218" to /tmp/TestFunctionalparallelMountCmdany-port109963948/001/test-1727397057065342218
functional_test_mount_test.go:115: (dbg) Run:  out/minikube-linux-amd64 -p functional-860089 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:115: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-860089 ssh "findmnt -T /mount-9p | grep 9p": exit status 1 (249.549504ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
I0927 00:30:57.315273  540034 retry.go:31] will retry after 702.855003ms: exit status 1
functional_test_mount_test.go:115: (dbg) Run:  out/minikube-linux-amd64 -p functional-860089 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:129: (dbg) Run:  out/minikube-linux-amd64 -p functional-860089 ssh -- ls -la /mount-9p
functional_test_mount_test.go:133: guest mount directory contents
total 2
-rw-r--r-- 1 docker docker 24 Sep 27 00:30 created-by-test
-rw-r--r-- 1 docker docker 24 Sep 27 00:30 created-by-test-removed-by-pod
-rw-r--r-- 1 docker docker 24 Sep 27 00:30 test-1727397057065342218
functional_test_mount_test.go:137: (dbg) Run:  out/minikube-linux-amd64 -p functional-860089 ssh cat /mount-9p/test-1727397057065342218
functional_test_mount_test.go:148: (dbg) Run:  kubectl --context functional-860089 replace --force -f testdata/busybox-mount-test.yaml
functional_test_mount_test.go:153: (dbg) TestFunctional/parallel/MountCmd/any-port: waiting 4m0s for pods matching "integration-test=busybox-mount" in namespace "default" ...
helpers_test.go:344: "busybox-mount" [e816665f-4af4-463e-a245-3bf2476c6a63] Pending
helpers_test.go:344: "busybox-mount" [e816665f-4af4-463e-a245-3bf2476c6a63] Pending / Ready:ContainersNotReady (containers with unready status: [mount-munger]) / ContainersReady:ContainersNotReady (containers with unready status: [mount-munger])
helpers_test.go:344: "busybox-mount" [e816665f-4af4-463e-a245-3bf2476c6a63] Pending / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
helpers_test.go:344: "busybox-mount" [e816665f-4af4-463e-a245-3bf2476c6a63] Succeeded / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
functional_test_mount_test.go:153: (dbg) TestFunctional/parallel/MountCmd/any-port: integration-test=busybox-mount healthy within 5.003722468s
functional_test_mount_test.go:169: (dbg) Run:  kubectl --context functional-860089 logs busybox-mount
functional_test_mount_test.go:181: (dbg) Run:  out/minikube-linux-amd64 -p functional-860089 ssh stat /mount-9p/created-by-test
functional_test_mount_test.go:181: (dbg) Run:  out/minikube-linux-amd64 -p functional-860089 ssh stat /mount-9p/created-by-pod
functional_test_mount_test.go:90: (dbg) Run:  out/minikube-linux-amd64 -p functional-860089 ssh "sudo umount -f /mount-9p"
functional_test_mount_test.go:94: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-860089 /tmp/TestFunctionalparallelMountCmdany-port109963948/001:/mount-9p --alsologtostderr -v=1] ...
--- PASS: TestFunctional/parallel/MountCmd/any-port (7.75s)
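A hand-run version of the mount round trip above; /tmp/mount-src is a hypothetical host directory standing in for the per-test temp dir:

    # expose a host directory inside the guest over 9p
    mkdir -p /tmp/mount-src
    out/minikube-linux-amd64 mount -p functional-860089 /tmp/mount-src:/mount-9p &
    MOUNT_PID=$!
    # verify from inside the guest that /mount-9p really is a 9p mount
    out/minikube-linux-amd64 -p functional-860089 ssh "findmnt -T /mount-9p | grep 9p"
    # clean up: force-unmount in the guest, then stop the mount process
    out/minikube-linux-amd64 -p functional-860089 ssh "sudo umount -f /mount-9p"
    kill "$MOUNT_PID"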

                                                
                                    
TestFunctional/parallel/ServiceCmd/List (0.47s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/List
functional_test.go:1459: (dbg) Run:  out/minikube-linux-amd64 -p functional-860089 service list
--- PASS: TestFunctional/parallel/ServiceCmd/List (0.47s)

                                                
                                    
TestFunctional/parallel/ServiceCmd/JSONOutput (0.51s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/JSONOutput
functional_test.go:1489: (dbg) Run:  out/minikube-linux-amd64 -p functional-860089 service list -o json
functional_test.go:1494: Took "510.35553ms" to run "out/minikube-linux-amd64 -p functional-860089 service list -o json"
--- PASS: TestFunctional/parallel/ServiceCmd/JSONOutput (0.51s)

                                                
                                    
TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP (0.06s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP
functional_test_tunnel_test.go:234: (dbg) Run:  kubectl --context functional-860089 get svc nginx-svc -o jsonpath={.status.loadBalancer.ingress[0].ip}
--- PASS: TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP (0.06s)

                                                
                                    
TestFunctional/parallel/TunnelCmd/serial/AccessDirect (0s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/AccessDirect
functional_test_tunnel_test.go:299: tunnel at http://10.102.26.172 is working!
--- PASS: TestFunctional/parallel/TunnelCmd/serial/AccessDirect (0.00s)
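What "tunnel at http://10.102.26.172 is working!" amounts to: with the tunnel running, the service's LoadBalancer ingress IP answers directly. A minimal check, reusing the jsonpath from the IngressIP test above:

    IP=$(kubectl --context functional-860089 get svc nginx-svc \
      -o jsonpath='{.status.loadBalancer.ingress[0].ip}')
    # reachable only while minikube tunnel is up
    curl -sSf "http://$IP/" >/dev/null && echo "tunnel at http://$IP is working"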

                                                
                                    
TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel (0.11s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel
functional_test_tunnel_test.go:434: (dbg) stopping [out/minikube-linux-amd64 -p functional-860089 tunnel --alsologtostderr] ...
--- PASS: TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel (0.11s)

                                                
                                    
TestFunctional/parallel/ServiceCmd/HTTPS (0.34s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/HTTPS
functional_test.go:1509: (dbg) Run:  out/minikube-linux-amd64 -p functional-860089 service --namespace=default --https --url hello-node
functional_test.go:1522: found endpoint: https://192.168.49.2:32366
--- PASS: TestFunctional/parallel/ServiceCmd/HTTPS (0.34s)

                                                
                                    
TestFunctional/parallel/ServiceCmd/Format (0.34s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/Format
functional_test.go:1540: (dbg) Run:  out/minikube-linux-amd64 -p functional-860089 service hello-node --url --format={{.IP}}
--- PASS: TestFunctional/parallel/ServiceCmd/Format (0.34s)

                                                
                                    
TestFunctional/parallel/ServiceCmd/URL (0.36s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/URL
functional_test.go:1559: (dbg) Run:  out/minikube-linux-amd64 -p functional-860089 service hello-node --url
functional_test.go:1565: found endpoint for hello-node: http://192.168.49.2:32366
--- PASS: TestFunctional/parallel/ServiceCmd/URL (0.36s)
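The discovered endpoint can be exercised directly; a sketch:

    URL=$(out/minikube-linux-amd64 -p functional-860089 service hello-node --url)
    # e.g. http://192.168.49.2:32366; echoserver replies with request details
    curl -sS "$URL" | head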

                                                
                                    
TestFunctional/parallel/DockerEnv/bash (0.87s)

                                                
                                                
=== RUN   TestFunctional/parallel/DockerEnv/bash
functional_test.go:499: (dbg) Run:  /bin/bash -c "eval $(out/minikube-linux-amd64 -p functional-860089 docker-env) && out/minikube-linux-amd64 status -p functional-860089"
functional_test.go:522: (dbg) Run:  /bin/bash -c "eval $(out/minikube-linux-amd64 -p functional-860089 docker-env) && docker images"
--- PASS: TestFunctional/parallel/DockerEnv/bash (0.87s)
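What the eval dance above does: docker-env prints DOCKER_HOST and related exports, so the host's docker CLI talks to the daemon inside the minikube node. A sketch:

    # point the local docker CLI at the cluster node's docker daemon
    eval "$(out/minikube-linux-amd64 -p functional-860089 docker-env)"
    docker images   # now lists the cluster runtime's images
    out/minikube-linux-amd64 status -p functional-860089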

                                                
                                    
TestFunctional/parallel/Version/short (0.06s)

                                                
                                                
=== RUN   TestFunctional/parallel/Version/short
=== PAUSE TestFunctional/parallel/Version/short

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/Version/short
functional_test.go:2256: (dbg) Run:  out/minikube-linux-amd64 -p functional-860089 version --short
--- PASS: TestFunctional/parallel/Version/short (0.06s)

                                                
                                    
TestFunctional/parallel/Version/components (0.68s)

                                                
                                                
=== RUN   TestFunctional/parallel/Version/components
=== PAUSE TestFunctional/parallel/Version/components

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/Version/components
functional_test.go:2270: (dbg) Run:  out/minikube-linux-amd64 -p functional-860089 version -o=json --components
--- PASS: TestFunctional/parallel/Version/components (0.68s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageListShort (0.23s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageListShort
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListShort

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ImageCommands/ImageListShort
functional_test.go:261: (dbg) Run:  out/minikube-linux-amd64 -p functional-860089 image ls --format short --alsologtostderr
functional_test.go:266: (dbg) Stdout: out/minikube-linux-amd64 -p functional-860089 image ls --format short --alsologtostderr:
registry.k8s.io/pause:latest
registry.k8s.io/pause:3.3
registry.k8s.io/pause:3.10
registry.k8s.io/pause:3.1
registry.k8s.io/kube-scheduler:v1.31.1
registry.k8s.io/kube-proxy:v1.31.1
registry.k8s.io/kube-controller-manager:v1.31.1
registry.k8s.io/kube-apiserver:v1.31.1
registry.k8s.io/etcd:3.5.15-0
registry.k8s.io/echoserver:1.8
registry.k8s.io/coredns/coredns:v1.11.3
gcr.io/k8s-minikube/storage-provisioner:v5
gcr.io/k8s-minikube/busybox:1.28.4-glibc
docker.io/library/nginx:latest
docker.io/library/nginx:alpine
docker.io/library/minikube-local-cache-test:functional-860089
docker.io/kubernetesui/metrics-scraper:<none>
docker.io/kicbase/echo-server:functional-860089
functional_test.go:269: (dbg) Stderr: out/minikube-linux-amd64 -p functional-860089 image ls --format short --alsologtostderr:
I0927 00:31:11.368932  596413 out.go:345] Setting OutFile to fd 1 ...
I0927 00:31:11.369066  596413 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0927 00:31:11.369077  596413 out.go:358] Setting ErrFile to fd 2...
I0927 00:31:11.369085  596413 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0927 00:31:11.369350  596413 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19711-533157/.minikube/bin
I0927 00:31:11.370253  596413 config.go:182] Loaded profile config "functional-860089": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.31.1
I0927 00:31:11.370396  596413 config.go:182] Loaded profile config "functional-860089": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.31.1
I0927 00:31:11.371016  596413 cli_runner.go:164] Run: docker container inspect functional-860089 --format={{.State.Status}}
I0927 00:31:11.389581  596413 ssh_runner.go:195] Run: systemctl --version
I0927 00:31:11.389649  596413 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-860089
I0927 00:31:11.409764  596413 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33173 SSHKeyPath:/home/jenkins/minikube-integration/19711-533157/.minikube/machines/functional-860089/id_rsa Username:docker}
I0927 00:31:11.497613  596413 ssh_runner.go:195] Run: docker images --no-trunc --format "{{json .}}"
--- PASS: TestFunctional/parallel/ImageCommands/ImageListShort (0.23s)
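The three sibling tests that follow run the same listing with different output formats; for reference, all four variants side by side:

    out/minikube-linux-amd64 -p functional-860089 image ls --format short
    out/minikube-linux-amd64 -p functional-860089 image ls --format table
    out/minikube-linux-amd64 -p functional-860089 image ls --format json
    out/minikube-linux-amd64 -p functional-860089 image ls --format yaml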

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageListTable (0.22s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageListTable
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListTable

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ImageCommands/ImageListTable
functional_test.go:261: (dbg) Run:  out/minikube-linux-amd64 -p functional-860089 image ls --format table --alsologtostderr
functional_test.go:266: (dbg) Stdout: out/minikube-linux-amd64 -p functional-860089 image ls --format table --alsologtostderr:
|---------------------------------------------|-------------------|---------------|--------|
|                    Image                    |        Tag        |   Image ID    |  Size  |
|---------------------------------------------|-------------------|---------------|--------|
| registry.k8s.io/coredns/coredns             | v1.11.3           | c69fa2e9cbf5f | 61.8MB |
| gcr.io/k8s-minikube/storage-provisioner     | v5                | 6e38f40d628db | 31.5MB |
| registry.k8s.io/pause                       | 3.1               | da86e6ba6ca19 | 742kB  |
| registry.k8s.io/echoserver                  | 1.8               | 82e4c8a736a4f | 95.4MB |
| registry.k8s.io/kube-apiserver              | v1.31.1           | 6bab7719df100 | 94.2MB |
| registry.k8s.io/kube-controller-manager     | v1.31.1           | 175ffd71cce3d | 88.4MB |
| registry.k8s.io/kube-proxy                  | v1.31.1           | 60c005f310ff3 | 91.5MB |
| docker.io/library/nginx                     | alpine            | c7b4f26a7d93f | 43.2MB |
| docker.io/library/nginx                     | latest            | 39286ab8a5e14 | 188MB  |
| docker.io/kicbase/echo-server               | functional-860089 | 9056ab77afb8e | 4.94MB |
| docker.io/kubernetesui/metrics-scraper      | <none>            | 115053965e86b | 43.8MB |
| registry.k8s.io/kube-scheduler              | v1.31.1           | 9aa1fad941575 | 67.4MB |
| gcr.io/k8s-minikube/busybox                 | 1.28.4-glibc      | 56cc512116c8f | 4.4MB  |
| registry.k8s.io/pause                       | latest            | 350b164e7ae1d | 240kB  |
| docker.io/library/minikube-local-cache-test | functional-860089 | 24a24fefe9933 | 30B    |
| registry.k8s.io/etcd                        | 3.5.15-0          | 2e96e5913fc06 | 148MB  |
| registry.k8s.io/pause                       | 3.10              | 873ed75102791 | 736kB  |
| registry.k8s.io/pause                       | 3.3               | 0184c1613d929 | 683kB  |
|---------------------------------------------|-------------------|---------------|--------|
functional_test.go:269: (dbg) Stderr: out/minikube-linux-amd64 -p functional-860089 image ls --format table --alsologtostderr:
I0927 00:31:11.804054  596508 out.go:345] Setting OutFile to fd 1 ...
I0927 00:31:11.804338  596508 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0927 00:31:11.804354  596508 out.go:358] Setting ErrFile to fd 2...
I0927 00:31:11.804361  596508 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0927 00:31:11.804616  596508 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19711-533157/.minikube/bin
I0927 00:31:11.805441  596508 config.go:182] Loaded profile config "functional-860089": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.31.1
I0927 00:31:11.805605  596508 config.go:182] Loaded profile config "functional-860089": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.31.1
I0927 00:31:11.806139  596508 cli_runner.go:164] Run: docker container inspect functional-860089 --format={{.State.Status}}
I0927 00:31:11.824763  596508 ssh_runner.go:195] Run: systemctl --version
I0927 00:31:11.824831  596508 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-860089
I0927 00:31:11.846075  596508 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33173 SSHKeyPath:/home/jenkins/minikube-integration/19711-533157/.minikube/machines/functional-860089/id_rsa Username:docker}
I0927 00:31:11.936981  596508 ssh_runner.go:195] Run: docker images --no-trunc --format "{{json .}}"
--- PASS: TestFunctional/parallel/ImageCommands/ImageListTable (0.22s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageListJson (0.21s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageListJson
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListJson

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ImageCommands/ImageListJson
functional_test.go:261: (dbg) Run:  out/minikube-linux-amd64 -p functional-860089 image ls --format json --alsologtostderr
functional_test.go:266: (dbg) Stdout: out/minikube-linux-amd64 -p functional-860089 image ls --format json --alsologtostderr:
[{"id":"c7b4f26a7d93f4f1f276c51adb03ef0df54a82de89f254a9aec5c18bf0e45ee9","repoDigests":[],"repoTags":["docker.io/library/nginx:alpine"],"size":"43200000"},{"id":"39286ab8a5e14aeaf5fdd6e2fac76e0c8d31a0c07224f0ee5e6be502f12e93f3","repoDigests":[],"repoTags":["docker.io/library/nginx:latest"],"size":"188000000"},{"id":"2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4","repoDigests":[],"repoTags":["registry.k8s.io/etcd:3.5.15-0"],"size":"148000000"},{"id":"82e4c8a736a4fcf22b5ef9f6a4ff6207064c7187d7694bf97bd561605a538410","repoDigests":[],"repoTags":["registry.k8s.io/echoserver:1.8"],"size":"95400000"},{"id":"350b164e7ae1dcddeffadd65c76226c9b6dc5553f5179153fb0e36b78f2a5e06","repoDigests":[],"repoTags":["registry.k8s.io/pause:latest"],"size":"240000"},{"id":"24a24fefe99336ef3899565ce5b6cffaee0adc599c34aa4f96be3e9573f0e794","repoDigests":[],"repoTags":["docker.io/library/minikube-local-cache-test:functional-860089"],"size":"30"},{"id":"6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aae
a29d1aee","repoDigests":[],"repoTags":["registry.k8s.io/kube-apiserver:v1.31.1"],"size":"94200000"},{"id":"60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561","repoDigests":[],"repoTags":["registry.k8s.io/kube-proxy:v1.31.1"],"size":"91500000"},{"id":"115053965e86b2df4d78af78d7951b8644839d20a03820c6df59a261103315f7","repoDigests":[],"repoTags":["docker.io/kubernetesui/metrics-scraper:\u003cnone\u003e"],"size":"43800000"},{"id":"6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562","repoDigests":[],"repoTags":["gcr.io/k8s-minikube/storage-provisioner:v5"],"size":"31500000"},{"id":"da86e6ba6ca197bf6bc5e9d900febd906b133eaa4750e6bed647b0fbe50ed43e","repoDigests":[],"repoTags":["registry.k8s.io/pause:3.1"],"size":"742000"},{"id":"9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b","repoDigests":[],"repoTags":["registry.k8s.io/kube-scheduler:v1.31.1"],"size":"67400000"},{"id":"873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136","repoDigests":[],"repoTags":["regi
stry.k8s.io/pause:3.10"],"size":"736000"},{"id":"9056ab77afb8e18e04303f11000a9d31b3f16b74c59475b899ae1b342d328d30","repoDigests":[],"repoTags":["docker.io/kicbase/echo-server:functional-860089"],"size":"4940000"},{"id":"175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1","repoDigests":[],"repoTags":["registry.k8s.io/kube-controller-manager:v1.31.1"],"size":"88400000"},{"id":"c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6","repoDigests":[],"repoTags":["registry.k8s.io/coredns/coredns:v1.11.3"],"size":"61800000"},{"id":"0184c1613d92931126feb4c548e5da11015513b9e4c104e7305ee8b53b50a9da","repoDigests":[],"repoTags":["registry.k8s.io/pause:3.3"],"size":"683000"},{"id":"56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c","repoDigests":[],"repoTags":["gcr.io/k8s-minikube/busybox:1.28.4-glibc"],"size":"4400000"}]
functional_test.go:269: (dbg) Stderr: out/minikube-linux-amd64 -p functional-860089 image ls --format json --alsologtostderr:
I0927 00:31:11.596472  596458 out.go:345] Setting OutFile to fd 1 ...
I0927 00:31:11.596575  596458 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0927 00:31:11.596585  596458 out.go:358] Setting ErrFile to fd 2...
I0927 00:31:11.596591  596458 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0927 00:31:11.596793  596458 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19711-533157/.minikube/bin
I0927 00:31:11.597462  596458 config.go:182] Loaded profile config "functional-860089": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.31.1
I0927 00:31:11.597585  596458 config.go:182] Loaded profile config "functional-860089": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.31.1
I0927 00:31:11.598013  596458 cli_runner.go:164] Run: docker container inspect functional-860089 --format={{.State.Status}}
I0927 00:31:11.615187  596458 ssh_runner.go:195] Run: systemctl --version
I0927 00:31:11.615235  596458 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-860089
I0927 00:31:11.634086  596458 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33173 SSHKeyPath:/home/jenkins/minikube-integration/19711-533157/.minikube/machines/functional-860089/id_rsa Username:docker}
I0927 00:31:11.720790  596458 ssh_runner.go:195] Run: docker images --no-trunc --format "{{json .}}"
--- PASS: TestFunctional/parallel/ImageCommands/ImageListJson (0.21s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageListYaml (0.22s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageListYaml
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListYaml

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ImageCommands/ImageListYaml
functional_test.go:261: (dbg) Run:  out/minikube-linux-amd64 -p functional-860089 image ls --format yaml --alsologtostderr
functional_test.go:266: (dbg) Stdout: out/minikube-linux-amd64 -p functional-860089 image ls --format yaml --alsologtostderr:
- id: 24a24fefe99336ef3899565ce5b6cffaee0adc599c34aa4f96be3e9573f0e794
repoDigests: []
repoTags:
- docker.io/library/minikube-local-cache-test:functional-860089
size: "30"
- id: 9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b
repoDigests: []
repoTags:
- registry.k8s.io/kube-scheduler:v1.31.1
size: "67400000"
- id: 39286ab8a5e14aeaf5fdd6e2fac76e0c8d31a0c07224f0ee5e6be502f12e93f3
repoDigests: []
repoTags:
- docker.io/library/nginx:latest
size: "188000000"
- id: 56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c
repoDigests: []
repoTags:
- gcr.io/k8s-minikube/busybox:1.28.4-glibc
size: "4400000"
- id: 873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136
repoDigests: []
repoTags:
- registry.k8s.io/pause:3.10
size: "736000"
- id: 9056ab77afb8e18e04303f11000a9d31b3f16b74c59475b899ae1b342d328d30
repoDigests: []
repoTags:
- docker.io/kicbase/echo-server:functional-860089
size: "4940000"
- id: 350b164e7ae1dcddeffadd65c76226c9b6dc5553f5179153fb0e36b78f2a5e06
repoDigests: []
repoTags:
- registry.k8s.io/pause:latest
size: "240000"
- id: 115053965e86b2df4d78af78d7951b8644839d20a03820c6df59a261103315f7
repoDigests: []
repoTags:
- docker.io/kubernetesui/metrics-scraper:<none>
size: "43800000"
- id: 6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562
repoDigests: []
repoTags:
- gcr.io/k8s-minikube/storage-provisioner:v5
size: "31500000"
- id: 0184c1613d92931126feb4c548e5da11015513b9e4c104e7305ee8b53b50a9da
repoDigests: []
repoTags:
- registry.k8s.io/pause:3.3
size: "683000"
- id: 6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee
repoDigests: []
repoTags:
- registry.k8s.io/kube-apiserver:v1.31.1
size: "94200000"
- id: 175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1
repoDigests: []
repoTags:
- registry.k8s.io/kube-controller-manager:v1.31.1
size: "88400000"
- id: 60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561
repoDigests: []
repoTags:
- registry.k8s.io/kube-proxy:v1.31.1
size: "91500000"
- id: c7b4f26a7d93f4f1f276c51adb03ef0df54a82de89f254a9aec5c18bf0e45ee9
repoDigests: []
repoTags:
- docker.io/library/nginx:alpine
size: "43200000"
- id: c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6
repoDigests: []
repoTags:
- registry.k8s.io/coredns/coredns:v1.11.3
size: "61800000"
- id: da86e6ba6ca197bf6bc5e9d900febd906b133eaa4750e6bed647b0fbe50ed43e
repoDigests: []
repoTags:
- registry.k8s.io/pause:3.1
size: "742000"
- id: 82e4c8a736a4fcf22b5ef9f6a4ff6207064c7187d7694bf97bd561605a538410
repoDigests: []
repoTags:
- registry.k8s.io/echoserver:1.8
size: "95400000"
- id: 2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4
repoDigests: []
repoTags:
- registry.k8s.io/etcd:3.5.15-0
size: "148000000"

                                                
                                                
functional_test.go:269: (dbg) Stderr: out/minikube-linux-amd64 -p functional-860089 image ls --format yaml --alsologtostderr:
I0927 00:31:12.020467  596559 out.go:345] Setting OutFile to fd 1 ...
I0927 00:31:12.020839  596559 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0927 00:31:12.020851  596559 out.go:358] Setting ErrFile to fd 2...
I0927 00:31:12.020856  596559 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0927 00:31:12.021085  596559 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19711-533157/.minikube/bin
I0927 00:31:12.022001  596559 config.go:182] Loaded profile config "functional-860089": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.31.1
I0927 00:31:12.022179  596559 config.go:182] Loaded profile config "functional-860089": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.31.1
I0927 00:31:12.022737  596559 cli_runner.go:164] Run: docker container inspect functional-860089 --format={{.State.Status}}
I0927 00:31:12.043021  596559 ssh_runner.go:195] Run: systemctl --version
I0927 00:31:12.043070  596559 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-860089
I0927 00:31:12.064301  596559 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33173 SSHKeyPath:/home/jenkins/minikube-integration/19711-533157/.minikube/machines/functional-860089/id_rsa Username:docker}
I0927 00:31:12.153502  596559 ssh_runner.go:195] Run: docker images --no-trunc --format "{{json .}}"
--- PASS: TestFunctional/parallel/ImageCommands/ImageListYaml (0.22s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageBuild (3.87s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageBuild
=== PAUSE TestFunctional/parallel/ImageCommands/ImageBuild

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ImageCommands/ImageBuild
functional_test.go:308: (dbg) Run:  out/minikube-linux-amd64 -p functional-860089 ssh pgrep buildkitd
functional_test.go:308: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-860089 ssh pgrep buildkitd: exit status 1 (264.383733ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
functional_test.go:315: (dbg) Run:  out/minikube-linux-amd64 -p functional-860089 image build -t localhost/my-image:functional-860089 testdata/build --alsologtostderr
functional_test.go:315: (dbg) Done: out/minikube-linux-amd64 -p functional-860089 image build -t localhost/my-image:functional-860089 testdata/build --alsologtostderr: (3.379049592s)
functional_test.go:323: (dbg) Stderr: out/minikube-linux-amd64 -p functional-860089 image build -t localhost/my-image:functional-860089 testdata/build --alsologtostderr:
I0927 00:31:12.504134  596690 out.go:345] Setting OutFile to fd 1 ...
I0927 00:31:12.504780  596690 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0927 00:31:12.504843  596690 out.go:358] Setting ErrFile to fd 2...
I0927 00:31:12.504865  596690 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0927 00:31:12.505376  596690 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19711-533157/.minikube/bin
I0927 00:31:12.506744  596690 config.go:182] Loaded profile config "functional-860089": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.31.1
I0927 00:31:12.507295  596690 config.go:182] Loaded profile config "functional-860089": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.31.1
I0927 00:31:12.507689  596690 cli_runner.go:164] Run: docker container inspect functional-860089 --format={{.State.Status}}
I0927 00:31:12.525967  596690 ssh_runner.go:195] Run: systemctl --version
I0927 00:31:12.526019  596690 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-860089
I0927 00:31:12.546986  596690 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33173 SSHKeyPath:/home/jenkins/minikube-integration/19711-533157/.minikube/machines/functional-860089/id_rsa Username:docker}
I0927 00:31:12.637560  596690 build_images.go:161] Building image from path: /tmp/build.3390954739.tar
I0927 00:31:12.637631  596690 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/build
I0927 00:31:12.648190  596690 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/build/build.3390954739.tar
I0927 00:31:12.652465  596690 ssh_runner.go:352] existence check for /var/lib/minikube/build/build.3390954739.tar: stat -c "%s %y" /var/lib/minikube/build/build.3390954739.tar: Process exited with status 1
stdout:

                                                
                                                
stderr:
stat: cannot statx '/var/lib/minikube/build/build.3390954739.tar': No such file or directory
I0927 00:31:12.652497  596690 ssh_runner.go:362] scp /tmp/build.3390954739.tar --> /var/lib/minikube/build/build.3390954739.tar (3072 bytes)
I0927 00:31:12.679980  596690 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/build/build.3390954739
I0927 00:31:12.722265  596690 ssh_runner.go:195] Run: sudo tar -C /var/lib/minikube/build/build.3390954739 -xf /var/lib/minikube/build/build.3390954739.tar
I0927 00:31:12.732236  596690 docker.go:360] Building image: /var/lib/minikube/build/build.3390954739
I0927 00:31:12.732308  596690 ssh_runner.go:195] Run: docker build -t localhost/my-image:functional-860089 /var/lib/minikube/build/build.3390954739
#0 building with "default" instance using docker driver

                                                
                                                
#1 [internal] load build definition from Dockerfile
#1 transferring dockerfile:
#1 transferring dockerfile: 97B done
#1 DONE 0.3s

                                                
                                                
#2 [internal] load metadata for gcr.io/k8s-minikube/busybox:latest
#2 DONE 1.0s

                                                
                                                
#3 [internal] load .dockerignore
#3 transferring context: 2B done
#3 DONE 0.0s

                                                
                                                
#4 [internal] load build context
#4 transferring context: 62B done
#4 DONE 0.1s

                                                
                                                
#5 [1/3] FROM gcr.io/k8s-minikube/busybox:latest@sha256:ca5ae90100d50772da31f3b5016209e25ad61972404e2ccd83d44f10dee7e79b
#5 resolve gcr.io/k8s-minikube/busybox:latest@sha256:ca5ae90100d50772da31f3b5016209e25ad61972404e2ccd83d44f10dee7e79b 0.1s done
#5 sha256:ca5ae90100d50772da31f3b5016209e25ad61972404e2ccd83d44f10dee7e79b 770B / 770B done
#5 sha256:62ffc2ed7554e4c6d360bce40bbcf196573dd27c4ce080641a2c59867e732dee 527B / 527B done
#5 sha256:beae173ccac6ad749f76713cf4440fe3d21d1043fe616dfbe30775815d1d0f6a 1.46kB / 1.46kB done
#5 sha256:5cc84ad355aaa64f46ea9c7bbcc319a9d808ab15088a27209c9e70ef86e5a2aa 0B / 772.79kB 0.1s
#5 extracting sha256:5cc84ad355aaa64f46ea9c7bbcc319a9d808ab15088a27209c9e70ef86e5a2aa
#5 sha256:5cc84ad355aaa64f46ea9c7bbcc319a9d808ab15088a27209c9e70ef86e5a2aa 772.79kB / 772.79kB 0.2s done
#5 extracting sha256:5cc84ad355aaa64f46ea9c7bbcc319a9d808ab15088a27209c9e70ef86e5a2aa 0.0s done
#5 DONE 0.4s

                                                
                                                
#6 [2/3] RUN true
#6 DONE 0.8s

                                                
                                                
#7 [3/3] ADD content.txt /
#7 DONE 0.0s

                                                
                                                
#8 exporting to image
#8 exporting layers 0.0s done
#8 writing image sha256:5a319e1ba1da88282495f53ea955938ff8a96dd0d0a8272463f3ab52034a40ab done
#8 naming to localhost/my-image:functional-860089 done
#8 DONE 0.0s
I0927 00:31:15.797189  596690 ssh_runner.go:235] Completed: docker build -t localhost/my-image:functional-860089 /var/lib/minikube/build/build.3390954739: (3.064848983s)
I0927 00:31:15.797286  596690 ssh_runner.go:195] Run: sudo rm -rf /var/lib/minikube/build/build.3390954739
I0927 00:31:15.807565  596690 ssh_runner.go:195] Run: sudo rm -f /var/lib/minikube/build/build.3390954739.tar
I0927 00:31:15.822484  596690 build_images.go:217] Built localhost/my-image:functional-860089 from /tmp/build.3390954739.tar
I0927 00:31:15.822525  596690 build_images.go:133] succeeded building to: functional-860089
I0927 00:31:15.822531  596690 build_images.go:134] failed building to: 
functional_test.go:451: (dbg) Run:  out/minikube-linux-amd64 -p functional-860089 image ls
2024/09/27 00:31:18 [DEBUG] GET http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/
--- PASS: TestFunctional/parallel/ImageCommands/ImageBuild (3.87s)
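From the BuildKit steps above (#5 FROM gcr.io/k8s-minikube/busybox, #6 RUN true, #7 ADD content.txt), the 97-byte Dockerfile in testdata/build is presumably equivalent to the reconstruction below; this is an inferred sketch, not the repository file:

    mkdir -p /tmp/build-demo && cd /tmp/build-demo
    cat > Dockerfile <<'EOF'
    FROM gcr.io/k8s-minikube/busybox:latest
    RUN true
    ADD content.txt /
    EOF
    echo hello > content.txt
    # build inside the cluster's runtime, as the test does
    out/minikube-linux-amd64 -p functional-860089 image build \
      -t localhost/my-image:functional-860089 .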

                                                
                                    
TestFunctional/parallel/ImageCommands/Setup (0.6s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/Setup
functional_test.go:342: (dbg) Run:  docker pull kicbase/echo-server:1.0
functional_test.go:347: (dbg) Run:  docker tag kicbase/echo-server:1.0 kicbase/echo-server:functional-860089
--- PASS: TestFunctional/parallel/ImageCommands/Setup (0.60s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageLoadDaemon (0.98s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageLoadDaemon
functional_test.go:355: (dbg) Run:  out/minikube-linux-amd64 -p functional-860089 image load --daemon kicbase/echo-server:functional-860089 --alsologtostderr
functional_test.go:451: (dbg) Run:  out/minikube-linux-amd64 -p functional-860089 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageLoadDaemon (0.98s)

                                                
                                    
TestFunctional/parallel/UpdateContextCmd/no_changes (0.14s)

                                                
                                                
=== RUN   TestFunctional/parallel/UpdateContextCmd/no_changes
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_changes

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/UpdateContextCmd/no_changes
functional_test.go:2119: (dbg) Run:  out/minikube-linux-amd64 -p functional-860089 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_changes (0.14s)

                                                
                                    
TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster (0.14s)

                                                
                                                
=== RUN   TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
functional_test.go:2119: (dbg) Run:  out/minikube-linux-amd64 -p functional-860089 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster (0.14s)

                                                
                                    
TestFunctional/parallel/UpdateContextCmd/no_clusters (0.15s)

                                                
                                                
=== RUN   TestFunctional/parallel/UpdateContextCmd/no_clusters
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_clusters

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/UpdateContextCmd/no_clusters
functional_test.go:2119: (dbg) Run:  out/minikube-linux-amd64 -p functional-860089 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_clusters (0.15s)
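update-context rewrites the kubeconfig entry for the profile, which matters when the node's IP or API port has changed; a minimal check:

    out/minikube-linux-amd64 -p functional-860089 update-context --alsologtostderr -v=2
    kubectl config current-context   # should print functional-860089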

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageReloadDaemon (0.83s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageReloadDaemon
functional_test.go:365: (dbg) Run:  out/minikube-linux-amd64 -p functional-860089 image load --daemon kicbase/echo-server:functional-860089 --alsologtostderr
functional_test.go:451: (dbg) Run:  out/minikube-linux-amd64 -p functional-860089 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageReloadDaemon (0.83s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon (1.02s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon
functional_test.go:235: (dbg) Run:  docker pull kicbase/echo-server:latest
functional_test.go:240: (dbg) Run:  docker tag kicbase/echo-server:latest kicbase/echo-server:functional-860089
functional_test.go:245: (dbg) Run:  out/minikube-linux-amd64 -p functional-860089 image load --daemon kicbase/echo-server:functional-860089 --alsologtostderr
functional_test.go:451: (dbg) Run:  out/minikube-linux-amd64 -p functional-860089 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon (1.02s)

                                                
                                    
TestFunctional/parallel/MountCmd/specific-port (2.14s)

                                                
                                                
=== RUN   TestFunctional/parallel/MountCmd/specific-port
functional_test_mount_test.go:213: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-860089 /tmp/TestFunctionalparallelMountCmdspecific-port2011797618/001:/mount-9p --alsologtostderr -v=1 --port 46464]
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-linux-amd64 -p functional-860089 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:243: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-860089 ssh "findmnt -T /mount-9p | grep 9p": exit status 1 (308.851625ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
I0927 00:31:05.123533  540034 retry.go:31] will retry after 670.951822ms: exit status 1
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-linux-amd64 -p functional-860089 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:257: (dbg) Run:  out/minikube-linux-amd64 -p functional-860089 ssh -- ls -la /mount-9p
functional_test_mount_test.go:261: guest mount directory contents
total 0
functional_test_mount_test.go:263: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-860089 /tmp/TestFunctionalparallelMountCmdspecific-port2011797618/001:/mount-9p --alsologtostderr -v=1 --port 46464] ...
functional_test_mount_test.go:264: reading mount text
functional_test_mount_test.go:278: done reading mount text
functional_test_mount_test.go:230: (dbg) Run:  out/minikube-linux-amd64 -p functional-860089 ssh "sudo umount -f /mount-9p"
functional_test_mount_test.go:230: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-860089 ssh "sudo umount -f /mount-9p": exit status 1 (311.100784ms)

                                                
                                                
-- stdout --
	umount: /mount-9p: not mounted.

                                                
                                                
-- /stdout --
** stderr ** 
	ssh: Process exited with status 32

                                                
                                                
** /stderr **
functional_test_mount_test.go:232: "out/minikube-linux-amd64 -p functional-860089 ssh \"sudo umount -f /mount-9p\"": exit status 1
functional_test_mount_test.go:234: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-860089 /tmp/TestFunctionalparallelMountCmdspecific-port2011797618/001:/mount-9p --alsologtostderr -v=1 --port 46464] ...
--- PASS: TestFunctional/parallel/MountCmd/specific-port (2.14s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageSaveToFile (0.48s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageSaveToFile
functional_test.go:380: (dbg) Run:  out/minikube-linux-amd64 -p functional-860089 image save kicbase/echo-server:functional-860089 /home/jenkins/workspace/Docker_Linux_integration/echo-server-save.tar --alsologtostderr
--- PASS: TestFunctional/parallel/ImageCommands/ImageSaveToFile (0.48s)

                                                
                                    
x
+
TestFunctional/parallel/ImageCommands/ImageRemove (0.55s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageRemove
functional_test.go:392: (dbg) Run:  out/minikube-linux-amd64 -p functional-860089 image rm kicbase/echo-server:functional-860089 --alsologtostderr
functional_test.go:451: (dbg) Run:  out/minikube-linux-amd64 -p functional-860089 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageRemove (0.55s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageLoadFromFile (0.82s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageLoadFromFile
functional_test.go:409: (dbg) Run:  out/minikube-linux-amd64 -p functional-860089 image load /home/jenkins/workspace/Docker_Linux_integration/echo-server-save.tar --alsologtostderr
functional_test.go:451: (dbg) Run:  out/minikube-linux-amd64 -p functional-860089 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageLoadFromFile (0.82s)
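The save / rm / load sequence across the last three image tests forms a round trip; by hand, with /tmp standing in for the Jenkins workspace path:

    # save an image from the cluster runtime to a tarball on the host
    out/minikube-linux-amd64 -p functional-860089 image save \
      kicbase/echo-server:functional-860089 /tmp/echo-server-save.tar
    # remove it from the cluster, then restore it from the tarball
    out/minikube-linux-amd64 -p functional-860089 image rm \
      kicbase/echo-server:functional-860089
    out/minikube-linux-amd64 -p functional-860089 image load /tmp/echo-server-save.tar
    out/minikube-linux-amd64 -p functional-860089 image ls | grep echo-server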

                                                
                                    
TestFunctional/parallel/MountCmd/VerifyCleanup (1.59s)

                                                
                                                
=== RUN   TestFunctional/parallel/MountCmd/VerifyCleanup
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-860089 /tmp/TestFunctionalparallelMountCmdVerifyCleanup2619059311/001:/mount1 --alsologtostderr -v=1]
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-860089 /tmp/TestFunctionalparallelMountCmdVerifyCleanup2619059311/001:/mount2 --alsologtostderr -v=1]
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-860089 /tmp/TestFunctionalparallelMountCmdVerifyCleanup2619059311/001:/mount3 --alsologtostderr -v=1]
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-amd64 -p functional-860089 ssh "findmnt -T" /mount1
functional_test_mount_test.go:325: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-860089 ssh "findmnt -T" /mount1: exit status 1 (328.068172ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
I0927 00:31:07.282691  540034 retry.go:31] will retry after 370.080539ms: exit status 1
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-amd64 -p functional-860089 ssh "findmnt -T" /mount1
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-amd64 -p functional-860089 ssh "findmnt -T" /mount2
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-amd64 -p functional-860089 ssh "findmnt -T" /mount3
functional_test_mount_test.go:370: (dbg) Run:  out/minikube-linux-amd64 mount -p functional-860089 --kill=true
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-860089 /tmp/TestFunctionalparallelMountCmdVerifyCleanup2619059311/001:/mount1 --alsologtostderr -v=1] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-860089 /tmp/TestFunctionalparallelMountCmdVerifyCleanup2619059311/001:/mount2 --alsologtostderr -v=1] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-860089 /tmp/TestFunctionalparallelMountCmdVerifyCleanup2619059311/001:/mount3 --alsologtostderr -v=1] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
--- PASS: TestFunctional/parallel/MountCmd/VerifyCleanup (1.59s)
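The cleanup being verified is the --kill path: one command tears down every outstanding mount process for the profile. A sketch, with /tmp/src as a hypothetical source directory:

    mkdir -p /tmp/src
    out/minikube-linux-amd64 mount -p functional-860089 /tmp/src:/mount1 &
    out/minikube-linux-amd64 mount -p functional-860089 /tmp/src:/mount2 &
    # terminate all mount processes for this profile at once
    out/minikube-linux-amd64 mount -p functional-860089 --kill=true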

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageSaveDaemon (0.36s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageSaveDaemon
functional_test.go:419: (dbg) Run:  docker rmi kicbase/echo-server:functional-860089
functional_test.go:424: (dbg) Run:  out/minikube-linux-amd64 -p functional-860089 image save --daemon kicbase/echo-server:functional-860089 --alsologtostderr
functional_test.go:432: (dbg) Run:  docker image inspect kicbase/echo-server:functional-860089
--- PASS: TestFunctional/parallel/ImageCommands/ImageSaveDaemon (0.36s)

                                                
                                    
TestFunctional/delete_echo-server_images (0.04s)

                                                
                                                
=== RUN   TestFunctional/delete_echo-server_images
functional_test.go:190: (dbg) Run:  docker rmi -f kicbase/echo-server:1.0
functional_test.go:190: (dbg) Run:  docker rmi -f kicbase/echo-server:functional-860089
--- PASS: TestFunctional/delete_echo-server_images (0.04s)

                                                
                                    
TestFunctional/delete_my-image_image (0.02s)

                                                
                                                
=== RUN   TestFunctional/delete_my-image_image
functional_test.go:198: (dbg) Run:  docker rmi -f localhost/my-image:functional-860089
--- PASS: TestFunctional/delete_my-image_image (0.02s)

                                                
                                    
TestFunctional/delete_minikube_cached_images (0.02s)

                                                
                                                
=== RUN   TestFunctional/delete_minikube_cached_images
functional_test.go:206: (dbg) Run:  docker rmi -f minikube-local-cache-test:functional-860089
--- PASS: TestFunctional/delete_minikube_cached_images (0.02s)

                                                
                                    
TestMultiControlPlane/serial/StartCluster (100.27s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/StartCluster
ha_test.go:101: (dbg) Run:  out/minikube-linux-amd64 start -p ha-780985 --wait=true --memory=2200 --ha -v=7 --alsologtostderr --driver=docker  --container-runtime=docker
E0927 00:33:11.331703  540034 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19711-533157/.minikube/profiles/addons-305811/client.crt: no such file or directory" logger="UnhandledError"
E0927 00:33:11.338144  540034 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19711-533157/.minikube/profiles/addons-305811/client.crt: no such file or directory" logger="UnhandledError"
E0927 00:33:11.349610  540034 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19711-533157/.minikube/profiles/addons-305811/client.crt: no such file or directory" logger="UnhandledError"
E0927 00:33:11.371058  540034 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19711-533157/.minikube/profiles/addons-305811/client.crt: no such file or directory" logger="UnhandledError"
E0927 00:33:11.412612  540034 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19711-533157/.minikube/profiles/addons-305811/client.crt: no such file or directory" logger="UnhandledError"
E0927 00:33:11.494124  540034 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19711-533157/.minikube/profiles/addons-305811/client.crt: no such file or directory" logger="UnhandledError"
E0927 00:33:11.655875  540034 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19711-533157/.minikube/profiles/addons-305811/client.crt: no such file or directory" logger="UnhandledError"
E0927 00:33:11.977457  540034 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19711-533157/.minikube/profiles/addons-305811/client.crt: no such file or directory" logger="UnhandledError"
E0927 00:33:12.619619  540034 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19711-533157/.minikube/profiles/addons-305811/client.crt: no such file or directory" logger="UnhandledError"
E0927 00:33:13.901206  540034 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19711-533157/.minikube/profiles/addons-305811/client.crt: no such file or directory" logger="UnhandledError"
E0927 00:33:16.463188  540034 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19711-533157/.minikube/profiles/addons-305811/client.crt: no such file or directory" logger="UnhandledError"
ha_test.go:101: (dbg) Done: out/minikube-linux-amd64 start -p ha-780985 --wait=true --memory=2200 --ha -v=7 --alsologtostderr --driver=docker  --container-runtime=docker: (1m39.610203469s)
ha_test.go:107: (dbg) Run:  out/minikube-linux-amd64 -p ha-780985 status -v=7 --alsologtostderr
--- PASS: TestMultiControlPlane/serial/StartCluster (100.27s)
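
A reproduction sketch of the HA start above (assumptions: "minikube" on PATH; the profile name "ha-demo" stands in for the generated ha-780985):

  # --ha provisions multiple control-plane nodes behind the load-balanced
  # apiserver endpoint seen later in the logs (192.168.49.254:8443);
  # --wait=true blocks until the components report healthy.
  minikube start -p ha-demo --ha --wait=true --memory=2200 --driver=docker --container-runtime=docker
  # Expect three "Control Plane" entries, each Running/Configured.
  minikube -p ha-demo status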

TestMultiControlPlane/serial/DeployApp (4.58s)

=== RUN   TestMultiControlPlane/serial/DeployApp
ha_test.go:128: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-780985 -- apply -f ./testdata/ha/ha-pod-dns-test.yaml
ha_test.go:133: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-780985 -- rollout status deployment/busybox
ha_test.go:133: (dbg) Done: out/minikube-linux-amd64 kubectl -p ha-780985 -- rollout status deployment/busybox: (2.565024251s)
ha_test.go:140: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-780985 -- get pods -o jsonpath='{.items[*].status.podIP}'
ha_test.go:163: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-780985 -- get pods -o jsonpath='{.items[*].metadata.name}'
ha_test.go:171: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-780985 -- exec busybox-7dff88458-5c528 -- nslookup kubernetes.io
ha_test.go:171: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-780985 -- exec busybox-7dff88458-99s8b -- nslookup kubernetes.io
ha_test.go:171: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-780985 -- exec busybox-7dff88458-hwlnc -- nslookup kubernetes.io
ha_test.go:181: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-780985 -- exec busybox-7dff88458-5c528 -- nslookup kubernetes.default
ha_test.go:181: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-780985 -- exec busybox-7dff88458-99s8b -- nslookup kubernetes.default
ha_test.go:181: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-780985 -- exec busybox-7dff88458-hwlnc -- nslookup kubernetes.default
ha_test.go:189: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-780985 -- exec busybox-7dff88458-5c528 -- nslookup kubernetes.default.svc.cluster.local
ha_test.go:189: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-780985 -- exec busybox-7dff88458-99s8b -- nslookup kubernetes.default.svc.cluster.local
E0927 00:33:21.584871  540034 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19711-533157/.minikube/profiles/addons-305811/client.crt: no such file or directory" logger="UnhandledError"
ha_test.go:189: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-780985 -- exec busybox-7dff88458-hwlnc -- nslookup kubernetes.default.svc.cluster.local
--- PASS: TestMultiControlPlane/serial/DeployApp (4.58s)
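
A sketch of the DNS smoke test above (assumptions: a kubectl context "ha-demo"; the busybox deployment comes from the same testdata manifest):

  kubectl --context ha-demo apply -f ./testdata/ha/ha-pod-dns-test.yaml
  kubectl --context ha-demo rollout status deployment/busybox
  # "exec deploy/..." targets a single replica; the test iterates over every pod.
  kubectl --context ha-demo exec deploy/busybox -- nslookup kubernetes.io
  kubectl --context ha-demo exec deploy/busybox -- nslookup kubernetes.default.svc.cluster.local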

TestMultiControlPlane/serial/PingHostFromPods (1.10s)

=== RUN   TestMultiControlPlane/serial/PingHostFromPods
ha_test.go:199: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-780985 -- get pods -o jsonpath='{.items[*].metadata.name}'
ha_test.go:207: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-780985 -- exec busybox-7dff88458-5c528 -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
ha_test.go:218: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-780985 -- exec busybox-7dff88458-5c528 -- sh -c "ping -c 1 192.168.49.1"
ha_test.go:207: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-780985 -- exec busybox-7dff88458-99s8b -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
ha_test.go:218: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-780985 -- exec busybox-7dff88458-99s8b -- sh -c "ping -c 1 192.168.49.1"
ha_test.go:207: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-780985 -- exec busybox-7dff88458-hwlnc -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
ha_test.go:218: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-780985 -- exec busybox-7dff88458-hwlnc -- sh -c "ping -c 1 192.168.49.1"
--- PASS: TestMultiControlPlane/serial/PingHostFromPods (1.10s)
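
A sketch of the host-reachability check (assumption: 192.168.49.1 is the docker-driver gateway, as in the run above):

  # host.minikube.internal should resolve to the host-side gateway from any pod.
  kubectl --context ha-demo exec deploy/busybox -- sh -c "nslookup host.minikube.internal"
  kubectl --context ha-demo exec deploy/busybox -- sh -c "ping -c 1 192.168.49.1"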

TestMultiControlPlane/serial/AddWorkerNode (19.96s)

=== RUN   TestMultiControlPlane/serial/AddWorkerNode
ha_test.go:228: (dbg) Run:  out/minikube-linux-amd64 node add -p ha-780985 -v=7 --alsologtostderr
E0927 00:33:31.827146  540034 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19711-533157/.minikube/profiles/addons-305811/client.crt: no such file or directory" logger="UnhandledError"
ha_test.go:228: (dbg) Done: out/minikube-linux-amd64 node add -p ha-780985 -v=7 --alsologtostderr: (19.139376278s)
ha_test.go:234: (dbg) Run:  out/minikube-linux-amd64 -p ha-780985 status -v=7 --alsologtostderr
--- PASS: TestMultiControlPlane/serial/AddWorkerNode (19.96s)
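
A sketch of the worker add: without --control-plane, node add joins the new machine as a worker (m04 above):

  minikube node add -p ha-demo -v=7 --alsologtostderr
  # The new node should report "type: Worker" with no apiserver entry.
  minikube -p ha-demo status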

TestMultiControlPlane/serial/NodeLabels (0.07s)

=== RUN   TestMultiControlPlane/serial/NodeLabels
ha_test.go:255: (dbg) Run:  kubectl --context ha-780985 get nodes -o "jsonpath=[{range .items[*]}{.metadata.labels},{end}]"
--- PASS: TestMultiControlPlane/serial/NodeLabels (0.07s)

TestMultiControlPlane/serial/HAppyAfterClusterStart (0.84s)

=== RUN   TestMultiControlPlane/serial/HAppyAfterClusterStart
ha_test.go:281: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiControlPlane/serial/HAppyAfterClusterStart (0.84s)

TestMultiControlPlane/serial/CopyFile (15.37s)

=== RUN   TestMultiControlPlane/serial/CopyFile
ha_test.go:326: (dbg) Run:  out/minikube-linux-amd64 -p ha-780985 status --output json -v=7 --alsologtostderr
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-780985 cp testdata/cp-test.txt ha-780985:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-780985 ssh -n ha-780985 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-780985 cp ha-780985:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile3854541359/001/cp-test_ha-780985.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-780985 ssh -n ha-780985 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-780985 cp ha-780985:/home/docker/cp-test.txt ha-780985-m02:/home/docker/cp-test_ha-780985_ha-780985-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-780985 ssh -n ha-780985 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-780985 ssh -n ha-780985-m02 "sudo cat /home/docker/cp-test_ha-780985_ha-780985-m02.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-780985 cp ha-780985:/home/docker/cp-test.txt ha-780985-m03:/home/docker/cp-test_ha-780985_ha-780985-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-780985 ssh -n ha-780985 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-780985 ssh -n ha-780985-m03 "sudo cat /home/docker/cp-test_ha-780985_ha-780985-m03.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-780985 cp ha-780985:/home/docker/cp-test.txt ha-780985-m04:/home/docker/cp-test_ha-780985_ha-780985-m04.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-780985 ssh -n ha-780985 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-780985 ssh -n ha-780985-m04 "sudo cat /home/docker/cp-test_ha-780985_ha-780985-m04.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-780985 cp testdata/cp-test.txt ha-780985-m02:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-780985 ssh -n ha-780985-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-780985 cp ha-780985-m02:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile3854541359/001/cp-test_ha-780985-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-780985 ssh -n ha-780985-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-780985 cp ha-780985-m02:/home/docker/cp-test.txt ha-780985:/home/docker/cp-test_ha-780985-m02_ha-780985.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-780985 ssh -n ha-780985-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-780985 ssh -n ha-780985 "sudo cat /home/docker/cp-test_ha-780985-m02_ha-780985.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-780985 cp ha-780985-m02:/home/docker/cp-test.txt ha-780985-m03:/home/docker/cp-test_ha-780985-m02_ha-780985-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-780985 ssh -n ha-780985-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-780985 ssh -n ha-780985-m03 "sudo cat /home/docker/cp-test_ha-780985-m02_ha-780985-m03.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-780985 cp ha-780985-m02:/home/docker/cp-test.txt ha-780985-m04:/home/docker/cp-test_ha-780985-m02_ha-780985-m04.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-780985 ssh -n ha-780985-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-780985 ssh -n ha-780985-m04 "sudo cat /home/docker/cp-test_ha-780985-m02_ha-780985-m04.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-780985 cp testdata/cp-test.txt ha-780985-m03:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-780985 ssh -n ha-780985-m03 "sudo cat /home/docker/cp-test.txt"
E0927 00:33:52.309494  540034 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19711-533157/.minikube/profiles/addons-305811/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-780985 cp ha-780985-m03:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile3854541359/001/cp-test_ha-780985-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-780985 ssh -n ha-780985-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-780985 cp ha-780985-m03:/home/docker/cp-test.txt ha-780985:/home/docker/cp-test_ha-780985-m03_ha-780985.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-780985 ssh -n ha-780985-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-780985 ssh -n ha-780985 "sudo cat /home/docker/cp-test_ha-780985-m03_ha-780985.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-780985 cp ha-780985-m03:/home/docker/cp-test.txt ha-780985-m02:/home/docker/cp-test_ha-780985-m03_ha-780985-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-780985 ssh -n ha-780985-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-780985 ssh -n ha-780985-m02 "sudo cat /home/docker/cp-test_ha-780985-m03_ha-780985-m02.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-780985 cp ha-780985-m03:/home/docker/cp-test.txt ha-780985-m04:/home/docker/cp-test_ha-780985-m03_ha-780985-m04.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-780985 ssh -n ha-780985-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-780985 ssh -n ha-780985-m04 "sudo cat /home/docker/cp-test_ha-780985-m03_ha-780985-m04.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-780985 cp testdata/cp-test.txt ha-780985-m04:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-780985 ssh -n ha-780985-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-780985 cp ha-780985-m04:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile3854541359/001/cp-test_ha-780985-m04.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-780985 ssh -n ha-780985-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-780985 cp ha-780985-m04:/home/docker/cp-test.txt ha-780985:/home/docker/cp-test_ha-780985-m04_ha-780985.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-780985 ssh -n ha-780985-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-780985 ssh -n ha-780985 "sudo cat /home/docker/cp-test_ha-780985-m04_ha-780985.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-780985 cp ha-780985-m04:/home/docker/cp-test.txt ha-780985-m02:/home/docker/cp-test_ha-780985-m04_ha-780985-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-780985 ssh -n ha-780985-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-780985 ssh -n ha-780985-m02 "sudo cat /home/docker/cp-test_ha-780985-m04_ha-780985-m02.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-780985 cp ha-780985-m04:/home/docker/cp-test.txt ha-780985-m03:/home/docker/cp-test_ha-780985-m04_ha-780985-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-780985 ssh -n ha-780985-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-780985 ssh -n ha-780985-m03 "sudo cat /home/docker/cp-test_ha-780985-m04_ha-780985-m03.txt"
--- PASS: TestMultiControlPlane/serial/CopyFile (15.37s)
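
The CopyFile matrix above reduces to three shapes of minikube cp; a minimal sketch (node names follow the -m02/-m03/-m04 suffix convention used by the test):

  minikube -p ha-demo cp testdata/cp-test.txt ha-demo:/home/docker/cp-test.txt                   # host -> node
  minikube -p ha-demo cp ha-demo:/home/docker/cp-test.txt /tmp/cp-test.txt                       # node -> host
  minikube -p ha-demo cp ha-demo:/home/docker/cp-test.txt ha-demo-m02:/home/docker/cp-test.txt   # node -> node
  minikube -p ha-demo ssh -n ha-demo-m02 "sudo cat /home/docker/cp-test.txt"                     # verify on the target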

TestMultiControlPlane/serial/StopSecondaryNode (11.43s)

=== RUN   TestMultiControlPlane/serial/StopSecondaryNode
ha_test.go:363: (dbg) Run:  out/minikube-linux-amd64 -p ha-780985 node stop m02 -v=7 --alsologtostderr
ha_test.go:363: (dbg) Done: out/minikube-linux-amd64 -p ha-780985 node stop m02 -v=7 --alsologtostderr: (10.775302839s)
ha_test.go:369: (dbg) Run:  out/minikube-linux-amd64 -p ha-780985 status -v=7 --alsologtostderr
ha_test.go:369: (dbg) Non-zero exit: out/minikube-linux-amd64 -p ha-780985 status -v=7 --alsologtostderr: exit status 7 (651.721436ms)

-- stdout --
	ha-780985
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-780985-m02
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	ha-780985-m03
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-780985-m04
	type: Worker
	host: Running
	kubelet: Running
	

-- /stdout --
** stderr ** 
	I0927 00:34:10.046341  624364 out.go:345] Setting OutFile to fd 1 ...
	I0927 00:34:10.046621  624364 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0927 00:34:10.046631  624364 out.go:358] Setting ErrFile to fd 2...
	I0927 00:34:10.046635  624364 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0927 00:34:10.046868  624364 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19711-533157/.minikube/bin
	I0927 00:34:10.047044  624364 out.go:352] Setting JSON to false
	I0927 00:34:10.047074  624364 mustload.go:65] Loading cluster: ha-780985
	I0927 00:34:10.047130  624364 notify.go:220] Checking for updates...
	I0927 00:34:10.047657  624364 config.go:182] Loaded profile config "ha-780985": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I0927 00:34:10.047686  624364 status.go:174] checking status of ha-780985 ...
	I0927 00:34:10.048268  624364 cli_runner.go:164] Run: docker container inspect ha-780985 --format={{.State.Status}}
	I0927 00:34:10.066708  624364 status.go:364] ha-780985 host status = "Running" (err=<nil>)
	I0927 00:34:10.066752  624364 host.go:66] Checking if "ha-780985" exists ...
	I0927 00:34:10.067050  624364 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-780985
	I0927 00:34:10.084764  624364 host.go:66] Checking if "ha-780985" exists ...
	I0927 00:34:10.085051  624364 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0927 00:34:10.085103  624364 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-780985
	I0927 00:34:10.104859  624364 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33178 SSHKeyPath:/home/jenkins/minikube-integration/19711-533157/.minikube/machines/ha-780985/id_rsa Username:docker}
	I0927 00:34:10.189502  624364 ssh_runner.go:195] Run: systemctl --version
	I0927 00:34:10.193594  624364 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0927 00:34:10.204561  624364 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0927 00:34:10.261584  624364 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:4 ContainersRunning:3 ContainersPaused:0 ContainersStopped:1 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:54 OomKillDisable:true NGoroutines:72 SystemTime:2024-09-27 00:34:10.251544554 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1069-gcp OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33647931392 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-14 Labels:[] ExperimentalBuild:false ServerVersion:27.3.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:7f7fdf5fed64eb6a7caf99b3e12efcf9d60e311c Expected:7f7fdf5fed64eb6a7caf99b3e12efcf9d60e311c} RuncCommit:{ID:v1.1.14-0-g2c9f560 Expected:v1.1.14-0-g2c9f560} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:[WARNING: bridge-nf-call-iptables is disabled WARNING: bridge-nf-call-ip6tables is disabled] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.17.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.29.7] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I0927 00:34:10.262149  624364 kubeconfig.go:125] found "ha-780985" server: "https://192.168.49.254:8443"
	I0927 00:34:10.262180  624364 api_server.go:166] Checking apiserver status ...
	I0927 00:34:10.262221  624364 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0927 00:34:10.273346  624364 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/2393/cgroup
	I0927 00:34:10.282419  624364 api_server.go:182] apiserver freezer: "12:freezer:/docker/003631e3e3d1870f3078d4df228af7310f66a044e5391d0f375baf60e5299f7d/kubepods/burstable/pod674321c3778b327e0667125370a7371e/6e1ec6af4d96a0ffd69a04c0925a37f3f49532f746da33720775f9de9b8b84a5"
	I0927 00:34:10.282539  624364 ssh_runner.go:195] Run: sudo cat /sys/fs/cgroup/freezer/docker/003631e3e3d1870f3078d4df228af7310f66a044e5391d0f375baf60e5299f7d/kubepods/burstable/pod674321c3778b327e0667125370a7371e/6e1ec6af4d96a0ffd69a04c0925a37f3f49532f746da33720775f9de9b8b84a5/freezer.state
	I0927 00:34:10.291249  624364 api_server.go:204] freezer state: "THAWED"
	I0927 00:34:10.291282  624364 api_server.go:253] Checking apiserver healthz at https://192.168.49.254:8443/healthz ...
	I0927 00:34:10.296449  624364 api_server.go:279] https://192.168.49.254:8443/healthz returned 200:
	ok
	I0927 00:34:10.296474  624364 status.go:456] ha-780985 apiserver status = Running (err=<nil>)
	I0927 00:34:10.296500  624364 status.go:176] ha-780985 status: &{Name:ha-780985 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0927 00:34:10.296520  624364 status.go:174] checking status of ha-780985-m02 ...
	I0927 00:34:10.296767  624364 cli_runner.go:164] Run: docker container inspect ha-780985-m02 --format={{.State.Status}}
	I0927 00:34:10.314817  624364 status.go:364] ha-780985-m02 host status = "Stopped" (err=<nil>)
	I0927 00:34:10.314846  624364 status.go:377] host is not running, skipping remaining checks
	I0927 00:34:10.314852  624364 status.go:176] ha-780985-m02 status: &{Name:ha-780985-m02 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0927 00:34:10.314874  624364 status.go:174] checking status of ha-780985-m03 ...
	I0927 00:34:10.315142  624364 cli_runner.go:164] Run: docker container inspect ha-780985-m03 --format={{.State.Status}}
	I0927 00:34:10.332487  624364 status.go:364] ha-780985-m03 host status = "Running" (err=<nil>)
	I0927 00:34:10.332515  624364 host.go:66] Checking if "ha-780985-m03" exists ...
	I0927 00:34:10.332868  624364 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-780985-m03
	I0927 00:34:10.352783  624364 host.go:66] Checking if "ha-780985-m03" exists ...
	I0927 00:34:10.353074  624364 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0927 00:34:10.353133  624364 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-780985-m03
	I0927 00:34:10.370671  624364 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33188 SSHKeyPath:/home/jenkins/minikube-integration/19711-533157/.minikube/machines/ha-780985-m03/id_rsa Username:docker}
	I0927 00:34:10.453313  624364 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0927 00:34:10.464088  624364 kubeconfig.go:125] found "ha-780985" server: "https://192.168.49.254:8443"
	I0927 00:34:10.464116  624364 api_server.go:166] Checking apiserver status ...
	I0927 00:34:10.464152  624364 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0927 00:34:10.474658  624364 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/2294/cgroup
	I0927 00:34:10.484042  624364 api_server.go:182] apiserver freezer: "12:freezer:/docker/bed264fd4000c7d5e27f829b67cd49052256b4920bcb5f034d3212aabe1f3ef8/kubepods/burstable/pod03a27b38174a532d541306c9ad053612/918a731e555fcb97fd9a1d07f5985d61aba913962352c3aa3561139237c48da2"
	I0927 00:34:10.484135  624364 ssh_runner.go:195] Run: sudo cat /sys/fs/cgroup/freezer/docker/bed264fd4000c7d5e27f829b67cd49052256b4920bcb5f034d3212aabe1f3ef8/kubepods/burstable/pod03a27b38174a532d541306c9ad053612/918a731e555fcb97fd9a1d07f5985d61aba913962352c3aa3561139237c48da2/freezer.state
	I0927 00:34:10.493766  624364 api_server.go:204] freezer state: "THAWED"
	I0927 00:34:10.493796  624364 api_server.go:253] Checking apiserver healthz at https://192.168.49.254:8443/healthz ...
	I0927 00:34:10.498432  624364 api_server.go:279] https://192.168.49.254:8443/healthz returned 200:
	ok
	I0927 00:34:10.498454  624364 status.go:456] ha-780985-m03 apiserver status = Running (err=<nil>)
	I0927 00:34:10.498463  624364 status.go:176] ha-780985-m03 status: &{Name:ha-780985-m03 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0927 00:34:10.498479  624364 status.go:174] checking status of ha-780985-m04 ...
	I0927 00:34:10.498705  624364 cli_runner.go:164] Run: docker container inspect ha-780985-m04 --format={{.State.Status}}
	I0927 00:34:10.515918  624364 status.go:364] ha-780985-m04 host status = "Running" (err=<nil>)
	I0927 00:34:10.515947  624364 host.go:66] Checking if "ha-780985-m04" exists ...
	I0927 00:34:10.516188  624364 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-780985-m04
	I0927 00:34:10.533704  624364 host.go:66] Checking if "ha-780985-m04" exists ...
	I0927 00:34:10.533960  624364 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0927 00:34:10.534003  624364 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-780985-m04
	I0927 00:34:10.551932  624364 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33193 SSHKeyPath:/home/jenkins/minikube-integration/19711-533157/.minikube/machines/ha-780985-m04/id_rsa Username:docker}
	I0927 00:34:10.637476  624364 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0927 00:34:10.647846  624364 status.go:176] ha-780985-m04 status: &{Name:ha-780985-m04 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
--- PASS: TestMultiControlPlane/serial/StopSecondaryNode (11.43s)
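
A sketch of the stop-one-node scenario; the exit status 7 from minikube status, visible above, is what signals a degraded cluster:

  minikube -p ha-demo node stop m02
  minikube -p ha-demo status -v=7 --alsologtostderr; echo "status exit=$?"   # 7 while m02 is stopped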

TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop (0.66s)

=== RUN   TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop
ha_test.go:390: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop (0.66s)

TestMultiControlPlane/serial/RestartSecondaryNode (38.57s)

=== RUN   TestMultiControlPlane/serial/RestartSecondaryNode
ha_test.go:420: (dbg) Run:  out/minikube-linux-amd64 -p ha-780985 node start m02 -v=7 --alsologtostderr
E0927 00:34:33.271532  540034 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19711-533157/.minikube/profiles/addons-305811/client.crt: no such file or directory" logger="UnhandledError"
ha_test.go:420: (dbg) Done: out/minikube-linux-amd64 -p ha-780985 node start m02 -v=7 --alsologtostderr: (37.611014173s)
ha_test.go:428: (dbg) Run:  out/minikube-linux-amd64 -p ha-780985 status -v=7 --alsologtostderr
ha_test.go:448: (dbg) Run:  kubectl get nodes
--- PASS: TestMultiControlPlane/serial/RestartSecondaryNode (38.57s)

TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart (0.87s)

=== RUN   TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart
ha_test.go:281: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart (0.87s)

TestMultiControlPlane/serial/RestartClusterKeepsNodes (204.81s)

=== RUN   TestMultiControlPlane/serial/RestartClusterKeepsNodes
ha_test.go:456: (dbg) Run:  out/minikube-linux-amd64 node list -p ha-780985 -v=7 --alsologtostderr
ha_test.go:462: (dbg) Run:  out/minikube-linux-amd64 stop -p ha-780985 -v=7 --alsologtostderr
ha_test.go:462: (dbg) Done: out/minikube-linux-amd64 stop -p ha-780985 -v=7 --alsologtostderr: (33.559126168s)
ha_test.go:467: (dbg) Run:  out/minikube-linux-amd64 start -p ha-780985 --wait=true -v=7 --alsologtostderr
E0927 00:35:45.856421  540034 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19711-533157/.minikube/profiles/functional-860089/client.crt: no such file or directory" logger="UnhandledError"
E0927 00:35:45.862837  540034 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19711-533157/.minikube/profiles/functional-860089/client.crt: no such file or directory" logger="UnhandledError"
E0927 00:35:45.874179  540034 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19711-533157/.minikube/profiles/functional-860089/client.crt: no such file or directory" logger="UnhandledError"
E0927 00:35:45.895577  540034 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19711-533157/.minikube/profiles/functional-860089/client.crt: no such file or directory" logger="UnhandledError"
E0927 00:35:45.937051  540034 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19711-533157/.minikube/profiles/functional-860089/client.crt: no such file or directory" logger="UnhandledError"
E0927 00:35:46.018547  540034 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19711-533157/.minikube/profiles/functional-860089/client.crt: no such file or directory" logger="UnhandledError"
E0927 00:35:46.180233  540034 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19711-533157/.minikube/profiles/functional-860089/client.crt: no such file or directory" logger="UnhandledError"
E0927 00:35:46.501995  540034 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19711-533157/.minikube/profiles/functional-860089/client.crt: no such file or directory" logger="UnhandledError"
E0927 00:35:47.144111  540034 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19711-533157/.minikube/profiles/functional-860089/client.crt: no such file or directory" logger="UnhandledError"
E0927 00:35:48.426284  540034 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19711-533157/.minikube/profiles/functional-860089/client.crt: no such file or directory" logger="UnhandledError"
E0927 00:35:50.988592  540034 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19711-533157/.minikube/profiles/functional-860089/client.crt: no such file or directory" logger="UnhandledError"
E0927 00:35:55.195395  540034 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19711-533157/.minikube/profiles/addons-305811/client.crt: no such file or directory" logger="UnhandledError"
E0927 00:35:56.110062  540034 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19711-533157/.minikube/profiles/functional-860089/client.crt: no such file or directory" logger="UnhandledError"
E0927 00:36:06.352075  540034 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19711-533157/.minikube/profiles/functional-860089/client.crt: no such file or directory" logger="UnhandledError"
E0927 00:36:26.833876  540034 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19711-533157/.minikube/profiles/functional-860089/client.crt: no such file or directory" logger="UnhandledError"
E0927 00:37:07.795350  540034 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19711-533157/.minikube/profiles/functional-860089/client.crt: no such file or directory" logger="UnhandledError"
E0927 00:38:11.331713  540034 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19711-533157/.minikube/profiles/addons-305811/client.crt: no such file or directory" logger="UnhandledError"
ha_test.go:467: (dbg) Done: out/minikube-linux-amd64 start -p ha-780985 --wait=true -v=7 --alsologtostderr: (2m51.153376651s)
ha_test.go:472: (dbg) Run:  out/minikube-linux-amd64 node list -p ha-780985
--- PASS: TestMultiControlPlane/serial/RestartClusterKeepsNodes (204.81s)
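
A sketch of the restart-keeps-nodes check; the assertion is simply that the node list is unchanged across a full stop/start cycle:

  minikube node list -p ha-demo          # record the node set
  minikube stop -p ha-demo
  minikube start -p ha-demo --wait=true  # restarts every previously created node
  minikube node list -p ha-demo          # must match the pre-stop list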

TestMultiControlPlane/serial/DeleteSecondaryNode (9.30s)

=== RUN   TestMultiControlPlane/serial/DeleteSecondaryNode
ha_test.go:487: (dbg) Run:  out/minikube-linux-amd64 -p ha-780985 node delete m03 -v=7 --alsologtostderr
ha_test.go:487: (dbg) Done: out/minikube-linux-amd64 -p ha-780985 node delete m03 -v=7 --alsologtostderr: (8.559848212s)
ha_test.go:493: (dbg) Run:  out/minikube-linux-amd64 -p ha-780985 status -v=7 --alsologtostderr
ha_test.go:511: (dbg) Run:  kubectl get nodes
ha_test.go:519: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiControlPlane/serial/DeleteSecondaryNode (9.30s)
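
A sketch of the secondary-node deletion (m03 is the third control-plane node in this run):

  minikube -p ha-demo node delete m03
  minikube -p ha-demo status             # m03 is gone from the profile
  kubectl --context ha-demo get nodes    # ...and from the cluster itself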

TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete (0.64s)

=== RUN   TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete
ha_test.go:390: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete (0.64s)

TestMultiControlPlane/serial/StopCluster (32.56s)

=== RUN   TestMultiControlPlane/serial/StopCluster
ha_test.go:531: (dbg) Run:  out/minikube-linux-amd64 -p ha-780985 stop -v=7 --alsologtostderr
E0927 00:38:29.719500  540034 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19711-533157/.minikube/profiles/functional-860089/client.crt: no such file or directory" logger="UnhandledError"
E0927 00:38:39.038359  540034 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19711-533157/.minikube/profiles/addons-305811/client.crt: no such file or directory" logger="UnhandledError"
ha_test.go:531: (dbg) Done: out/minikube-linux-amd64 -p ha-780985 stop -v=7 --alsologtostderr: (32.450357222s)
ha_test.go:537: (dbg) Run:  out/minikube-linux-amd64 -p ha-780985 status -v=7 --alsologtostderr
ha_test.go:537: (dbg) Non-zero exit: out/minikube-linux-amd64 -p ha-780985 status -v=7 --alsologtostderr: exit status 7 (104.404055ms)

-- stdout --
	ha-780985
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	ha-780985-m02
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	ha-780985-m04
	type: Worker
	host: Stopped
	kubelet: Stopped
	

-- /stdout --
** stderr ** 
	I0927 00:38:57.998063  655021 out.go:345] Setting OutFile to fd 1 ...
	I0927 00:38:57.998310  655021 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0927 00:38:57.998319  655021 out.go:358] Setting ErrFile to fd 2...
	I0927 00:38:57.998324  655021 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0927 00:38:57.998530  655021 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19711-533157/.minikube/bin
	I0927 00:38:57.998728  655021 out.go:352] Setting JSON to false
	I0927 00:38:57.998766  655021 mustload.go:65] Loading cluster: ha-780985
	I0927 00:38:57.998912  655021 notify.go:220] Checking for updates...
	I0927 00:38:57.999219  655021 config.go:182] Loaded profile config "ha-780985": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I0927 00:38:57.999242  655021 status.go:174] checking status of ha-780985 ...
	I0927 00:38:57.999732  655021 cli_runner.go:164] Run: docker container inspect ha-780985 --format={{.State.Status}}
	I0927 00:38:58.019609  655021 status.go:364] ha-780985 host status = "Stopped" (err=<nil>)
	I0927 00:38:58.019647  655021 status.go:377] host is not running, skipping remaining checks
	I0927 00:38:58.019658  655021 status.go:176] ha-780985 status: &{Name:ha-780985 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0927 00:38:58.019689  655021 status.go:174] checking status of ha-780985-m02 ...
	I0927 00:38:58.019949  655021 cli_runner.go:164] Run: docker container inspect ha-780985-m02 --format={{.State.Status}}
	I0927 00:38:58.036897  655021 status.go:364] ha-780985-m02 host status = "Stopped" (err=<nil>)
	I0927 00:38:58.036919  655021 status.go:377] host is not running, skipping remaining checks
	I0927 00:38:58.036925  655021 status.go:176] ha-780985-m02 status: &{Name:ha-780985-m02 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0927 00:38:58.036944  655021 status.go:174] checking status of ha-780985-m04 ...
	I0927 00:38:58.037226  655021 cli_runner.go:164] Run: docker container inspect ha-780985-m04 --format={{.State.Status}}
	I0927 00:38:58.054028  655021 status.go:364] ha-780985-m04 host status = "Stopped" (err=<nil>)
	I0927 00:38:58.054057  655021 status.go:377] host is not running, skipping remaining checks
	I0927 00:38:58.054064  655021 status.go:176] ha-780985-m04 status: &{Name:ha-780985-m04 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
--- PASS: TestMultiControlPlane/serial/StopCluster (32.56s)

TestMultiControlPlane/serial/RestartCluster (77.72s)

=== RUN   TestMultiControlPlane/serial/RestartCluster
ha_test.go:560: (dbg) Run:  out/minikube-linux-amd64 start -p ha-780985 --wait=true -v=7 --alsologtostderr --driver=docker  --container-runtime=docker
ha_test.go:560: (dbg) Done: out/minikube-linux-amd64 start -p ha-780985 --wait=true -v=7 --alsologtostderr --driver=docker  --container-runtime=docker: (1m16.963983371s)
ha_test.go:566: (dbg) Run:  out/minikube-linux-amd64 -p ha-780985 status -v=7 --alsologtostderr
ha_test.go:584: (dbg) Run:  kubectl get nodes
ha_test.go:592: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiControlPlane/serial/RestartCluster (77.72s)

TestMultiControlPlane/serial/DegradedAfterClusterRestart (0.64s)

=== RUN   TestMultiControlPlane/serial/DegradedAfterClusterRestart
ha_test.go:390: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiControlPlane/serial/DegradedAfterClusterRestart (0.64s)

TestMultiControlPlane/serial/AddSecondaryNode (37.69s)

=== RUN   TestMultiControlPlane/serial/AddSecondaryNode
ha_test.go:605: (dbg) Run:  out/minikube-linux-amd64 node add -p ha-780985 --control-plane -v=7 --alsologtostderr
E0927 00:40:45.853960  540034 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19711-533157/.minikube/profiles/functional-860089/client.crt: no such file or directory" logger="UnhandledError"
ha_test.go:605: (dbg) Done: out/minikube-linux-amd64 node add -p ha-780985 --control-plane -v=7 --alsologtostderr: (36.882224257s)
ha_test.go:611: (dbg) Run:  out/minikube-linux-amd64 -p ha-780985 status -v=7 --alsologtostderr
--- PASS: TestMultiControlPlane/serial/AddSecondaryNode (37.69s)
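
A sketch of restoring the third control plane; --control-plane is what distinguishes this from the earlier worker add:

  minikube node add -p ha-demo --control-plane -v=7 --alsologtostderr
  minikube -p ha-demo status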

TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd (0.83s)

=== RUN   TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd
ha_test.go:281: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd (0.83s)

TestImageBuild/serial/Setup (24.85s)

=== RUN   TestImageBuild/serial/Setup
image_test.go:69: (dbg) Run:  out/minikube-linux-amd64 start -p image-962919 --driver=docker  --container-runtime=docker
E0927 00:41:13.561795  540034 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19711-533157/.minikube/profiles/functional-860089/client.crt: no such file or directory" logger="UnhandledError"
image_test.go:69: (dbg) Done: out/minikube-linux-amd64 start -p image-962919 --driver=docker  --container-runtime=docker: (24.850089609s)
--- PASS: TestImageBuild/serial/Setup (24.85s)

TestImageBuild/serial/NormalBuild (1.30s)

=== RUN   TestImageBuild/serial/NormalBuild
image_test.go:78: (dbg) Run:  out/minikube-linux-amd64 image build -t aaa:latest ./testdata/image-build/test-normal -p image-962919
image_test.go:78: (dbg) Done: out/minikube-linux-amd64 image build -t aaa:latest ./testdata/image-build/test-normal -p image-962919: (1.295562563s)
--- PASS: TestImageBuild/serial/NormalBuild (1.30s)

TestImageBuild/serial/BuildWithBuildArg (0.80s)

=== RUN   TestImageBuild/serial/BuildWithBuildArg
image_test.go:99: (dbg) Run:  out/minikube-linux-amd64 image build -t aaa:latest --build-opt=build-arg=ENV_A=test_env_str --build-opt=no-cache ./testdata/image-build/test-arg -p image-962919
--- PASS: TestImageBuild/serial/BuildWithBuildArg (0.80s)

TestImageBuild/serial/BuildWithDockerIgnore (0.62s)

=== RUN   TestImageBuild/serial/BuildWithDockerIgnore
image_test.go:133: (dbg) Run:  out/minikube-linux-amd64 image build -t aaa:latest ./testdata/image-build/test-normal --build-opt=no-cache -p image-962919
--- PASS: TestImageBuild/serial/BuildWithDockerIgnore (0.62s)

TestImageBuild/serial/BuildWithSpecifiedDockerfile (0.61s)

=== RUN   TestImageBuild/serial/BuildWithSpecifiedDockerfile
image_test.go:88: (dbg) Run:  out/minikube-linux-amd64 image build -t aaa:latest -f inner/Dockerfile ./testdata/image-build/test-f -p image-962919
--- PASS: TestImageBuild/serial/BuildWithSpecifiedDockerfile (0.61s)
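
The four ImageBuild subtests above exercise the main variants of minikube image build; a combined sketch (paths are the repo's testdata directories; the profile "img-demo" is a stand-in for image-962919):

  # Plain build from a directory containing a Dockerfile.
  minikube -p img-demo image build -t aaa:latest ./testdata/image-build/test-normal
  # Pass a build arg and disable the cache via --build-opt.
  minikube -p img-demo image build -t aaa:latest --build-opt=build-arg=ENV_A=test_env_str --build-opt=no-cache ./testdata/image-build/test-arg
  # Point at a Dockerfile outside the context root with -f.
  minikube -p img-demo image build -t aaa:latest -f inner/Dockerfile ./testdata/image-build/test-f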

TestJSONOutput/start/Command (65.99s)

=== RUN   TestJSONOutput/start/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-amd64 start -p json-output-472143 --output=json --user=testUser --memory=2200 --wait=true --driver=docker  --container-runtime=docker
json_output_test.go:63: (dbg) Done: out/minikube-linux-amd64 start -p json-output-472143 --output=json --user=testUser --memory=2200 --wait=true --driver=docker  --container-runtime=docker: (1m5.988366957s)
--- PASS: TestJSONOutput/start/Command (65.99s)
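
With --output=json, minikube emits one CloudEvent per line on stdout; a sketch of consuming the step events (jq is an illustration here, not part of the test harness):

  minikube start -p json-demo --output=json --user=testUser --memory=2200 --wait=true --driver=docker \
    | jq -c 'select(.type == "io.k8s.sigs.minikube.step") | {step: .data.currentstep, of: .data.totalsteps, name: .data.name}'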

TestJSONOutput/start/Audit (0.00s)

=== RUN   TestJSONOutput/start/Audit
--- PASS: TestJSONOutput/start/Audit (0.00s)

TestJSONOutput/start/parallel/DistinctCurrentSteps (0.00s)

=== RUN   TestJSONOutput/start/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/start/parallel/DistinctCurrentSteps
=== CONT  TestJSONOutput/start/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/start/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/start/parallel/IncreasingCurrentSteps (0.00s)

=== RUN   TestJSONOutput/start/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/start/parallel/IncreasingCurrentSteps
=== CONT  TestJSONOutput/start/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/start/parallel/IncreasingCurrentSteps (0.00s)

TestJSONOutput/pause/Command (0.53s)

=== RUN   TestJSONOutput/pause/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-amd64 pause -p json-output-472143 --output=json --user=testUser
--- PASS: TestJSONOutput/pause/Command (0.53s)

TestJSONOutput/pause/Audit (0.00s)

=== RUN   TestJSONOutput/pause/Audit
--- PASS: TestJSONOutput/pause/Audit (0.00s)

TestJSONOutput/pause/parallel/DistinctCurrentSteps (0.00s)

=== RUN   TestJSONOutput/pause/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/pause/parallel/DistinctCurrentSteps
=== CONT  TestJSONOutput/pause/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/pause/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/pause/parallel/IncreasingCurrentSteps (0.00s)

=== RUN   TestJSONOutput/pause/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/pause/parallel/IncreasingCurrentSteps
=== CONT  TestJSONOutput/pause/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/pause/parallel/IncreasingCurrentSteps (0.00s)

TestJSONOutput/unpause/Command (0.43s)

=== RUN   TestJSONOutput/unpause/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-amd64 unpause -p json-output-472143 --output=json --user=testUser
--- PASS: TestJSONOutput/unpause/Command (0.43s)

TestJSONOutput/unpause/Audit (0.00s)

=== RUN   TestJSONOutput/unpause/Audit
--- PASS: TestJSONOutput/unpause/Audit (0.00s)

TestJSONOutput/unpause/parallel/DistinctCurrentSteps (0.00s)

=== RUN   TestJSONOutput/unpause/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/unpause/parallel/DistinctCurrentSteps
=== CONT  TestJSONOutput/unpause/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/unpause/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/unpause/parallel/IncreasingCurrentSteps (0.00s)

=== RUN   TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
=== CONT  TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/unpause/parallel/IncreasingCurrentSteps (0.00s)

TestJSONOutput/stop/Command (10.89s)

=== RUN   TestJSONOutput/stop/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-amd64 stop -p json-output-472143 --output=json --user=testUser
json_output_test.go:63: (dbg) Done: out/minikube-linux-amd64 stop -p json-output-472143 --output=json --user=testUser: (10.89160917s)
--- PASS: TestJSONOutput/stop/Command (10.89s)

TestJSONOutput/stop/Audit (0.00s)

=== RUN   TestJSONOutput/stop/Audit
--- PASS: TestJSONOutput/stop/Audit (0.00s)

TestJSONOutput/stop/parallel/DistinctCurrentSteps (0.00s)

=== RUN   TestJSONOutput/stop/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/stop/parallel/DistinctCurrentSteps
=== CONT  TestJSONOutput/stop/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/stop/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/stop/parallel/IncreasingCurrentSteps (0.00s)

=== RUN   TestJSONOutput/stop/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/stop/parallel/IncreasingCurrentSteps
=== CONT  TestJSONOutput/stop/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/stop/parallel/IncreasingCurrentSteps (0.00s)

TestErrorJSONOutput (0.20s)

=== RUN   TestErrorJSONOutput
json_output_test.go:160: (dbg) Run:  out/minikube-linux-amd64 start -p json-output-error-466157 --memory=2200 --output=json --wait=true --driver=fail
json_output_test.go:160: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p json-output-error-466157 --memory=2200 --output=json --wait=true --driver=fail: exit status 56 (64.473649ms)

-- stdout --
	{"specversion":"1.0","id":"b37f7b71-bf1e-41f0-bc97-35bb73e2ab3c","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"0","message":"[json-output-error-466157] minikube v1.34.0 on Ubuntu 20.04 (kvm/amd64)","name":"Initial Minikube Setup","totalsteps":"19"}}
	{"specversion":"1.0","id":"7a01550c-b417-4a1c-bde0-b7e2f0bbb73b","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_LOCATION=19711"}}
	{"specversion":"1.0","id":"5e042932-6319-4bc5-acc9-3955cecef676","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true"}}
	{"specversion":"1.0","id":"83e16961-b8c0-4cd7-8301-bbf4451dcaf4","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"KUBECONFIG=/home/jenkins/minikube-integration/19711-533157/kubeconfig"}}
	{"specversion":"1.0","id":"44bb7f09-9282-48ab-962f-6cc61cb93677","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_HOME=/home/jenkins/minikube-integration/19711-533157/.minikube"}}
	{"specversion":"1.0","id":"8d488fdf-e0d3-4446-ab90-ecaf77eef46d","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_BIN=out/minikube-linux-amd64"}}
	{"specversion":"1.0","id":"033b91e4-f3c5-452b-9891-3bfd5a384580","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_FORCE_SYSTEMD="}}
	{"specversion":"1.0","id":"f6818a67-4471-4f3c-b29d-c825400640c8","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"advice":"","exitcode":"56","issues":"","message":"The driver 'fail' is not supported on linux/amd64","name":"DRV_UNSUPPORTED_OS","url":""}}

-- /stdout --
helpers_test.go:175: Cleaning up "json-output-error-466157" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p json-output-error-466157
--- PASS: TestErrorJSONOutput (0.20s)
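Note: every stdout line above is a CloudEvents-style JSON object (specversion, id, source, type, data). As a reference for consuming that stream, here is a minimal Go sketch; the struct covers only the fields visible in this log, and the program and its filtering are illustrative, not minikube's own tooling.

package main

import (
	"bufio"
	"encoding/json"
	"fmt"
	"os"
)

// minikubeEvent mirrors the fields visible in the events above; any
// fields minikube emits beyond these are silently ignored.
type minikubeEvent struct {
	SpecVersion string            `json:"specversion"`
	ID          string            `json:"id"`
	Source      string            `json:"source"`
	Type        string            `json:"type"`
	Data        map[string]string `json:"data"`
}

func main() {
	// e.g. minikube start --output=json ... | this program
	sc := bufio.NewScanner(os.Stdin)
	for sc.Scan() {
		var ev minikubeEvent
		if err := json.Unmarshal(sc.Bytes(), &ev); err != nil {
			continue // skip any non-JSON lines
		}
		if ev.Type == "io.k8s.sigs.minikube.error" {
			fmt.Printf("error %s: %s\n", ev.Data["exitcode"], ev.Data["message"])
		}
	}
}

Filtering on the io.k8s.sigs.minikube.error type would surface the DRV_UNSUPPORTED_OS failure above as: error 56: The driver 'fail' is not supported on linux/amd64.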

TestKicCustomNetwork/create_custom_network (26.33s)

=== RUN   TestKicCustomNetwork/create_custom_network
kic_custom_network_test.go:57: (dbg) Run:  out/minikube-linux-amd64 start -p docker-network-691361 --network=
E0927 00:43:11.331806  540034 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19711-533157/.minikube/profiles/addons-305811/client.crt: no such file or directory" logger="UnhandledError"
kic_custom_network_test.go:57: (dbg) Done: out/minikube-linux-amd64 start -p docker-network-691361 --network=: (24.348179532s)
kic_custom_network_test.go:150: (dbg) Run:  docker network ls --format {{.Name}}
helpers_test.go:175: Cleaning up "docker-network-691361" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p docker-network-691361
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p docker-network-691361: (1.963459702s)
--- PASS: TestKicCustomNetwork/create_custom_network (26.33s)

TestKicCustomNetwork/use_default_bridge_network (23.92s)

=== RUN   TestKicCustomNetwork/use_default_bridge_network
kic_custom_network_test.go:57: (dbg) Run:  out/minikube-linux-amd64 start -p docker-network-620520 --network=bridge
kic_custom_network_test.go:57: (dbg) Done: out/minikube-linux-amd64 start -p docker-network-620520 --network=bridge: (22.078169163s)
kic_custom_network_test.go:150: (dbg) Run:  docker network ls --format {{.Name}}
helpers_test.go:175: Cleaning up "docker-network-620520" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p docker-network-620520
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p docker-network-620520: (1.825572249s)
--- PASS: TestKicCustomNetwork/use_default_bridge_network (23.92s)

TestKicExistingNetwork (25.21s)

=== RUN   TestKicExistingNetwork
I0927 00:43:42.638423  540034 cli_runner.go:164] Run: docker network inspect existing-network --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
W0927 00:43:42.655321  540034 cli_runner.go:211] docker network inspect existing-network --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
I0927 00:43:42.655424  540034 network_create.go:284] running [docker network inspect existing-network] to gather additional debugging logs...
I0927 00:43:42.655448  540034 cli_runner.go:164] Run: docker network inspect existing-network
W0927 00:43:42.671869  540034 cli_runner.go:211] docker network inspect existing-network returned with exit code 1
I0927 00:43:42.671905  540034 network_create.go:287] error running [docker network inspect existing-network]: docker network inspect existing-network: exit status 1
stdout:
[]

stderr:
Error response from daemon: network existing-network not found
I0927 00:43:42.671919  540034 network_create.go:289] output of [docker network inspect existing-network]: -- stdout --
[]

-- /stdout --
** stderr ** 
Error response from daemon: network existing-network not found

** /stderr **
I0927 00:43:42.672037  540034 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
I0927 00:43:42.690007  540034 network.go:211] skipping subnet 192.168.49.0/24 that is taken: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName:br-2e1e15779626 IfaceIPv4:192.168.49.1 IfaceMTU:1500 IfaceMAC:02:42:0e:63:c7:75} reservation:<nil>}
I0927 00:43:42.690522  540034 network.go:206] using free private subnet 192.168.58.0/24: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc000015c90}
I0927 00:43:42.690559  540034 network_create.go:124] attempt to create docker network existing-network 192.168.58.0/24 with gateway 192.168.58.1 and MTU of 1500 ...
I0927 00:43:42.690613  540034 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.58.0/24 --gateway=192.168.58.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=existing-network existing-network
I0927 00:43:42.753161  540034 network_create.go:108] docker network existing-network 192.168.58.0/24 created
kic_custom_network_test.go:150: (dbg) Run:  docker network ls --format {{.Name}}
kic_custom_network_test.go:93: (dbg) Run:  out/minikube-linux-amd64 start -p existing-network-191902 --network=existing-network
kic_custom_network_test.go:93: (dbg) Done: out/minikube-linux-amd64 start -p existing-network-191902 --network=existing-network: (23.153444169s)
helpers_test.go:175: Cleaning up "existing-network-191902" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p existing-network-191902
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p existing-network-191902: (1.905851566s)
I0927 00:44:07.828996  540034 cli_runner.go:164] Run: docker network ls --filter=label=existing-network --format {{.Name}}
--- PASS: TestKicExistingNetwork (25.21s)
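Note: the trace above shows the flow under test: inspect for the network, pick a free private subnet, create the network, then start minikube against it with --network. A reduced Go sketch of the same flow, with the subnet and flags taken from the log (the -o bridge options and minikube labels from the logged docker command are omitted, and error handling is collapsed to log.Fatalf):

package main

import (
	"log"
	"os/exec"
)

// run executes a command and aborts on failure, printing its output.
func run(name string, args ...string) {
	out, err := exec.Command(name, args...).CombinedOutput()
	if err != nil {
		log.Fatalf("%s %v failed: %v\n%s", name, args, err, out)
	}
}

func main() {
	// Pre-create the bridge network on the subnet chosen in the log.
	run("docker", "network", "create", "--driver=bridge",
		"--subnet=192.168.58.0/24", "--gateway=192.168.58.1",
		"existing-network")
	// Start a profile attached to the pre-existing network.
	run("minikube", "start", "-p", "existing-network-191902",
		"--network=existing-network")
}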

TestKicCustomSubnet (23.82s)

=== RUN   TestKicCustomSubnet
kic_custom_network_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p custom-subnet-453577 --subnet=192.168.60.0/24
kic_custom_network_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p custom-subnet-453577 --subnet=192.168.60.0/24: (21.772049555s)
kic_custom_network_test.go:161: (dbg) Run:  docker network inspect custom-subnet-453577 --format "{{(index .IPAM.Config 0).Subnet}}"
helpers_test.go:175: Cleaning up "custom-subnet-453577" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p custom-subnet-453577
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p custom-subnet-453577: (2.034135772s)
--- PASS: TestKicCustomSubnet (23.82s)

TestKicStaticIP (26.66s)

=== RUN   TestKicStaticIP
kic_custom_network_test.go:132: (dbg) Run:  out/minikube-linux-amd64 start -p static-ip-885002 --static-ip=192.168.200.200
kic_custom_network_test.go:132: (dbg) Done: out/minikube-linux-amd64 start -p static-ip-885002 --static-ip=192.168.200.200: (24.562111495s)
kic_custom_network_test.go:138: (dbg) Run:  out/minikube-linux-amd64 -p static-ip-885002 ip
helpers_test.go:175: Cleaning up "static-ip-885002" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p static-ip-885002
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p static-ip-885002: (1.979223653s)
--- PASS: TestKicStaticIP (26.66s)

TestMainNoArgs (0.05s)

=== RUN   TestMainNoArgs
main_test.go:68: (dbg) Run:  out/minikube-linux-amd64
--- PASS: TestMainNoArgs (0.05s)

TestMinikubeProfile (48.38s)

=== RUN   TestMinikubeProfile
minikube_profile_test.go:44: (dbg) Run:  out/minikube-linux-amd64 start -p first-516834 --driver=docker  --container-runtime=docker
minikube_profile_test.go:44: (dbg) Done: out/minikube-linux-amd64 start -p first-516834 --driver=docker  --container-runtime=docker: (20.819661489s)
minikube_profile_test.go:44: (dbg) Run:  out/minikube-linux-amd64 start -p second-530594 --driver=docker  --container-runtime=docker
minikube_profile_test.go:44: (dbg) Done: out/minikube-linux-amd64 start -p second-530594 --driver=docker  --container-runtime=docker: (22.295591449s)
minikube_profile_test.go:51: (dbg) Run:  out/minikube-linux-amd64 profile first-516834
minikube_profile_test.go:55: (dbg) Run:  out/minikube-linux-amd64 profile list -ojson
minikube_profile_test.go:51: (dbg) Run:  out/minikube-linux-amd64 profile second-530594
minikube_profile_test.go:55: (dbg) Run:  out/minikube-linux-amd64 profile list -ojson
helpers_test.go:175: Cleaning up "second-530594" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p second-530594
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p second-530594: (2.04654625s)
helpers_test.go:175: Cleaning up "first-516834" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p first-516834
E0927 00:45:45.853559  540034 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19711-533157/.minikube/profiles/functional-860089/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p first-516834: (2.089419901s)
--- PASS: TestMinikubeProfile (48.38s)

TestMountStart/serial/StartWithMountFirst (6.57s)

=== RUN   TestMountStart/serial/StartWithMountFirst
mount_start_test.go:98: (dbg) Run:  out/minikube-linux-amd64 start -p mount-start-1-451548 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=docker  --container-runtime=docker
mount_start_test.go:98: (dbg) Done: out/minikube-linux-amd64 start -p mount-start-1-451548 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=docker  --container-runtime=docker: (5.572129466s)
--- PASS: TestMountStart/serial/StartWithMountFirst (6.57s)

TestMountStart/serial/VerifyMountFirst (0.24s)

=== RUN   TestMountStart/serial/VerifyMountFirst
mount_start_test.go:114: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-1-451548 ssh -- ls /minikube-host
--- PASS: TestMountStart/serial/VerifyMountFirst (0.24s)

TestMountStart/serial/StartWithMountSecond (9.29s)

=== RUN   TestMountStart/serial/StartWithMountSecond
mount_start_test.go:98: (dbg) Run:  out/minikube-linux-amd64 start -p mount-start-2-475225 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46465 --mount-uid 0 --no-kubernetes --driver=docker  --container-runtime=docker
mount_start_test.go:98: (dbg) Done: out/minikube-linux-amd64 start -p mount-start-2-475225 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46465 --mount-uid 0 --no-kubernetes --driver=docker  --container-runtime=docker: (8.290857334s)
--- PASS: TestMountStart/serial/StartWithMountSecond (9.29s)

TestMountStart/serial/VerifyMountSecond (0.24s)

=== RUN   TestMountStart/serial/VerifyMountSecond
mount_start_test.go:114: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-475225 ssh -- ls /minikube-host
--- PASS: TestMountStart/serial/VerifyMountSecond (0.24s)

TestMountStart/serial/DeleteFirst (1.45s)

=== RUN   TestMountStart/serial/DeleteFirst
pause_test.go:132: (dbg) Run:  out/minikube-linux-amd64 delete -p mount-start-1-451548 --alsologtostderr -v=5
pause_test.go:132: (dbg) Done: out/minikube-linux-amd64 delete -p mount-start-1-451548 --alsologtostderr -v=5: (1.452876538s)
--- PASS: TestMountStart/serial/DeleteFirst (1.45s)

TestMountStart/serial/VerifyMountPostDelete (0.24s)

=== RUN   TestMountStart/serial/VerifyMountPostDelete
mount_start_test.go:114: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-475225 ssh -- ls /minikube-host
--- PASS: TestMountStart/serial/VerifyMountPostDelete (0.24s)

TestMountStart/serial/Stop (1.17s)

=== RUN   TestMountStart/serial/Stop
mount_start_test.go:155: (dbg) Run:  out/minikube-linux-amd64 stop -p mount-start-2-475225
mount_start_test.go:155: (dbg) Done: out/minikube-linux-amd64 stop -p mount-start-2-475225: (1.171787034s)
--- PASS: TestMountStart/serial/Stop (1.17s)

TestMountStart/serial/RestartStopped (7.72s)

=== RUN   TestMountStart/serial/RestartStopped
mount_start_test.go:166: (dbg) Run:  out/minikube-linux-amd64 start -p mount-start-2-475225
mount_start_test.go:166: (dbg) Done: out/minikube-linux-amd64 start -p mount-start-2-475225: (6.722654309s)
--- PASS: TestMountStart/serial/RestartStopped (7.72s)

TestMountStart/serial/VerifyMountPostStop (0.23s)

=== RUN   TestMountStart/serial/VerifyMountPostStop
mount_start_test.go:114: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-475225 ssh -- ls /minikube-host
--- PASS: TestMountStart/serial/VerifyMountPostStop (0.23s)

TestMultiNode/serial/FreshStart2Nodes (60.81s)

=== RUN   TestMultiNode/serial/FreshStart2Nodes
multinode_test.go:96: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-222375 --wait=true --memory=2200 --nodes=2 -v=8 --alsologtostderr --driver=docker  --container-runtime=docker
multinode_test.go:96: (dbg) Done: out/minikube-linux-amd64 start -p multinode-222375 --wait=true --memory=2200 --nodes=2 -v=8 --alsologtostderr --driver=docker  --container-runtime=docker: (1m0.369534458s)
multinode_test.go:102: (dbg) Run:  out/minikube-linux-amd64 -p multinode-222375 status --alsologtostderr
--- PASS: TestMultiNode/serial/FreshStart2Nodes (60.81s)

TestMultiNode/serial/DeployApp2Nodes (42.35s)

=== RUN   TestMultiNode/serial/DeployApp2Nodes
multinode_test.go:493: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-222375 -- apply -f ./testdata/multinodes/multinode-pod-dns-test.yaml
multinode_test.go:498: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-222375 -- rollout status deployment/busybox
multinode_test.go:498: (dbg) Done: out/minikube-linux-amd64 kubectl -p multinode-222375 -- rollout status deployment/busybox: (2.849647401s)
multinode_test.go:505: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-222375 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:514: expected 2 Pod IPs but got 1 (may be temporary), output: "\n-- stdout --\n\t'10.244.0.3'\n\n-- /stdout --"
I0927 00:47:19.445459  540034 retry.go:31] will retry after 1.379593138s: expected 2 Pod IPs but got 1 (may be temporary), output: "\n-- stdout --\n\t'10.244.0.3'\n\n-- /stdout --"
multinode_test.go:505: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-222375 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:514: expected 2 Pod IPs but got 1 (may be temporary), output: "\n-- stdout --\n\t'10.244.0.3'\n\n-- /stdout --"
I0927 00:47:20.943427  540034 retry.go:31] will retry after 1.894964552s: expected 2 Pod IPs but got 1 (may be temporary), output: "\n-- stdout --\n\t'10.244.0.3'\n\n-- /stdout --"
multinode_test.go:505: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-222375 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:514: expected 2 Pod IPs but got 1 (may be temporary), output: "\n-- stdout --\n\t'10.244.0.3'\n\n-- /stdout --"
I0927 00:47:22.951782  540034 retry.go:31] will retry after 3.132559764s: expected 2 Pod IPs but got 1 (may be temporary), output: "\n-- stdout --\n\t'10.244.0.3'\n\n-- /stdout --"
multinode_test.go:505: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-222375 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:514: expected 2 Pod IPs but got 1 (may be temporary), output: "\n-- stdout --\n\t'10.244.0.3'\n\n-- /stdout --"
I0927 00:47:26.196768  540034 retry.go:31] will retry after 3.565861905s: expected 2 Pod IPs but got 1 (may be temporary), output: "\n-- stdout --\n\t'10.244.0.3'\n\n-- /stdout --"
multinode_test.go:505: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-222375 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:514: expected 2 Pod IPs but got 1 (may be temporary), output: "\n-- stdout --\n\t'10.244.0.3'\n\n-- /stdout --"
I0927 00:47:29.880220  540034 retry.go:31] will retry after 5.419310458s: expected 2 Pod IPs but got 1 (may be temporary), output: "\n-- stdout --\n\t'10.244.0.3'\n\n-- /stdout --"
multinode_test.go:505: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-222375 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:514: expected 2 Pod IPs but got 1 (may be temporary), output: "\n-- stdout --\n\t'10.244.0.3'\n\n-- /stdout --"
I0927 00:47:35.419413  540034 retry.go:31] will retry after 11.384788768s: expected 2 Pod IPs but got 1 (may be temporary), output: "\n-- stdout --\n\t'10.244.0.3'\n\n-- /stdout --"
multinode_test.go:505: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-222375 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:514: expected 2 Pod IPs but got 1 (may be temporary), output: "\n-- stdout --\n\t'10.244.0.3'\n\n-- /stdout --"
I0927 00:47:46.920599  540034 retry.go:31] will retry after 10.525349766s: expected 2 Pod IPs but got 1 (may be temporary), output: "\n-- stdout --\n\t'10.244.0.3'\n\n-- /stdout --"
multinode_test.go:505: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-222375 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:528: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-222375 -- get pods -o jsonpath='{.items[*].metadata.name}'
multinode_test.go:536: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-222375 -- exec busybox-7dff88458-kcdtv -- nslookup kubernetes.io
multinode_test.go:536: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-222375 -- exec busybox-7dff88458-mnxjw -- nslookup kubernetes.io
multinode_test.go:546: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-222375 -- exec busybox-7dff88458-kcdtv -- nslookup kubernetes.default
multinode_test.go:546: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-222375 -- exec busybox-7dff88458-mnxjw -- nslookup kubernetes.default
multinode_test.go:554: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-222375 -- exec busybox-7dff88458-kcdtv -- nslookup kubernetes.default.svc.cluster.local
multinode_test.go:554: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-222375 -- exec busybox-7dff88458-mnxjw -- nslookup kubernetes.default.svc.cluster.local
--- PASS: TestMultiNode/serial/DeployApp2Nodes (42.35s)
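Note: the retry.go lines above poll the pod IPs with a growing, jittered delay until both busybox pods report an address. A minimal Go sketch of that backoff pattern; the always-failing check closure below is a hypothetical stand-in for the kubectl jsonpath query, and the constants are illustrative:

package main

import (
	"errors"
	"fmt"
	"math/rand"
	"time"
)

// retryUntil re-runs check with a growing, jittered delay until it
// succeeds or the deadline has passed since the first attempt.
func retryUntil(deadline time.Duration, check func() error) error {
	start := time.Now()
	wait := time.Second
	for {
		err := check()
		if err == nil {
			return nil
		}
		if time.Since(start) > deadline {
			return fmt.Errorf("timed out: %w", err)
		}
		d := wait + time.Duration(rand.Int63n(int64(wait))) // add jitter
		fmt.Printf("will retry after %v: %v\n", d, err)
		time.Sleep(d)
		wait *= 2
	}
}

func main() {
	err := retryUntil(10*time.Second, func() error {
		return errors.New("expected 2 Pod IPs but got 1 (may be temporary)")
	})
	fmt.Println(err)
}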

TestMultiNode/serial/PingHostFrom2Pods (0.73s)

=== RUN   TestMultiNode/serial/PingHostFrom2Pods
multinode_test.go:564: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-222375 -- get pods -o jsonpath='{.items[*].metadata.name}'
multinode_test.go:572: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-222375 -- exec busybox-7dff88458-kcdtv -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
multinode_test.go:583: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-222375 -- exec busybox-7dff88458-kcdtv -- sh -c "ping -c 1 192.168.67.1"
multinode_test.go:572: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-222375 -- exec busybox-7dff88458-mnxjw -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
multinode_test.go:583: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-222375 -- exec busybox-7dff88458-mnxjw -- sh -c "ping -c 1 192.168.67.1"
--- PASS: TestMultiNode/serial/PingHostFrom2Pods (0.73s)

TestMultiNode/serial/AddNode (15.62s)

=== RUN   TestMultiNode/serial/AddNode
multinode_test.go:121: (dbg) Run:  out/minikube-linux-amd64 node add -p multinode-222375 -v 3 --alsologtostderr
E0927 00:48:11.331753  540034 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19711-533157/.minikube/profiles/addons-305811/client.crt: no such file or directory" logger="UnhandledError"
multinode_test.go:121: (dbg) Done: out/minikube-linux-amd64 node add -p multinode-222375 -v 3 --alsologtostderr: (15.040107337s)
multinode_test.go:127: (dbg) Run:  out/minikube-linux-amd64 -p multinode-222375 status --alsologtostderr
--- PASS: TestMultiNode/serial/AddNode (15.62s)

TestMultiNode/serial/MultiNodeLabels (0.06s)

=== RUN   TestMultiNode/serial/MultiNodeLabels
multinode_test.go:221: (dbg) Run:  kubectl --context multinode-222375 get nodes -o "jsonpath=[{range .items[*]}{.metadata.labels},{end}]"
--- PASS: TestMultiNode/serial/MultiNodeLabels (0.06s)

TestMultiNode/serial/ProfileList (0.59s)

=== RUN   TestMultiNode/serial/ProfileList
multinode_test.go:143: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiNode/serial/ProfileList (0.59s)

TestMultiNode/serial/CopyFile (8.7s)

=== RUN   TestMultiNode/serial/CopyFile
multinode_test.go:184: (dbg) Run:  out/minikube-linux-amd64 -p multinode-222375 status --output json --alsologtostderr
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-222375 cp testdata/cp-test.txt multinode-222375:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-222375 ssh -n multinode-222375 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-222375 cp multinode-222375:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile1139595838/001/cp-test_multinode-222375.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-222375 ssh -n multinode-222375 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-222375 cp multinode-222375:/home/docker/cp-test.txt multinode-222375-m02:/home/docker/cp-test_multinode-222375_multinode-222375-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-222375 ssh -n multinode-222375 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-222375 ssh -n multinode-222375-m02 "sudo cat /home/docker/cp-test_multinode-222375_multinode-222375-m02.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-222375 cp multinode-222375:/home/docker/cp-test.txt multinode-222375-m03:/home/docker/cp-test_multinode-222375_multinode-222375-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-222375 ssh -n multinode-222375 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-222375 ssh -n multinode-222375-m03 "sudo cat /home/docker/cp-test_multinode-222375_multinode-222375-m03.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-222375 cp testdata/cp-test.txt multinode-222375-m02:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-222375 ssh -n multinode-222375-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-222375 cp multinode-222375-m02:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile1139595838/001/cp-test_multinode-222375-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-222375 ssh -n multinode-222375-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-222375 cp multinode-222375-m02:/home/docker/cp-test.txt multinode-222375:/home/docker/cp-test_multinode-222375-m02_multinode-222375.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-222375 ssh -n multinode-222375-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-222375 ssh -n multinode-222375 "sudo cat /home/docker/cp-test_multinode-222375-m02_multinode-222375.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-222375 cp multinode-222375-m02:/home/docker/cp-test.txt multinode-222375-m03:/home/docker/cp-test_multinode-222375-m02_multinode-222375-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-222375 ssh -n multinode-222375-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-222375 ssh -n multinode-222375-m03 "sudo cat /home/docker/cp-test_multinode-222375-m02_multinode-222375-m03.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-222375 cp testdata/cp-test.txt multinode-222375-m03:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-222375 ssh -n multinode-222375-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-222375 cp multinode-222375-m03:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile1139595838/001/cp-test_multinode-222375-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-222375 ssh -n multinode-222375-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-222375 cp multinode-222375-m03:/home/docker/cp-test.txt multinode-222375:/home/docker/cp-test_multinode-222375-m03_multinode-222375.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-222375 ssh -n multinode-222375-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-222375 ssh -n multinode-222375 "sudo cat /home/docker/cp-test_multinode-222375-m03_multinode-222375.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-222375 cp multinode-222375-m03:/home/docker/cp-test.txt multinode-222375-m02:/home/docker/cp-test_multinode-222375-m03_multinode-222375-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-222375 ssh -n multinode-222375-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-222375 ssh -n multinode-222375-m02 "sudo cat /home/docker/cp-test_multinode-222375-m03_multinode-222375-m02.txt"
--- PASS: TestMultiNode/serial/CopyFile (8.70s)
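Note: each step above round-trips a file with `minikube cp` and verifies it by reading the file back over `minikube ssh`. A minimal Go sketch of one such round trip, with the profile, node, and paths taken from the log; the comparison logic is illustrative, not the test's own helper:

package main

import (
	"bytes"
	"log"
	"os"
	"os/exec"
)

func main() {
	want, err := os.ReadFile("testdata/cp-test.txt")
	if err != nil {
		log.Fatal(err)
	}
	// Push the file to the control-plane node.
	if out, err := exec.Command("minikube", "-p", "multinode-222375",
		"cp", "testdata/cp-test.txt",
		"multinode-222375:/home/docker/cp-test.txt").CombinedOutput(); err != nil {
		log.Fatalf("cp failed: %v\n%s", err, out)
	}
	// Read it back over ssh and compare with the source.
	got, err := exec.Command("minikube", "-p", "multinode-222375",
		"ssh", "-n", "multinode-222375",
		"sudo cat /home/docker/cp-test.txt").Output()
	if err != nil {
		log.Fatal(err)
	}
	if !bytes.Equal(bytes.TrimSpace(got), bytes.TrimSpace(want)) {
		log.Fatal("copied file does not match source")
	}
}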

TestMultiNode/serial/StopNode (2.07s)

=== RUN   TestMultiNode/serial/StopNode
multinode_test.go:248: (dbg) Run:  out/minikube-linux-amd64 -p multinode-222375 node stop m03
multinode_test.go:248: (dbg) Done: out/minikube-linux-amd64 -p multinode-222375 node stop m03: (1.175563699s)
multinode_test.go:254: (dbg) Run:  out/minikube-linux-amd64 -p multinode-222375 status
multinode_test.go:254: (dbg) Non-zero exit: out/minikube-linux-amd64 -p multinode-222375 status: exit status 7 (451.89968ms)

-- stdout --
	multinode-222375
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	multinode-222375-m02
	type: Worker
	host: Running
	kubelet: Running
	
	multinode-222375-m03
	type: Worker
	host: Stopped
	kubelet: Stopped
	

-- /stdout --
multinode_test.go:261: (dbg) Run:  out/minikube-linux-amd64 -p multinode-222375 status --alsologtostderr
multinode_test.go:261: (dbg) Non-zero exit: out/minikube-linux-amd64 -p multinode-222375 status --alsologtostderr: exit status 7 (438.689152ms)

-- stdout --
	multinode-222375
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	multinode-222375-m02
	type: Worker
	host: Running
	kubelet: Running
	
	multinode-222375-m03
	type: Worker
	host: Stopped
	kubelet: Stopped
	

-- /stdout --
** stderr ** 
	I0927 00:48:26.059943  741048 out.go:345] Setting OutFile to fd 1 ...
	I0927 00:48:26.060050  741048 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0927 00:48:26.060058  741048 out.go:358] Setting ErrFile to fd 2...
	I0927 00:48:26.060062  741048 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0927 00:48:26.060317  741048 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19711-533157/.minikube/bin
	I0927 00:48:26.060495  741048 out.go:352] Setting JSON to false
	I0927 00:48:26.060523  741048 mustload.go:65] Loading cluster: multinode-222375
	I0927 00:48:26.060635  741048 notify.go:220] Checking for updates...
	I0927 00:48:26.060979  741048 config.go:182] Loaded profile config "multinode-222375": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I0927 00:48:26.061003  741048 status.go:174] checking status of multinode-222375 ...
	I0927 00:48:26.061524  741048 cli_runner.go:164] Run: docker container inspect multinode-222375 --format={{.State.Status}}
	I0927 00:48:26.079565  741048 status.go:364] multinode-222375 host status = "Running" (err=<nil>)
	I0927 00:48:26.079603  741048 host.go:66] Checking if "multinode-222375" exists ...
	I0927 00:48:26.079895  741048 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-222375
	I0927 00:48:26.097133  741048 host.go:66] Checking if "multinode-222375" exists ...
	I0927 00:48:26.097425  741048 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0927 00:48:26.097483  741048 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-222375
	I0927 00:48:26.114801  741048 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33303 SSHKeyPath:/home/jenkins/minikube-integration/19711-533157/.minikube/machines/multinode-222375/id_rsa Username:docker}
	I0927 00:48:26.197446  741048 ssh_runner.go:195] Run: systemctl --version
	I0927 00:48:26.201409  741048 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0927 00:48:26.211684  741048 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0927 00:48:26.259684  741048 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:3 ContainersRunning:2 ContainersPaused:0 ContainersStopped:1 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:42 OomKillDisable:true NGoroutines:62 SystemTime:2024-09-27 00:48:26.250089281 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1069-gcp OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33647931392 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-14 Labels:[] ExperimentalBuild:false ServerVersion:27.3.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:7f7fdf5fed64eb6a7caf99b3e12efcf9d60e311c Expected:7f7fdf5fed64eb6a7caf99b3e12efcf9d60e311c} RuncCommit:{ID:v1.1.14-0-g2c9f560 Expected:v1.1.14-0-g2c9f560} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:[WARNING: bridge-nf-call-iptables is disabled WARNING: bridge-nf-call-ip6tables is disabled] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.17.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.29.7] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I0927 00:48:26.260299  741048 kubeconfig.go:125] found "multinode-222375" server: "https://192.168.67.2:8443"
	I0927 00:48:26.260336  741048 api_server.go:166] Checking apiserver status ...
	I0927 00:48:26.260379  741048 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0927 00:48:26.271195  741048 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/2365/cgroup
	I0927 00:48:26.279985  741048 api_server.go:182] apiserver freezer: "12:freezer:/docker/26389d3f59a62c67d4817c346783f2b3b42a44b125d20378d05e8c2794e884d6/kubepods/burstable/podbd55e81e1a42cd65332d528499186dc2/eb95dc3539c8e41288af4a09065ab2b8c117ec13b5fc6bfd891f8314c329d0e7"
	I0927 00:48:26.280058  741048 ssh_runner.go:195] Run: sudo cat /sys/fs/cgroup/freezer/docker/26389d3f59a62c67d4817c346783f2b3b42a44b125d20378d05e8c2794e884d6/kubepods/burstable/podbd55e81e1a42cd65332d528499186dc2/eb95dc3539c8e41288af4a09065ab2b8c117ec13b5fc6bfd891f8314c329d0e7/freezer.state
	I0927 00:48:26.288042  741048 api_server.go:204] freezer state: "THAWED"
	I0927 00:48:26.288070  741048 api_server.go:253] Checking apiserver healthz at https://192.168.67.2:8443/healthz ...
	I0927 00:48:26.291874  741048 api_server.go:279] https://192.168.67.2:8443/healthz returned 200:
	ok
	I0927 00:48:26.291899  741048 status.go:456] multinode-222375 apiserver status = Running (err=<nil>)
	I0927 00:48:26.291912  741048 status.go:176] multinode-222375 status: &{Name:multinode-222375 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0927 00:48:26.291945  741048 status.go:174] checking status of multinode-222375-m02 ...
	I0927 00:48:26.292177  741048 cli_runner.go:164] Run: docker container inspect multinode-222375-m02 --format={{.State.Status}}
	I0927 00:48:26.309678  741048 status.go:364] multinode-222375-m02 host status = "Running" (err=<nil>)
	I0927 00:48:26.309705  741048 host.go:66] Checking if "multinode-222375-m02" exists ...
	I0927 00:48:26.309952  741048 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-222375-m02
	I0927 00:48:26.326808  741048 host.go:66] Checking if "multinode-222375-m02" exists ...
	I0927 00:48:26.327053  741048 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0927 00:48:26.327105  741048 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-222375-m02
	I0927 00:48:26.344169  741048 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33308 SSHKeyPath:/home/jenkins/minikube-integration/19711-533157/.minikube/machines/multinode-222375-m02/id_rsa Username:docker}
	I0927 00:48:26.425261  741048 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0927 00:48:26.435727  741048 status.go:176] multinode-222375-m02 status: &{Name:multinode-222375-m02 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}
	I0927 00:48:26.435762  741048 status.go:174] checking status of multinode-222375-m03 ...
	I0927 00:48:26.436047  741048 cli_runner.go:164] Run: docker container inspect multinode-222375-m03 --format={{.State.Status}}
	I0927 00:48:26.452820  741048 status.go:364] multinode-222375-m03 host status = "Stopped" (err=<nil>)
	I0927 00:48:26.452848  741048 status.go:377] host is not running, skipping remaining checks
	I0927 00:48:26.452855  741048 status.go:176] multinode-222375-m03 status: &{Name:multinode-222375-m03 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
--- PASS: TestMultiNode/serial/StopNode (2.07s)
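Note: for the running control plane, the status trace above finds the kube-apiserver process, confirms its freezer cgroup is THAWED, and then probes /healthz expecting HTTP 200. A minimal Go sketch of just the HTTP probe, using the endpoint from the log; InsecureSkipVerify is an assumption for this sketch, since the trace does not show how minikube's client trusts the cluster certificates:

package main

import (
	"crypto/tls"
	"fmt"
	"io"
	"net/http"
	"time"
)

func main() {
	// Skip TLS verification only because this sketch has no access to
	// the cluster CA; the real client would trust minikube's CA cert.
	client := &http.Client{
		Timeout:   5 * time.Second,
		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
	}
	resp, err := client.Get("https://192.168.67.2:8443/healthz")
	if err != nil {
		fmt.Println("apiserver unreachable:", err)
		return
	}
	defer resp.Body.Close()
	body, _ := io.ReadAll(resp.Body)
	fmt.Printf("healthz returned %d: %s\n", resp.StatusCode, body)
}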

TestMultiNode/serial/StartAfterStop (9.51s)

=== RUN   TestMultiNode/serial/StartAfterStop
multinode_test.go:282: (dbg) Run:  out/minikube-linux-amd64 -p multinode-222375 node start m03 -v=7 --alsologtostderr
multinode_test.go:282: (dbg) Done: out/minikube-linux-amd64 -p multinode-222375 node start m03 -v=7 --alsologtostderr: (8.872722761s)
multinode_test.go:290: (dbg) Run:  out/minikube-linux-amd64 -p multinode-222375 status -v=7 --alsologtostderr
multinode_test.go:306: (dbg) Run:  kubectl get nodes
--- PASS: TestMultiNode/serial/StartAfterStop (9.51s)

TestMultiNode/serial/RestartKeepsNodes (93.6s)

=== RUN   TestMultiNode/serial/RestartKeepsNodes
multinode_test.go:314: (dbg) Run:  out/minikube-linux-amd64 node list -p multinode-222375
multinode_test.go:321: (dbg) Run:  out/minikube-linux-amd64 stop -p multinode-222375
multinode_test.go:321: (dbg) Done: out/minikube-linux-amd64 stop -p multinode-222375: (22.371701939s)
multinode_test.go:326: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-222375 --wait=true -v=8 --alsologtostderr
E0927 00:49:34.400262  540034 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19711-533157/.minikube/profiles/addons-305811/client.crt: no such file or directory" logger="UnhandledError"
multinode_test.go:326: (dbg) Done: out/minikube-linux-amd64 start -p multinode-222375 --wait=true -v=8 --alsologtostderr: (1m11.133814771s)
multinode_test.go:331: (dbg) Run:  out/minikube-linux-amd64 node list -p multinode-222375
--- PASS: TestMultiNode/serial/RestartKeepsNodes (93.60s)

TestMultiNode/serial/DeleteNode (5.18s)

=== RUN   TestMultiNode/serial/DeleteNode
multinode_test.go:416: (dbg) Run:  out/minikube-linux-amd64 -p multinode-222375 node delete m03
multinode_test.go:416: (dbg) Done: out/minikube-linux-amd64 -p multinode-222375 node delete m03: (4.622372423s)
multinode_test.go:422: (dbg) Run:  out/minikube-linux-amd64 -p multinode-222375 status --alsologtostderr
multinode_test.go:436: (dbg) Run:  kubectl get nodes
multinode_test.go:444: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiNode/serial/DeleteNode (5.18s)

TestMultiNode/serial/StopMultiNode (21.33s)

=== RUN   TestMultiNode/serial/StopMultiNode
multinode_test.go:345: (dbg) Run:  out/minikube-linux-amd64 -p multinode-222375 stop
multinode_test.go:345: (dbg) Done: out/minikube-linux-amd64 -p multinode-222375 stop: (21.154120002s)
multinode_test.go:351: (dbg) Run:  out/minikube-linux-amd64 -p multinode-222375 status
multinode_test.go:351: (dbg) Non-zero exit: out/minikube-linux-amd64 -p multinode-222375 status: exit status 7 (92.967164ms)

-- stdout --
	multinode-222375
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	multinode-222375-m02
	type: Worker
	host: Stopped
	kubelet: Stopped
	

-- /stdout --
multinode_test.go:358: (dbg) Run:  out/minikube-linux-amd64 -p multinode-222375 status --alsologtostderr
multinode_test.go:358: (dbg) Non-zero exit: out/minikube-linux-amd64 -p multinode-222375 status --alsologtostderr: exit status 7 (82.079659ms)

-- stdout --
	multinode-222375
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	multinode-222375-m02
	type: Worker
	host: Stopped
	kubelet: Stopped
	

-- /stdout --
** stderr ** 
	I0927 00:50:36.041288  756389 out.go:345] Setting OutFile to fd 1 ...
	I0927 00:50:36.041425  756389 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0927 00:50:36.041435  756389 out.go:358] Setting ErrFile to fd 2...
	I0927 00:50:36.041439  756389 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0927 00:50:36.041666  756389 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19711-533157/.minikube/bin
	I0927 00:50:36.041840  756389 out.go:352] Setting JSON to false
	I0927 00:50:36.041866  756389 mustload.go:65] Loading cluster: multinode-222375
	I0927 00:50:36.042017  756389 notify.go:220] Checking for updates...
	I0927 00:50:36.042386  756389 config.go:182] Loaded profile config "multinode-222375": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I0927 00:50:36.042410  756389 status.go:174] checking status of multinode-222375 ...
	I0927 00:50:36.042890  756389 cli_runner.go:164] Run: docker container inspect multinode-222375 --format={{.State.Status}}
	I0927 00:50:36.060219  756389 status.go:364] multinode-222375 host status = "Stopped" (err=<nil>)
	I0927 00:50:36.060264  756389 status.go:377] host is not running, skipping remaining checks
	I0927 00:50:36.060272  756389 status.go:176] multinode-222375 status: &{Name:multinode-222375 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0927 00:50:36.060310  756389 status.go:174] checking status of multinode-222375-m02 ...
	I0927 00:50:36.060615  756389 cli_runner.go:164] Run: docker container inspect multinode-222375-m02 --format={{.State.Status}}
	I0927 00:50:36.077705  756389 status.go:364] multinode-222375-m02 host status = "Stopped" (err=<nil>)
	I0927 00:50:36.077751  756389 status.go:377] host is not running, skipping remaining checks
	I0927 00:50:36.077761  756389 status.go:176] multinode-222375-m02 status: &{Name:multinode-222375-m02 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
--- PASS: TestMultiNode/serial/StopMultiNode (21.33s)

TestMultiNode/serial/RestartMultiNode (51.7s)

=== RUN   TestMultiNode/serial/RestartMultiNode
multinode_test.go:376: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-222375 --wait=true -v=8 --alsologtostderr --driver=docker  --container-runtime=docker
E0927 00:50:45.853500  540034 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19711-533157/.minikube/profiles/functional-860089/client.crt: no such file or directory" logger="UnhandledError"
multinode_test.go:376: (dbg) Done: out/minikube-linux-amd64 start -p multinode-222375 --wait=true -v=8 --alsologtostderr --driver=docker  --container-runtime=docker: (51.152488498s)
multinode_test.go:382: (dbg) Run:  out/minikube-linux-amd64 -p multinode-222375 status --alsologtostderr
multinode_test.go:396: (dbg) Run:  kubectl get nodes
multinode_test.go:404: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiNode/serial/RestartMultiNode (51.70s)

TestMultiNode/serial/ValidateNameConflict (26.79s)

=== RUN   TestMultiNode/serial/ValidateNameConflict
multinode_test.go:455: (dbg) Run:  out/minikube-linux-amd64 node list -p multinode-222375
multinode_test.go:464: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-222375-m02 --driver=docker  --container-runtime=docker
multinode_test.go:464: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p multinode-222375-m02 --driver=docker  --container-runtime=docker: exit status 14 (66.131806ms)

-- stdout --
	* [multinode-222375-m02] minikube v1.34.0 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=19711
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/19711-533157/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/19711-533157/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	
	

-- /stdout --
** stderr ** 
	! Profile name 'multinode-222375-m02' is duplicated with machine name 'multinode-222375-m02' in profile 'multinode-222375'
	X Exiting due to MK_USAGE: Profile name should be unique

** /stderr **
multinode_test.go:472: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-222375-m03 --driver=docker  --container-runtime=docker
multinode_test.go:472: (dbg) Done: out/minikube-linux-amd64 start -p multinode-222375-m03 --driver=docker  --container-runtime=docker: (24.409519568s)
multinode_test.go:479: (dbg) Run:  out/minikube-linux-amd64 node add -p multinode-222375
multinode_test.go:479: (dbg) Non-zero exit: out/minikube-linux-amd64 node add -p multinode-222375: exit status 80 (264.281362ms)

-- stdout --
	* Adding node m03 to cluster multinode-222375 as [worker]
	
	

-- /stdout --
** stderr ** 
	X Exiting due to GUEST_NODE_ADD: failed to add node: Node multinode-222375-m03 already exists in multinode-222375-m03 profile
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_node_040ea7097fd6ed71e65be9a474587f81f0ccd21d_0.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
multinode_test.go:484: (dbg) Run:  out/minikube-linux-amd64 delete -p multinode-222375-m03
multinode_test.go:484: (dbg) Done: out/minikube-linux-amd64 delete -p multinode-222375-m03: (2.002697253s)
--- PASS: TestMultiNode/serial/ValidateNameConflict (26.79s)

TestPreload (91.65s)

=== RUN   TestPreload
preload_test.go:44: (dbg) Run:  out/minikube-linux-amd64 start -p test-preload-417710 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=docker --kubernetes-version=v1.24.4
E0927 00:52:08.923254  540034 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19711-533157/.minikube/profiles/functional-860089/client.crt: no such file or directory" logger="UnhandledError"
preload_test.go:44: (dbg) Done: out/minikube-linux-amd64 start -p test-preload-417710 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=docker --kubernetes-version=v1.24.4: (53.994215118s)
preload_test.go:52: (dbg) Run:  out/minikube-linux-amd64 -p test-preload-417710 image pull gcr.io/k8s-minikube/busybox
preload_test.go:52: (dbg) Done: out/minikube-linux-amd64 -p test-preload-417710 image pull gcr.io/k8s-minikube/busybox: (1.544232332s)
preload_test.go:58: (dbg) Run:  out/minikube-linux-amd64 stop -p test-preload-417710
preload_test.go:58: (dbg) Done: out/minikube-linux-amd64 stop -p test-preload-417710: (10.616631168s)
preload_test.go:66: (dbg) Run:  out/minikube-linux-amd64 start -p test-preload-417710 --memory=2200 --alsologtostderr -v=1 --wait=true --driver=docker  --container-runtime=docker
E0927 00:53:11.332598  540034 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19711-533157/.minikube/profiles/addons-305811/client.crt: no such file or directory" logger="UnhandledError"
preload_test.go:66: (dbg) Done: out/minikube-linux-amd64 start -p test-preload-417710 --memory=2200 --alsologtostderr -v=1 --wait=true --driver=docker  --container-runtime=docker: (23.110439799s)
preload_test.go:71: (dbg) Run:  out/minikube-linux-amd64 -p test-preload-417710 image list
helpers_test.go:175: Cleaning up "test-preload-417710" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p test-preload-417710
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p test-preload-417710: (2.186163529s)
--- PASS: TestPreload (91.65s)
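The preload flow above, as a sketch: start with --preload=false on an older Kubernetes, side-load an image, stop, then restart the same profile and confirm the cached image survived (profile name illustrative):

	minikube start -p test-preload --memory=2200 --preload=false --kubernetes-version=v1.24.4 --driver=docker
	minikube -p test-preload image pull gcr.io/k8s-minikube/busybox
	minikube stop -p test-preload
	minikube start -p test-preload --memory=2200 --wait=true --driver=docker
	minikube -p test-preload image list   # busybox should still be listed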

TestScheduledStopUnix (96.77s)
=== RUN   TestScheduledStopUnix
scheduled_stop_test.go:128: (dbg) Run:  out/minikube-linux-amd64 start -p scheduled-stop-081132 --memory=2048 --driver=docker  --container-runtime=docker
scheduled_stop_test.go:128: (dbg) Done: out/minikube-linux-amd64 start -p scheduled-stop-081132 --memory=2048 --driver=docker  --container-runtime=docker: (23.875116172s)
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-amd64 stop -p scheduled-stop-081132 --schedule 5m
scheduled_stop_test.go:191: (dbg) Run:  out/minikube-linux-amd64 status --format={{.TimeToStop}} -p scheduled-stop-081132 -n scheduled-stop-081132
scheduled_stop_test.go:169: signal error was:  <nil>
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-amd64 stop -p scheduled-stop-081132 --schedule 15s
scheduled_stop_test.go:169: signal error was:  os: process already finished
I0927 00:53:54.218011  540034 retry.go:31] will retry after 67.024µs: open /home/jenkins/minikube-integration/19711-533157/.minikube/profiles/scheduled-stop-081132/pid: no such file or directory
I0927 00:53:54.219163  540034 retry.go:31] will retry after 139.68µs: open /home/jenkins/minikube-integration/19711-533157/.minikube/profiles/scheduled-stop-081132/pid: no such file or directory
I0927 00:53:54.220292  540034 retry.go:31] will retry after 213.261µs: open /home/jenkins/minikube-integration/19711-533157/.minikube/profiles/scheduled-stop-081132/pid: no such file or directory
I0927 00:53:54.221421  540034 retry.go:31] will retry after 214.502µs: open /home/jenkins/minikube-integration/19711-533157/.minikube/profiles/scheduled-stop-081132/pid: no such file or directory
I0927 00:53:54.222560  540034 retry.go:31] will retry after 391.576µs: open /home/jenkins/minikube-integration/19711-533157/.minikube/profiles/scheduled-stop-081132/pid: no such file or directory
I0927 00:53:54.223691  540034 retry.go:31] will retry after 1.135495ms: open /home/jenkins/minikube-integration/19711-533157/.minikube/profiles/scheduled-stop-081132/pid: no such file or directory
I0927 00:53:54.225897  540034 retry.go:31] will retry after 664.484µs: open /home/jenkins/minikube-integration/19711-533157/.minikube/profiles/scheduled-stop-081132/pid: no such file or directory
I0927 00:53:54.227036  540034 retry.go:31] will retry after 2.059786ms: open /home/jenkins/minikube-integration/19711-533157/.minikube/profiles/scheduled-stop-081132/pid: no such file or directory
I0927 00:53:54.229170  540034 retry.go:31] will retry after 1.519761ms: open /home/jenkins/minikube-integration/19711-533157/.minikube/profiles/scheduled-stop-081132/pid: no such file or directory
I0927 00:53:54.231393  540034 retry.go:31] will retry after 5.125714ms: open /home/jenkins/minikube-integration/19711-533157/.minikube/profiles/scheduled-stop-081132/pid: no such file or directory
I0927 00:53:54.237616  540034 retry.go:31] will retry after 5.828466ms: open /home/jenkins/minikube-integration/19711-533157/.minikube/profiles/scheduled-stop-081132/pid: no such file or directory
I0927 00:53:54.243831  540034 retry.go:31] will retry after 7.460324ms: open /home/jenkins/minikube-integration/19711-533157/.minikube/profiles/scheduled-stop-081132/pid: no such file or directory
I0927 00:53:54.252078  540034 retry.go:31] will retry after 7.990757ms: open /home/jenkins/minikube-integration/19711-533157/.minikube/profiles/scheduled-stop-081132/pid: no such file or directory
I0927 00:53:54.260315  540034 retry.go:31] will retry after 18.094443ms: open /home/jenkins/minikube-integration/19711-533157/.minikube/profiles/scheduled-stop-081132/pid: no such file or directory
I0927 00:53:54.278525  540034 retry.go:31] will retry after 42.014044ms: open /home/jenkins/minikube-integration/19711-533157/.minikube/profiles/scheduled-stop-081132/pid: no such file or directory
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-amd64 stop -p scheduled-stop-081132 --cancel-scheduled
scheduled_stop_test.go:176: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p scheduled-stop-081132 -n scheduled-stop-081132
scheduled_stop_test.go:205: (dbg) Run:  out/minikube-linux-amd64 status -p scheduled-stop-081132
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-amd64 stop -p scheduled-stop-081132 --schedule 15s
scheduled_stop_test.go:169: signal error was:  os: process already finished
scheduled_stop_test.go:205: (dbg) Run:  out/minikube-linux-amd64 status -p scheduled-stop-081132
scheduled_stop_test.go:205: (dbg) Non-zero exit: out/minikube-linux-amd64 status -p scheduled-stop-081132: exit status 7 (66.398458ms)

-- stdout --
	scheduled-stop-081132
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

-- /stdout --
scheduled_stop_test.go:176: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p scheduled-stop-081132 -n scheduled-stop-081132
scheduled_stop_test.go:176: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p scheduled-stop-081132 -n scheduled-stop-081132: exit status 7 (65.166131ms)

-- stdout --
	Stopped

-- /stdout --
scheduled_stop_test.go:176: status error: exit status 7 (may be ok)
helpers_test.go:175: Cleaning up "scheduled-stop-081132" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p scheduled-stop-081132
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p scheduled-stop-081132: (1.604653523s)
--- PASS: TestScheduledStopUnix (96.77s)
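The scheduled-stop sequence above, sketched as plain commands (profile name illustrative):

	minikube stop -p demo --schedule 5m                # arm a stop five minutes out
	minikube status --format={{.TimeToStop}} -p demo   # shows the remaining time
	minikube stop -p demo --cancel-scheduled           # disarm it
	minikube stop -p demo --schedule 15s               # re-arm; ~15s later the host stops
	minikube status -p demo                            # then: exit status 7, host: Stopped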

TestSkaffold (100.1s)
=== RUN   TestSkaffold
skaffold_test.go:59: (dbg) Run:  /tmp/skaffold.exe790704112 version
skaffold_test.go:63: skaffold version: v2.13.2
skaffold_test.go:66: (dbg) Run:  out/minikube-linux-amd64 start -p skaffold-178603 --memory=2600 --driver=docker  --container-runtime=docker
skaffold_test.go:66: (dbg) Done: out/minikube-linux-amd64 start -p skaffold-178603 --memory=2600 --driver=docker  --container-runtime=docker: (23.861233317s)
skaffold_test.go:86: copying out/minikube-linux-amd64 to /home/jenkins/workspace/Docker_Linux_integration/out/minikube
skaffold_test.go:105: (dbg) Run:  /tmp/skaffold.exe790704112 run --minikube-profile skaffold-178603 --kube-context skaffold-178603 --status-check=true --port-forward=false --interactive=false
E0927 00:55:45.853480  540034 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19711-533157/.minikube/profiles/functional-860089/client.crt: no such file or directory" logger="UnhandledError"
skaffold_test.go:105: (dbg) Done: /tmp/skaffold.exe790704112 run --minikube-profile skaffold-178603 --kube-context skaffold-178603 --status-check=true --port-forward=false --interactive=false: (1m1.815626419s)
skaffold_test.go:111: (dbg) TestSkaffold: waiting 1m0s for pods matching "app=leeroy-app" in namespace "default" ...
helpers_test.go:344: "leeroy-app-567c44f48b-nhqzx" [f5b4f3b1-bb65-469c-ac53-9385861a9884] Running
skaffold_test.go:111: (dbg) TestSkaffold: app=leeroy-app healthy within 6.003723135s
skaffold_test.go:114: (dbg) TestSkaffold: waiting 1m0s for pods matching "app=leeroy-web" in namespace "default" ...
helpers_test.go:344: "leeroy-web-857c58df4d-xcmvv" [94db7b20-6064-4217-93c5-4a4bafbc6f8f] Running
skaffold_test.go:114: (dbg) TestSkaffold: app=leeroy-web healthy within 5.003343232s
helpers_test.go:175: Cleaning up "skaffold-178603" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p skaffold-178603
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p skaffold-178603: (2.786385643s)
--- PASS: TestSkaffold (100.10s)
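The skaffold run above in sketch form (the versioned /tmp/skaffold.exe… path is the test's downloaded binary; a plain `skaffold` on PATH behaves the same):

	minikube start -p skaffold-demo --memory=2600 --driver=docker
	skaffold run --minikube-profile skaffold-demo --kube-context skaffold-demo \
	    --status-check=true --port-forward=false --interactive=false
	kubectl --context skaffold-demo get pods -l app=leeroy-app   # deployed by the example app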

TestInsufficientStorage (12.53s)
=== RUN   TestInsufficientStorage
status_test.go:50: (dbg) Run:  out/minikube-linux-amd64 start -p insufficient-storage-959669 --memory=2048 --output=json --wait=true --driver=docker  --container-runtime=docker
status_test.go:50: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p insufficient-storage-959669 --memory=2048 --output=json --wait=true --driver=docker  --container-runtime=docker: exit status 26 (10.409449843s)

-- stdout --
	{"specversion":"1.0","id":"a8286737-e418-4896-a4fe-54f6618e9c96","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"0","message":"[insufficient-storage-959669] minikube v1.34.0 on Ubuntu 20.04 (kvm/amd64)","name":"Initial Minikube Setup","totalsteps":"19"}}
	{"specversion":"1.0","id":"66182b48-b777-4287-9388-442acca6059c","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_LOCATION=19711"}}
	{"specversion":"1.0","id":"9a934b69-0fee-4c85-8d81-0f0db93c2933","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true"}}
	{"specversion":"1.0","id":"66fbc23b-bbfb-4040-8586-f4c1fe497a9b","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"KUBECONFIG=/home/jenkins/minikube-integration/19711-533157/kubeconfig"}}
	{"specversion":"1.0","id":"387f1291-c21e-4487-bd44-b756f70fa190","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_HOME=/home/jenkins/minikube-integration/19711-533157/.minikube"}}
	{"specversion":"1.0","id":"e31f36ee-0a4e-4e7e-9e81-da5382e14acb","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_BIN=out/minikube-linux-amd64"}}
	{"specversion":"1.0","id":"ecbcf02c-7935-4535-8887-9388286353b3","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_FORCE_SYSTEMD="}}
	{"specversion":"1.0","id":"436f3399-5d41-4269-a6ae-68a2db50e1cf","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_TEST_STORAGE_CAPACITY=100"}}
	{"specversion":"1.0","id":"91043772-e65f-4458-a81e-30ba4d272470","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_TEST_AVAILABLE_STORAGE=19"}}
	{"specversion":"1.0","id":"2c70c711-4a24-4f3e-b9c6-a9ff7dce8b3d","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"1","message":"Using the docker driver based on user configuration","name":"Selecting Driver","totalsteps":"19"}}
	{"specversion":"1.0","id":"49ebc3ff-1384-4377-b839-c8d967e7ea18","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"Using Docker driver with root privileges"}}
	{"specversion":"1.0","id":"56b09b06-3adb-40e9-98bd-a6dbefdc0f7d","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"3","message":"Starting \"insufficient-storage-959669\" primary control-plane node in \"insufficient-storage-959669\" cluster","name":"Starting Node","totalsteps":"19"}}
	{"specversion":"1.0","id":"24cb7fec-08db-4ca5-bd89-1d6b93d599fc","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"5","message":"Pulling base image v0.0.45-1727108449-19696 ...","name":"Pulling Base Image","totalsteps":"19"}}
	{"specversion":"1.0","id":"2fb35476-089a-40e1-a220-23a5b86da1b5","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"8","message":"Creating docker container (CPUs=2, Memory=2048MB) ...","name":"Creating Container","totalsteps":"19"}}
	{"specversion":"1.0","id":"30568a27-0865-4208-9401-bf7455b7a324","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"advice":"Try one or more of the following to free up space on the device:\n\t\n\t\t\t1. Run \"docker system prune\" to remove unused Docker data (optionally with \"-a\")\n\t\t\t2. Increase the storage allocated to Docker for Desktop by clicking on:\n\t\t\t\tDocker icon \u003e Preferences \u003e Resources \u003e Disk Image Size\n\t\t\t3. Run \"minikube ssh -- docker system prune\" if using the Docker container runtime","exitcode":"26","issues":"https://github.com/kubernetes/minikube/issues/9024","message":"Docker is out of disk space! (/var is at 100% of capacity). You can pass '--force' to skip this check.","name":"RSRC_DOCKER_STORAGE","url":""}}

-- /stdout --
status_test.go:76: (dbg) Run:  out/minikube-linux-amd64 status -p insufficient-storage-959669 --output=json --layout=cluster
status_test.go:76: (dbg) Non-zero exit: out/minikube-linux-amd64 status -p insufficient-storage-959669 --output=json --layout=cluster: exit status 7 (248.076085ms)

-- stdout --
	{"Name":"insufficient-storage-959669","StatusCode":507,"StatusName":"InsufficientStorage","StatusDetail":"/var is almost out of disk space","Step":"Creating Container","StepDetail":"Creating docker container (CPUs=2, Memory=2048MB) ...","BinaryVersion":"v1.34.0","Components":{"kubeconfig":{"Name":"kubeconfig","StatusCode":500,"StatusName":"Error"}},"Nodes":[{"Name":"insufficient-storage-959669","StatusCode":507,"StatusName":"InsufficientStorage","Components":{"apiserver":{"Name":"apiserver","StatusCode":405,"StatusName":"Stopped"},"kubelet":{"Name":"kubelet","StatusCode":405,"StatusName":"Stopped"}}}]}

-- /stdout --
** stderr ** 
	E0927 00:56:57.473741  796316 status.go:451] kubeconfig endpoint: get endpoint: "insufficient-storage-959669" does not appear in /home/jenkins/minikube-integration/19711-533157/kubeconfig

** /stderr **
status_test.go:76: (dbg) Run:  out/minikube-linux-amd64 status -p insufficient-storage-959669 --output=json --layout=cluster
status_test.go:76: (dbg) Non-zero exit: out/minikube-linux-amd64 status -p insufficient-storage-959669 --output=json --layout=cluster: exit status 7 (245.893264ms)

-- stdout --
	{"Name":"insufficient-storage-959669","StatusCode":507,"StatusName":"InsufficientStorage","StatusDetail":"/var is almost out of disk space","BinaryVersion":"v1.34.0","Components":{"kubeconfig":{"Name":"kubeconfig","StatusCode":500,"StatusName":"Error"}},"Nodes":[{"Name":"insufficient-storage-959669","StatusCode":507,"StatusName":"InsufficientStorage","Components":{"apiserver":{"Name":"apiserver","StatusCode":405,"StatusName":"Stopped"},"kubelet":{"Name":"kubelet","StatusCode":405,"StatusName":"Stopped"}}}]}

-- /stdout --
** stderr ** 
	E0927 00:56:57.720155  796432 status.go:451] kubeconfig endpoint: get endpoint: "insufficient-storage-959669" does not appear in /home/jenkins/minikube-integration/19711-533157/kubeconfig
	E0927 00:56:57.729674  796432 status.go:258] unable to read event log: stat: stat /home/jenkins/minikube-integration/19711-533157/.minikube/profiles/insufficient-storage-959669/events.json: no such file or directory

** /stderr **
helpers_test.go:175: Cleaning up "insufficient-storage-959669" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p insufficient-storage-959669
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p insufficient-storage-959669: (1.620539026s)
--- PASS: TestInsufficientStorage (12.53s)
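The low-disk condition is simulated with the test-only environment overrides visible in the JSON events above (the exact semantics of these variables are an assumption based on their names); a rough sketch:

	MINIKUBE_TEST_STORAGE_CAPACITY=100 MINIKUBE_TEST_AVAILABLE_STORAGE=19 \
	    minikube start -p insufficient-storage --memory=2048 --output=json --wait=true --driver=docker
	echo $?   # 26 (RSRC_DOCKER_STORAGE); per the error text, --force skips the check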

TestRunningBinaryUpgrade (78.61s)
=== RUN   TestRunningBinaryUpgrade
=== PAUSE TestRunningBinaryUpgrade

=== CONT  TestRunningBinaryUpgrade
version_upgrade_test.go:120: (dbg) Run:  /tmp/minikube-v1.26.0.2726342952 start -p running-upgrade-594880 --memory=2200 --vm-driver=docker  --container-runtime=docker
version_upgrade_test.go:120: (dbg) Done: /tmp/minikube-v1.26.0.2726342952 start -p running-upgrade-594880 --memory=2200 --vm-driver=docker  --container-runtime=docker: (31.286552682s)
version_upgrade_test.go:130: (dbg) Run:  out/minikube-linux-amd64 start -p running-upgrade-594880 --memory=2200 --alsologtostderr -v=1 --driver=docker  --container-runtime=docker
version_upgrade_test.go:130: (dbg) Done: out/minikube-linux-amd64 start -p running-upgrade-594880 --memory=2200 --alsologtostderr -v=1 --driver=docker  --container-runtime=docker: (44.251945378s)
helpers_test.go:175: Cleaning up "running-upgrade-594880" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p running-upgrade-594880
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p running-upgrade-594880: (2.374487203s)
--- PASS: TestRunningBinaryUpgrade (78.61s)
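The upgrade-in-place pattern above, sketched (the /tmp/minikube-v1.26.0.… path is the test's downloaded legacy binary; names illustrative):

	/tmp/minikube-v1.26.0 start -p running-upgrade --memory=2200 --vm-driver=docker
	# without stopping the running cluster, start again with the new binary:
	out/minikube-linux-amd64 start -p running-upgrade --memory=2200 --driver=docker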

TestKubernetesUpgrade (345.74s)
=== RUN   TestKubernetesUpgrade
=== PAUSE TestKubernetesUpgrade

=== CONT  TestKubernetesUpgrade
version_upgrade_test.go:222: (dbg) Run:  out/minikube-linux-amd64 start -p kubernetes-upgrade-386204 --memory=2200 --kubernetes-version=v1.20.0 --alsologtostderr -v=1 --driver=docker  --container-runtime=docker
version_upgrade_test.go:222: (dbg) Done: out/minikube-linux-amd64 start -p kubernetes-upgrade-386204 --memory=2200 --kubernetes-version=v1.20.0 --alsologtostderr -v=1 --driver=docker  --container-runtime=docker: (48.094371701s)
version_upgrade_test.go:227: (dbg) Run:  out/minikube-linux-amd64 stop -p kubernetes-upgrade-386204
version_upgrade_test.go:227: (dbg) Done: out/minikube-linux-amd64 stop -p kubernetes-upgrade-386204: (10.666656786s)
version_upgrade_test.go:232: (dbg) Run:  out/minikube-linux-amd64 -p kubernetes-upgrade-386204 status --format={{.Host}}
version_upgrade_test.go:232: (dbg) Non-zero exit: out/minikube-linux-amd64 -p kubernetes-upgrade-386204 status --format={{.Host}}: exit status 7 (78.415679ms)

-- stdout --
	Stopped

-- /stdout --
version_upgrade_test.go:234: status error: exit status 7 (may be ok)
version_upgrade_test.go:243: (dbg) Run:  out/minikube-linux-amd64 start -p kubernetes-upgrade-386204 --memory=2200 --kubernetes-version=v1.31.1 --alsologtostderr -v=1 --driver=docker  --container-runtime=docker
E0927 00:58:11.332361  540034 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19711-533157/.minikube/profiles/addons-305811/client.crt: no such file or directory" logger="UnhandledError"
version_upgrade_test.go:243: (dbg) Done: out/minikube-linux-amd64 start -p kubernetes-upgrade-386204 --memory=2200 --kubernetes-version=v1.31.1 --alsologtostderr -v=1 --driver=docker  --container-runtime=docker: (4m24.873594075s)
version_upgrade_test.go:248: (dbg) Run:  kubectl --context kubernetes-upgrade-386204 version --output=json
version_upgrade_test.go:267: Attempting to downgrade Kubernetes (should fail)
version_upgrade_test.go:269: (dbg) Run:  out/minikube-linux-amd64 start -p kubernetes-upgrade-386204 --memory=2200 --kubernetes-version=v1.20.0 --driver=docker  --container-runtime=docker
version_upgrade_test.go:269: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p kubernetes-upgrade-386204 --memory=2200 --kubernetes-version=v1.20.0 --driver=docker  --container-runtime=docker: exit status 106 (70.439802ms)

-- stdout --
	* [kubernetes-upgrade-386204] minikube v1.34.0 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=19711
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/19711-533157/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/19711-533157/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	
	

-- /stdout --
** stderr ** 
	X Exiting due to K8S_DOWNGRADE_UNSUPPORTED: Unable to safely downgrade existing Kubernetes v1.31.1 cluster to v1.20.0
	* Suggestion: 
	
	    1) Recreate the cluster with Kubernetes 1.20.0, by running:
	    
	    minikube delete -p kubernetes-upgrade-386204
	    minikube start -p kubernetes-upgrade-386204 --kubernetes-version=v1.20.0
	    
	    2) Create a second cluster with Kubernetes 1.20.0, by running:
	    
	    minikube start -p kubernetes-upgrade-3862042 --kubernetes-version=v1.20.0
	    
	    3) Use the existing cluster at version Kubernetes 1.31.1, by running:
	    
	    minikube start -p kubernetes-upgrade-386204 --kubernetes-version=v1.31.1
	    

** /stderr **
version_upgrade_test.go:273: Attempting restart after unsuccessful downgrade
version_upgrade_test.go:275: (dbg) Run:  out/minikube-linux-amd64 start -p kubernetes-upgrade-386204 --memory=2200 --kubernetes-version=v1.31.1 --alsologtostderr -v=1 --driver=docker  --container-runtime=docker
version_upgrade_test.go:275: (dbg) Done: out/minikube-linux-amd64 start -p kubernetes-upgrade-386204 --memory=2200 --kubernetes-version=v1.31.1 --alsologtostderr -v=1 --driver=docker  --container-runtime=docker: (19.723377822s)
helpers_test.go:175: Cleaning up "kubernetes-upgrade-386204" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p kubernetes-upgrade-386204
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p kubernetes-upgrade-386204: (2.176211095s)
--- PASS: TestKubernetesUpgrade (345.74s)
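In outline: upgrading is stop-then-start with a newer --kubernetes-version, while an in-place downgrade is refused (exit 106) and must go through `minikube delete` or a second profile, exactly as the suggestion block above spells out:

	minikube start -p k8s-upgrade --memory=2200 --kubernetes-version=v1.20.0 --driver=docker
	minikube stop -p k8s-upgrade
	minikube start -p k8s-upgrade --memory=2200 --kubernetes-version=v1.31.1 --driver=docker   # upgrade: ok
	minikube start -p k8s-upgrade --memory=2200 --kubernetes-version=v1.20.0 --driver=docker   # downgrade: exit 106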

TestMissingContainerUpgrade (149.27s)
=== RUN   TestMissingContainerUpgrade
=== PAUSE TestMissingContainerUpgrade

=== CONT  TestMissingContainerUpgrade
version_upgrade_test.go:309: (dbg) Run:  /tmp/minikube-v1.26.0.1509418639 start -p missing-upgrade-188771 --memory=2200 --driver=docker  --container-runtime=docker
version_upgrade_test.go:309: (dbg) Done: /tmp/minikube-v1.26.0.1509418639 start -p missing-upgrade-188771 --memory=2200 --driver=docker  --container-runtime=docker: (1m18.890551191s)
version_upgrade_test.go:318: (dbg) Run:  docker stop missing-upgrade-188771
version_upgrade_test.go:318: (dbg) Done: docker stop missing-upgrade-188771: (10.465559995s)
version_upgrade_test.go:323: (dbg) Run:  docker rm missing-upgrade-188771
version_upgrade_test.go:329: (dbg) Run:  out/minikube-linux-amd64 start -p missing-upgrade-188771 --memory=2200 --alsologtostderr -v=1 --driver=docker  --container-runtime=docker
version_upgrade_test.go:329: (dbg) Done: out/minikube-linux-amd64 start -p missing-upgrade-188771 --memory=2200 --alsologtostderr -v=1 --driver=docker  --container-runtime=docker: (57.122166125s)
helpers_test.go:175: Cleaning up "missing-upgrade-188771" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p missing-upgrade-188771
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p missing-upgrade-188771: (2.227027049s)
--- PASS: TestMissingContainerUpgrade (149.27s)
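The scenario above in sketch form: the legacy binary creates the cluster, its container is removed behind minikube's back, and the new binary's start must recreate it (paths and names illustrative):

	/tmp/minikube-v1.26.0 start -p missing-upgrade --memory=2200 --driver=docker
	docker stop missing-upgrade && docker rm missing-upgrade
	out/minikube-linux-amd64 start -p missing-upgrade --memory=2200 --driver=docker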

TestStoppedBinaryUpgrade/Setup (0.49s)
=== RUN   TestStoppedBinaryUpgrade/Setup
--- PASS: TestStoppedBinaryUpgrade/Setup (0.49s)

TestStoppedBinaryUpgrade/Upgrade (112.4s)
=== RUN   TestStoppedBinaryUpgrade/Upgrade
version_upgrade_test.go:183: (dbg) Run:  /tmp/minikube-v1.26.0.1103930261 start -p stopped-upgrade-502026 --memory=2200 --vm-driver=docker  --container-runtime=docker
version_upgrade_test.go:183: (dbg) Done: /tmp/minikube-v1.26.0.1103930261 start -p stopped-upgrade-502026 --memory=2200 --vm-driver=docker  --container-runtime=docker: (1m18.028831262s)
version_upgrade_test.go:192: (dbg) Run:  /tmp/minikube-v1.26.0.1103930261 -p stopped-upgrade-502026 stop
version_upgrade_test.go:192: (dbg) Done: /tmp/minikube-v1.26.0.1103930261 -p stopped-upgrade-502026 stop: (10.733810522s)
version_upgrade_test.go:198: (dbg) Run:  out/minikube-linux-amd64 start -p stopped-upgrade-502026 --memory=2200 --alsologtostderr -v=1 --driver=docker  --container-runtime=docker
version_upgrade_test.go:198: (dbg) Done: out/minikube-linux-amd64 start -p stopped-upgrade-502026 --memory=2200 --alsologtostderr -v=1 --driver=docker  --container-runtime=docker: (23.641043853s)
--- PASS: TestStoppedBinaryUpgrade/Upgrade (112.40s)

TestStoppedBinaryUpgrade/MinikubeLogs (1.63s)
=== RUN   TestStoppedBinaryUpgrade/MinikubeLogs
version_upgrade_test.go:206: (dbg) Run:  out/minikube-linux-amd64 logs -p stopped-upgrade-502026
version_upgrade_test.go:206: (dbg) Done: out/minikube-linux-amd64 logs -p stopped-upgrade-502026: (1.626404926s)
--- PASS: TestStoppedBinaryUpgrade/MinikubeLogs (1.63s)

TestPause/serial/Start (89.73s)
=== RUN   TestPause/serial/Start
pause_test.go:80: (dbg) Run:  out/minikube-linux-amd64 start -p pause-570271 --memory=2048 --install-addons=false --wait=all --driver=docker  --container-runtime=docker
pause_test.go:80: (dbg) Done: out/minikube-linux-amd64 start -p pause-570271 --memory=2048 --install-addons=false --wait=all --driver=docker  --container-runtime=docker: (1m29.730182024s)
--- PASS: TestPause/serial/Start (89.73s)

TestNoKubernetes/serial/StartNoK8sWithVersion (0.08s)
=== RUN   TestNoKubernetes/serial/StartNoK8sWithVersion
no_kubernetes_test.go:83: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-632916 --no-kubernetes --kubernetes-version=1.20 --driver=docker  --container-runtime=docker
no_kubernetes_test.go:83: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p NoKubernetes-632916 --no-kubernetes --kubernetes-version=1.20 --driver=docker  --container-runtime=docker: exit status 14 (81.255157ms)

-- stdout --
	* [NoKubernetes-632916] minikube v1.34.0 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=19711
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/19711-533157/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/19711-533157/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	
	

-- /stdout --
** stderr ** 
	X Exiting due to MK_USAGE: cannot specify --kubernetes-version with --no-kubernetes,
	to unset a global config run:
	
	$ minikube config unset kubernetes-version

** /stderr **
--- PASS: TestNoKubernetes/serial/StartNoK8sWithVersion (0.08s)
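The rejected flag combination and its remedy, taken from the stderr above (profile name illustrative):

	minikube start -p nok8s --no-kubernetes --kubernetes-version=1.20 --driver=docker   # exit 14 (MK_USAGE)
	minikube config unset kubernetes-version   # clear any global default, then:
	minikube start -p nok8s --no-kubernetes --driver=docker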

TestNoKubernetes/serial/StartWithK8s (23.41s)
=== RUN   TestNoKubernetes/serial/StartWithK8s
no_kubernetes_test.go:95: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-632916 --driver=docker  --container-runtime=docker
no_kubernetes_test.go:95: (dbg) Done: out/minikube-linux-amd64 start -p NoKubernetes-632916 --driver=docker  --container-runtime=docker: (23.103199369s)
no_kubernetes_test.go:200: (dbg) Run:  out/minikube-linux-amd64 -p NoKubernetes-632916 status -o json
--- PASS: TestNoKubernetes/serial/StartWithK8s (23.41s)

TestNoKubernetes/serial/StartWithStopK8s (7.09s)
=== RUN   TestNoKubernetes/serial/StartWithStopK8s
no_kubernetes_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-632916 --no-kubernetes --driver=docker  --container-runtime=docker
no_kubernetes_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p NoKubernetes-632916 --no-kubernetes --driver=docker  --container-runtime=docker: (5.101807242s)
no_kubernetes_test.go:200: (dbg) Run:  out/minikube-linux-amd64 -p NoKubernetes-632916 status -o json
no_kubernetes_test.go:200: (dbg) Non-zero exit: out/minikube-linux-amd64 -p NoKubernetes-632916 status -o json: exit status 2 (272.066235ms)

-- stdout --
	{"Name":"NoKubernetes-632916","Host":"Running","Kubelet":"Stopped","APIServer":"Stopped","Kubeconfig":"Configured","Worker":false}

-- /stdout --
no_kubernetes_test.go:124: (dbg) Run:  out/minikube-linux-amd64 delete -p NoKubernetes-632916
no_kubernetes_test.go:124: (dbg) Done: out/minikube-linux-amd64 delete -p NoKubernetes-632916: (1.719516002s)
--- PASS: TestNoKubernetes/serial/StartWithStopK8s (7.09s)

TestNoKubernetes/serial/Start (8.93s)
=== RUN   TestNoKubernetes/serial/Start
no_kubernetes_test.go:136: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-632916 --no-kubernetes --driver=docker  --container-runtime=docker
no_kubernetes_test.go:136: (dbg) Done: out/minikube-linux-amd64 start -p NoKubernetes-632916 --no-kubernetes --driver=docker  --container-runtime=docker: (8.931680382s)
--- PASS: TestNoKubernetes/serial/Start (8.93s)

TestNoKubernetes/serial/VerifyK8sNotRunning (0.27s)
=== RUN   TestNoKubernetes/serial/VerifyK8sNotRunning
no_kubernetes_test.go:147: (dbg) Run:  out/minikube-linux-amd64 ssh -p NoKubernetes-632916 "sudo systemctl is-active --quiet service kubelet"
no_kubernetes_test.go:147: (dbg) Non-zero exit: out/minikube-linux-amd64 ssh -p NoKubernetes-632916 "sudo systemctl is-active --quiet service kubelet": exit status 1 (269.691631ms)

** stderr ** 
	ssh: Process exited with status 3

** /stderr **
--- PASS: TestNoKubernetes/serial/VerifyK8sNotRunning (0.27s)
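The check relies on systemctl's exit code rather than its output: `is-active` exits 0 only for an active unit (3 for inactive), and `minikube ssh` propagates that status. A minimal sketch:

	minikube ssh -p NoKubernetes-632916 "sudo systemctl is-active --quiet service kubelet"
	echo $?   # non-zero while kubelet is not running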

TestNoKubernetes/serial/ProfileList (3.46s)
=== RUN   TestNoKubernetes/serial/ProfileList
no_kubernetes_test.go:169: (dbg) Run:  out/minikube-linux-amd64 profile list
no_kubernetes_test.go:179: (dbg) Run:  out/minikube-linux-amd64 profile list --output=json
no_kubernetes_test.go:179: (dbg) Done: out/minikube-linux-amd64 profile list --output=json: (2.540242049s)
--- PASS: TestNoKubernetes/serial/ProfileList (3.46s)

TestNoKubernetes/serial/Stop (1.23s)
=== RUN   TestNoKubernetes/serial/Stop
no_kubernetes_test.go:158: (dbg) Run:  out/minikube-linux-amd64 stop -p NoKubernetes-632916
no_kubernetes_test.go:158: (dbg) Done: out/minikube-linux-amd64 stop -p NoKubernetes-632916: (1.234344061s)
--- PASS: TestNoKubernetes/serial/Stop (1.23s)

TestNoKubernetes/serial/StartNoArgs (6.76s)
=== RUN   TestNoKubernetes/serial/StartNoArgs
no_kubernetes_test.go:191: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-632916 --driver=docker  --container-runtime=docker
no_kubernetes_test.go:191: (dbg) Done: out/minikube-linux-amd64 start -p NoKubernetes-632916 --driver=docker  --container-runtime=docker: (6.756995319s)
--- PASS: TestNoKubernetes/serial/StartNoArgs (6.76s)

TestNoKubernetes/serial/VerifyK8sNotRunningSecond (0.24s)
=== RUN   TestNoKubernetes/serial/VerifyK8sNotRunningSecond
no_kubernetes_test.go:147: (dbg) Run:  out/minikube-linux-amd64 ssh -p NoKubernetes-632916 "sudo systemctl is-active --quiet service kubelet"
no_kubernetes_test.go:147: (dbg) Non-zero exit: out/minikube-linux-amd64 ssh -p NoKubernetes-632916 "sudo systemctl is-active --quiet service kubelet": exit status 1 (244.816031ms)

** stderr ** 
	ssh: Process exited with status 3

** /stderr **
--- PASS: TestNoKubernetes/serial/VerifyK8sNotRunningSecond (0.24s)

TestPause/serial/SecondStartNoReconfiguration (31.25s)
=== RUN   TestPause/serial/SecondStartNoReconfiguration
pause_test.go:92: (dbg) Run:  out/minikube-linux-amd64 start -p pause-570271 --alsologtostderr -v=1 --driver=docker  --container-runtime=docker
pause_test.go:92: (dbg) Done: out/minikube-linux-amd64 start -p pause-570271 --alsologtostderr -v=1 --driver=docker  --container-runtime=docker: (31.235936245s)
--- PASS: TestPause/serial/SecondStartNoReconfiguration (31.25s)

TestPause/serial/Pause (0.65s)
=== RUN   TestPause/serial/Pause
pause_test.go:110: (dbg) Run:  out/minikube-linux-amd64 pause -p pause-570271 --alsologtostderr -v=5
--- PASS: TestPause/serial/Pause (0.65s)

TestPause/serial/VerifyStatus (0.33s)
=== RUN   TestPause/serial/VerifyStatus
status_test.go:76: (dbg) Run:  out/minikube-linux-amd64 status -p pause-570271 --output=json --layout=cluster
status_test.go:76: (dbg) Non-zero exit: out/minikube-linux-amd64 status -p pause-570271 --output=json --layout=cluster: exit status 2 (331.213478ms)

-- stdout --
	{"Name":"pause-570271","StatusCode":418,"StatusName":"Paused","Step":"Done","StepDetail":"* Paused 12 containers in: kube-system, kubernetes-dashboard, storage-gluster, istio-operator","BinaryVersion":"v1.34.0","Components":{"kubeconfig":{"Name":"kubeconfig","StatusCode":200,"StatusName":"OK"}},"Nodes":[{"Name":"pause-570271","StatusCode":200,"StatusName":"OK","Components":{"apiserver":{"Name":"apiserver","StatusCode":418,"StatusName":"Paused"},"kubelet":{"Name":"kubelet","StatusCode":405,"StatusName":"Stopped"}}}]}

-- /stdout --
--- PASS: TestPause/serial/VerifyStatus (0.33s)
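Note the HTTP-flavored status codes in the JSON above: 418 Paused, 405 Stopped, 200 OK, with a non-zero process exit (2) whenever the cluster is not fully running. A sketch of the round trip (the exit-0-after-unpause step is an assumption, not shown in the log):

	minikube pause -p pause-570271
	minikube status -p pause-570271 --output=json --layout=cluster   # StatusCode 418, exit status 2
	minikube unpause -p pause-570271
	minikube status -p pause-570271 --output=json --layout=cluster   # exit 0 once everything is Running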

TestPause/serial/Unpause (0.46s)
=== RUN   TestPause/serial/Unpause
pause_test.go:121: (dbg) Run:  out/minikube-linux-amd64 unpause -p pause-570271 --alsologtostderr -v=5
--- PASS: TestPause/serial/Unpause (0.46s)

TestPause/serial/PauseAgain (0.59s)
=== RUN   TestPause/serial/PauseAgain
pause_test.go:110: (dbg) Run:  out/minikube-linux-amd64 pause -p pause-570271 --alsologtostderr -v=5
--- PASS: TestPause/serial/PauseAgain (0.59s)

TestPause/serial/DeletePaused (2.2s)
=== RUN   TestPause/serial/DeletePaused
pause_test.go:132: (dbg) Run:  out/minikube-linux-amd64 delete -p pause-570271 --alsologtostderr -v=5
pause_test.go:132: (dbg) Done: out/minikube-linux-amd64 delete -p pause-570271 --alsologtostderr -v=5: (2.202846238s)
--- PASS: TestPause/serial/DeletePaused (2.20s)

TestPause/serial/VerifyDeletedResources (16.35s)
=== RUN   TestPause/serial/VerifyDeletedResources
pause_test.go:142: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
pause_test.go:142: (dbg) Done: out/minikube-linux-amd64 profile list --output json: (16.287922735s)
pause_test.go:168: (dbg) Run:  docker ps -a
pause_test.go:173: (dbg) Run:  docker volume inspect pause-570271
pause_test.go:173: (dbg) Non-zero exit: docker volume inspect pause-570271: exit status 1 (18.276575ms)

-- stdout --
	[]

-- /stdout --
** stderr ** 
	Error response from daemon: get pause-570271: no such volume

** /stderr **
pause_test.go:178: (dbg) Run:  docker network ls
--- PASS: TestPause/serial/VerifyDeletedResources (16.35s)
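Equivalent manual checks that `delete` removed the Docker-side resources; the volume lookup failing with exit status 1 is the expected outcome:

	docker ps -a | grep pause-570271        # no container left
	docker volume inspect pause-570271      # Error ... no such volume (exit 1)
	docker network ls | grep pause-570271   # no network left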

TestStartStop/group/old-k8s-version/serial/FirstStart (130.4s)
=== RUN   TestStartStop/group/old-k8s-version/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-linux-amd64 start -p old-k8s-version-118847 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=docker --kubernetes-version=v1.20.0
start_stop_delete_test.go:186: (dbg) Done: out/minikube-linux-amd64 start -p old-k8s-version-118847 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=docker --kubernetes-version=v1.20.0: (2m10.403667805s)
--- PASS: TestStartStop/group/old-k8s-version/serial/FirstStart (130.40s)

TestStartStop/group/default-k8s-diff-port/serial/FirstStart (40.84s)
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-linux-amd64 start -p default-k8s-diff-port-818301 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=docker --kubernetes-version=v1.31.1
E0927 01:01:33.024337  540034 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19711-533157/.minikube/profiles/skaffold-178603/client.crt: no such file or directory" logger="UnhandledError"
E0927 01:01:33.030746  540034 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19711-533157/.minikube/profiles/skaffold-178603/client.crt: no such file or directory" logger="UnhandledError"
E0927 01:01:33.042146  540034 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19711-533157/.minikube/profiles/skaffold-178603/client.crt: no such file or directory" logger="UnhandledError"
E0927 01:01:33.063676  540034 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19711-533157/.minikube/profiles/skaffold-178603/client.crt: no such file or directory" logger="UnhandledError"
E0927 01:01:33.105506  540034 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19711-533157/.minikube/profiles/skaffold-178603/client.crt: no such file or directory" logger="UnhandledError"
E0927 01:01:33.186841  540034 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19711-533157/.minikube/profiles/skaffold-178603/client.crt: no such file or directory" logger="UnhandledError"
E0927 01:01:33.348379  540034 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19711-533157/.minikube/profiles/skaffold-178603/client.crt: no such file or directory" logger="UnhandledError"
E0927 01:01:33.670590  540034 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19711-533157/.minikube/profiles/skaffold-178603/client.crt: no such file or directory" logger="UnhandledError"
E0927 01:01:34.311874  540034 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19711-533157/.minikube/profiles/skaffold-178603/client.crt: no such file or directory" logger="UnhandledError"
E0927 01:01:35.593658  540034 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19711-533157/.minikube/profiles/skaffold-178603/client.crt: no such file or directory" logger="UnhandledError"
E0927 01:01:38.155338  540034 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19711-533157/.minikube/profiles/skaffold-178603/client.crt: no such file or directory" logger="UnhandledError"
E0927 01:01:43.276856  540034 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19711-533157/.minikube/profiles/skaffold-178603/client.crt: no such file or directory" logger="UnhandledError"
E0927 01:01:53.518822  540034 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19711-533157/.minikube/profiles/skaffold-178603/client.crt: no such file or directory" logger="UnhandledError"
start_stop_delete_test.go:186: (dbg) Done: out/minikube-linux-amd64 start -p default-k8s-diff-port-818301 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=docker --kubernetes-version=v1.31.1: (40.837439727s)
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/FirstStart (40.84s)

TestStartStop/group/default-k8s-diff-port/serial/DeployApp (7.25s)
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context default-k8s-diff-port-818301 create -f testdata/busybox.yaml
start_stop_delete_test.go:196: (dbg) TestStartStop/group/default-k8s-diff-port/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:344: "busybox" [b4992819-6181-41b1-81e1-4d409e397d77] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:344: "busybox" [b4992819-6181-41b1-81e1-4d409e397d77] Running
start_stop_delete_test.go:196: (dbg) TestStartStop/group/default-k8s-diff-port/serial/DeployApp: integration-test=busybox healthy within 7.003615666s
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context default-k8s-diff-port-818301 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/DeployApp (7.25s)
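The deploy-and-probe step can be approximated with kubectl alone; a sketch using `kubectl wait` in place of the harness's pod polling:

	kubectl --context default-k8s-diff-port-818301 create -f testdata/busybox.yaml
	kubectl --context default-k8s-diff-port-818301 wait pod -l integration-test=busybox \
	    --for=condition=Ready --timeout=8m0s
	kubectl --context default-k8s-diff-port-818301 exec busybox -- /bin/sh -c "ulimit -n"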

TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive (0.82s)
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p default-k8s-diff-port-818301 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context default-k8s-diff-port-818301 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive (0.82s)

TestStartStop/group/default-k8s-diff-port/serial/Stop (10.83s)
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-linux-amd64 stop -p default-k8s-diff-port-818301 --alsologtostderr -v=3
E0927 01:02:14.000978  540034 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19711-533157/.minikube/profiles/skaffold-178603/client.crt: no such file or directory" logger="UnhandledError"
start_stop_delete_test.go:228: (dbg) Done: out/minikube-linux-amd64 stop -p default-k8s-diff-port-818301 --alsologtostderr -v=3: (10.826283663s)
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/Stop (10.83s)

TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop (0.25s)
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-818301 -n default-k8s-diff-port-818301
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-818301 -n default-k8s-diff-port-818301: exit status 7 (131.768737ms)

-- stdout --
	Stopped

                                                
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p default-k8s-diff-port-818301 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop (0.25s)

TestStartStop/group/default-k8s-diff-port/serial/SecondStart (263.82s)
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-linux-amd64 start -p default-k8s-diff-port-818301 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=docker --kubernetes-version=v1.31.1
start_stop_delete_test.go:256: (dbg) Done: out/minikube-linux-amd64 start -p default-k8s-diff-port-818301 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=docker --kubernetes-version=v1.31.1: (4m23.489959744s)
start_stop_delete_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-818301 -n default-k8s-diff-port-818301
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/SecondStart (263.82s)

TestStartStop/group/embed-certs/serial/FirstStart (71.95s)
=== RUN   TestStartStop/group/embed-certs/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-linux-amd64 start -p embed-certs-464413 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=docker --kubernetes-version=v1.31.1
E0927 01:02:54.963049  540034 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19711-533157/.minikube/profiles/skaffold-178603/client.crt: no such file or directory" logger="UnhandledError"
E0927 01:03:11.331846  540034 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19711-533157/.minikube/profiles/addons-305811/client.crt: no such file or directory" logger="UnhandledError"
start_stop_delete_test.go:186: (dbg) Done: out/minikube-linux-amd64 start -p embed-certs-464413 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=docker --kubernetes-version=v1.31.1: (1m11.947172982s)
--- PASS: TestStartStop/group/embed-certs/serial/FirstStart (71.95s)

TestStartStop/group/old-k8s-version/serial/DeployApp (9.39s)
=== RUN   TestStartStop/group/old-k8s-version/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context old-k8s-version-118847 create -f testdata/busybox.yaml
start_stop_delete_test.go:196: (dbg) TestStartStop/group/old-k8s-version/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:344: "busybox" [35ea9956-8cfe-4419-8ef3-4534cf112e28] Pending
helpers_test.go:344: "busybox" [35ea9956-8cfe-4419-8ef3-4534cf112e28] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:344: "busybox" [35ea9956-8cfe-4419-8ef3-4534cf112e28] Running
start_stop_delete_test.go:196: (dbg) TestStartStop/group/old-k8s-version/serial/DeployApp: integration-test=busybox healthy within 9.003744223s
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context old-k8s-version-118847 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/old-k8s-version/serial/DeployApp (9.39s)

TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive (0.78s)
=== RUN   TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p old-k8s-version-118847 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context old-k8s-version-118847 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive (0.78s)

TestStartStop/group/old-k8s-version/serial/Stop (10.77s)
=== RUN   TestStartStop/group/old-k8s-version/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-linux-amd64 stop -p old-k8s-version-118847 --alsologtostderr -v=3
start_stop_delete_test.go:228: (dbg) Done: out/minikube-linux-amd64 stop -p old-k8s-version-118847 --alsologtostderr -v=3: (10.765155357s)
--- PASS: TestStartStop/group/old-k8s-version/serial/Stop (10.77s)

TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop (0.17s)
=== RUN   TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-118847 -n old-k8s-version-118847
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-118847 -n old-k8s-version-118847: exit status 7 (66.530925ms)

-- stdout --
	Stopped
-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p old-k8s-version-118847 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop (0.17s)

TestStartStop/group/old-k8s-version/serial/SecondStart (141.27s)
=== RUN   TestStartStop/group/old-k8s-version/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-linux-amd64 start -p old-k8s-version-118847 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=docker --kubernetes-version=v1.20.0
start_stop_delete_test.go:256: (dbg) Done: out/minikube-linux-amd64 start -p old-k8s-version-118847 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=docker --kubernetes-version=v1.20.0: (2m20.959338402s)
start_stop_delete_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-118847 -n old-k8s-version-118847
--- PASS: TestStartStop/group/old-k8s-version/serial/SecondStart (141.27s)

TestStartStop/group/embed-certs/serial/DeployApp (8.25s)
=== RUN   TestStartStop/group/embed-certs/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context embed-certs-464413 create -f testdata/busybox.yaml
start_stop_delete_test.go:196: (dbg) TestStartStop/group/embed-certs/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:344: "busybox" [c4909425-838a-42ab-8c08-4ae615cff1e7] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:344: "busybox" [c4909425-838a-42ab-8c08-4ae615cff1e7] Running
start_stop_delete_test.go:196: (dbg) TestStartStop/group/embed-certs/serial/DeployApp: integration-test=busybox healthy within 8.00388995s
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context embed-certs-464413 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/embed-certs/serial/DeployApp (8.25s)

TestStartStop/group/embed-certs/serial/EnableAddonWhileActive (0.76s)
=== RUN   TestStartStop/group/embed-certs/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p embed-certs-464413 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context embed-certs-464413 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/embed-certs/serial/EnableAddonWhileActive (0.76s)

TestStartStop/group/embed-certs/serial/Stop (10.74s)
=== RUN   TestStartStop/group/embed-certs/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-linux-amd64 stop -p embed-certs-464413 --alsologtostderr -v=3
start_stop_delete_test.go:228: (dbg) Done: out/minikube-linux-amd64 stop -p embed-certs-464413 --alsologtostderr -v=3: (10.742309014s)
--- PASS: TestStartStop/group/embed-certs/serial/Stop (10.74s)

TestStartStop/group/embed-certs/serial/EnableAddonAfterStop (0.18s)
=== RUN   TestStartStop/group/embed-certs/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-464413 -n embed-certs-464413
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-464413 -n embed-certs-464413: exit status 7 (71.88638ms)

-- stdout --
	Stopped
-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p embed-certs-464413 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
E0927 01:04:16.885217  540034 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19711-533157/.minikube/profiles/skaffold-178603/client.crt: no such file or directory" logger="UnhandledError"
--- PASS: TestStartStop/group/embed-certs/serial/EnableAddonAfterStop (0.18s)

TestStartStop/group/embed-certs/serial/SecondStart (263.83s)
=== RUN   TestStartStop/group/embed-certs/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-linux-amd64 start -p embed-certs-464413 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=docker --kubernetes-version=v1.31.1
start_stop_delete_test.go:256: (dbg) Done: out/minikube-linux-amd64 start -p embed-certs-464413 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=docker --kubernetes-version=v1.31.1: (4m23.474650509s)
start_stop_delete_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-464413 -n embed-certs-464413
--- PASS: TestStartStop/group/embed-certs/serial/SecondStart (263.83s)

TestStartStop/group/no-preload/serial/FirstStart (76.51s)
=== RUN   TestStartStop/group/no-preload/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-linux-amd64 start -p no-preload-969732 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=docker --kubernetes-version=v1.31.1
E0927 01:05:45.853431  540034 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19711-533157/.minikube/profiles/functional-860089/client.crt: no such file or directory" logger="UnhandledError"
start_stop_delete_test.go:186: (dbg) Done: out/minikube-linux-amd64 start -p no-preload-969732 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=docker --kubernetes-version=v1.31.1: (1m16.513729375s)
--- PASS: TestStartStop/group/no-preload/serial/FirstStart (76.51s)

TestStartStop/group/no-preload/serial/DeployApp (8.24s)
=== RUN   TestStartStop/group/no-preload/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context no-preload-969732 create -f testdata/busybox.yaml
start_stop_delete_test.go:196: (dbg) TestStartStop/group/no-preload/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:344: "busybox" [45c5808b-f8f5-4f46-9c6a-09d9c7bfdf07] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:344: "busybox" [45c5808b-f8f5-4f46-9c6a-09d9c7bfdf07] Running
start_stop_delete_test.go:196: (dbg) TestStartStop/group/no-preload/serial/DeployApp: integration-test=busybox healthy within 8.004337863s
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context no-preload-969732 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/no-preload/serial/DeployApp (8.24s)

TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop (6.01s)
=== RUN   TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop
start_stop_delete_test.go:274: (dbg) TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-cd95d586-7rshp" [c453d3df-730c-42d6-86ff-160bbc27f1d8] Running
start_stop_delete_test.go:274: (dbg) TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 6.004440016s
--- PASS: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop (6.01s)

TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop (5.07s)
=== RUN   TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop
start_stop_delete_test.go:287: (dbg) TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-cd95d586-7rshp" [c453d3df-730c-42d6-86ff-160bbc27f1d8] Running
start_stop_delete_test.go:287: (dbg) TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.003101746s
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context old-k8s-version-118847 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop (5.07s)

TestStartStop/group/no-preload/serial/EnableAddonWhileActive (0.89s)
=== RUN   TestStartStop/group/no-preload/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p no-preload-969732 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context no-preload-969732 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/no-preload/serial/EnableAddonWhileActive (0.89s)

TestStartStop/group/no-preload/serial/Stop (11.59s)
=== RUN   TestStartStop/group/no-preload/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-linux-amd64 stop -p no-preload-969732 --alsologtostderr -v=3
start_stop_delete_test.go:228: (dbg) Done: out/minikube-linux-amd64 stop -p no-preload-969732 --alsologtostderr -v=3: (11.59217131s)
--- PASS: TestStartStop/group/no-preload/serial/Stop (11.59s)

TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages (0.19s)
=== RUN   TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-linux-amd64 -p old-k8s-version-118847 image list --format=json
start_stop_delete_test.go:304: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
--- PASS: TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages (0.19s)

TestStartStop/group/old-k8s-version/serial/Pause (2.29s)
=== RUN   TestStartStop/group/old-k8s-version/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 pause -p old-k8s-version-118847 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-118847 -n old-k8s-version-118847
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-118847 -n old-k8s-version-118847: exit status 2 (284.547613ms)

-- stdout --
	Paused
-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p old-k8s-version-118847 -n old-k8s-version-118847
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Kubelet}} -p old-k8s-version-118847 -n old-k8s-version-118847: exit status 2 (287.43869ms)

-- stdout --
	Stopped
-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 unpause -p old-k8s-version-118847 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-118847 -n old-k8s-version-118847
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p old-k8s-version-118847 -n old-k8s-version-118847
--- PASS: TestStartStop/group/old-k8s-version/serial/Pause (2.29s)

TestStartStop/group/newest-cni/serial/FirstStart (27.58s)
=== RUN   TestStartStop/group/newest-cni/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-linux-amd64 start -p newest-cni-202653 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=docker --kubernetes-version=v1.31.1
start_stop_delete_test.go:186: (dbg) Done: out/minikube-linux-amd64 start -p newest-cni-202653 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=docker --kubernetes-version=v1.31.1: (27.576748885s)
--- PASS: TestStartStop/group/newest-cni/serial/FirstStart (27.58s)

TestStartStop/group/no-preload/serial/EnableAddonAfterStop (0.19s)
=== RUN   TestStartStop/group/no-preload/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-969732 -n no-preload-969732
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-969732 -n no-preload-969732: exit status 7 (79.269531ms)

-- stdout --
	Stopped
-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p no-preload-969732 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/no-preload/serial/EnableAddonAfterStop (0.19s)

TestStartStop/group/no-preload/serial/SecondStart (264.3s)
=== RUN   TestStartStop/group/no-preload/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-linux-amd64 start -p no-preload-969732 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=docker --kubernetes-version=v1.31.1
E0927 01:06:33.024173  540034 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19711-533157/.minikube/profiles/skaffold-178603/client.crt: no such file or directory" logger="UnhandledError"
start_stop_delete_test.go:256: (dbg) Done: out/minikube-linux-amd64 start -p no-preload-969732 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=docker --kubernetes-version=v1.31.1: (4m23.909793089s)
start_stop_delete_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-969732 -n no-preload-969732
--- PASS: TestStartStop/group/no-preload/serial/SecondStart (264.30s)

TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop (6.01s)
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop
start_stop_delete_test.go:274: (dbg) TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-695b96c756-5w69m" [a89e0548-649a-4abc-8039-40aa0d787bf3] Running
start_stop_delete_test.go:274: (dbg) TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 6.003424765s
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop (6.01s)

TestStartStop/group/newest-cni/serial/DeployApp (0s)
=== RUN   TestStartStop/group/newest-cni/serial/DeployApp
--- PASS: TestStartStop/group/newest-cni/serial/DeployApp (0.00s)

TestStartStop/group/newest-cni/serial/EnableAddonWhileActive (0.88s)
=== RUN   TestStartStop/group/newest-cni/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p newest-cni-202653 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:211: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/EnableAddonWhileActive (0.88s)

TestStartStop/group/newest-cni/serial/Stop (11.02s)
=== RUN   TestStartStop/group/newest-cni/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-linux-amd64 stop -p newest-cni-202653 --alsologtostderr -v=3
start_stop_delete_test.go:228: (dbg) Done: out/minikube-linux-amd64 stop -p newest-cni-202653 --alsologtostderr -v=3: (11.023167572s)
--- PASS: TestStartStop/group/newest-cni/serial/Stop (11.02s)

TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop (5.08s)
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop
start_stop_delete_test.go:287: (dbg) TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-695b96c756-5w69m" [a89e0548-649a-4abc-8039-40aa0d787bf3] Running
start_stop_delete_test.go:287: (dbg) TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.00387126s
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context default-k8s-diff-port-818301 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop (5.08s)

TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages (0.21s)
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-linux-amd64 -p default-k8s-diff-port-818301 image list --format=json
start_stop_delete_test.go:304: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages (0.21s)

TestStartStop/group/default-k8s-diff-port/serial/Pause (2.55s)
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 pause -p default-k8s-diff-port-818301 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p default-k8s-diff-port-818301 -n default-k8s-diff-port-818301
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p default-k8s-diff-port-818301 -n default-k8s-diff-port-818301: exit status 2 (293.027907ms)

-- stdout --
	Paused
-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p default-k8s-diff-port-818301 -n default-k8s-diff-port-818301
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Kubelet}} -p default-k8s-diff-port-818301 -n default-k8s-diff-port-818301: exit status 2 (409.846476ms)

-- stdout --
	Stopped
-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 unpause -p default-k8s-diff-port-818301 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p default-k8s-diff-port-818301 -n default-k8s-diff-port-818301
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p default-k8s-diff-port-818301 -n default-k8s-diff-port-818301
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/Pause (2.55s)

TestStartStop/group/newest-cni/serial/EnableAddonAfterStop (0.18s)
=== RUN   TestStartStop/group/newest-cni/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p newest-cni-202653 -n newest-cni-202653
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p newest-cni-202653 -n newest-cni-202653: exit status 7 (75.579486ms)

-- stdout --
	Stopped
-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p newest-cni-202653 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/newest-cni/serial/EnableAddonAfterStop (0.18s)

TestStartStop/group/newest-cni/serial/SecondStart (16.67s)
=== RUN   TestStartStop/group/newest-cni/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-linux-amd64 start -p newest-cni-202653 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=docker --kubernetes-version=v1.31.1
start_stop_delete_test.go:256: (dbg) Done: out/minikube-linux-amd64 start -p newest-cni-202653 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=docker --kubernetes-version=v1.31.1: (16.328161297s)
start_stop_delete_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p newest-cni-202653 -n newest-cni-202653
--- PASS: TestStartStop/group/newest-cni/serial/SecondStart (16.67s)

TestNetworkPlugins/group/auto/Start (37.64s)
=== RUN   TestNetworkPlugins/group/auto/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p auto-248977 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --driver=docker  --container-runtime=docker
E0927 01:06:59.539302  540034 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19711-533157/.minikube/profiles/default-k8s-diff-port-818301/client.crt: no such file or directory" logger="UnhandledError"
E0927 01:06:59.545732  540034 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19711-533157/.minikube/profiles/default-k8s-diff-port-818301/client.crt: no such file or directory" logger="UnhandledError"
E0927 01:06:59.557261  540034 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19711-533157/.minikube/profiles/default-k8s-diff-port-818301/client.crt: no such file or directory" logger="UnhandledError"
E0927 01:06:59.579453  540034 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19711-533157/.minikube/profiles/default-k8s-diff-port-818301/client.crt: no such file or directory" logger="UnhandledError"
E0927 01:06:59.621516  540034 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19711-533157/.minikube/profiles/default-k8s-diff-port-818301/client.crt: no such file or directory" logger="UnhandledError"
E0927 01:06:59.703596  540034 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19711-533157/.minikube/profiles/default-k8s-diff-port-818301/client.crt: no such file or directory" logger="UnhandledError"
E0927 01:06:59.865246  540034 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19711-533157/.minikube/profiles/default-k8s-diff-port-818301/client.crt: no such file or directory" logger="UnhandledError"
E0927 01:07:00.186933  540034 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19711-533157/.minikube/profiles/default-k8s-diff-port-818301/client.crt: no such file or directory" logger="UnhandledError"
E0927 01:07:00.726753  540034 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19711-533157/.minikube/profiles/skaffold-178603/client.crt: no such file or directory" logger="UnhandledError"
E0927 01:07:00.829301  540034 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19711-533157/.minikube/profiles/default-k8s-diff-port-818301/client.crt: no such file or directory" logger="UnhandledError"
E0927 01:07:02.111173  540034 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19711-533157/.minikube/profiles/default-k8s-diff-port-818301/client.crt: no such file or directory" logger="UnhandledError"
E0927 01:07:04.672582  540034 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19711-533157/.minikube/profiles/default-k8s-diff-port-818301/client.crt: no such file or directory" logger="UnhandledError"
E0927 01:07:09.794143  540034 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19711-533157/.minikube/profiles/default-k8s-diff-port-818301/client.crt: no such file or directory" logger="UnhandledError"
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p auto-248977 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --driver=docker  --container-runtime=docker: (37.639025151s)
--- PASS: TestNetworkPlugins/group/auto/Start (37.64s)

TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop (0s)
=== RUN   TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop
start_stop_delete_test.go:273: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop (0.00s)

TestStartStop/group/newest-cni/serial/AddonExistsAfterStop (0s)
=== RUN   TestStartStop/group/newest-cni/serial/AddonExistsAfterStop
start_stop_delete_test.go:284: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/AddonExistsAfterStop (0.00s)

TestStartStop/group/newest-cni/serial/VerifyKubernetesImages (0.21s)
=== RUN   TestStartStop/group/newest-cni/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-linux-amd64 -p newest-cni-202653 image list --format=json
--- PASS: TestStartStop/group/newest-cni/serial/VerifyKubernetesImages (0.21s)

TestStartStop/group/newest-cni/serial/Pause (2.59s)
=== RUN   TestStartStop/group/newest-cni/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 pause -p newest-cni-202653 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p newest-cni-202653 -n newest-cni-202653
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p newest-cni-202653 -n newest-cni-202653: exit status 2 (304.391051ms)

-- stdout --
	Paused
-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p newest-cni-202653 -n newest-cni-202653
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Kubelet}} -p newest-cni-202653 -n newest-cni-202653: exit status 2 (283.51285ms)

-- stdout --
	Stopped
-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 unpause -p newest-cni-202653 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p newest-cni-202653 -n newest-cni-202653
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p newest-cni-202653 -n newest-cni-202653
--- PASS: TestStartStop/group/newest-cni/serial/Pause (2.59s)

TestNetworkPlugins/group/calico/Start (56.59s)
=== RUN   TestNetworkPlugins/group/calico/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p calico-248977 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=calico --driver=docker  --container-runtime=docker
E0927 01:07:20.036475  540034 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19711-533157/.minikube/profiles/default-k8s-diff-port-818301/client.crt: no such file or directory" logger="UnhandledError"
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p calico-248977 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=calico --driver=docker  --container-runtime=docker: (56.588362295s)
--- PASS: TestNetworkPlugins/group/calico/Start (56.59s)

TestNetworkPlugins/group/auto/KubeletFlags (0.26s)
=== RUN   TestNetworkPlugins/group/auto/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p auto-248977 "pgrep -a kubelet"
I0927 01:07:36.665056  540034 config.go:182] Loaded profile config "auto-248977": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.31.1
--- PASS: TestNetworkPlugins/group/auto/KubeletFlags (0.26s)

TestNetworkPlugins/group/auto/NetCatPod (10.22s)
=== RUN   TestNetworkPlugins/group/auto/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context auto-248977 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/auto/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-6fc964789b-58w5p" [31ca6982-18f4-4bc3-8317-0831932b1a22] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
E0927 01:07:40.518006  540034 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19711-533157/.minikube/profiles/default-k8s-diff-port-818301/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:344: "netcat-6fc964789b-58w5p" [31ca6982-18f4-4bc3-8317-0831932b1a22] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/auto/NetCatPod: app=netcat healthy within 10.004155224s
--- PASS: TestNetworkPlugins/group/auto/NetCatPod (10.22s)

TestNetworkPlugins/group/auto/DNS (0.13s)
=== RUN   TestNetworkPlugins/group/auto/DNS
net_test.go:175: (dbg) Run:  kubectl --context auto-248977 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/auto/DNS (0.13s)

TestNetworkPlugins/group/auto/Localhost (0.11s)
=== RUN   TestNetworkPlugins/group/auto/Localhost
net_test.go:194: (dbg) Run:  kubectl --context auto-248977 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/auto/Localhost (0.11s)

TestNetworkPlugins/group/auto/HairPin (0.14s)
=== RUN   TestNetworkPlugins/group/auto/HairPin
net_test.go:264: (dbg) Run:  kubectl --context auto-248977 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/auto/HairPin (0.14s)

TestNetworkPlugins/group/custom-flannel/Start (43.66s)
=== RUN   TestNetworkPlugins/group/custom-flannel/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p custom-flannel-248977 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=testdata/kube-flannel.yaml --driver=docker  --container-runtime=docker
E0927 01:08:11.331576  540034 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19711-533157/.minikube/profiles/addons-305811/client.crt: no such file or directory" logger="UnhandledError"
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p custom-flannel-248977 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=testdata/kube-flannel.yaml --driver=docker  --container-runtime=docker: (43.658277496s)
--- PASS: TestNetworkPlugins/group/custom-flannel/Start (43.66s)

TestNetworkPlugins/group/calico/ControllerPod (6.01s)
=== RUN   TestNetworkPlugins/group/calico/ControllerPod
net_test.go:120: (dbg) TestNetworkPlugins/group/calico/ControllerPod: waiting 10m0s for pods matching "k8s-app=calico-node" in namespace "kube-system" ...
helpers_test.go:344: "calico-node-49lfd" [1d079779-deaf-422f-9ca0-208b1e33aa0d] Running
E0927 01:08:17.023569  540034 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19711-533157/.minikube/profiles/old-k8s-version-118847/client.crt: no such file or directory" logger="UnhandledError"
E0927 01:08:17.029963  540034 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19711-533157/.minikube/profiles/old-k8s-version-118847/client.crt: no such file or directory" logger="UnhandledError"
E0927 01:08:17.041308  540034 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19711-533157/.minikube/profiles/old-k8s-version-118847/client.crt: no such file or directory" logger="UnhandledError"
E0927 01:08:17.062726  540034 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19711-533157/.minikube/profiles/old-k8s-version-118847/client.crt: no such file or directory" logger="UnhandledError"
E0927 01:08:17.104277  540034 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19711-533157/.minikube/profiles/old-k8s-version-118847/client.crt: no such file or directory" logger="UnhandledError"
E0927 01:08:17.186388  540034 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19711-533157/.minikube/profiles/old-k8s-version-118847/client.crt: no such file or directory" logger="UnhandledError"
E0927 01:08:17.348289  540034 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19711-533157/.minikube/profiles/old-k8s-version-118847/client.crt: no such file or directory" logger="UnhandledError"
E0927 01:08:17.670605  540034 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19711-533157/.minikube/profiles/old-k8s-version-118847/client.crt: no such file or directory" logger="UnhandledError"
E0927 01:08:18.311963  540034 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19711-533157/.minikube/profiles/old-k8s-version-118847/client.crt: no such file or directory" logger="UnhandledError"
net_test.go:120: (dbg) TestNetworkPlugins/group/calico/ControllerPod: k8s-app=calico-node healthy within 6.003994763s
--- PASS: TestNetworkPlugins/group/calico/ControllerPod (6.01s)

TestNetworkPlugins/group/calico/KubeletFlags (0.26s)
=== RUN   TestNetworkPlugins/group/calico/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p calico-248977 "pgrep -a kubelet"
E0927 01:08:19.593400  540034 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19711-533157/.minikube/profiles/old-k8s-version-118847/client.crt: no such file or directory" logger="UnhandledError"
I0927 01:08:19.612850  540034 config.go:182] Loaded profile config "calico-248977": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.31.1
--- PASS: TestNetworkPlugins/group/calico/KubeletFlags (0.26s)

TestNetworkPlugins/group/calico/NetCatPod (9.18s)
=== RUN   TestNetworkPlugins/group/calico/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context calico-248977 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/calico/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-6fc964789b-g69bk" [a5f539cc-7a00-4a0a-9770-841084350daa] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
E0927 01:08:21.480000  540034 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19711-533157/.minikube/profiles/default-k8s-diff-port-818301/client.crt: no such file or directory" logger="UnhandledError"
E0927 01:08:22.155234  540034 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19711-533157/.minikube/profiles/old-k8s-version-118847/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:344: "netcat-6fc964789b-g69bk" [a5f539cc-7a00-4a0a-9770-841084350daa] Running
E0927 01:08:27.277071  540034 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19711-533157/.minikube/profiles/old-k8s-version-118847/client.crt: no such file or directory" logger="UnhandledError"
net_test.go:163: (dbg) TestNetworkPlugins/group/calico/NetCatPod: app=netcat healthy within 9.004020921s
--- PASS: TestNetworkPlugins/group/calico/NetCatPod (9.18s)

TestNetworkPlugins/group/calico/DNS (0.18s)
=== RUN   TestNetworkPlugins/group/calico/DNS
net_test.go:175: (dbg) Run:  kubectl --context calico-248977 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/calico/DNS (0.18s)

TestNetworkPlugins/group/calico/Localhost (0.13s)
=== RUN   TestNetworkPlugins/group/calico/Localhost
net_test.go:194: (dbg) Run:  kubectl --context calico-248977 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/calico/Localhost (0.13s)

TestNetworkPlugins/group/calico/HairPin (0.11s)
=== RUN   TestNetworkPlugins/group/calico/HairPin
net_test.go:264: (dbg) Run:  kubectl --context calico-248977 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/calico/HairPin (0.11s)

TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop (6.01s)
=== RUN   TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop
start_stop_delete_test.go:274: (dbg) TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-695b96c756-qsxqc" [6b1ee5a8-0b34-4aed-954a-225a1ec2d473] Running
start_stop_delete_test.go:274: (dbg) TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 6.004641181s
--- PASS: TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop (6.01s)

TestStartStop/group/embed-certs/serial/AddonExistsAfterStop (5.09s)
=== RUN   TestStartStop/group/embed-certs/serial/AddonExistsAfterStop
start_stop_delete_test.go:287: (dbg) TestStartStop/group/embed-certs/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-695b96c756-qsxqc" [6b1ee5a8-0b34-4aed-954a-225a1ec2d473] Running
E0927 01:08:48.924666  540034 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19711-533157/.minikube/profiles/functional-860089/client.crt: no such file or directory" logger="UnhandledError"
start_stop_delete_test.go:287: (dbg) TestStartStop/group/embed-certs/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.003867381s
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context embed-certs-464413 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/embed-certs/serial/AddonExistsAfterStop (5.09s)

TestNetworkPlugins/group/false/Start (42.04s)
=== RUN   TestNetworkPlugins/group/false/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p false-248977 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=false --driver=docker  --container-runtime=docker
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p false-248977 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=false --driver=docker  --container-runtime=docker: (42.043191849s)
--- PASS: TestNetworkPlugins/group/false/Start (42.04s)

TestNetworkPlugins/group/custom-flannel/KubeletFlags (0.27s)
=== RUN   TestNetworkPlugins/group/custom-flannel/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p custom-flannel-248977 "pgrep -a kubelet"
I0927 01:08:50.693668  540034 config.go:182] Loaded profile config "custom-flannel-248977": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.31.1
--- PASS: TestNetworkPlugins/group/custom-flannel/KubeletFlags (0.27s)

TestNetworkPlugins/group/custom-flannel/NetCatPod (11.23s)
=== RUN   TestNetworkPlugins/group/custom-flannel/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context custom-flannel-248977 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/custom-flannel/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-6fc964789b-kbn44" [b25ceb69-5b1c-4368-b697-edc90d94ba58] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-6fc964789b-kbn44" [b25ceb69-5b1c-4368-b697-edc90d94ba58] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/custom-flannel/NetCatPod: app=netcat healthy within 11.031756178s
--- PASS: TestNetworkPlugins/group/custom-flannel/NetCatPod (11.23s)

TestStartStop/group/embed-certs/serial/VerifyKubernetesImages (0.27s)
=== RUN   TestStartStop/group/embed-certs/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-linux-amd64 -p embed-certs-464413 image list --format=json
start_stop_delete_test.go:304: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
--- PASS: TestStartStop/group/embed-certs/serial/VerifyKubernetesImages (0.27s)

TestStartStop/group/embed-certs/serial/Pause (2.97s)
=== RUN   TestStartStop/group/embed-certs/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 pause -p embed-certs-464413 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p embed-certs-464413 -n embed-certs-464413
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p embed-certs-464413 -n embed-certs-464413: exit status 2 (351.35763ms)

-- stdout --
	Paused
-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p embed-certs-464413 -n embed-certs-464413
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Kubelet}} -p embed-certs-464413 -n embed-certs-464413: exit status 2 (340.92634ms)

-- stdout --
	Stopped
-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 unpause -p embed-certs-464413 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p embed-certs-464413 -n embed-certs-464413
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p embed-certs-464413 -n embed-certs-464413
--- PASS: TestStartStop/group/embed-certs/serial/Pause (2.97s)

TestNetworkPlugins/group/kindnet/Start (61.44s)
=== RUN   TestNetworkPlugins/group/kindnet/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p kindnet-248977 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=kindnet --driver=docker  --container-runtime=docker
E0927 01:08:58.001105  540034 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19711-533157/.minikube/profiles/old-k8s-version-118847/client.crt: no such file or directory" logger="UnhandledError"
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p kindnet-248977 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=kindnet --driver=docker  --container-runtime=docker: (1m1.440774044s)
--- PASS: TestNetworkPlugins/group/kindnet/Start (61.44s)

TestNetworkPlugins/group/custom-flannel/DNS (0.13s)

=== RUN   TestNetworkPlugins/group/custom-flannel/DNS
net_test.go:175: (dbg) Run:  kubectl --context custom-flannel-248977 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/custom-flannel/DNS (0.13s)

TestNetworkPlugins/group/custom-flannel/Localhost (0.14s)

=== RUN   TestNetworkPlugins/group/custom-flannel/Localhost
net_test.go:194: (dbg) Run:  kubectl --context custom-flannel-248977 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/custom-flannel/Localhost (0.14s)

TestNetworkPlugins/group/custom-flannel/HairPin (0.13s)

=== RUN   TestNetworkPlugins/group/custom-flannel/HairPin
net_test.go:264: (dbg) Run:  kubectl --context custom-flannel-248977 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/custom-flannel/HairPin (0.13s)

TestNetworkPlugins/group/flannel/Start (46.77s)

=== RUN   TestNetworkPlugins/group/flannel/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p flannel-248977 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=flannel --driver=docker  --container-runtime=docker
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p flannel-248977 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=flannel --driver=docker  --container-runtime=docker: (46.765647533s)
--- PASS: TestNetworkPlugins/group/flannel/Start (46.77s)

TestNetworkPlugins/group/false/KubeletFlags (0.29s)

=== RUN   TestNetworkPlugins/group/false/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p false-248977 "pgrep -a kubelet"
I0927 01:09:31.519862  540034 config.go:182] Loaded profile config "false-248977": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.31.1
--- PASS: TestNetworkPlugins/group/false/KubeletFlags (0.29s)

TestNetworkPlugins/group/false/NetCatPod (9.23s)

=== RUN   TestNetworkPlugins/group/false/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context false-248977 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/false/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-6fc964789b-n4ggv" [03a483cd-8d5d-41b6-9d28-bad36471305b] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-6fc964789b-n4ggv" [03a483cd-8d5d-41b6-9d28-bad36471305b] Running
E0927 01:09:38.962508  540034 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19711-533157/.minikube/profiles/old-k8s-version-118847/client.crt: no such file or directory" logger="UnhandledError"
net_test.go:163: (dbg) TestNetworkPlugins/group/false/NetCatPod: app=netcat healthy within 9.005462288s
--- PASS: TestNetworkPlugins/group/false/NetCatPod (9.23s)
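	Every NetCatPod step follows the same shape: apply testdata/netcat-deployment.yaml, then poll until a pod labelled app=netcat reports phase Running, bounded by the 15m wait shown above. A simplified version of that wait loop, using a jsonpath probe of my own rather than the harness's helper from helpers_test.go:

package main

import (
	"fmt"
	"os/exec"
	"strings"
	"time"
)

func main() {
	deadline := time.Now().Add(15 * time.Minute)
	for time.Now().Before(deadline) {
		// Ask for the phase of every pod matching the label.
		out, _ := exec.Command("kubectl", "--context", "false-248977",
			"get", "pods", "-l", "app=netcat",
			"-o", "jsonpath={.items[*].status.phase}").Output()
		if strings.Contains(string(out), "Running") {
			fmt.Println("app=netcat healthy")
			return
		}
		time.Sleep(2 * time.Second)
	}
	fmt.Println("timed out waiting for app=netcat")
}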

TestNetworkPlugins/group/false/DNS (0.2s)

=== RUN   TestNetworkPlugins/group/false/DNS
net_test.go:175: (dbg) Run:  kubectl --context false-248977 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/false/DNS (0.20s)

TestNetworkPlugins/group/false/Localhost (0.16s)

=== RUN   TestNetworkPlugins/group/false/Localhost
net_test.go:194: (dbg) Run:  kubectl --context false-248977 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/false/Localhost (0.16s)

TestNetworkPlugins/group/false/HairPin (0.17s)

=== RUN   TestNetworkPlugins/group/false/HairPin
net_test.go:264: (dbg) Run:  kubectl --context false-248977 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/false/HairPin (0.17s)
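	The HairPin probe has the netcat pod dial its own Service name ("netcat") on port 8080, so the connection leaves the pod and hairpins back through the service VIP; the nc exit code is the whole verdict. The same check driven from Go, with flags copied from the command in the log:

package main

import (
	"fmt"
	"os/exec"
)

func main() {
	// Run the hairpin probe inside the netcat deployment.
	cmd := exec.Command("kubectl", "--context", "false-248977",
		"exec", "deployment/netcat", "--",
		"/bin/sh", "-c", "nc -w 5 -i 5 -z netcat 8080")
	if err := cmd.Run(); err != nil {
		fmt.Println("hairpin connection failed:", err)
		return
	}
	fmt.Println("hairpin OK")
}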

TestNetworkPlugins/group/kindnet/ControllerPod (6.01s)

=== RUN   TestNetworkPlugins/group/kindnet/ControllerPod
net_test.go:120: (dbg) TestNetworkPlugins/group/kindnet/ControllerPod: waiting 10m0s for pods matching "app=kindnet" in namespace "kube-system" ...
helpers_test.go:344: "kindnet-h4pd2" [922bcab2-eeeb-4569-a5c9-a18fe4b6e504] Running
net_test.go:120: (dbg) TestNetworkPlugins/group/kindnet/ControllerPod: app=kindnet healthy within 6.004914436s
--- PASS: TestNetworkPlugins/group/kindnet/ControllerPod (6.01s)

TestNetworkPlugins/group/enable-default-cni/Start (66.92s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p enable-default-cni-248977 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --enable-default-cni=true --driver=docker  --container-runtime=docker
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p enable-default-cni-248977 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --enable-default-cni=true --driver=docker  --container-runtime=docker: (1m6.918602288s)
--- PASS: TestNetworkPlugins/group/enable-default-cni/Start (66.92s)

TestNetworkPlugins/group/kindnet/KubeletFlags (0.33s)

=== RUN   TestNetworkPlugins/group/kindnet/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p kindnet-248977 "pgrep -a kubelet"
I0927 01:10:05.467742  540034 config.go:182] Loaded profile config "kindnet-248977": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.31.1
--- PASS: TestNetworkPlugins/group/kindnet/KubeletFlags (0.33s)

TestNetworkPlugins/group/kindnet/NetCatPod (9.74s)

=== RUN   TestNetworkPlugins/group/kindnet/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context kindnet-248977 replace --force -f testdata/netcat-deployment.yaml
I0927 01:10:05.795607  540034 kapi.go:136] Waiting for deployment netcat to stabilize, generation 1 observed generation 0 spec.replicas 1 status.replicas 0
I0927 01:10:05.888798  540034 kapi.go:136] Waiting for deployment netcat to stabilize, generation 1 observed generation 1 spec.replicas 1 status.replicas 0
net_test.go:163: (dbg) TestNetworkPlugins/group/kindnet/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-6fc964789b-lvk54" [0ba8591d-b0c3-4a21-bab8-585374f049f2] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-6fc964789b-lvk54" [0ba8591d-b0c3-4a21-bab8-585374f049f2] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/kindnet/NetCatPod: app=netcat healthy within 9.004264975s
--- PASS: TestNetworkPlugins/group/kindnet/NetCatPod (9.74s)

TestNetworkPlugins/group/flannel/ControllerPod (6.01s)

=== RUN   TestNetworkPlugins/group/flannel/ControllerPod
net_test.go:120: (dbg) TestNetworkPlugins/group/flannel/ControllerPod: waiting 10m0s for pods matching "app=flannel" in namespace "kube-flannel" ...
helpers_test.go:344: "kube-flannel-ds-r2rz6" [711f0156-a41d-41ad-92ed-28b888d583ec] Running
net_test.go:120: (dbg) TestNetworkPlugins/group/flannel/ControllerPod: app=flannel healthy within 6.0046737s
--- PASS: TestNetworkPlugins/group/flannel/ControllerPod (6.01s)

TestNetworkPlugins/group/kindnet/DNS (0.18s)

=== RUN   TestNetworkPlugins/group/kindnet/DNS
net_test.go:175: (dbg) Run:  kubectl --context kindnet-248977 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/kindnet/DNS (0.18s)
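	The DNS step passes as soon as nslookup of the short name kubernetes.default succeeds inside the pod, which exercises the pod's resolv.conf search domains plus CoreDNS. Shelled out from Go, the probe is just:

package main

import (
	"fmt"
	"os/exec"
)

func main() {
	// nslookup of the short service name; success means search-domain
	// expansion and cluster DNS both work from inside the pod.
	out, err := exec.Command("kubectl", "--context", "kindnet-248977",
		"exec", "deployment/netcat", "--",
		"nslookup", "kubernetes.default").CombinedOutput()
	fmt.Print(string(out))
	if err != nil {
		fmt.Println("DNS lookup failed:", err)
	}
}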

TestNetworkPlugins/group/kindnet/Localhost (0.15s)

=== RUN   TestNetworkPlugins/group/kindnet/Localhost
net_test.go:194: (dbg) Run:  kubectl --context kindnet-248977 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/kindnet/Localhost (0.15s)

TestNetworkPlugins/group/flannel/KubeletFlags (0.3s)

=== RUN   TestNetworkPlugins/group/flannel/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p flannel-248977 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/flannel/KubeletFlags (0.30s)

TestNetworkPlugins/group/kindnet/HairPin (0.13s)

=== RUN   TestNetworkPlugins/group/kindnet/HairPin
net_test.go:264: (dbg) Run:  kubectl --context kindnet-248977 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/kindnet/HairPin (0.13s)

TestNetworkPlugins/group/flannel/NetCatPod (8.21s)

=== RUN   TestNetworkPlugins/group/flannel/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context flannel-248977 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/flannel/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-6fc964789b-xj458" [d60ba914-86f1-43a7-b238-8ed74e418f69] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-6fc964789b-xj458" [d60ba914-86f1-43a7-b238-8ed74e418f69] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/flannel/NetCatPod: app=netcat healthy within 8.004319251s
--- PASS: TestNetworkPlugins/group/flannel/NetCatPod (8.21s)

TestNetworkPlugins/group/flannel/DNS (0.15s)

=== RUN   TestNetworkPlugins/group/flannel/DNS
net_test.go:175: (dbg) Run:  kubectl --context flannel-248977 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/flannel/DNS (0.15s)

TestNetworkPlugins/group/flannel/Localhost (0.13s)

=== RUN   TestNetworkPlugins/group/flannel/Localhost
net_test.go:194: (dbg) Run:  kubectl --context flannel-248977 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/flannel/Localhost (0.13s)

TestNetworkPlugins/group/flannel/HairPin (0.14s)

=== RUN   TestNetworkPlugins/group/flannel/HairPin
net_test.go:264: (dbg) Run:  kubectl --context flannel-248977 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/flannel/HairPin (0.14s)

TestNetworkPlugins/group/bridge/Start (67.49s)

=== RUN   TestNetworkPlugins/group/bridge/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p bridge-248977 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=bridge --driver=docker  --container-runtime=docker
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p bridge-248977 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=bridge --driver=docker  --container-runtime=docker: (1m7.492269417s)
--- PASS: TestNetworkPlugins/group/bridge/Start (67.49s)

TestStartStop/group/no-preload/serial/UserAppExistsAfterStop (6.01s)

=== RUN   TestStartStop/group/no-preload/serial/UserAppExistsAfterStop
start_stop_delete_test.go:274: (dbg) TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-695b96c756-qrvm8" [cc5606ea-088c-4a2f-8cca-bcfa99cc467d] Running
start_stop_delete_test.go:274: (dbg) TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 6.004486824s
--- PASS: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop (6.01s)
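	UserAppExistsAfterStop confirms the dashboard pod deployed earlier in the serial sequence is still healthy after the stop/start cycle. The harness polls by label; kubectl wait expresses the same condition in one call (context name and 9m bound taken from the log above):

package main

import (
	"os"
	"os/exec"
)

func main() {
	// Block until the dashboard pod is Ready or the timeout expires.
	cmd := exec.Command("kubectl", "--context", "no-preload-969732",
		"wait", "--for=condition=Ready", "pod",
		"-l", "k8s-app=kubernetes-dashboard",
		"-n", "kubernetes-dashboard", "--timeout=9m0s")
	cmd.Stdout, cmd.Stderr = os.Stdout, os.Stderr
	if err := cmd.Run(); err != nil {
		os.Exit(1)
	}
}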

TestNetworkPlugins/group/kubenet/Start (66.42s)

=== RUN   TestNetworkPlugins/group/kubenet/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p kubenet-248977 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --network-plugin=kubenet --driver=docker  --container-runtime=docker
E0927 01:10:45.853529  540034 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19711-533157/.minikube/profiles/functional-860089/client.crt: no such file or directory" logger="UnhandledError"
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p kubenet-248977 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --network-plugin=kubenet --driver=docker  --container-runtime=docker: (1m6.417469018s)
--- PASS: TestNetworkPlugins/group/kubenet/Start (66.42s)

TestStartStop/group/no-preload/serial/AddonExistsAfterStop (5.09s)

=== RUN   TestStartStop/group/no-preload/serial/AddonExistsAfterStop
start_stop_delete_test.go:287: (dbg) TestStartStop/group/no-preload/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-695b96c756-qrvm8" [cc5606ea-088c-4a2f-8cca-bcfa99cc467d] Running
start_stop_delete_test.go:287: (dbg) TestStartStop/group/no-preload/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.00423226s
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context no-preload-969732 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/no-preload/serial/AddonExistsAfterStop (5.09s)

TestStartStop/group/no-preload/serial/VerifyKubernetesImages (0.23s)

=== RUN   TestStartStop/group/no-preload/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-linux-amd64 -p no-preload-969732 image list --format=json
start_stop_delete_test.go:304: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
--- PASS: TestStartStop/group/no-preload/serial/VerifyKubernetesImages (0.23s)

TestStartStop/group/no-preload/serial/Pause (2.77s)

=== RUN   TestStartStop/group/no-preload/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 pause -p no-preload-969732 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p no-preload-969732 -n no-preload-969732
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p no-preload-969732 -n no-preload-969732: exit status 2 (342.39362ms)

-- stdout --
	Paused

-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p no-preload-969732 -n no-preload-969732
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Kubelet}} -p no-preload-969732 -n no-preload-969732: exit status 2 (361.366141ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 unpause -p no-preload-969732 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p no-preload-969732 -n no-preload-969732
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p no-preload-969732 -n no-preload-969732
--- PASS: TestStartStop/group/no-preload/serial/Pause (2.77s)
E0927 01:11:00.883968  540034 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19711-533157/.minikube/profiles/old-k8s-version-118847/client.crt: no such file or directory" logger="UnhandledError"

TestNetworkPlugins/group/enable-default-cni/KubeletFlags (0.26s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p enable-default-cni-248977 "pgrep -a kubelet"
I0927 01:11:08.027794  540034 config.go:182] Loaded profile config "enable-default-cni-248977": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.31.1
--- PASS: TestNetworkPlugins/group/enable-default-cni/KubeletFlags (0.26s)

TestNetworkPlugins/group/enable-default-cni/NetCatPod (10.19s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context enable-default-cni-248977 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/enable-default-cni/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-6fc964789b-vpmzb" [4f3bba1a-b506-4e8a-bf99-34beb11be8ed] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-6fc964789b-vpmzb" [4f3bba1a-b506-4e8a-bf99-34beb11be8ed] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/enable-default-cni/NetCatPod: app=netcat healthy within 10.003539017s
--- PASS: TestNetworkPlugins/group/enable-default-cni/NetCatPod (10.19s)

TestNetworkPlugins/group/enable-default-cni/DNS (0.13s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/DNS
net_test.go:175: (dbg) Run:  kubectl --context enable-default-cni-248977 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/enable-default-cni/DNS (0.13s)

TestNetworkPlugins/group/enable-default-cni/Localhost (0.11s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/Localhost
net_test.go:194: (dbg) Run:  kubectl --context enable-default-cni-248977 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/enable-default-cni/Localhost (0.11s)

TestNetworkPlugins/group/enable-default-cni/HairPin (0.11s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/HairPin
net_test.go:264: (dbg) Run:  kubectl --context enable-default-cni-248977 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/enable-default-cni/HairPin (0.11s)

TestNetworkPlugins/group/bridge/KubeletFlags (0.26s)

=== RUN   TestNetworkPlugins/group/bridge/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p bridge-248977 "pgrep -a kubelet"
I0927 01:11:42.793717  540034 config.go:182] Loaded profile config "bridge-248977": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.31.1
--- PASS: TestNetworkPlugins/group/bridge/KubeletFlags (0.26s)

TestNetworkPlugins/group/bridge/NetCatPod (10.18s)

=== RUN   TestNetworkPlugins/group/bridge/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context bridge-248977 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/bridge/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-6fc964789b-4l9wf" [21aa41da-9486-4a3e-a3dc-9e10db5d0497] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-6fc964789b-4l9wf" [21aa41da-9486-4a3e-a3dc-9e10db5d0497] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/bridge/NetCatPod: app=netcat healthy within 10.004464629s
--- PASS: TestNetworkPlugins/group/bridge/NetCatPod (10.18s)

TestNetworkPlugins/group/kubenet/KubeletFlags (0.25s)

=== RUN   TestNetworkPlugins/group/kubenet/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p kubenet-248977 "pgrep -a kubelet"
I0927 01:11:51.670310  540034 config.go:182] Loaded profile config "kubenet-248977": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.31.1
--- PASS: TestNetworkPlugins/group/kubenet/KubeletFlags (0.25s)
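	KubeletFlags captures the kubelet command line with pgrep -a over minikube ssh so the harness can assert on the flags it was started with. The same capture, with only a placeholder assertion (the real expectations are in net_test.go):

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

func main() {
	// "minikube ssh" runs the quoted command inside the node container.
	out, err := exec.Command("out/minikube-linux-amd64", "ssh",
		"-p", "kubenet-248977", "pgrep -a kubelet").Output()
	if err != nil {
		panic(err)
	}
	cmdline := string(out)
	fmt.Print(cmdline)
	// Placeholder check only: confirm a kubelet process was found at all.
	if !strings.Contains(cmdline, "kubelet") {
		panic("kubelet does not appear to be running")
	}
}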

TestNetworkPlugins/group/kubenet/NetCatPod (10.17s)

=== RUN   TestNetworkPlugins/group/kubenet/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context kubenet-248977 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/kubenet/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-6fc964789b-b5sg2" [174fbc0e-2000-4521-9652-a0bc99250071] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-6fc964789b-b5sg2" [174fbc0e-2000-4521-9652-a0bc99250071] Running
E0927 01:11:59.539455  540034 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19711-533157/.minikube/profiles/default-k8s-diff-port-818301/client.crt: no such file or directory" logger="UnhandledError"
net_test.go:163: (dbg) TestNetworkPlugins/group/kubenet/NetCatPod: app=netcat healthy within 10.004199608s
--- PASS: TestNetworkPlugins/group/kubenet/NetCatPod (10.17s)

TestNetworkPlugins/group/bridge/DNS (0.15s)

=== RUN   TestNetworkPlugins/group/bridge/DNS
net_test.go:175: (dbg) Run:  kubectl --context bridge-248977 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/bridge/DNS (0.15s)

TestNetworkPlugins/group/bridge/Localhost (0.13s)

=== RUN   TestNetworkPlugins/group/bridge/Localhost
net_test.go:194: (dbg) Run:  kubectl --context bridge-248977 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/bridge/Localhost (0.13s)

TestNetworkPlugins/group/bridge/HairPin (0.13s)

=== RUN   TestNetworkPlugins/group/bridge/HairPin
net_test.go:264: (dbg) Run:  kubectl --context bridge-248977 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/bridge/HairPin (0.13s)

TestNetworkPlugins/group/kubenet/DNS (0.15s)

=== RUN   TestNetworkPlugins/group/kubenet/DNS
net_test.go:175: (dbg) Run:  kubectl --context kubenet-248977 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/kubenet/DNS (0.15s)

TestNetworkPlugins/group/kubenet/Localhost (0.12s)

=== RUN   TestNetworkPlugins/group/kubenet/Localhost
net_test.go:194: (dbg) Run:  kubectl --context kubenet-248977 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/kubenet/Localhost (0.12s)

TestNetworkPlugins/group/kubenet/HairPin (0.12s)

=== RUN   TestNetworkPlugins/group/kubenet/HairPin
net_test.go:264: (dbg) Run:  kubectl --context kubenet-248977 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/kubenet/HairPin (0.12s)

Test skip (20/342)

TestDownloadOnly/v1.20.0/cached-images (0s)

=== RUN   TestDownloadOnly/v1.20.0/cached-images
aaa_download_only_test.go:129: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.20.0/cached-images (0.00s)

TestDownloadOnly/v1.20.0/binaries (0s)

=== RUN   TestDownloadOnly/v1.20.0/binaries
aaa_download_only_test.go:151: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.20.0/binaries (0.00s)

TestDownloadOnly/v1.20.0/kubectl (0s)

=== RUN   TestDownloadOnly/v1.20.0/kubectl
aaa_download_only_test.go:167: Test for darwin and windows
--- SKIP: TestDownloadOnly/v1.20.0/kubectl (0.00s)

TestDownloadOnly/v1.31.1/cached-images (0s)

=== RUN   TestDownloadOnly/v1.31.1/cached-images
aaa_download_only_test.go:129: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.31.1/cached-images (0.00s)

TestDownloadOnly/v1.31.1/binaries (0s)

=== RUN   TestDownloadOnly/v1.31.1/binaries
aaa_download_only_test.go:151: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.31.1/binaries (0.00s)

TestDownloadOnly/v1.31.1/kubectl (0s)

=== RUN   TestDownloadOnly/v1.31.1/kubectl
aaa_download_only_test.go:167: Test for darwin and windows
--- SKIP: TestDownloadOnly/v1.31.1/kubectl (0.00s)

TestAddons/parallel/Olm (0s)

=== RUN   TestAddons/parallel/Olm
=== PAUSE TestAddons/parallel/Olm

=== CONT  TestAddons/parallel/Olm
addons_test.go:438: Skipping OLM addon test until https://github.com/operator-framework/operator-lifecycle-manager/issues/2534 is resolved
--- SKIP: TestAddons/parallel/Olm (0.00s)

TestDockerEnvContainerd (0s)

=== RUN   TestDockerEnvContainerd
docker_test.go:170: running with docker true linux amd64
docker_test.go:172: skipping: TestDockerEnvContainerd can only be run with the containerd runtime on Docker driver
--- SKIP: TestDockerEnvContainerd (0.00s)

TestHyperKitDriverInstallOrUpdate (0s)

=== RUN   TestHyperKitDriverInstallOrUpdate
driver_install_or_update_test.go:105: Skip if not darwin.
--- SKIP: TestHyperKitDriverInstallOrUpdate (0.00s)

TestHyperkitDriverSkipUpgrade (0s)

=== RUN   TestHyperkitDriverSkipUpgrade
driver_install_or_update_test.go:169: Skip if not darwin.
--- SKIP: TestHyperkitDriverSkipUpgrade (0.00s)

TestFunctional/parallel/PodmanEnv (0s)

=== RUN   TestFunctional/parallel/PodmanEnv
=== PAUSE TestFunctional/parallel/PodmanEnv

=== CONT  TestFunctional/parallel/PodmanEnv
functional_test.go:550: only validate podman env with docker container runtime, currently testing docker
--- SKIP: TestFunctional/parallel/PodmanEnv (0.00s)

TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig (0s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig
functional_test_tunnel_test.go:99: DNS forwarding is only supported for Hyperkit on Darwin, skipping test DNS forwarding
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig (0.00s)

TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil
functional_test_tunnel_test.go:99: DNS forwarding is only supported for Hyperkit on Darwin, skipping test DNS forwarding
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0.00s)

TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS (0s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS
functional_test_tunnel_test.go:99: DNS forwarding is only supported for Hyperkit on Darwin, skipping test DNS forwarding
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS (0.00s)

TestGvisorAddon (0s)

=== RUN   TestGvisorAddon
gvisor_addon_test.go:34: skipping test because --gvisor=false
--- SKIP: TestGvisorAddon (0.00s)

TestImageBuild/serial/validateImageBuildWithBuildEnv (0s)

=== RUN   TestImageBuild/serial/validateImageBuildWithBuildEnv
image_test.go:114: skipping due to https://github.com/kubernetes/minikube/issues/12431
--- SKIP: TestImageBuild/serial/validateImageBuildWithBuildEnv (0.00s)

TestChangeNoneUser (0s)

=== RUN   TestChangeNoneUser
none_test.go:38: Test requires none driver and SUDO_USER env to not be empty
--- SKIP: TestChangeNoneUser (0.00s)

TestScheduledStopWindows (0s)

=== RUN   TestScheduledStopWindows
scheduled_stop_test.go:42: test only runs on windows
--- SKIP: TestScheduledStopWindows (0.00s)

TestStartStop/group/disable-driver-mounts (0.16s)

=== RUN   TestStartStop/group/disable-driver-mounts
=== PAUSE TestStartStop/group/disable-driver-mounts

=== CONT  TestStartStop/group/disable-driver-mounts
start_stop_delete_test.go:103: skipping TestStartStop/group/disable-driver-mounts - only runs on virtualbox
helpers_test.go:175: Cleaning up "disable-driver-mounts-083203" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p disable-driver-mounts-083203
--- SKIP: TestStartStop/group/disable-driver-mounts (0.16s)

TestNetworkPlugins/group/cilium (3.96s)

=== RUN   TestNetworkPlugins/group/cilium
net_test.go:102: Skipping the test as it's interfering with other tests and is outdated
panic.go:629: 
----------------------- debugLogs start: cilium-248977 [pass: true] --------------------------------
>>> netcat: nslookup kubernetes.default:
Error in configuration: context was not found for specified context: cilium-248977

>>> netcat: nslookup debug kubernetes.default a-records:
Error in configuration: context was not found for specified context: cilium-248977

>>> netcat: dig search kubernetes.default:
Error in configuration: context was not found for specified context: cilium-248977

>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local udp/53:
Error in configuration: context was not found for specified context: cilium-248977

>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local tcp/53:
Error in configuration: context was not found for specified context: cilium-248977

>>> netcat: nc 10.96.0.10 udp/53:
Error in configuration: context was not found for specified context: cilium-248977

>>> netcat: nc 10.96.0.10 tcp/53:
Error in configuration: context was not found for specified context: cilium-248977

>>> netcat: /etc/nsswitch.conf:
Error in configuration: context was not found for specified context: cilium-248977

>>> netcat: /etc/hosts:
Error in configuration: context was not found for specified context: cilium-248977

>>> netcat: /etc/resolv.conf:
Error in configuration: context was not found for specified context: cilium-248977

>>> host: /etc/nsswitch.conf:
* Profile "cilium-248977" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-248977"

>>> host: /etc/hosts:
* Profile "cilium-248977" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-248977"

>>> host: /etc/resolv.conf:
* Profile "cilium-248977" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-248977"

>>> k8s: nodes, services, endpoints, daemon sets, deployments and pods, :
Error in configuration: context was not found for specified context: cilium-248977

>>> host: crictl pods:
* Profile "cilium-248977" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-248977"

>>> host: crictl containers:
* Profile "cilium-248977" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-248977"

>>> k8s: describe netcat deployment:
error: context "cilium-248977" does not exist

>>> k8s: describe netcat pod(s):
error: context "cilium-248977" does not exist

>>> k8s: netcat logs:
error: context "cilium-248977" does not exist

>>> k8s: describe coredns deployment:
error: context "cilium-248977" does not exist

>>> k8s: describe coredns pods:
error: context "cilium-248977" does not exist

>>> k8s: coredns logs:
error: context "cilium-248977" does not exist

>>> k8s: describe api server pod(s):
error: context "cilium-248977" does not exist

>>> k8s: api server logs:
error: context "cilium-248977" does not exist

>>> host: /etc/cni:
* Profile "cilium-248977" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-248977"

>>> host: ip a s:
* Profile "cilium-248977" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-248977"

>>> host: ip r s:
* Profile "cilium-248977" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-248977"

>>> host: iptables-save:
* Profile "cilium-248977" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-248977"

>>> host: iptables table nat:
* Profile "cilium-248977" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-248977"

>>> k8s: describe cilium daemon set:
Error in configuration: context was not found for specified context: cilium-248977

>>> k8s: describe cilium daemon set pod(s):
Error in configuration: context was not found for specified context: cilium-248977

>>> k8s: cilium daemon set container(s) logs (current):
error: context "cilium-248977" does not exist

>>> k8s: cilium daemon set container(s) logs (previous):
error: context "cilium-248977" does not exist

>>> k8s: describe cilium deployment:
Error in configuration: context was not found for specified context: cilium-248977

>>> k8s: describe cilium deployment pod(s):
Error in configuration: context was not found for specified context: cilium-248977

>>> k8s: cilium deployment container(s) logs (current):
error: context "cilium-248977" does not exist

>>> k8s: cilium deployment container(s) logs (previous):
error: context "cilium-248977" does not exist

>>> k8s: describe kube-proxy daemon set:
error: context "cilium-248977" does not exist

>>> k8s: describe kube-proxy pod(s):
error: context "cilium-248977" does not exist

>>> k8s: kube-proxy logs:
error: context "cilium-248977" does not exist

>>> host: kubelet daemon status:
* Profile "cilium-248977" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-248977"

>>> host: kubelet daemon config:
* Profile "cilium-248977" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-248977"

>>> k8s: kubelet logs:
* Profile "cilium-248977" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-248977"

>>> host: /etc/kubernetes/kubelet.conf:
* Profile "cilium-248977" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-248977"

>>> host: /var/lib/kubelet/config.yaml:
* Profile "cilium-248977" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-248977"

>>> k8s: kubectl config:
apiVersion: v1
clusters:
- cluster:
    certificate-authority: /home/jenkins/minikube-integration/19711-533157/.minikube/ca.crt
    extensions:
    - extension:
        last-update: Fri, 27 Sep 2024 00:58:09 UTC
        provider: minikube.sigs.k8s.io
        version: v1.34.0
      name: cluster_info
    server: https://192.168.85.2:8443
  name: kubernetes-upgrade-386204
contexts:
- context:
    cluster: kubernetes-upgrade-386204
    user: kubernetes-upgrade-386204
  name: kubernetes-upgrade-386204
current-context: ""
kind: Config
preferences: {}
users:
- name: kubernetes-upgrade-386204
  user:
    client-certificate: /home/jenkins/minikube-integration/19711-533157/.minikube/profiles/kubernetes-upgrade-386204/client.crt
    client-key: /home/jenkins/minikube-integration/19711-533157/.minikube/profiles/kubernetes-upgrade-386204/client.key

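	The dump above is an ordinary kubeconfig: current-context is empty and the only surviving entry is a stale kubernetes-upgrade-386204 profile, which is exactly why every cilium-248977 query in this debug section reports a missing context. The same file can be inspected programmatically with client-go's clientcmd loader (the path below is hypothetical; the report does not say where the file lives):

package main

import (
	"fmt"

	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	// Hypothetical kubeconfig path, for illustration only.
	cfg, err := clientcmd.LoadFromFile("/home/jenkins/.kube/config")
	if err != nil {
		panic(err)
	}
	for name, ctx := range cfg.Contexts {
		fmt.Printf("context %s -> cluster %s\n", name, ctx.Cluster)
	}
	if cfg.CurrentContext == "" {
		fmt.Println("no current-context set")
	}
}
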
>>> k8s: cms:
Error in configuration: context was not found for specified context: cilium-248977

                                                
                                                

                                                
                                                
>>> host: docker daemon status:
* Profile "cilium-248977" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-248977"

>>> host: docker daemon config:
* Profile "cilium-248977" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-248977"

>>> host: /etc/docker/daemon.json:
* Profile "cilium-248977" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-248977"

>>> host: docker system info:
* Profile "cilium-248977" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-248977"

>>> host: cri-docker daemon status:
* Profile "cilium-248977" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-248977"

>>> host: cri-docker daemon config:
* Profile "cilium-248977" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-248977"

>>> host: /etc/systemd/system/cri-docker.service.d/10-cni.conf:
* Profile "cilium-248977" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-248977"

>>> host: /usr/lib/systemd/system/cri-docker.service:
* Profile "cilium-248977" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-248977"

>>> host: cri-dockerd version:
* Profile "cilium-248977" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-248977"

>>> host: containerd daemon status:
* Profile "cilium-248977" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-248977"

>>> host: containerd daemon config:
* Profile "cilium-248977" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-248977"

>>> host: /lib/systemd/system/containerd.service:
* Profile "cilium-248977" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-248977"

>>> host: /etc/containerd/config.toml:
* Profile "cilium-248977" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-248977"

>>> host: containerd config dump:
* Profile "cilium-248977" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-248977"

>>> host: crio daemon status:
* Profile "cilium-248977" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-248977"

>>> host: crio daemon config:
* Profile "cilium-248977" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-248977"

>>> host: /etc/crio:
* Profile "cilium-248977" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-248977"

>>> host: crio config:
* Profile "cilium-248977" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-248977"

----------------------- debugLogs end: cilium-248977 [took: 3.78654135s] --------------------------------
helpers_test.go:175: Cleaning up "cilium-248977" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p cilium-248977
--- SKIP: TestNetworkPlugins/group/cilium (3.96s)
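
Note: a rough sketch for reproducing this debugLogs collection locally, using standard minikube commands (not part of this run; the profile name is reused from above):

	minikube start -p cilium-248977 --cni=cilium   # create the profile the probes expect
	minikube -p cilium-248977 logs                 # collect the same host/k8s diagnostics
	minikube delete -p cilium-248977               # clean up, mirroring helpers_test.go:178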
