Test Report: Docker_Linux 19689

af422e057ba227eec8656c67d09f56de251f325e:2024-09-23:36336

Failed tests (1/342)

|-------|------------------------------|--------------|
| Order | Failed test                  | Duration (s) |
|-------|------------------------------|--------------|
| 33    | TestAddons/parallel/Registry |        73.43 |
|-------|------------------------------|--------------|
TestAddons/parallel/Registry (73.43s)

=== RUN   TestAddons/parallel/Registry
=== PAUSE TestAddons/parallel/Registry

=== CONT  TestAddons/parallel/Registry
addons_test.go:328: registry stabilized in 2.083819ms
addons_test.go:330: (dbg) TestAddons/parallel/Registry: waiting 6m0s for pods matching "actual-registry=true" in namespace "kube-system" ...
helpers_test.go:344: "registry-66c9cd494c-sswjh" [42afc9e2-bf4a-4c0a-9db9-8f58e617fcbe] Running
addons_test.go:330: (dbg) TestAddons/parallel/Registry: actual-registry=true healthy within 6.002711s
addons_test.go:333: (dbg) TestAddons/parallel/Registry: waiting 10m0s for pods matching "registry-proxy=true" in namespace "kube-system" ...
helpers_test.go:344: "registry-proxy-w6x4v" [8964449b-425a-4614-aa7b-d6cc98a185c7] Running
addons_test.go:333: (dbg) TestAddons/parallel/Registry: registry-proxy=true healthy within 5.003081335s
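Both registry pods were healthy before the probe ran. A hand-run equivalent of the label-selector checks at helpers_test.go:344 (profile, namespace, and labels taken from the log above) would be:

	# list the pods the test waits on, by the same label selectors
	kubectl --context addons-071702 -n kube-system get pods -l actual-registry=true
	kubectl --context addons-071702 -n kube-system get pods -l registry-proxy=true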
addons_test.go:338: (dbg) Run:  kubectl --context addons-071702 delete po -l run=registry-test --now
addons_test.go:343: (dbg) Run:  kubectl --context addons-071702 run --rm registry-test --restart=Never --image=gcr.io/k8s-minikube/busybox -it -- sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local"
addons_test.go:343: (dbg) Non-zero exit: kubectl --context addons-071702 run --rm registry-test --restart=Never --image=gcr.io/k8s-minikube/busybox -it -- sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local": exit status 1 (1m0.070783686s)

-- stdout --
	pod "registry-test" deleted

-- /stdout --
** stderr ** 
	error: timed out waiting for the condition

** /stderr **
addons_test.go:345: failed to hit registry.kube-system.svc.cluster.local. args "kubectl --context addons-071702 run --rm registry-test --restart=Never --image=gcr.io/k8s-minikube/busybox -it -- sh -c \"wget --spider -S http://registry.kube-system.svc.cluster.local\"" failed: exit status 1
addons_test.go:349: expected curl response be "HTTP/1.1 200", but got *pod "registry-test" deleted
*
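The failing step is the in-cluster connectivity probe: a throwaway busybox pod resolves the registry Service by its cluster DNS name, and addons_test.go:349 expects wget to report HTTP/1.1 200; here the request hung until the one-minute attach timeout expired. A hedged manual reproduction, reusing the command from the log:

	# repro of the failed probe; hangs the same way if the Service is unreachable from pods
	kubectl --context addons-071702 run registry-test --rm --restart=Never -it \
	  --image=gcr.io/k8s-minikube/busybox -- \
	  sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local"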
addons_test.go:357: (dbg) Run:  out/minikube-linux-amd64 -p addons-071702 ip
2024/09/23 10:34:05 [DEBUG] GET http://192.168.49.2:5000
addons_test.go:386: (dbg) Run:  out/minikube-linux-amd64 -p addons-071702 addons disable registry --alsologtostderr -v=1
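After the failure, the harness resolves the node IP and probes the registry from outside the cluster (the DEBUG GET above) before disabling the addon. The same out-of-cluster check by hand, assuming curl is available on the host:

	MINIKUBE_IP=$(out/minikube-linux-amd64 -p addons-071702 ip)   # 192.168.49.2 in this run
	curl -sI "http://${MINIKUBE_IP}:5000/"                        # registry port published by the kic container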
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestAddons/parallel/Registry]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect addons-071702
helpers_test.go:235: (dbg) docker inspect addons-071702:

-- stdout --
	[
	    {
	        "Id": "8966179ecdfd2cc670eb136a4fc91620be24b5bc1984967e12bcafcacc397742",
	        "Created": "2024-09-23T10:21:08.61285066Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 12583,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2024-09-23T10:21:08.745508485Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:d94335c0cd164ddebb3c5158e317bcf6d2e08dc08f448d25251f425acb842829",
	        "ResolvConfPath": "/var/lib/docker/containers/8966179ecdfd2cc670eb136a4fc91620be24b5bc1984967e12bcafcacc397742/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/8966179ecdfd2cc670eb136a4fc91620be24b5bc1984967e12bcafcacc397742/hostname",
	        "HostsPath": "/var/lib/docker/containers/8966179ecdfd2cc670eb136a4fc91620be24b5bc1984967e12bcafcacc397742/hosts",
	        "LogPath": "/var/lib/docker/containers/8966179ecdfd2cc670eb136a4fc91620be24b5bc1984967e12bcafcacc397742/8966179ecdfd2cc670eb136a4fc91620be24b5bc1984967e12bcafcacc397742-json.log",
	        "Name": "/addons-071702",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "addons-071702:/var",
	                "/lib/modules:/lib/modules:ro"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {
	                    "max-size": "100m"
	                }
	            },
	            "NetworkMode": "addons-071702",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 4194304000,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 8388608000,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "LowerDir": "/var/lib/docker/overlay2/588b74eec0c8446e03e984eb4a9fe9b0c59ab6a00da3f5b0e38ccf11992a439d-init/diff:/var/lib/docker/overlay2/8ca2de7d8b65e2bda8878f4a091fa97667b4eaea3c506fec5159a312eef51d3c/diff",
	                "MergedDir": "/var/lib/docker/overlay2/588b74eec0c8446e03e984eb4a9fe9b0c59ab6a00da3f5b0e38ccf11992a439d/merged",
	                "UpperDir": "/var/lib/docker/overlay2/588b74eec0c8446e03e984eb4a9fe9b0c59ab6a00da3f5b0e38ccf11992a439d/diff",
	                "WorkDir": "/var/lib/docker/overlay2/588b74eec0c8446e03e984eb4a9fe9b0c59ab6a00da3f5b0e38ccf11992a439d/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "volume",
	                "Name": "addons-071702",
	                "Source": "/var/lib/docker/volumes/addons-071702/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            },
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            }
	        ],
	        "Config": {
	            "Hostname": "addons-071702",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726784731-19672@sha256:7f8c62ddb0100a5b958dd19c5b5478b8c7ef13da9a0a4d6c7d18f43544e0dbed",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "addons-071702",
	                "name.minikube.sigs.k8s.io": "addons-071702",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "9521a3bbbe15abd113d8b278ff9250adfe953503d9dbe0e939d74eee71fe8bdb",
	            "SandboxKey": "/var/run/docker/netns/9521a3bbbe15",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32768"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32769"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32772"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32770"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32771"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "addons-071702": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.49.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "02:42:c0:a8:31:02",
	                    "DriverOpts": null,
	                    "NetworkID": "5b08ae3e0bfe730e82589f0f8972b14608d300a082908684ee88145be463a3d9",
	                    "EndpointID": "9437440e9ebe63305f83fecb77205cfc1ee9f9a9a62037fee8d67e7161a1e372",
	                    "Gateway": "192.168.49.1",
	                    "IPAddress": "192.168.49.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "addons-071702",
	                        "8966179ecdfd"
	                    ]
	                }
	            }
	        }
	    }
	]

-- /stdout --
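The inspect output shows a healthy node container: running, privileged, and with the registry's 5000/tcp published to an ephemeral host port (127.0.0.1:32770). To pull a single mapping out of that JSON, assuming jq is installed:

	# host side of the container's 5000/tcp binding (32770 in this run)
	docker inspect addons-071702 | jq -r '.[0].NetworkSettings.Ports["5000/tcp"][0].HostPort'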
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p addons-071702 -n addons-071702
helpers_test.go:244: <<< TestAddons/parallel/Registry FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestAddons/parallel/Registry]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p addons-071702 logs -n 25
helpers_test.go:252: TestAddons/parallel/Registry logs: 
-- stdout --
	
	==> Audit <==
	|---------|---------------------------------------------------------------------------------------------|------------------------|---------|---------|---------------------|---------------------|
	| Command |                                            Args                                             |        Profile         |  User   | Version |     Start Time      |      End Time       |
	|---------|---------------------------------------------------------------------------------------------|------------------------|---------|---------|---------------------|---------------------|
	| start   | --download-only -p                                                                          | download-docker-381840 | jenkins | v1.34.0 | 23 Sep 24 10:20 UTC |                     |
	|         | download-docker-381840                                                                      |                        |         |         |                     |                     |
	|         | --alsologtostderr                                                                           |                        |         |         |                     |                     |
	|         | --driver=docker                                                                             |                        |         |         |                     |                     |
	|         | --container-runtime=docker                                                                  |                        |         |         |                     |                     |
	| delete  | -p download-docker-381840                                                                   | download-docker-381840 | jenkins | v1.34.0 | 23 Sep 24 10:20 UTC | 23 Sep 24 10:20 UTC |
	| start   | --download-only -p                                                                          | binary-mirror-885431   | jenkins | v1.34.0 | 23 Sep 24 10:20 UTC |                     |
	|         | binary-mirror-885431                                                                        |                        |         |         |                     |                     |
	|         | --alsologtostderr                                                                           |                        |         |         |                     |                     |
	|         | --binary-mirror                                                                             |                        |         |         |                     |                     |
	|         | http://127.0.0.1:41549                                                                      |                        |         |         |                     |                     |
	|         | --driver=docker                                                                             |                        |         |         |                     |                     |
	|         | --container-runtime=docker                                                                  |                        |         |         |                     |                     |
	| delete  | -p binary-mirror-885431                                                                     | binary-mirror-885431   | jenkins | v1.34.0 | 23 Sep 24 10:20 UTC | 23 Sep 24 10:20 UTC |
	| addons  | disable dashboard -p                                                                        | addons-071702          | jenkins | v1.34.0 | 23 Sep 24 10:20 UTC |                     |
	|         | addons-071702                                                                               |                        |         |         |                     |                     |
	| addons  | enable dashboard -p                                                                         | addons-071702          | jenkins | v1.34.0 | 23 Sep 24 10:20 UTC |                     |
	|         | addons-071702                                                                               |                        |         |         |                     |                     |
	| start   | -p addons-071702 --wait=true                                                                | addons-071702          | jenkins | v1.34.0 | 23 Sep 24 10:20 UTC | 23 Sep 24 10:24 UTC |
	|         | --memory=4000 --alsologtostderr                                                             |                        |         |         |                     |                     |
	|         | --addons=registry                                                                           |                        |         |         |                     |                     |
	|         | --addons=metrics-server                                                                     |                        |         |         |                     |                     |
	|         | --addons=volumesnapshots                                                                    |                        |         |         |                     |                     |
	|         | --addons=csi-hostpath-driver                                                                |                        |         |         |                     |                     |
	|         | --addons=gcp-auth                                                                           |                        |         |         |                     |                     |
	|         | --addons=cloud-spanner                                                                      |                        |         |         |                     |                     |
	|         | --addons=inspektor-gadget                                                                   |                        |         |         |                     |                     |
	|         | --addons=storage-provisioner-rancher                                                        |                        |         |         |                     |                     |
	|         | --addons=nvidia-device-plugin                                                               |                        |         |         |                     |                     |
	|         | --addons=yakd --addons=volcano                                                              |                        |         |         |                     |                     |
	|         | --driver=docker                                                                             |                        |         |         |                     |                     |
	|         | --container-runtime=docker                                                                  |                        |         |         |                     |                     |
	|         | --addons=ingress                                                                            |                        |         |         |                     |                     |
	|         | --addons=ingress-dns                                                                        |                        |         |         |                     |                     |
	| addons  | addons-071702 addons disable                                                                | addons-071702          | jenkins | v1.34.0 | 23 Sep 24 10:24 UTC | 23 Sep 24 10:24 UTC |
	|         | volcano --alsologtostderr -v=1                                                              |                        |         |         |                     |                     |
	| addons  | addons-071702 addons disable                                                                | addons-071702          | jenkins | v1.34.0 | 23 Sep 24 10:32 UTC | 23 Sep 24 10:33 UTC |
	|         | yakd --alsologtostderr -v=1                                                                 |                        |         |         |                     |                     |
	| ssh     | addons-071702 ssh cat                                                                       | addons-071702          | jenkins | v1.34.0 | 23 Sep 24 10:33 UTC | 23 Sep 24 10:33 UTC |
	|         | /opt/local-path-provisioner/pvc-1f21215f-8da6-4e9e-aa33-1db8504ddfb9_default_test-pvc/file1 |                        |         |         |                     |                     |
	| addons  | addons-071702 addons disable                                                                | addons-071702          | jenkins | v1.34.0 | 23 Sep 24 10:33 UTC | 23 Sep 24 10:33 UTC |
	|         | storage-provisioner-rancher                                                                 |                        |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                        |         |         |                     |                     |
	| addons  | disable nvidia-device-plugin                                                                | addons-071702          | jenkins | v1.34.0 | 23 Sep 24 10:33 UTC | 23 Sep 24 10:33 UTC |
	|         | -p addons-071702                                                                            |                        |         |         |                     |                     |
	| addons  | addons-071702 addons                                                                        | addons-071702          | jenkins | v1.34.0 | 23 Sep 24 10:33 UTC | 23 Sep 24 10:33 UTC |
	|         | disable metrics-server                                                                      |                        |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                        |         |         |                     |                     |
	| addons  | enable headlamp                                                                             | addons-071702          | jenkins | v1.34.0 | 23 Sep 24 10:33 UTC | 23 Sep 24 10:33 UTC |
	|         | -p addons-071702                                                                            |                        |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                        |         |         |                     |                     |
	| addons  | disable cloud-spanner -p                                                                    | addons-071702          | jenkins | v1.34.0 | 23 Sep 24 10:33 UTC | 23 Sep 24 10:33 UTC |
	|         | addons-071702                                                                               |                        |         |         |                     |                     |
	| addons  | disable inspektor-gadget -p                                                                 | addons-071702          | jenkins | v1.34.0 | 23 Sep 24 10:33 UTC | 23 Sep 24 10:33 UTC |
	|         | addons-071702                                                                               |                        |         |         |                     |                     |
	| addons  | addons-071702 addons disable                                                                | addons-071702          | jenkins | v1.34.0 | 23 Sep 24 10:33 UTC | 23 Sep 24 10:33 UTC |
	|         | headlamp --alsologtostderr                                                                  |                        |         |         |                     |                     |
	|         | -v=1                                                                                        |                        |         |         |                     |                     |
	| ssh     | addons-071702 ssh curl -s                                                                   | addons-071702          | jenkins | v1.34.0 | 23 Sep 24 10:33 UTC | 23 Sep 24 10:33 UTC |
	|         | http://127.0.0.1/ -H 'Host:                                                                 |                        |         |         |                     |                     |
	|         | nginx.example.com'                                                                          |                        |         |         |                     |                     |
	| ip      | addons-071702 ip                                                                            | addons-071702          | jenkins | v1.34.0 | 23 Sep 24 10:33 UTC | 23 Sep 24 10:33 UTC |
	| addons  | addons-071702 addons disable                                                                | addons-071702          | jenkins | v1.34.0 | 23 Sep 24 10:33 UTC | 23 Sep 24 10:33 UTC |
	|         | ingress-dns --alsologtostderr                                                               |                        |         |         |                     |                     |
	|         | -v=1                                                                                        |                        |         |         |                     |                     |
	| addons  | addons-071702 addons disable                                                                | addons-071702          | jenkins | v1.34.0 | 23 Sep 24 10:33 UTC | 23 Sep 24 10:33 UTC |
	|         | ingress --alsologtostderr -v=1                                                              |                        |         |         |                     |                     |
	| addons  | addons-071702 addons                                                                        | addons-071702          | jenkins | v1.34.0 | 23 Sep 24 10:33 UTC | 23 Sep 24 10:33 UTC |
	|         | disable csi-hostpath-driver                                                                 |                        |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                        |         |         |                     |                     |
	| addons  | addons-071702 addons                                                                        | addons-071702          | jenkins | v1.34.0 | 23 Sep 24 10:33 UTC | 23 Sep 24 10:33 UTC |
	|         | disable volumesnapshots                                                                     |                        |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                        |         |         |                     |                     |
	| ip      | addons-071702 ip                                                                            | addons-071702          | jenkins | v1.34.0 | 23 Sep 24 10:34 UTC | 23 Sep 24 10:34 UTC |
	| addons  | addons-071702 addons disable                                                                | addons-071702          | jenkins | v1.34.0 | 23 Sep 24 10:34 UTC | 23 Sep 24 10:34 UTC |
	|         | registry --alsologtostderr                                                                  |                        |         |         |                     |                     |
	|         | -v=1                                                                                        |                        |         |         |                     |                     |
	|---------|---------------------------------------------------------------------------------------------|------------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/09/23 10:20:45
	Running on machine: ubuntu-20-agent-2
	Binary: Built with gc go1.23.0 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
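The body that follows uses the klog format described in the header line above; assuming the output is saved to a file, warnings and errors can be filtered with:

	grep -E '^[WE][0-9]{4}' minikube.log   # W/E lines only; filename is hypothetical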
	I0923 10:20:45.379792   11849 out.go:345] Setting OutFile to fd 1 ...
	I0923 10:20:45.379898   11849 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0923 10:20:45.379908   11849 out.go:358] Setting ErrFile to fd 2...
	I0923 10:20:45.379915   11849 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0923 10:20:45.380088   11849 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19689-3716/.minikube/bin
	I0923 10:20:45.380668   11849 out.go:352] Setting JSON to false
	I0923 10:20:45.381461   11849 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-2","uptime":194,"bootTime":1727086651,"procs":178,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1069-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0923 10:20:45.381516   11849 start.go:139] virtualization: kvm guest
	I0923 10:20:45.383461   11849 out.go:177] * [addons-071702] minikube v1.34.0 on Ubuntu 20.04 (kvm/amd64)
	I0923 10:20:45.384765   11849 out.go:177]   - MINIKUBE_LOCATION=19689
	I0923 10:20:45.384768   11849 notify.go:220] Checking for updates...
	I0923 10:20:45.386044   11849 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0923 10:20:45.387372   11849 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19689-3716/kubeconfig
	I0923 10:20:45.388643   11849 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19689-3716/.minikube
	I0923 10:20:45.389715   11849 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0923 10:20:45.390753   11849 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0923 10:20:45.392010   11849 driver.go:394] Setting default libvirt URI to qemu:///system
	I0923 10:20:45.413497   11849 docker.go:123] docker version: linux-27.3.1:Docker Engine - Community
	I0923 10:20:45.413571   11849 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0923 10:20:45.461916   11849 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:30 OomKillDisable:true NGoroutines:45 SystemTime:2024-09-23 10:20:45.452964694 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1069-gcp OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33647935488 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-2 Labels:[] ExperimentalBuild:false ServerVersion:27.3.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:7f7fdf5fed64eb6a7caf99b3e12efcf9d60e311c Expected:7f7fdf5fed64eb6a7caf99b3e12efcf9d60e311c} RuncCommit:{ID:v1.1.14-0-g2c9f560 Expected:v1.1.14-0-g2c9f560} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:[WARNING: bridge-nf-call-iptables is disabled WARNING: bridge-nf-call-ip6tables is disabled] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.17.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.29.7] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I0923 10:20:45.462013   11849 docker.go:318] overlay module found
	I0923 10:20:45.463742   11849 out.go:177] * Using the docker driver based on user configuration
	I0923 10:20:45.465001   11849 start.go:297] selected driver: docker
	I0923 10:20:45.465018   11849 start.go:901] validating driver "docker" against <nil>
	I0923 10:20:45.465031   11849 start.go:912] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0923 10:20:45.465784   11849 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0923 10:20:45.509849   11849 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:30 OomKillDisable:true NGoroutines:45 SystemTime:2024-09-23 10:20:45.501529913 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1069-gcp OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33647935488 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-2 Labels:[] ExperimentalBuild:false ServerVersion:27.3.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:7f7fdf5fed64eb6a7caf99b3e12efcf9d60e311c Expected:7f7fdf5fed64eb6a7caf99b3e12efcf9d60e311c} RuncCommit:{ID:v1.1.14-0-g2c9f560 Expected:v1.1.14-0-g2c9f560} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:[WARNING: bridge-nf-call-iptables is disabled WARNING: bridge-nf-call-ip6tables is disabled] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.17.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.29.7] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I0923 10:20:45.510006   11849 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0923 10:20:45.510229   11849 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0923 10:20:45.512098   11849 out.go:177] * Using Docker driver with root privileges
	I0923 10:20:45.513528   11849 cni.go:84] Creating CNI manager for ""
	I0923 10:20:45.513585   11849 cni.go:158] "docker" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0923 10:20:45.513596   11849 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0923 10:20:45.513653   11849 start.go:340] cluster config:
	{Name:addons-071702 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726784731-19672@sha256:7f8c62ddb0100a5b958dd19c5b5478b8c7ef13da9a0a4d6c7d18f43544e0dbed Memory:4000 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:addons-071702 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0923 10:20:45.515050   11849 out.go:177] * Starting "addons-071702" primary control-plane node in "addons-071702" cluster
	I0923 10:20:45.516260   11849 cache.go:121] Beginning downloading kic base image for docker with docker
	I0923 10:20:45.517542   11849 out.go:177] * Pulling base image v0.0.45-1726784731-19672 ...
	I0923 10:20:45.518872   11849 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime docker
	I0923 10:20:45.518906   11849 preload.go:146] Found local preload: /home/jenkins/minikube-integration/19689-3716/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-docker-overlay2-amd64.tar.lz4
	I0923 10:20:45.518914   11849 cache.go:56] Caching tarball of preloaded images
	I0923 10:20:45.518973   11849 image.go:79] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726784731-19672@sha256:7f8c62ddb0100a5b958dd19c5b5478b8c7ef13da9a0a4d6c7d18f43544e0dbed in local docker daemon
	I0923 10:20:45.518993   11849 preload.go:172] Found /home/jenkins/minikube-integration/19689-3716/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I0923 10:20:45.519004   11849 cache.go:59] Finished verifying existence of preloaded tar for v1.31.1 on docker
	I0923 10:20:45.519331   11849 profile.go:143] Saving config to /home/jenkins/minikube-integration/19689-3716/.minikube/profiles/addons-071702/config.json ...
	I0923 10:20:45.519363   11849 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19689-3716/.minikube/profiles/addons-071702/config.json: {Name:mk608206dd87a06e8de5d5ff517c3808c2af70c1 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0923 10:20:45.534568   11849 cache.go:149] Downloading gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726784731-19672@sha256:7f8c62ddb0100a5b958dd19c5b5478b8c7ef13da9a0a4d6c7d18f43544e0dbed to local cache
	I0923 10:20:45.534663   11849 image.go:63] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726784731-19672@sha256:7f8c62ddb0100a5b958dd19c5b5478b8c7ef13da9a0a4d6c7d18f43544e0dbed in local cache directory
	I0923 10:20:45.534707   11849 image.go:66] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726784731-19672@sha256:7f8c62ddb0100a5b958dd19c5b5478b8c7ef13da9a0a4d6c7d18f43544e0dbed in local cache directory, skipping pull
	I0923 10:20:45.534714   11849 image.go:135] gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726784731-19672@sha256:7f8c62ddb0100a5b958dd19c5b5478b8c7ef13da9a0a4d6c7d18f43544e0dbed exists in cache, skipping pull
	I0923 10:20:45.534725   11849 cache.go:152] successfully saved gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726784731-19672@sha256:7f8c62ddb0100a5b958dd19c5b5478b8c7ef13da9a0a4d6c7d18f43544e0dbed as a tarball
	I0923 10:20:45.534733   11849 cache.go:162] Loading gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726784731-19672@sha256:7f8c62ddb0100a5b958dd19c5b5478b8c7ef13da9a0a4d6c7d18f43544e0dbed from local cache
	I0923 10:20:57.266177   11849 cache.go:164] successfully loaded and using gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726784731-19672@sha256:7f8c62ddb0100a5b958dd19c5b5478b8c7ef13da9a0a4d6c7d18f43544e0dbed from cached tarball
	I0923 10:20:57.266209   11849 cache.go:194] Successfully downloaded all kic artifacts
	I0923 10:20:57.266256   11849 start.go:360] acquireMachinesLock for addons-071702: {Name:mk685407b574de814a7d843dc9648ed76ce90d19 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0923 10:20:57.266360   11849 start.go:364] duration metric: took 81.327µs to acquireMachinesLock for "addons-071702"
	I0923 10:20:57.266384   11849 start.go:93] Provisioning new machine with config: &{Name:addons-071702 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726784731-19672@sha256:7f8c62ddb0100a5b958dd19c5b5478b8c7ef13da9a0a4d6c7d18f43544e0dbed Memory:4000 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:addons-071702 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0923 10:20:57.266501   11849 start.go:125] createHost starting for "" (driver="docker")
	I0923 10:20:57.268448   11849 out.go:235] * Creating docker container (CPUs=2, Memory=4000MB) ...
	I0923 10:20:57.268719   11849 start.go:159] libmachine.API.Create for "addons-071702" (driver="docker")
	I0923 10:20:57.268748   11849 client.go:168] LocalClient.Create starting
	I0923 10:20:57.268888   11849 main.go:141] libmachine: Creating CA: /home/jenkins/minikube-integration/19689-3716/.minikube/certs/ca.pem
	I0923 10:20:57.348092   11849 main.go:141] libmachine: Creating client certificate: /home/jenkins/minikube-integration/19689-3716/.minikube/certs/cert.pem
	I0923 10:20:57.416102   11849 cli_runner.go:164] Run: docker network inspect addons-071702 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W0923 10:20:57.431337   11849 cli_runner.go:211] docker network inspect addons-071702 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I0923 10:20:57.431398   11849 network_create.go:284] running [docker network inspect addons-071702] to gather additional debugging logs...
	I0923 10:20:57.431415   11849 cli_runner.go:164] Run: docker network inspect addons-071702
	W0923 10:20:57.446403   11849 cli_runner.go:211] docker network inspect addons-071702 returned with exit code 1
	I0923 10:20:57.446435   11849 network_create.go:287] error running [docker network inspect addons-071702]: docker network inspect addons-071702: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network addons-071702 not found
	I0923 10:20:57.446446   11849 network_create.go:289] output of [docker network inspect addons-071702]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network addons-071702 not found
	
	** /stderr **
	I0923 10:20:57.446514   11849 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0923 10:20:57.461459   11849 network.go:206] using free private subnet 192.168.49.0/24: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc001ef3ee0}
	I0923 10:20:57.461498   11849 network_create.go:124] attempt to create docker network addons-071702 192.168.49.0/24 with gateway 192.168.49.1 and MTU of 1500 ...
	I0923 10:20:57.461535   11849 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.49.0/24 --gateway=192.168.49.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=addons-071702 addons-071702
	I0923 10:20:57.520570   11849 network_create.go:108] docker network addons-071702 192.168.49.0/24 created
	I0923 10:20:57.520601   11849 kic.go:121] calculated static IP "192.168.49.2" for the "addons-071702" container
	I0923 10:20:57.520672   11849 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I0923 10:20:57.534541   11849 cli_runner.go:164] Run: docker volume create addons-071702 --label name.minikube.sigs.k8s.io=addons-071702 --label created_by.minikube.sigs.k8s.io=true
	I0923 10:20:57.550415   11849 oci.go:103] Successfully created a docker volume addons-071702
	I0923 10:20:57.550483   11849 cli_runner.go:164] Run: docker run --rm --name addons-071702-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=addons-071702 --entrypoint /usr/bin/test -v addons-071702:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726784731-19672@sha256:7f8c62ddb0100a5b958dd19c5b5478b8c7ef13da9a0a4d6c7d18f43544e0dbed -d /var/lib
	I0923 10:21:04.731475   11849 cli_runner.go:217] Completed: docker run --rm --name addons-071702-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=addons-071702 --entrypoint /usr/bin/test -v addons-071702:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726784731-19672@sha256:7f8c62ddb0100a5b958dd19c5b5478b8c7ef13da9a0a4d6c7d18f43544e0dbed -d /var/lib: (7.180951038s)
	I0923 10:21:04.731504   11849 oci.go:107] Successfully prepared a docker volume addons-071702
	I0923 10:21:04.731528   11849 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime docker
	I0923 10:21:04.731546   11849 kic.go:194] Starting extracting preloaded images to volume ...
	I0923 10:21:04.731619   11849 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/19689-3716/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-docker-overlay2-amd64.tar.lz4:/preloaded.tar:ro -v addons-071702:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726784731-19672@sha256:7f8c62ddb0100a5b958dd19c5b5478b8c7ef13da9a0a4d6c7d18f43544e0dbed -I lz4 -xf /preloaded.tar -C /extractDir
	I0923 10:21:08.556706   11849 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/19689-3716/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-docker-overlay2-amd64.tar.lz4:/preloaded.tar:ro -v addons-071702:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726784731-19672@sha256:7f8c62ddb0100a5b958dd19c5b5478b8c7ef13da9a0a4d6c7d18f43544e0dbed -I lz4 -xf /preloaded.tar -C /extractDir: (3.825051884s)
	I0923 10:21:08.556734   11849 kic.go:203] duration metric: took 3.82518497s to extract preloaded images to volume ...
	W0923 10:21:08.556839   11849 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	I0923 10:21:08.556927   11849 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I0923 10:21:08.599105   11849 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname addons-071702 --name addons-071702 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=addons-071702 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=addons-071702 --network addons-071702 --ip 192.168.49.2 --volume addons-071702:/var --security-opt apparmor=unconfined --memory=4000mb --cpus=2 -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726784731-19672@sha256:7f8c62ddb0100a5b958dd19c5b5478b8c7ef13da9a0a4d6c7d18f43544e0dbed
	I0923 10:21:08.901402   11849 cli_runner.go:164] Run: docker container inspect addons-071702 --format={{.State.Running}}
	I0923 10:21:08.919284   11849 cli_runner.go:164] Run: docker container inspect addons-071702 --format={{.State.Status}}
	I0923 10:21:08.937298   11849 cli_runner.go:164] Run: docker exec addons-071702 stat /var/lib/dpkg/alternatives/iptables
	I0923 10:21:08.976849   11849 oci.go:144] the created container "addons-071702" has a running status.
	I0923 10:21:08.976882   11849 kic.go:225] Creating ssh key for kic: /home/jenkins/minikube-integration/19689-3716/.minikube/machines/addons-071702/id_rsa...
	I0923 10:21:09.142646   11849 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/19689-3716/.minikube/machines/addons-071702/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I0923 10:21:09.162592   11849 cli_runner.go:164] Run: docker container inspect addons-071702 --format={{.State.Status}}
	I0923 10:21:09.180564   11849 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I0923 10:21:09.180587   11849 kic_runner.go:114] Args: [docker exec --privileged addons-071702 chown docker:docker /home/docker/.ssh/authorized_keys]
	I0923 10:21:09.256971   11849 cli_runner.go:164] Run: docker container inspect addons-071702 --format={{.State.Status}}
	I0923 10:21:09.280370   11849 machine.go:93] provisionDockerMachine start ...
	I0923 10:21:09.280469   11849 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-071702
	I0923 10:21:09.297385   11849 main.go:141] libmachine: Using SSH client type: native
	I0923 10:21:09.297572   11849 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 127.0.0.1 32768 <nil> <nil>}
	I0923 10:21:09.297593   11849 main.go:141] libmachine: About to run SSH command:
	hostname
	I0923 10:21:09.529772   11849 main.go:141] libmachine: SSH cmd err, output: <nil>: addons-071702
	
	I0923 10:21:09.529797   11849 ubuntu.go:169] provisioning hostname "addons-071702"
	I0923 10:21:09.529846   11849 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-071702
	I0923 10:21:09.546989   11849 main.go:141] libmachine: Using SSH client type: native
	I0923 10:21:09.547157   11849 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 127.0.0.1 32768 <nil> <nil>}
	I0923 10:21:09.547179   11849 main.go:141] libmachine: About to run SSH command:
	sudo hostname addons-071702 && echo "addons-071702" | sudo tee /etc/hostname
	I0923 10:21:09.683911   11849 main.go:141] libmachine: SSH cmd err, output: <nil>: addons-071702
	
	I0923 10:21:09.683973   11849 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-071702
	I0923 10:21:09.699615   11849 main.go:141] libmachine: Using SSH client type: native
	I0923 10:21:09.699819   11849 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 127.0.0.1 32768 <nil> <nil>}
	I0923 10:21:09.699843   11849 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\saddons-071702' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 addons-071702/g' /etc/hosts;
				else 
					echo '127.0.1.1 addons-071702' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0923 10:21:09.826432   11849 main.go:141] libmachine: SSH cmd err, output: <nil>: 
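
The hostname pass above is plain shell pushed over SSH: `grep -xq` first checks whether /etc/hosts already maps some address to the node name, and only if not does it either rewrite an existing 127.0.1.1 line or append a fresh one. A minimal stand-alone sketch of the same logic in Go — a hypothetical helper, pointed at a scratch copy rather than the real /etc/hosts:

	// hostspatch.go - re-implements the /etc/hosts patch the provisioner runs
	// over SSH above (a sketch; minikube itself runs the shell one-liner, not
	// this Go code). The path is parameterized so it can be tried safely.
	package main

	import (
		"fmt"
		"os"
		"regexp"
		"strings"
	)

	func ensureHostname(path, name string) error {
		data, err := os.ReadFile(path)
		if err != nil {
			return err
		}
		lines := strings.Split(strings.TrimRight(string(data), "\n"), "\n")
		// Already present? (the shell's `grep -xq '.*\s<name>'`)
		present := regexp.MustCompile(`\s` + regexp.QuoteMeta(name) + `$`)
		for _, l := range lines {
			if present.MatchString(l) {
				return nil
			}
		}
		// Rewrite an existing 127.0.1.1 line, else append one.
		loopback := regexp.MustCompile(`^127\.0\.1\.1\s`)
		replaced := false
		for i, l := range lines {
			if loopback.MatchString(l) {
				lines[i] = "127.0.1.1 " + name
				replaced = true
			}
		}
		if !replaced {
			lines = append(lines, "127.0.1.1 "+name)
		}
		return os.WriteFile(path, []byte(strings.Join(lines, "\n")+"\n"), 0644)
	}

	func main() {
		if err := ensureHostname("hosts.test", "addons-071702"); err != nil {
			fmt.Fprintln(os.Stderr, err)
		}
	}
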
	I0923 10:21:09.826458   11849 ubuntu.go:175] set auth options {CertDir:/home/jenkins/minikube-integration/19689-3716/.minikube CaCertPath:/home/jenkins/minikube-integration/19689-3716/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19689-3716/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19689-3716/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19689-3716/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19689-3716/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19689-3716/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19689-3716/.minikube}
	I0923 10:21:09.826478   11849 ubuntu.go:177] setting up certificates
	I0923 10:21:09.826489   11849 provision.go:84] configureAuth start
	I0923 10:21:09.826549   11849 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" addons-071702
	I0923 10:21:09.841763   11849 provision.go:143] copyHostCerts
	I0923 10:21:09.841825   11849 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19689-3716/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19689-3716/.minikube/ca.pem (1082 bytes)
	I0923 10:21:09.841935   11849 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19689-3716/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19689-3716/.minikube/cert.pem (1123 bytes)
	I0923 10:21:09.841991   11849 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19689-3716/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19689-3716/.minikube/key.pem (1675 bytes)
	I0923 10:21:09.842044   11849 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19689-3716/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19689-3716/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19689-3716/.minikube/certs/ca-key.pem org=jenkins.addons-071702 san=[127.0.0.1 192.168.49.2 addons-071702 localhost minikube]
	I0923 10:21:10.034874   11849 provision.go:177] copyRemoteCerts
	I0923 10:21:10.034933   11849 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0923 10:21:10.034967   11849 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-071702
	I0923 10:21:10.051434   11849 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/19689-3716/.minikube/machines/addons-071702/id_rsa Username:docker}
	I0923 10:21:10.142459   11849 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19689-3716/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0923 10:21:10.162896   11849 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19689-3716/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I0923 10:21:10.183128   11849 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19689-3716/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0923 10:21:10.203345   11849 provision.go:87] duration metric: took 376.846086ms to configureAuth
	I0923 10:21:10.203367   11849 ubuntu.go:193] setting minikube options for container-runtime
	I0923 10:21:10.203514   11849 config.go:182] Loaded profile config "addons-071702": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I0923 10:21:10.203558   11849 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-071702
	I0923 10:21:10.219624   11849 main.go:141] libmachine: Using SSH client type: native
	I0923 10:21:10.219809   11849 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 127.0.0.1 32768 <nil> <nil>}
	I0923 10:21:10.219825   11849 main.go:141] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I0923 10:21:10.346611   11849 main.go:141] libmachine: SSH cmd err, output: <nil>: overlay
	
	I0923 10:21:10.346637   11849 ubuntu.go:71] root file system type: overlay
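
The `df --output=fstype / | tail -n 1` probe above is how the provisioner learns that the kic container's root filesystem is overlay, which drives the docker storage defaults written next. The same check can be made without shelling out, via statfs(2); a Linux-only sketch with the overlayfs magic number from linux/magic.h hard-coded:

	// fstype.go - detect an overlay root filesystem via statfs(2) instead of
	// shelling out to `df` (a sketch, Linux-only; minikube itself runs the df
	// pipeline shown above).
	package main

	import (
		"fmt"
		"syscall"
	)

	// OVERLAYFS_SUPER_MAGIC from linux/magic.h.
	const overlayfsSuperMagic = 0x794c7630

	func main() {
		var st syscall.Statfs_t
		if err := syscall.Statfs("/", &st); err != nil {
			panic(err)
		}
		fmt.Println("overlay root:", st.Type == overlayfsSuperMagic)
	}
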
	I0923 10:21:10.346768   11849 provision.go:314] Updating docker unit: /lib/systemd/system/docker.service ...
	I0923 10:21:10.346841   11849 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-071702
	I0923 10:21:10.363783   11849 main.go:141] libmachine: Using SSH client type: native
	I0923 10:21:10.363956   11849 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 127.0.0.1 32768 <nil> <nil>}
	I0923 10:21:10.364013   11849 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %s "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	BindsTo=containerd.service
	After=network-online.target firewalld.service containerd.service
	Wants=network-online.target
	Requires=docker.socket
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I0923 10:21:10.500312   11849 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	BindsTo=containerd.service
	After=network-online.target firewalld.service containerd.service
	Wants=network-online.target
	Requires=docker.socket
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
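
The unit echoed back above leans on a systemd idiom worth noting: for anything but Type=oneshot, a bare `ExecStart=` clears whatever command was inherited from the base unit, and the next `ExecStart=` line installs the replacement; without the clearing line systemd rejects the service, as the comment block inside the unit itself warns. A sketch of rendering such a unit from a template (the template text and field names here are illustrative, not minikube's):

	// unitgen.go - render a docker.service override using the ExecStart-clearing
	// idiom (a sketch; field names and template are the editor's assumptions).
	package main

	import (
		"os"
		"text/template"
	)

	const unit = `[Service]
	Type=notify
	Restart=on-failure
	# An empty ExecStart= clears the command inherited from the base unit;
	# otherwise systemd rejects the service for having two ExecStart= lines.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H unix:///var/run/docker.sock{{range .ExtraArgs}} {{.}}{{end}}
	`

	type params struct{ ExtraArgs []string }

	func main() {
		t := template.Must(template.New("unit").Parse(unit))
		// Write to stdout rather than /lib/systemd/system so the sketch is safe to run.
		t.Execute(os.Stdout, params{ExtraArgs: []string{
			"--default-ulimit=nofile=1048576:1048576",
			"--insecure-registry", "10.96.0.0/12",
		}})
	}
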
	I0923 10:21:10.500413   11849 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-071702
	I0923 10:21:10.516123   11849 main.go:141] libmachine: Using SSH client type: native
	I0923 10:21:10.516286   11849 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 127.0.0.1 32768 <nil> <nil>}
	I0923 10:21:10.516304   11849 main.go:141] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I0923 10:21:11.173964   11849 main.go:141] libmachine: SSH cmd err, output: <nil>: --- /lib/systemd/system/docker.service	2024-09-19 14:24:32.000000000 +0000
	+++ /lib/systemd/system/docker.service.new	2024-09-23 10:21:10.496883873 +0000
	@@ -1,46 +1,49 @@
	 [Unit]
	 Description=Docker Application Container Engine
	 Documentation=https://docs.docker.com
	-After=network-online.target docker.socket firewalld.service containerd.service time-set.target
	-Wants=network-online.target containerd.service
	+BindsTo=containerd.service
	+After=network-online.target firewalld.service containerd.service
	+Wants=network-online.target
	 Requires=docker.socket
	+StartLimitBurst=3
	+StartLimitIntervalSec=60
	 
	 [Service]
	 Type=notify
	-# the default is not to use systemd for cgroups because the delegate issues still
	-# exists and systemd currently does not support the cgroup feature set required
	-# for containers run by docker
	-ExecStart=/usr/bin/dockerd -H fd:// --containerd=/run/containerd/containerd.sock
	-ExecReload=/bin/kill -s HUP $MAINPID
	-TimeoutStartSec=0
	-RestartSec=2
	-Restart=always
	+Restart=on-failure
	 
	-# Note that StartLimit* options were moved from "Service" to "Unit" in systemd 229.
	-# Both the old, and new location are accepted by systemd 229 and up, so using the old location
	-# to make them work for either version of systemd.
	-StartLimitBurst=3
	 
	-# Note that StartLimitInterval was renamed to StartLimitIntervalSec in systemd 230.
	-# Both the old, and new name are accepted by systemd 230 and up, so using the old name to make
	-# this option work for either version of systemd.
	-StartLimitInterval=60s
	+
	+# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	+# The base configuration already specifies an 'ExecStart=...' command. The first directive
	+# here is to clear out that command inherited from the base configuration. Without this,
	+# the command from the base configuration and the command specified here are treated as
	+# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	+# will catch this invalid input and refuse to start the service with an error like:
	+#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	+
	+# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	+# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	+ExecStart=
	+ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	+ExecReload=/bin/kill -s HUP $MAINPID
	 
	 # Having non-zero Limit*s causes performance problems due to accounting overhead
	 # in the kernel. We recommend using cgroups to do container-local accounting.
	+LimitNOFILE=infinity
	 LimitNPROC=infinity
	 LimitCORE=infinity
	 
	-# Comment TasksMax if your systemd version does not support it.
	-# Only systemd 226 and above support this option.
	+# Uncomment TasksMax if your systemd version supports it.
	+# Only systemd 226 and above support this version.
	 TasksMax=infinity
	+TimeoutStartSec=0
	 
	 # set delegate yes so that systemd does not reset the cgroups of docker containers
	 Delegate=yes
	 
	 # kill only the docker process, not all processes in the cgroup
	 KillMode=process
	-OOMScoreAdjust=-500
	 
	 [Install]
	 WantedBy=multi-user.target
	Synchronizing state of docker.service with SysV service script with /lib/systemd/systemd-sysv-install.
	Executing: /lib/systemd/systemd-sysv-install enable docker
	
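
The `sudo diff -u old new || { mv ...; systemctl ... restart docker; }` one-liner whose output appears above is an idempotence guard: the daemon is only replaced and restarted when the freshly rendered unit actually differs from the installed one (here it did, hence the diff output and the restart). The compare-then-swap step in Go, with the restart left as a stub so the sketch stays side-effect-free:

	// swapifchanged.go - install a new config file only when its contents differ
	// from the old one, and report whether a restart would be needed (a sketch
	// of the diff-then-swap idiom above; the restart itself is stubbed out).
	package main

	import (
		"bytes"
		"fmt"
		"os"
	)

	// swapIfChanged replaces dst with src only when their contents differ and
	// reports whether it did so. A missing dst counts as "changed".
	func swapIfChanged(dst, src string) (bool, error) {
		oldData, err := os.ReadFile(dst)
		if err != nil && !os.IsNotExist(err) {
			return false, err
		}
		newData, err := os.ReadFile(src)
		if err != nil {
			return false, err
		}
		if bytes.Equal(oldData, newData) {
			return false, nil // nothing to do; skip the restart
		}
		return true, os.Rename(src, dst)
	}

	func main() {
		changed, err := swapIfChanged("docker.service", "docker.service.new")
		if err != nil {
			fmt.Fprintln(os.Stderr, err)
			return
		}
		if changed {
			fmt.Println("unit replaced; would run daemon-reload + restart here")
		}
	}
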
	I0923 10:21:11.173996   11849 machine.go:96] duration metric: took 1.89359565s to provisionDockerMachine
	I0923 10:21:11.174006   11849 client.go:171] duration metric: took 13.905248575s to LocalClient.Create
	I0923 10:21:11.174025   11849 start.go:167] duration metric: took 13.905305599s to libmachine.API.Create "addons-071702"
	I0923 10:21:11.174034   11849 start.go:293] postStartSetup for "addons-071702" (driver="docker")
	I0923 10:21:11.174049   11849 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0923 10:21:11.174099   11849 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0923 10:21:11.174131   11849 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-071702
	I0923 10:21:11.190914   11849 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/19689-3716/.minikube/machines/addons-071702/id_rsa Username:docker}
	I0923 10:21:11.282838   11849 ssh_runner.go:195] Run: cat /etc/os-release
	I0923 10:21:11.285665   11849 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I0923 10:21:11.285692   11849 main.go:141] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
	I0923 10:21:11.285700   11849 main.go:141] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
	I0923 10:21:11.285707   11849 info.go:137] Remote host: Ubuntu 22.04.5 LTS
	I0923 10:21:11.285717   11849 filesync.go:126] Scanning /home/jenkins/minikube-integration/19689-3716/.minikube/addons for local assets ...
	I0923 10:21:11.285777   11849 filesync.go:126] Scanning /home/jenkins/minikube-integration/19689-3716/.minikube/files for local assets ...
	I0923 10:21:11.285807   11849 start.go:296] duration metric: took 111.766416ms for postStartSetup
	I0923 10:21:11.286130   11849 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" addons-071702
	I0923 10:21:11.301750   11849 profile.go:143] Saving config to /home/jenkins/minikube-integration/19689-3716/.minikube/profiles/addons-071702/config.json ...
	I0923 10:21:11.302043   11849 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0923 10:21:11.302094   11849 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-071702
	I0923 10:21:11.317532   11849 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/19689-3716/.minikube/machines/addons-071702/id_rsa Username:docker}
	I0923 10:21:11.403025   11849 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0923 10:21:11.406669   11849 start.go:128] duration metric: took 14.140131943s to createHost
	I0923 10:21:11.406702   11849 start.go:83] releasing machines lock for "addons-071702", held for 14.140329172s
	I0923 10:21:11.406766   11849 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" addons-071702
	I0923 10:21:11.422107   11849 ssh_runner.go:195] Run: cat /version.json
	I0923 10:21:11.422144   11849 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-071702
	I0923 10:21:11.422202   11849 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0923 10:21:11.422257   11849 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-071702
	I0923 10:21:11.439064   11849 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/19689-3716/.minikube/machines/addons-071702/id_rsa Username:docker}
	I0923 10:21:11.440405   11849 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/19689-3716/.minikube/machines/addons-071702/id_rsa Username:docker}
	I0923 10:21:11.526000   11849 ssh_runner.go:195] Run: systemctl --version
	I0923 10:21:11.595893   11849 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I0923 10:21:11.599940   11849 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *loopback.conf* -not -name *.mk_disabled -exec sh -c "grep -q loopback {} && ( grep -q name {} || sudo sed -i '/"type": "loopback"/i \ \ \ \ "name": "loopback",' {} ) && sudo sed -i 's|"cniVersion": ".*"|"cniVersion": "1.0.0"|g' {}" ;
	I0923 10:21:11.620610   11849 cni.go:230] loopback cni configuration patched: "/etc/cni/net.d/*loopback.conf*" found
	I0923 10:21:11.620684   11849 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0923 10:21:11.643988   11849 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist, /etc/cni/net.d/100-crio-bridge.conf] bridge cni config(s)
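
Disabling the conflicting bridge/podman CNI configs is done by renaming them with a `.mk_disabled` suffix (the find/-exec mv pass above), which stops the runtime from loading them while keeping them recoverable. Roughly the same pass in Go, with the directory parameterized so it can be tried against a scratch copy instead of /etc/cni/net.d:

	// cnidisable.go - rename bridge/podman CNI configs out of the way, mirroring
	// the find/-exec mv pass above (a sketch, not minikube's code).
	package main

	import (
		"fmt"
		"os"
		"path/filepath"
		"strings"
	)

	func disableCNIConfs(dir string) error {
		for _, pat := range []string{"*bridge*", "*podman*"} {
			matches, err := filepath.Glob(filepath.Join(dir, pat))
			if err != nil {
				return err
			}
			for _, m := range matches {
				if strings.HasSuffix(m, ".mk_disabled") {
					continue // already disabled
				}
				if err := os.Rename(m, m+".mk_disabled"); err != nil {
					return err
				}
				fmt.Println("disabled", m)
			}
		}
		return nil
	}

	func main() {
		if err := disableCNIConfs("net.d"); err != nil {
			fmt.Fprintln(os.Stderr, err)
		}
	}
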
	I0923 10:21:11.644015   11849 start.go:495] detecting cgroup driver to use...
	I0923 10:21:11.644048   11849 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I0923 10:21:11.644153   11849 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0923 10:21:11.657421   11849 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.10"|' /etc/containerd/config.toml"
	I0923 10:21:11.665590   11849 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0923 10:21:11.673416   11849 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I0923 10:21:11.673466   11849 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I0923 10:21:11.681235   11849 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0923 10:21:11.688809   11849 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0923 10:21:11.696405   11849 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0923 10:21:11.703979   11849 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0923 10:21:11.711187   11849 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I0923 10:21:11.718827   11849 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I0923 10:21:11.726625   11849 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
	I0923 10:21:11.734392   11849 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0923 10:21:11.741107   11849 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0923 10:21:11.741160   11849 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0923 10:21:11.752971   11849 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
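
The failed sysctl a few lines up is expected on a fresh kic container: /proc/sys/net/bridge/ only exists once the br_netfilter module is loaded, so the provisioner treats the status-255 exit as recoverable ("might be okay"), modprobes the module, and then enables IPv4 forwarding directly through procfs. The same probe-then-load fallback in Go (needs root, exactly like the shell version):

	// netfilter.go - ensure the bridge-netfilter sysctl exists, loading
	// br_netfilter when it does not, then enable ip_forward (a sketch of the
	// fallback above; requires root).
	package main

	import (
		"fmt"
		"os"
		"os/exec"
	)

	func main() {
		const key = "/proc/sys/net/bridge/bridge-nf-call-iptables"
		if _, err := os.Stat(key); os.IsNotExist(err) {
			// Module not loaded yet; this is the "might be okay" case in the log.
			if out, err := exec.Command("modprobe", "br_netfilter").CombinedOutput(); err != nil {
				fmt.Fprintf(os.Stderr, "modprobe: %v: %s", err, out)
				return
			}
		}
		// Equivalent of: echo 1 > /proc/sys/net/ipv4/ip_forward
		if err := os.WriteFile("/proc/sys/net/ipv4/ip_forward", []byte("1"), 0644); err != nil {
			fmt.Fprintln(os.Stderr, err)
		}
	}
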
	I0923 10:21:11.760136   11849 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0923 10:21:11.834146   11849 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I0923 10:21:11.900360   11849 start.go:495] detecting cgroup driver to use...
	I0923 10:21:11.900407   11849 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I0923 10:21:11.900454   11849 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I0923 10:21:11.910698   11849 cruntime.go:279] skipping containerd shutdown because we are bound to it
	I0923 10:21:11.910756   11849 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0923 10:21:11.921463   11849 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0923 10:21:11.935676   11849 ssh_runner.go:195] Run: which cri-dockerd
	I0923 10:21:11.938624   11849 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I0923 10:21:11.946474   11849 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (190 bytes)
	I0923 10:21:11.961963   11849 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I0923 10:21:12.032080   11849 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I0923 10:21:12.112795   11849 docker.go:574] configuring docker to use "cgroupfs" as cgroup driver...
	I0923 10:21:12.112946   11849 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (130 bytes)
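
The 130-byte daemon.json scp'd above is what pins dockerd to the cgroupfs driver; the log records only its size, not its contents. A plausible payload built with encoding/json, assuming the standard exec-opts key (the file minikube actually writes may carry more fields):

	// daemonjson.go - build a docker daemon.json that pins the cgroup driver
	// (a sketch; the real file's contents are not shown in the log, so these
	// fields are assumptions).
	package main

	import (
		"encoding/json"
		"fmt"
	)

	func main() {
		cfg := map[string]interface{}{
			"exec-opts":  []string{"native.cgroupdriver=cgroupfs"},
			"log-driver": "json-file",
		}
		out, _ := json.MarshalIndent(cfg, "", "  ")
		fmt.Println(string(out)) // would be written to /etc/docker/daemon.json
	}
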
	I0923 10:21:12.128807   11849 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0923 10:21:12.218962   11849 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0923 10:21:12.459347   11849 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.socket
	I0923 10:21:12.469507   11849 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0923 10:21:12.479146   11849 ssh_runner.go:195] Run: sudo systemctl unmask cri-docker.socket
	I0923 10:21:12.554718   11849 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I0923 10:21:12.626497   11849 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0923 10:21:12.698106   11849 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.socket
	I0923 10:21:12.709488   11849 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0923 10:21:12.718372   11849 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0923 10:21:12.795130   11849 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.service
	I0923 10:21:12.850109   11849 start.go:542] Will wait 60s for socket path /var/run/cri-dockerd.sock
	I0923 10:21:12.850200   11849 ssh_runner.go:195] Run: stat /var/run/cri-dockerd.sock
	I0923 10:21:12.853621   11849 start.go:563] Will wait 60s for crictl version
	I0923 10:21:12.853670   11849 ssh_runner.go:195] Run: which crictl
	I0923 10:21:12.856575   11849 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0923 10:21:12.886787   11849 start.go:579] Version:  0.1.0
	RuntimeName:  docker
	RuntimeVersion:  27.3.0
	RuntimeApiVersion:  v1
	I0923 10:21:12.886846   11849 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0923 10:21:12.908439   11849 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0923 10:21:12.931467   11849 out.go:235] * Preparing Kubernetes v1.31.1 on Docker 27.3.0 ...
	I0923 10:21:12.931544   11849 cli_runner.go:164] Run: docker network inspect addons-071702 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0923 10:21:12.948273   11849 ssh_runner.go:195] Run: grep 192.168.49.1	host.minikube.internal$ /etc/hosts
	I0923 10:21:12.951389   11849 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.49.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0923 10:21:12.960618   11849 kubeadm.go:883] updating cluster {Name:addons-071702 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726784731-19672@sha256:7f8c62ddb0100a5b958dd19c5b5478b8c7ef13da9a0a4d6c7d18f43544e0dbed Memory:4000 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:addons-071702 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0923 10:21:12.960721   11849 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime docker
	I0923 10:21:12.960757   11849 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0923 10:21:12.977949   11849 docker.go:685] Got preloaded images: -- stdout --
	registry.k8s.io/kube-apiserver:v1.31.1
	registry.k8s.io/kube-controller-manager:v1.31.1
	registry.k8s.io/kube-scheduler:v1.31.1
	registry.k8s.io/kube-proxy:v1.31.1
	registry.k8s.io/coredns/coredns:v1.11.3
	registry.k8s.io/etcd:3.5.15-0
	registry.k8s.io/pause:3.10
	gcr.io/k8s-minikube/storage-provisioner:v5
	
	-- /stdout --
	I0923 10:21:12.977970   11849 docker.go:615] Images already preloaded, skipping extraction
	I0923 10:21:12.978030   11849 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0923 10:21:12.995809   11849 docker.go:685] Got preloaded images: -- stdout --
	registry.k8s.io/kube-apiserver:v1.31.1
	registry.k8s.io/kube-controller-manager:v1.31.1
	registry.k8s.io/kube-scheduler:v1.31.1
	registry.k8s.io/kube-proxy:v1.31.1
	registry.k8s.io/coredns/coredns:v1.11.3
	registry.k8s.io/etcd:3.5.15-0
	registry.k8s.io/pause:3.10
	gcr.io/k8s-minikube/storage-provisioner:v5
	
	-- /stdout --
	I0923 10:21:12.995833   11849 cache_images.go:84] Images are preloaded, skipping loading
	I0923 10:21:12.995843   11849 kubeadm.go:934] updating node { 192.168.49.2 8443 v1.31.1 docker true true} ...
	I0923 10:21:12.995929   11849 kubeadm.go:946] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=addons-071702 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.49.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.1 ClusterName:addons-071702 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0923 10:21:12.995978   11849 ssh_runner.go:195] Run: docker info --format {{.CgroupDriver}}
	I0923 10:21:13.037067   11849 cni.go:84] Creating CNI manager for ""
	I0923 10:21:13.037100   11849 cni.go:158] "docker" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0923 10:21:13.037114   11849 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0923 10:21:13.037135   11849 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.49.2 APIServerPort:8443 KubernetesVersion:v1.31.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:addons-071702 NodeName:addons-071702 DNSDomain:cluster.local CRISocket:/var/run/cri-dockerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.49.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.49.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/cri-dockerd.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0923 10:21:13.037256   11849 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.49.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/cri-dockerd.sock
	  name: "addons-071702"
	  kubeletExtraArgs:
	    node-ip: 192.168.49.2
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.49.2"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.31.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/cri-dockerd.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
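
The generated kubeadm config above is one file holding four YAML documents (InitConfiguration, ClusterConfiguration, KubeletConfiguration, KubeProxyConfiguration) separated by `---`; kubeadm consumes them all from the single --config path. A sketch that walks such a stream and prints each document's apiVersion and kind, using gopkg.in/yaml.v3:

	// kinds.go - enumerate the kinds in a multi-document kubeadm config stream
	// (a sketch; reads the file path given on the command line).
	package main

	import (
		"fmt"
		"io"
		"os"

		"gopkg.in/yaml.v3"
	)

	func main() {
		if len(os.Args) < 2 {
			fmt.Fprintln(os.Stderr, "usage: kinds <kubeadm.yaml>")
			return
		}
		f, err := os.Open(os.Args[1])
		if err != nil {
			panic(err)
		}
		defer f.Close()
		dec := yaml.NewDecoder(f)
		for {
			var doc struct {
				APIVersion string `yaml:"apiVersion"`
				Kind       string `yaml:"kind"`
			}
			if err := dec.Decode(&doc); err == io.EOF {
				break
			} else if err != nil {
				panic(err)
			}
			fmt.Println(doc.APIVersion, doc.Kind)
		}
	}
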
	I0923 10:21:13.037304   11849 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.1
	I0923 10:21:13.044861   11849 binaries.go:44] Found k8s binaries, skipping transfer
	I0923 10:21:13.044930   11849 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0923 10:21:13.052089   11849 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (312 bytes)
	I0923 10:21:13.066664   11849 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0923 10:21:13.081453   11849 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2155 bytes)
	I0923 10:21:13.096677   11849 ssh_runner.go:195] Run: grep 192.168.49.2	control-plane.minikube.internal$ /etc/hosts
	I0923 10:21:13.099620   11849 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.49.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0923 10:21:13.108726   11849 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0923 10:21:13.179081   11849 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0923 10:21:13.190587   11849 certs.go:68] Setting up /home/jenkins/minikube-integration/19689-3716/.minikube/profiles/addons-071702 for IP: 192.168.49.2
	I0923 10:21:13.190604   11849 certs.go:194] generating shared ca certs ...
	I0923 10:21:13.190621   11849 certs.go:226] acquiring lock for ca certs: {Name:mk6ee9a202179db9ed63e6a3182344c97ea3d5d2 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0923 10:21:13.190755   11849 certs.go:240] generating "minikubeCA" ca cert: /home/jenkins/minikube-integration/19689-3716/.minikube/ca.key
	I0923 10:21:13.281002   11849 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19689-3716/.minikube/ca.crt ...
	I0923 10:21:13.281031   11849 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19689-3716/.minikube/ca.crt: {Name:mk2fc5f01f85cfa27ad99268606091a3add4e34b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0923 10:21:13.281185   11849 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19689-3716/.minikube/ca.key ...
	I0923 10:21:13.281196   11849 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19689-3716/.minikube/ca.key: {Name:mk43bd82b69f9d9d83810a95125b2d656f55fea1 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0923 10:21:13.281267   11849 certs.go:240] generating "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19689-3716/.minikube/proxy-client-ca.key
	I0923 10:21:13.369230   11849 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19689-3716/.minikube/proxy-client-ca.crt ...
	I0923 10:21:13.369258   11849 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19689-3716/.minikube/proxy-client-ca.crt: {Name:mk874ffb9fa0ee62ad3f9f8555749889b96c2bfb Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0923 10:21:13.369406   11849 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19689-3716/.minikube/proxy-client-ca.key ...
	I0923 10:21:13.369417   11849 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19689-3716/.minikube/proxy-client-ca.key: {Name:mk6e640e370418bc60001436b4c9990d7154f49d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0923 10:21:13.369485   11849 certs.go:256] generating profile certs ...
	I0923 10:21:13.369547   11849 certs.go:363] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/19689-3716/.minikube/profiles/addons-071702/client.key
	I0923 10:21:13.369558   11849 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19689-3716/.minikube/profiles/addons-071702/client.crt with IP's: []
	I0923 10:21:13.513258   11849 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19689-3716/.minikube/profiles/addons-071702/client.crt ...
	I0923 10:21:13.513286   11849 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19689-3716/.minikube/profiles/addons-071702/client.crt: {Name:mk4b1aa4ced3db79080162b70201af0fb22562d8 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0923 10:21:13.513440   11849 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19689-3716/.minikube/profiles/addons-071702/client.key ...
	I0923 10:21:13.513450   11849 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19689-3716/.minikube/profiles/addons-071702/client.key: {Name:mkcb27a071b7719cfb248c04e418edd1d24ef2a7 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0923 10:21:13.513513   11849 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/19689-3716/.minikube/profiles/addons-071702/apiserver.key.a45bc325
	I0923 10:21:13.513529   11849 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19689-3716/.minikube/profiles/addons-071702/apiserver.crt.a45bc325 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.49.2]
	I0923 10:21:13.697552   11849 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19689-3716/.minikube/profiles/addons-071702/apiserver.crt.a45bc325 ...
	I0923 10:21:13.697579   11849 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19689-3716/.minikube/profiles/addons-071702/apiserver.crt.a45bc325: {Name:mkcca7dc77b9cc88c698c1d8e272691703ff1324 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0923 10:21:13.697717   11849 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19689-3716/.minikube/profiles/addons-071702/apiserver.key.a45bc325 ...
	I0923 10:21:13.697729   11849 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19689-3716/.minikube/profiles/addons-071702/apiserver.key.a45bc325: {Name:mkb8b4732547ee15186be8455d9101c5a1a84134 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0923 10:21:13.697792   11849 certs.go:381] copying /home/jenkins/minikube-integration/19689-3716/.minikube/profiles/addons-071702/apiserver.crt.a45bc325 -> /home/jenkins/minikube-integration/19689-3716/.minikube/profiles/addons-071702/apiserver.crt
	I0923 10:21:13.697862   11849 certs.go:385] copying /home/jenkins/minikube-integration/19689-3716/.minikube/profiles/addons-071702/apiserver.key.a45bc325 -> /home/jenkins/minikube-integration/19689-3716/.minikube/profiles/addons-071702/apiserver.key
	I0923 10:21:13.697905   11849 certs.go:363] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/19689-3716/.minikube/profiles/addons-071702/proxy-client.key
	I0923 10:21:13.697920   11849 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19689-3716/.minikube/profiles/addons-071702/proxy-client.crt with IP's: []
	I0923 10:21:13.869119   11849 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19689-3716/.minikube/profiles/addons-071702/proxy-client.crt ...
	I0923 10:21:13.869146   11849 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19689-3716/.minikube/profiles/addons-071702/proxy-client.crt: {Name:mk0a25112ffa3e7adb247c14f82a0442ecae8d86 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0923 10:21:13.869299   11849 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19689-3716/.minikube/profiles/addons-071702/proxy-client.key ...
	I0923 10:21:13.869313   11849 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19689-3716/.minikube/profiles/addons-071702/proxy-client.key: {Name:mkd2a6d29bddf8d397e07273635613eea89116a7 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
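
The certificate dance above is minikube acting as its own CA: the shared minikubeCA key signs a profile cert whose subject alternative names are exactly the IPs in the log line (service VIP 10.96.0.1, loopback, 10.0.0.1, and the node IP 192.168.49.2). The core of that flow with crypto/x509, made self-contained by generating a throwaway CA in-process (key types and lifetimes here are the editor's shortcuts and differ from minikube's crypto.go):

	// signsan.go - generate a CA and a serving cert with IP SANs, mirroring the
	// apiserver cert minted above (a sketch, not minikube's implementation).
	package main

	import (
		"crypto/ecdsa"
		"crypto/elliptic"
		"crypto/rand"
		"crypto/x509"
		"crypto/x509/pkix"
		"fmt"
		"math/big"
		"net"
		"time"
	)

	func must[T any](v T, err error) T {
		if err != nil {
			panic(err)
		}
		return v
	}

	func main() {
		// Throwaway CA standing in for minikubeCA.
		caKey := must(ecdsa.GenerateKey(elliptic.P256(), rand.Reader))
		caTmpl := &x509.Certificate{
			SerialNumber:          big.NewInt(1),
			Subject:               pkix.Name{CommonName: "minikubeCA"},
			NotBefore:             time.Now(),
			NotAfter:              time.Now().AddDate(10, 0, 0),
			IsCA:                  true,
			KeyUsage:              x509.KeyUsageCertSign,
			BasicConstraintsValid: true,
		}
		caCert := must(x509.ParseCertificate(must(x509.CreateCertificate(rand.Reader, caTmpl, caTmpl, &caKey.PublicKey, caKey))))

		// Profile cert with the SANs from the log line above.
		leafKey := must(ecdsa.GenerateKey(elliptic.P256(), rand.Reader))
		leafTmpl := &x509.Certificate{
			SerialNumber: big.NewInt(2),
			Subject:      pkix.Name{CommonName: "minikube"},
			NotBefore:    time.Now(),
			NotAfter:     time.Now().AddDate(3, 0, 0),
			KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
			ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
			IPAddresses: []net.IP{
				net.ParseIP("10.96.0.1"), net.ParseIP("127.0.0.1"),
				net.ParseIP("10.0.0.1"), net.ParseIP("192.168.49.2"),
			},
		}
		der := must(x509.CreateCertificate(rand.Reader, leafTmpl, caCert, &leafKey.PublicKey, caKey))
		fmt.Println("signed apiserver-style cert:", len(der), "DER bytes")
	}
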
	I0923 10:21:13.869489   11849 certs.go:484] found cert: /home/jenkins/minikube-integration/19689-3716/.minikube/certs/ca-key.pem (1675 bytes)
	I0923 10:21:13.869520   11849 certs.go:484] found cert: /home/jenkins/minikube-integration/19689-3716/.minikube/certs/ca.pem (1082 bytes)
	I0923 10:21:13.869544   11849 certs.go:484] found cert: /home/jenkins/minikube-integration/19689-3716/.minikube/certs/cert.pem (1123 bytes)
	I0923 10:21:13.869565   11849 certs.go:484] found cert: /home/jenkins/minikube-integration/19689-3716/.minikube/certs/key.pem (1675 bytes)
	I0923 10:21:13.870082   11849 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19689-3716/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0923 10:21:13.891405   11849 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19689-3716/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0923 10:21:13.910799   11849 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19689-3716/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0923 10:21:13.930070   11849 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19689-3716/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0923 10:21:13.949590   11849 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19689-3716/.minikube/profiles/addons-071702/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1419 bytes)
	I0923 10:21:13.969164   11849 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19689-3716/.minikube/profiles/addons-071702/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0923 10:21:13.988605   11849 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19689-3716/.minikube/profiles/addons-071702/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0923 10:21:14.007985   11849 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19689-3716/.minikube/profiles/addons-071702/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0923 10:21:14.027133   11849 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19689-3716/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0923 10:21:14.046546   11849 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0923 10:21:14.060793   11849 ssh_runner.go:195] Run: openssl version
	I0923 10:21:14.065362   11849 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0923 10:21:14.072919   11849 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0923 10:21:14.075650   11849 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Sep 23 10:21 /usr/share/ca-certificates/minikubeCA.pem
	I0923 10:21:14.075695   11849 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0923 10:21:14.081431   11849 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
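
The openssl/ln pair above is a one-certificate c_rehash: `openssl x509 -hash -noout` prints the subject hash under which OpenSSL looks up trust anchors, and the `<hash>.0` symlink (b5213941.0 here) publishes minikubeCA.pem into /etc/ssl/certs so TLS clients inside the node trust it. The same two steps from Go (run against a scratch directory, not the real cert store):

	// rehash.go - compute a certificate's OpenSSL subject hash and create the
	// <hash>.0 symlink (a sketch of the two commands above).
	package main

	import (
		"fmt"
		"os"
		"os/exec"
		"strings"
	)

	func main() {
		pem := "minikubeCA.pem"
		out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", pem).Output()
		if err != nil {
			fmt.Fprintln(os.Stderr, err)
			return
		}
		hash := strings.TrimSpace(string(out)) // e.g. b5213941
		link := hash + ".0"
		_ = os.Remove(link) // -f behavior: replace an existing link
		if err := os.Symlink(pem, link); err != nil {
			fmt.Fprintln(os.Stderr, err)
			return
		}
		fmt.Println(link, "->", pem)
	}
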
	I0923 10:21:14.088966   11849 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0923 10:21:14.091650   11849 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0923 10:21:14.091690   11849 kubeadm.go:392] StartCluster: {Name:addons-071702 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726784731-19672@sha256:7f8c62ddb0100a5b958dd19c5b5478b8c7ef13da9a0a4d6c7d18f43544e0dbed Memory:4000 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:addons-071702 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0923 10:21:14.091817   11849 ssh_runner.go:195] Run: docker ps --filter status=paused --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I0923 10:21:14.107705   11849 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0923 10:21:14.115156   11849 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0923 10:21:14.122487   11849 kubeadm.go:214] ignoring SystemVerification for kubeadm because of docker driver
	I0923 10:21:14.122531   11849 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0923 10:21:14.129530   11849 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0923 10:21:14.129547   11849 kubeadm.go:157] found existing configuration files:
	
	I0923 10:21:14.129583   11849 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0923 10:21:14.136461   11849 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0923 10:21:14.136496   11849 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0923 10:21:14.143180   11849 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0923 10:21:14.149913   11849 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0923 10:21:14.149959   11849 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0923 10:21:14.156579   11849 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0923 10:21:14.163604   11849 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0923 10:21:14.163642   11849 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0923 10:21:14.170388   11849 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0923 10:21:14.177229   11849 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0923 10:21:14.177267   11849 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0923 10:21:14.184108   11849 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I0923 10:21:14.214323   11849 kubeadm.go:310] W0923 10:21:14.213757    1926 common.go:101] your configuration file uses a deprecated API spec: "kubeadm.k8s.io/v1beta3" (kind: "ClusterConfiguration"). Please use 'kubeadm config migrate --old-config old.yaml --new-config new.yaml', which will write the new, similar spec using a newer API version.
	I0923 10:21:14.214809   11849 kubeadm.go:310] W0923 10:21:14.214356    1926 common.go:101] your configuration file uses a deprecated API spec: "kubeadm.k8s.io/v1beta3" (kind: "InitConfiguration"). Please use 'kubeadm config migrate --old-config old.yaml --new-config new.yaml', which will write the new, similar spec using a newer API version.
	I0923 10:21:14.232987   11849 kubeadm.go:310] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1069-gcp\n", err: exit status 1
	I0923 10:21:14.281745   11849 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0923 10:21:24.003974   11849 kubeadm.go:310] [init] Using Kubernetes version: v1.31.1
	I0923 10:21:24.004048   11849 kubeadm.go:310] [preflight] Running pre-flight checks
	I0923 10:21:24.004151   11849 kubeadm.go:310] [preflight] The system verification failed. Printing the output from the verification:
	I0923 10:21:24.004236   11849 kubeadm.go:310] KERNEL_VERSION: 5.15.0-1069-gcp
	I0923 10:21:24.004298   11849 kubeadm.go:310] OS: Linux
	I0923 10:21:24.004364   11849 kubeadm.go:310] CGROUPS_CPU: enabled
	I0923 10:21:24.004434   11849 kubeadm.go:310] CGROUPS_CPUACCT: enabled
	I0923 10:21:24.004506   11849 kubeadm.go:310] CGROUPS_CPUSET: enabled
	I0923 10:21:24.004577   11849 kubeadm.go:310] CGROUPS_DEVICES: enabled
	I0923 10:21:24.004628   11849 kubeadm.go:310] CGROUPS_FREEZER: enabled
	I0923 10:21:24.004674   11849 kubeadm.go:310] CGROUPS_MEMORY: enabled
	I0923 10:21:24.004715   11849 kubeadm.go:310] CGROUPS_PIDS: enabled
	I0923 10:21:24.004756   11849 kubeadm.go:310] CGROUPS_HUGETLB: enabled
	I0923 10:21:24.004795   11849 kubeadm.go:310] CGROUPS_BLKIO: enabled
	I0923 10:21:24.004860   11849 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0923 10:21:24.004953   11849 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0923 10:21:24.005049   11849 kubeadm.go:310] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I0923 10:21:24.005121   11849 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0923 10:21:24.006841   11849 out.go:235]   - Generating certificates and keys ...
	I0923 10:21:24.006934   11849 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0923 10:21:24.007019   11849 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0923 10:21:24.007108   11849 kubeadm.go:310] [certs] Generating "apiserver-kubelet-client" certificate and key
	I0923 10:21:24.007195   11849 kubeadm.go:310] [certs] Generating "front-proxy-ca" certificate and key
	I0923 10:21:24.007289   11849 kubeadm.go:310] [certs] Generating "front-proxy-client" certificate and key
	I0923 10:21:24.007354   11849 kubeadm.go:310] [certs] Generating "etcd/ca" certificate and key
	I0923 10:21:24.007434   11849 kubeadm.go:310] [certs] Generating "etcd/server" certificate and key
	I0923 10:21:24.007542   11849 kubeadm.go:310] [certs] etcd/server serving cert is signed for DNS names [addons-071702 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
	I0923 10:21:24.007591   11849 kubeadm.go:310] [certs] Generating "etcd/peer" certificate and key
	I0923 10:21:24.007709   11849 kubeadm.go:310] [certs] etcd/peer serving cert is signed for DNS names [addons-071702 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
	I0923 10:21:24.007775   11849 kubeadm.go:310] [certs] Generating "etcd/healthcheck-client" certificate and key
	I0923 10:21:24.007832   11849 kubeadm.go:310] [certs] Generating "apiserver-etcd-client" certificate and key
	I0923 10:21:24.007870   11849 kubeadm.go:310] [certs] Generating "sa" key and public key
	I0923 10:21:24.007917   11849 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0923 10:21:24.007960   11849 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0923 10:21:24.008019   11849 kubeadm.go:310] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I0923 10:21:24.008069   11849 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0923 10:21:24.008126   11849 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0923 10:21:24.008172   11849 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0923 10:21:24.008255   11849 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0923 10:21:24.008344   11849 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0923 10:21:24.009630   11849 out.go:235]   - Booting up control plane ...
	I0923 10:21:24.009707   11849 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0923 10:21:24.009785   11849 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0923 10:21:24.009859   11849 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0923 10:21:24.009957   11849 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0923 10:21:24.010042   11849 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0923 10:21:24.010076   11849 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0923 10:21:24.010254   11849 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I0923 10:21:24.010348   11849 kubeadm.go:310] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I0923 10:21:24.010398   11849 kubeadm.go:310] [kubelet-check] The kubelet is healthy after 1.000889784s
	I0923 10:21:24.010461   11849 kubeadm.go:310] [api-check] Waiting for a healthy API server. This can take up to 4m0s
	I0923 10:21:24.010518   11849 kubeadm.go:310] [api-check] The API server is healthy after 4.501155743s
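Both health gates above can be probed by hand with the URLs from the log; a sketch assuming the default RBAC that leaves /healthz readable anonymously:

	# kubelet's local healthz, polled by [kubelet-check]
	curl -fsS http://127.0.0.1:10248/healthz; echo
	# API server healthz, polled by [api-check]; -k because the serving cert is
	# cluster-signed. Anonymous /healthz access is a default-RBAC assumption.
	curl -fsk https://192.168.49.2:8443/healthz; echo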
	I0923 10:21:24.010604   11849 kubeadm.go:310] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0923 10:21:24.010736   11849 kubeadm.go:310] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0923 10:21:24.010804   11849 kubeadm.go:310] [upload-certs] Skipping phase. Please see --upload-certs
	I0923 10:21:24.011016   11849 kubeadm.go:310] [mark-control-plane] Marking the node addons-071702 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0923 10:21:24.011066   11849 kubeadm.go:310] [bootstrap-token] Using token: q08hz0.03uhaekc526vsuab
	I0923 10:21:24.012416   11849 out.go:235]   - Configuring RBAC rules ...
	I0923 10:21:24.012551   11849 kubeadm.go:310] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0923 10:21:24.012680   11849 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0923 10:21:24.012842   11849 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0923 10:21:24.013030   11849 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0923 10:21:24.013197   11849 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0923 10:21:24.013321   11849 kubeadm.go:310] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0923 10:21:24.013467   11849 kubeadm.go:310] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0923 10:21:24.013527   11849 kubeadm.go:310] [addons] Applied essential addon: CoreDNS
	I0923 10:21:24.013588   11849 kubeadm.go:310] [addons] Applied essential addon: kube-proxy
	I0923 10:21:24.013597   11849 kubeadm.go:310] 
	I0923 10:21:24.013675   11849 kubeadm.go:310] Your Kubernetes control-plane has initialized successfully!
	I0923 10:21:24.013685   11849 kubeadm.go:310] 
	I0923 10:21:24.013754   11849 kubeadm.go:310] To start using your cluster, you need to run the following as a regular user:
	I0923 10:21:24.013762   11849 kubeadm.go:310] 
	I0923 10:21:24.013798   11849 kubeadm.go:310]   mkdir -p $HOME/.kube
	I0923 10:21:24.013893   11849 kubeadm.go:310]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0923 10:21:24.013968   11849 kubeadm.go:310]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0923 10:21:24.013977   11849 kubeadm.go:310] 
	I0923 10:21:24.014048   11849 kubeadm.go:310] Alternatively, if you are the root user, you can run:
	I0923 10:21:24.014063   11849 kubeadm.go:310] 
	I0923 10:21:24.014146   11849 kubeadm.go:310]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0923 10:21:24.014165   11849 kubeadm.go:310] 
	I0923 10:21:24.014246   11849 kubeadm.go:310] You should now deploy a pod network to the cluster.
	I0923 10:21:24.014352   11849 kubeadm.go:310] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0923 10:21:24.014450   11849 kubeadm.go:310]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0923 10:21:24.014460   11849 kubeadm.go:310] 
	I0923 10:21:24.014577   11849 kubeadm.go:310] You can now join any number of control-plane nodes by copying certificate authorities
	I0923 10:21:24.014741   11849 kubeadm.go:310] and service account keys on each node and then running the following as root:
	I0923 10:21:24.014753   11849 kubeadm.go:310] 
	I0923 10:21:24.014870   11849 kubeadm.go:310]   kubeadm join control-plane.minikube.internal:8443 --token q08hz0.03uhaekc526vsuab \
	I0923 10:21:24.015017   11849 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:0d03b657b152f798f7054c01bcc82223b1aa4fcdc63266f7dbd1161e47a64a65 \
	I0923 10:21:24.015058   11849 kubeadm.go:310] 	--control-plane 
	I0923 10:21:24.015066   11849 kubeadm.go:310] 
	I0923 10:21:24.015145   11849 kubeadm.go:310] Then you can join any number of worker nodes by running the following on each as root:
	I0923 10:21:24.015152   11849 kubeadm.go:310] 
	I0923 10:21:24.015218   11849 kubeadm.go:310] kubeadm join control-plane.minikube.internal:8443 --token q08hz0.03uhaekc526vsuab \
	I0923 10:21:24.015324   11849 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:0d03b657b152f798f7054c01bcc82223b1aa4fcdc63266f7dbd1161e47a64a65 
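Bootstrap tokens such as the one above expire (24h by default), so the printed join command has a shelf life. A minimal sketch for minting a fresh one on the control plane:

	# Prints a complete "kubeadm join ... --token ... --discovery-token-ca-cert-hash ..."
	kubeadm token create --print-join-command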
	I0923 10:21:24.015338   11849 cni.go:84] Creating CNI manager for ""
	I0923 10:21:24.015350   11849 cni.go:158] "docker" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0923 10:21:24.016872   11849 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0923 10:21:24.017957   11849 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0923 10:21:24.025767   11849 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
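The 496 bytes scp'd above are minikube's generated bridge conflist. A representative sketch of that shape only; every field value below, the pod subnet especially, is illustrative rather than the exact payload:

	# Hedged example of a bridge CNI conflist like the one written above
	sudo tee /etc/cni/net.d/1-k8s.conflist >/dev/null <<'EOF'
	{
	  "cniVersion": "0.3.1",
	  "name": "bridge",
	  "plugins": [
	    {
	      "type": "bridge",
	      "bridge": "bridge",
	      "isDefaultGateway": true,
	      "ipMasq": true,
	      "hairpinMode": true,
	      "ipam": { "type": "host-local", "subnet": "10.244.0.0/16" }
	    },
	    { "type": "portmap", "capabilities": { "portMappings": true } }
	  ]
	}
	EOF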
	I0923 10:21:24.040799   11849 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0923 10:21:24.040863   11849 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0923 10:21:24.040880   11849 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes addons-071702 minikube.k8s.io/updated_at=2024_09_23T10_21_24_0700 minikube.k8s.io/version=v1.34.0 minikube.k8s.io/commit=f69bf2f8ed9442c9c01edbe27466c5398c68b986 minikube.k8s.io/name=addons-071702 minikube.k8s.io/primary=true
	I0923 10:21:24.047434   11849 ops.go:34] apiserver oom_adj: -16
	I0923 10:21:24.121098   11849 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0923 10:21:24.621360   11849 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0923 10:21:25.121323   11849 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0923 10:21:25.621391   11849 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0923 10:21:26.121269   11849 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0923 10:21:26.621676   11849 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0923 10:21:27.121875   11849 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0923 10:21:27.622100   11849 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0923 10:21:28.121835   11849 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0923 10:21:28.621161   11849 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0923 10:21:28.706836   11849 kubeadm.go:1113] duration metric: took 4.666029899s to wait for elevateKubeSystemPrivileges
	I0923 10:21:28.706874   11849 kubeadm.go:394] duration metric: took 14.615187242s to StartCluster
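The burst of `kubectl get sa default` runs above, one roughly every 500ms, is what elevateKubeSystemPrivileges waited on: the minikube-rbac ClusterRoleBinding is only useful once the default ServiceAccount exists. The equivalent loop, sketched:

	# Poll until the "default" ServiceAccount appears (~500ms spacing in the log)
	until sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default \
	      --kubeconfig=/var/lib/minikube/kubeconfig >/dev/null 2>&1; do
	  sleep 0.5
	done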
	I0923 10:21:28.706917   11849 settings.go:142] acquiring lock: {Name:mka450178266ead0466f3a326c9a6756b4479447 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0923 10:21:28.707014   11849 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/19689-3716/kubeconfig
	I0923 10:21:28.707330   11849 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19689-3716/kubeconfig: {Name:mk679719faf37a9364b3938ba88d54cbed720fd3 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0923 10:21:28.707555   11849 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0923 10:21:28.707665   11849 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0923 10:21:28.707665   11849 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:true csi-hostpath-driver:true dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:true gvisor:false headlamp:false inaccel:false ingress:true ingress-dns:true inspektor-gadget:true istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:true nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:true registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:true volcano:true volumesnapshots:true yakd:true]
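The toEnable map above is the resolved addon state for this profile. A hedged sketch of the user-facing equivalents (addon names from the map; whether each came from a flag or a default is not visible in this log):

	# Per-addon toggles for a running profile, and the effective state
	minikube -p addons-071702 addons enable registry
	minikube -p addons-071702 addons enable metrics-server
	minikube -p addons-071702 addons list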
	I0923 10:21:28.707783   11849 addons.go:69] Setting yakd=true in profile "addons-071702"
	I0923 10:21:28.707793   11849 addons.go:69] Setting inspektor-gadget=true in profile "addons-071702"
	I0923 10:21:28.707893   11849 addons.go:234] Setting addon inspektor-gadget=true in "addons-071702"
	I0923 10:21:28.707892   11849 addons.go:69] Setting storage-provisioner=true in profile "addons-071702"
	I0923 10:21:28.707916   11849 addons.go:234] Setting addon storage-provisioner=true in "addons-071702"
	I0923 10:21:28.707934   11849 host.go:66] Checking if "addons-071702" exists ...
	I0923 10:21:28.707952   11849 host.go:66] Checking if "addons-071702" exists ...
	I0923 10:21:28.707805   11849 addons.go:234] Setting addon yakd=true in "addons-071702"
	I0923 10:21:28.708022   11849 host.go:66] Checking if "addons-071702" exists ...
	I0923 10:21:28.707814   11849 addons.go:69] Setting storage-provisioner-rancher=true in profile "addons-071702"
	I0923 10:21:28.707822   11849 addons.go:69] Setting volcano=true in profile "addons-071702"
	I0923 10:21:28.708101   11849 addons_storage_classes.go:33] enableOrDisableStorageClasses storage-provisioner-rancher=true on "addons-071702"
	I0923 10:21:28.708105   11849 addons.go:234] Setting addon volcano=true in "addons-071702"
	I0923 10:21:28.707842   11849 addons.go:69] Setting default-storageclass=true in profile "addons-071702"
	I0923 10:21:28.708237   11849 host.go:66] Checking if "addons-071702" exists ...
	I0923 10:21:28.708254   11849 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "addons-071702"
	I0923 10:21:28.708389   11849 cli_runner.go:164] Run: docker container inspect addons-071702 --format={{.State.Status}}
	I0923 10:21:28.708519   11849 cli_runner.go:164] Run: docker container inspect addons-071702 --format={{.State.Status}}
	I0923 10:21:28.707822   11849 addons.go:69] Setting nvidia-device-plugin=true in profile "addons-071702"
	I0923 10:21:28.708523   11849 cli_runner.go:164] Run: docker container inspect addons-071702 --format={{.State.Status}}
	I0923 10:21:28.708645   11849 addons.go:234] Setting addon nvidia-device-plugin=true in "addons-071702"
	I0923 10:21:28.708673   11849 cli_runner.go:164] Run: docker container inspect addons-071702 --format={{.State.Status}}
	I0923 10:21:28.707849   11849 addons.go:69] Setting cloud-spanner=true in profile "addons-071702"
	I0923 10:21:28.708889   11849 addons.go:234] Setting addon cloud-spanner=true in "addons-071702"
	I0923 10:21:28.708921   11849 host.go:66] Checking if "addons-071702" exists ...
	I0923 10:21:28.707857   11849 addons.go:69] Setting csi-hostpath-driver=true in profile "addons-071702"
	I0923 10:21:28.708986   11849 cli_runner.go:164] Run: docker container inspect addons-071702 --format={{.State.Status}}
	I0923 10:21:28.709020   11849 addons.go:234] Setting addon csi-hostpath-driver=true in "addons-071702"
	I0923 10:21:28.709048   11849 host.go:66] Checking if "addons-071702" exists ...
	I0923 10:21:28.709363   11849 cli_runner.go:164] Run: docker container inspect addons-071702 --format={{.State.Status}}
	I0923 10:21:28.709469   11849 cli_runner.go:164] Run: docker container inspect addons-071702 --format={{.State.Status}}
	I0923 10:21:28.707857   11849 addons.go:69] Setting ingress=true in profile "addons-071702"
	I0923 10:21:28.709719   11849 addons.go:234] Setting addon ingress=true in "addons-071702"
	I0923 10:21:28.709769   11849 host.go:66] Checking if "addons-071702" exists ...
	I0923 10:21:28.710176   11849 cli_runner.go:164] Run: docker container inspect addons-071702 --format={{.State.Status}}
	I0923 10:21:28.708675   11849 host.go:66] Checking if "addons-071702" exists ...
	I0923 10:21:28.707861   11849 addons.go:69] Setting registry=true in profile "addons-071702"
	I0923 10:21:28.710508   11849 addons.go:234] Setting addon registry=true in "addons-071702"
	I0923 10:21:28.710555   11849 host.go:66] Checking if "addons-071702" exists ...
	I0923 10:21:28.710840   11849 cli_runner.go:164] Run: docker container inspect addons-071702 --format={{.State.Status}}
	I0923 10:21:28.711098   11849 out.go:177] * Verifying Kubernetes components...
	I0923 10:21:28.711361   11849 cli_runner.go:164] Run: docker container inspect addons-071702 --format={{.State.Status}}
	I0923 10:21:28.707868   11849 addons.go:69] Setting ingress-dns=true in profile "addons-071702"
	I0923 10:21:28.711525   11849 addons.go:234] Setting addon ingress-dns=true in "addons-071702"
	I0923 10:21:28.711597   11849 host.go:66] Checking if "addons-071702" exists ...
	I0923 10:21:28.707860   11849 addons.go:69] Setting gcp-auth=true in profile "addons-071702"
	I0923 10:21:28.707830   11849 addons.go:69] Setting volumesnapshots=true in profile "addons-071702"
	I0923 10:21:28.708531   11849 cli_runner.go:164] Run: docker container inspect addons-071702 --format={{.State.Status}}
	I0923 10:21:28.707845   11849 config.go:182] Loaded profile config "addons-071702": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I0923 10:21:28.707826   11849 addons.go:69] Setting metrics-server=true in profile "addons-071702"
	I0923 10:21:28.711830   11849 addons.go:234] Setting addon metrics-server=true in "addons-071702"
	I0923 10:21:28.711883   11849 host.go:66] Checking if "addons-071702" exists ...
	I0923 10:21:28.712438   11849 cli_runner.go:164] Run: docker container inspect addons-071702 --format={{.State.Status}}
	I0923 10:21:28.712595   11849 addons.go:234] Setting addon volumesnapshots=true in "addons-071702"
	I0923 10:21:28.712730   11849 host.go:66] Checking if "addons-071702" exists ...
	I0923 10:21:28.712981   11849 mustload.go:65] Loading cluster: addons-071702
	I0923 10:21:28.714170   11849 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0923 10:21:28.739013   11849 out.go:177]   - Using image gcr.io/cloud-spanner-emulator/emulator:1.5.24
	I0923 10:21:28.740442   11849 addons.go:431] installing /etc/kubernetes/addons/deployment.yaml
	I0923 10:21:28.740466   11849 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/deployment.yaml (1004 bytes)
	I0923 10:21:28.740520   11849 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-071702
	I0923 10:21:28.747437   11849 config.go:182] Loaded profile config "addons-071702": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I0923 10:21:28.747685   11849 cli_runner.go:164] Run: docker container inspect addons-071702 --format={{.State.Status}}
	I0923 10:21:28.747815   11849 addons.go:234] Setting addon storage-provisioner-rancher=true in "addons-071702"
	I0923 10:21:28.747864   11849 host.go:66] Checking if "addons-071702" exists ...
	I0923 10:21:28.748101   11849 cli_runner.go:164] Run: docker container inspect addons-071702 --format={{.State.Status}}
	I0923 10:21:28.748286   11849 cli_runner.go:164] Run: docker container inspect addons-071702 --format={{.State.Status}}
	I0923 10:21:28.748481   11849 cli_runner.go:164] Run: docker container inspect addons-071702 --format={{.State.Status}}
	I0923 10:21:28.757125   11849 out.go:177]   - Using image docker.io/volcanosh/vc-webhook-manager:v1.9.0
	I0923 10:21:28.758617   11849 out.go:177]   - Using image docker.io/volcanosh/vc-controller-manager:v1.9.0
	I0923 10:21:28.759906   11849 out.go:177]   - Using image docker.io/volcanosh/vc-scheduler:v1.9.0
	I0923 10:21:28.762037   11849 addons.go:431] installing /etc/kubernetes/addons/volcano-deployment.yaml
	I0923 10:21:28.762060   11849 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/volcano-deployment.yaml (434001 bytes)
	I0923 10:21:28.762115   11849 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-071702
	I0923 10:21:28.788162   11849 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-external-health-monitor-controller:v0.7.0
	I0923 10:21:28.790219   11849 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-node-driver-registrar:v2.6.0
	I0923 10:21:28.790865   11849 addons.go:234] Setting addon default-storageclass=true in "addons-071702"
	I0923 10:21:28.790910   11849 host.go:66] Checking if "addons-071702" exists ...
	I0923 10:21:28.791381   11849 cli_runner.go:164] Run: docker container inspect addons-071702 --format={{.State.Status}}
	I0923 10:21:28.793557   11849 out.go:177]   - Using image registry.k8s.io/sig-storage/hostpathplugin:v1.9.0
	I0923 10:21:28.794597   11849 out.go:177]   - Using image ghcr.io/inspektor-gadget/inspektor-gadget:v0.32.0
	I0923 10:21:28.794714   11849 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0923 10:21:28.795803   11849 out.go:177]   - Using image registry.k8s.io/sig-storage/livenessprobe:v2.8.0
	I0923 10:21:28.796258   11849 addons.go:431] installing /etc/kubernetes/addons/ig-namespace.yaml
	I0923 10:21:28.796271   11849 ssh_runner.go:362] scp inspektor-gadget/ig-namespace.yaml --> /etc/kubernetes/addons/ig-namespace.yaml (55 bytes)
	I0923 10:21:28.796334   11849 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-071702
	I0923 10:21:28.796606   11849 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0923 10:21:28.796621   11849 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0923 10:21:28.796671   11849 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-071702
	I0923 10:21:28.797863   11849 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-resizer:v1.6.0
	I0923 10:21:28.799535   11849 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-snapshotter:v6.1.0
	I0923 10:21:28.801362   11849 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-provisioner:v3.3.0
	I0923 10:21:28.802519   11849 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-attacher:v4.0.0
	I0923 10:21:28.803493   11849 addons.go:431] installing /etc/kubernetes/addons/rbac-external-attacher.yaml
	I0923 10:21:28.803513   11849 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-attacher.yaml --> /etc/kubernetes/addons/rbac-external-attacher.yaml (3073 bytes)
	I0923 10:21:28.803533   11849 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/19689-3716/.minikube/machines/addons-071702/id_rsa Username:docker}
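The docker inspect template repeated throughout this stretch is how minikube locates the node container's SSH endpoint: Docker publishes container port 22 on an ephemeral host port, which every sshutil client then dials on 127.0.0.1. Run by hand (port value as seen in the log):

	docker container inspect -f \
	  '{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}' addons-071702
	# => 32768, matching the sshutil "IP:127.0.0.1 Port:32768" dials above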
	I0923 10:21:28.803571   11849 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-071702
	I0923 10:21:28.803800   11849 out.go:177]   - Using image nvcr.io/nvidia/k8s-device-plugin:v0.16.2
	I0923 10:21:28.805725   11849 out.go:177]   - Using image docker.io/marcnuri/yakd:0.0.5
	I0923 10:21:28.805779   11849 addons.go:431] installing /etc/kubernetes/addons/nvidia-device-plugin.yaml
	I0923 10:21:28.805798   11849 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/nvidia-device-plugin.yaml (1966 bytes)
	I0923 10:21:28.805847   11849 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-071702
	I0923 10:21:28.806912   11849 addons.go:431] installing /etc/kubernetes/addons/yakd-ns.yaml
	I0923 10:21:28.806935   11849 ssh_runner.go:362] scp yakd/yakd-ns.yaml --> /etc/kubernetes/addons/yakd-ns.yaml (171 bytes)
	I0923 10:21:28.806982   11849 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-071702
	I0923 10:21:28.819858   11849 out.go:177]   - Using image registry.k8s.io/metrics-server/metrics-server:v0.7.2
	I0923 10:21:28.819928   11849 out.go:177]   - Using image docker.io/registry:2.8.3
	I0923 10:21:28.829547   11849 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/19689-3716/.minikube/machines/addons-071702/id_rsa Username:docker}
	I0923 10:21:28.832290   11849 out.go:177]   - Using image registry.k8s.io/ingress-nginx/controller:v1.11.2
	I0923 10:21:28.832357   11849 addons.go:431] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0923 10:21:28.832368   11849 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0923 10:21:28.832417   11849 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-071702
	I0923 10:21:28.833348   11849 out.go:177]   - Using image gcr.io/k8s-minikube/kube-registry-proxy:0.0.6
	I0923 10:21:28.834514   11849 addons.go:431] installing /etc/kubernetes/addons/registry-rc.yaml
	I0923 10:21:28.834532   11849 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-rc.yaml (860 bytes)
	I0923 10:21:28.834578   11849 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-071702
	I0923 10:21:28.835323   11849 host.go:66] Checking if "addons-071702" exists ...
	I0923 10:21:28.835687   11849 out.go:177]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.4.3
	I0923 10:21:28.836780   11849 out.go:177]   - Using image registry.k8s.io/sig-storage/snapshot-controller:v6.1.0
	I0923 10:21:28.837804   11849 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml
	I0923 10:21:28.837819   11849 ssh_runner.go:362] scp volumesnapshots/csi-hostpath-snapshotclass.yaml --> /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml (934 bytes)
	I0923 10:21:28.837859   11849 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-071702
	I0923 10:21:28.838301   11849 out.go:177]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.4.3
	I0923 10:21:28.839811   11849 addons.go:431] installing /etc/kubernetes/addons/ingress-deploy.yaml
	I0923 10:21:28.839828   11849 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-deploy.yaml (16078 bytes)
	I0923 10:21:28.839867   11849 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-071702
	I0923 10:21:28.857360   11849 out.go:177]   - Using image gcr.io/k8s-minikube/minikube-ingress-dns:0.0.3
	I0923 10:21:28.858915   11849 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/19689-3716/.minikube/machines/addons-071702/id_rsa Username:docker}
	I0923 10:21:28.863200   11849 addons.go:431] installing /etc/kubernetes/addons/ingress-dns-pod.yaml
	I0923 10:21:28.863218   11849 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-dns-pod.yaml (2442 bytes)
	I0923 10:21:28.863267   11849 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-071702
	I0923 10:21:28.867023   11849 out.go:177]   - Using image docker.io/rancher/local-path-provisioner:v0.0.22
	I0923 10:21:28.869825   11849 out.go:177]   - Using image docker.io/busybox:stable
	I0923 10:21:28.871280   11849 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner-rancher.yaml
	I0923 10:21:28.871306   11849 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner-rancher.yaml (3113 bytes)
	I0923 10:21:28.871365   11849 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-071702
	I0923 10:21:28.876887   11849 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/19689-3716/.minikube/machines/addons-071702/id_rsa Username:docker}
	I0923 10:21:28.881504   11849 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/19689-3716/.minikube/machines/addons-071702/id_rsa Username:docker}
	I0923 10:21:28.890863   11849 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/19689-3716/.minikube/machines/addons-071702/id_rsa Username:docker}
	I0923 10:21:28.893700   11849 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I0923 10:21:28.893723   11849 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0923 10:21:28.893773   11849 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-071702
	I0923 10:21:28.895771   11849 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/19689-3716/.minikube/machines/addons-071702/id_rsa Username:docker}
	I0923 10:21:28.898847   11849 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/19689-3716/.minikube/machines/addons-071702/id_rsa Username:docker}
	I0923 10:21:28.899277   11849 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/19689-3716/.minikube/machines/addons-071702/id_rsa Username:docker}
	I0923 10:21:28.901760   11849 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/19689-3716/.minikube/machines/addons-071702/id_rsa Username:docker}
	I0923 10:21:28.908691   11849 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/19689-3716/.minikube/machines/addons-071702/id_rsa Username:docker}
	I0923 10:21:28.927920   11849 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/19689-3716/.minikube/machines/addons-071702/id_rsa Username:docker}
	I0923 10:21:28.931250   11849 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/19689-3716/.minikube/machines/addons-071702/id_rsa Username:docker}
	I0923 10:21:28.933830   11849 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/19689-3716/.minikube/machines/addons-071702/id_rsa Username:docker}
	W0923 10:21:28.954852   11849 sshutil.go:64] dial failure (will retry): ssh: handshake failed: EOF
	I0923 10:21:28.954885   11849 retry.go:31] will retry after 253.04655ms: ssh: handshake failed: EOF
	I0923 10:21:29.077382   11849 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0923 10:21:29.077457   11849 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.49.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
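The pipeline above edits CoreDNS in place: dump the coredns ConfigMap, splice a hosts block ahead of the `forward . /etc/resolv.conf` directive (plus a `log` directive before `errors`) with sed, then kubectl replace the ConfigMap. The spliced fragment, reconstructed from the sed script itself:

	# Corefile excerpt after the edit (content taken from the sed expression above)
	#    log
	#    hosts {
	#       192.168.49.1 host.minikube.internal
	#       fallthrough
	#    }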
	I0923 10:21:29.262392   11849 addons.go:431] installing /etc/kubernetes/addons/rbac-hostpath.yaml
	I0923 10:21:29.262473   11849 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-hostpath.yaml --> /etc/kubernetes/addons/rbac-hostpath.yaml (4266 bytes)
	I0923 10:21:29.267293   11849 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml
	I0923 10:21:29.360252   11849 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0923 10:21:29.360286   11849 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/volcano-deployment.yaml
	I0923 10:21:29.364626   11849 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml
	I0923 10:21:29.371732   11849 addons.go:431] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0923 10:21:29.371767   11849 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1907 bytes)
	I0923 10:21:29.459316   11849 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/nvidia-device-plugin.yaml
	I0923 10:21:29.460430   11849 addons.go:431] installing /etc/kubernetes/addons/ig-serviceaccount.yaml
	I0923 10:21:29.460489   11849 ssh_runner.go:362] scp inspektor-gadget/ig-serviceaccount.yaml --> /etc/kubernetes/addons/ig-serviceaccount.yaml (80 bytes)
	I0923 10:21:29.471782   11849 addons.go:431] installing /etc/kubernetes/addons/yakd-sa.yaml
	I0923 10:21:29.471860   11849 ssh_runner.go:362] scp yakd/yakd-sa.yaml --> /etc/kubernetes/addons/yakd-sa.yaml (247 bytes)
	I0923 10:21:29.473835   11849 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner-rancher.yaml
	I0923 10:21:29.553333   11849 addons.go:431] installing /etc/kubernetes/addons/registry-svc.yaml
	I0923 10:21:29.553420   11849 ssh_runner.go:362] scp registry/registry-svc.yaml --> /etc/kubernetes/addons/registry-svc.yaml (398 bytes)
	I0923 10:21:29.560017   11849 addons.go:431] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml
	I0923 10:21:29.560098   11849 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshotclasses.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml (6471 bytes)
	I0923 10:21:29.561111   11849 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/deployment.yaml
	I0923 10:21:29.575179   11849 addons.go:431] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0923 10:21:29.575265   11849 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0923 10:21:29.662775   11849 addons.go:431] installing /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml
	I0923 10:21:29.662804   11849 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-health-monitor-controller.yaml --> /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml (3038 bytes)
	I0923 10:21:29.759244   11849 addons.go:431] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml
	I0923 10:21:29.759328   11849 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshotcontents.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml (23126 bytes)
	I0923 10:21:29.873211   11849 addons.go:431] installing /etc/kubernetes/addons/yakd-crb.yaml
	I0923 10:21:29.873295   11849 ssh_runner.go:362] scp yakd/yakd-crb.yaml --> /etc/kubernetes/addons/yakd-crb.yaml (422 bytes)
	I0923 10:21:29.955601   11849 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0923 10:21:30.054418   11849 addons.go:431] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml
	I0923 10:21:30.054511   11849 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshots.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml (19582 bytes)
	I0923 10:21:30.059457   11849 addons.go:431] installing /etc/kubernetes/addons/rbac-external-provisioner.yaml
	I0923 10:21:30.059534   11849 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-provisioner.yaml --> /etc/kubernetes/addons/rbac-external-provisioner.yaml (4442 bytes)
	I0923 10:21:30.073658   11849 addons.go:431] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0923 10:21:30.073735   11849 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0923 10:21:30.155595   11849 addons.go:431] installing /etc/kubernetes/addons/ig-role.yaml
	I0923 10:21:30.155690   11849 ssh_runner.go:362] scp inspektor-gadget/ig-role.yaml --> /etc/kubernetes/addons/ig-role.yaml (210 bytes)
	I0923 10:21:30.353336   11849 addons.go:431] installing /etc/kubernetes/addons/registry-proxy.yaml
	I0923 10:21:30.353363   11849 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-proxy.yaml (947 bytes)
	I0923 10:21:30.460758   11849 addons.go:431] installing /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml
	I0923 10:21:30.460792   11849 ssh_runner.go:362] scp volumesnapshots/rbac-volume-snapshot-controller.yaml --> /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml (3545 bytes)
	I0923 10:21:30.554058   11849 ssh_runner.go:235] Completed: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.49.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -": (1.47652869s)
	I0923 10:21:30.554108   11849 start.go:971] {"host.minikube.internal": 192.168.49.1} host record injected into CoreDNS's ConfigMap
	I0923 10:21:30.555346   11849 ssh_runner.go:235] Completed: sudo systemctl start kubelet: (1.477937133s)
	I0923 10:21:30.556176   11849 node_ready.go:35] waiting up to 6m0s for node "addons-071702" to be "Ready" ...
	I0923 10:21:30.557622   11849 addons.go:431] installing /etc/kubernetes/addons/rbac-external-resizer.yaml
	I0923 10:21:30.557684   11849 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-resizer.yaml --> /etc/kubernetes/addons/rbac-external-resizer.yaml (2943 bytes)
	I0923 10:21:30.558839   11849 node_ready.go:49] node "addons-071702" has status "Ready":"True"
	I0923 10:21:30.558883   11849 node_ready.go:38] duration metric: took 2.683674ms for node "addons-071702" to be "Ready" ...
	I0923 10:21:30.558903   11849 pod_ready.go:36] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0923 10:21:30.567022   11849 pod_ready.go:79] waiting up to 6m0s for pod "coredns-7c65d6cfc9-hd4pw" in "kube-system" namespace to be "Ready" ...
	I0923 10:21:30.652510   11849 addons.go:431] installing /etc/kubernetes/addons/yakd-svc.yaml
	I0923 10:21:30.652541   11849 ssh_runner.go:362] scp yakd/yakd-svc.yaml --> /etc/kubernetes/addons/yakd-svc.yaml (412 bytes)
	I0923 10:21:30.678405   11849 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml
	I0923 10:21:30.762717   11849 addons.go:431] installing /etc/kubernetes/addons/ig-rolebinding.yaml
	I0923 10:21:30.762803   11849 ssh_runner.go:362] scp inspektor-gadget/ig-rolebinding.yaml --> /etc/kubernetes/addons/ig-rolebinding.yaml (244 bytes)
	I0923 10:21:30.955169   11849 addons.go:431] installing /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I0923 10:21:30.955431   11849 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml (1475 bytes)
	I0923 10:21:30.955406   11849 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0923 10:21:31.063663   11849 kapi.go:214] "coredns" deployment in "kube-system" namespace and "addons-071702" context rescaled to 1 replicas
	I0923 10:21:31.063878   11849 addons.go:431] installing /etc/kubernetes/addons/rbac-external-snapshotter.yaml
	I0923 10:21:31.063917   11849 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-snapshotter.yaml --> /etc/kubernetes/addons/rbac-external-snapshotter.yaml (3149 bytes)
	I0923 10:21:31.360454   11849 addons.go:431] installing /etc/kubernetes/addons/yakd-dp.yaml
	I0923 10:21:31.360482   11849 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/yakd-dp.yaml (2017 bytes)
	I0923 10:21:31.366157   11849 addons.go:431] installing /etc/kubernetes/addons/ig-clusterrole.yaml
	I0923 10:21:31.366188   11849 ssh_runner.go:362] scp inspektor-gadget/ig-clusterrole.yaml --> /etc/kubernetes/addons/ig-clusterrole.yaml (1485 bytes)
	I0923 10:21:31.559432   11849 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-attacher.yaml
	I0923 10:21:31.559462   11849 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-attacher.yaml (2143 bytes)
	I0923 10:21:31.755610   11849 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I0923 10:21:31.857903   11849 addons.go:431] installing /etc/kubernetes/addons/ig-clusterrolebinding.yaml
	I0923 10:21:31.857935   11849 ssh_runner.go:362] scp inspektor-gadget/ig-clusterrolebinding.yaml --> /etc/kubernetes/addons/ig-clusterrolebinding.yaml (274 bytes)
	I0923 10:21:32.052917   11849 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/yakd-ns.yaml -f /etc/kubernetes/addons/yakd-sa.yaml -f /etc/kubernetes/addons/yakd-crb.yaml -f /etc/kubernetes/addons/yakd-svc.yaml -f /etc/kubernetes/addons/yakd-dp.yaml
	I0923 10:21:32.171495   11849 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml
	I0923 10:21:32.171527   11849 ssh_runner.go:362] scp csi-hostpath-driver/deploy/csi-hostpath-driverinfo.yaml --> /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml (1274 bytes)
	I0923 10:21:32.353168   11849 addons.go:431] installing /etc/kubernetes/addons/ig-crd.yaml
	I0923 10:21:32.353199   11849 ssh_runner.go:362] scp inspektor-gadget/ig-crd.yaml --> /etc/kubernetes/addons/ig-crd.yaml (5216 bytes)
	I0923 10:21:32.451741   11849 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml: (3.184359224s)
	I0923 10:21:32.571088   11849 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-plugin.yaml
	I0923 10:21:32.571117   11849 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-plugin.yaml (8201 bytes)
	I0923 10:21:32.652058   11849 pod_ready.go:103] pod "coredns-7c65d6cfc9-hd4pw" in "kube-system" namespace has status "Ready":"False"
	I0923 10:21:32.772951   11849 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (3.412598648s)
	I0923 10:21:32.854070   11849 addons.go:431] installing /etc/kubernetes/addons/ig-daemonset.yaml
	I0923 10:21:32.854159   11849 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-daemonset.yaml (7735 bytes)
	I0923 10:21:32.874402   11849 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-resizer.yaml
	I0923 10:21:32.874428   11849 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-resizer.yaml (2191 bytes)
	I0923 10:21:33.355277   11849 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/ig-namespace.yaml -f /etc/kubernetes/addons/ig-serviceaccount.yaml -f /etc/kubernetes/addons/ig-role.yaml -f /etc/kubernetes/addons/ig-rolebinding.yaml -f /etc/kubernetes/addons/ig-clusterrole.yaml -f /etc/kubernetes/addons/ig-clusterrolebinding.yaml -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-daemonset.yaml
	I0923 10:21:33.575246   11849 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
	I0923 10:21:33.575275   11849 ssh_runner.go:362] scp csi-hostpath-driver/deploy/csi-hostpath-storageclass.yaml --> /etc/kubernetes/addons/csi-hostpath-storageclass.yaml (846 bytes)
	I0923 10:21:33.952812   11849 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
	I0923 10:21:35.073258   11849 pod_ready.go:103] pod "coredns-7c65d6cfc9-hd4pw" in "kube-system" namespace has status "Ready":"False"
	I0923 10:21:35.856649   11849 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_application_credentials.json (162 bytes)
	I0923 10:21:35.856733   11849 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-071702
	I0923 10:21:35.882615   11849 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/19689-3716/.minikube/machines/addons-071702/id_rsa Username:docker}
	I0923 10:21:36.673554   11849 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_cloud_project (12 bytes)
	I0923 10:21:37.062155   11849 addons.go:234] Setting addon gcp-auth=true in "addons-071702"
	I0923 10:21:37.062278   11849 host.go:66] Checking if "addons-071702" exists ...
	I0923 10:21:37.063048   11849 cli_runner.go:164] Run: docker container inspect addons-071702 --format={{.State.Status}}
	I0923 10:21:37.086184   11849 ssh_runner.go:195] Run: cat /var/lib/minikube/google_application_credentials.json
	I0923 10:21:37.086229   11849 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-071702
	I0923 10:21:37.101529   11849 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/19689-3716/.minikube/machines/addons-071702/id_rsa Username:docker}
	I0923 10:21:37.154160   11849 pod_ready.go:103] pod "coredns-7c65d6cfc9-hd4pw" in "kube-system" namespace has status "Ready":"False"
	I0923 10:21:39.575994   11849 pod_ready.go:103] pod "coredns-7c65d6cfc9-hd4pw" in "kube-system" namespace has status "Ready":"False"
	I0923 10:21:40.468034   11849 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/volcano-deployment.yaml: (11.107519833s)
	I0923 10:21:40.468312   11849 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/nvidia-device-plugin.yaml: (11.008906054s)
	I0923 10:21:40.468374   11849 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner-rancher.yaml: (10.994483152s)
	I0923 10:21:40.468404   11849 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml: (11.103608991s)
	I0923 10:21:40.468442   11849 addons.go:475] Verifying addon ingress=true in "addons-071702"
	I0923 10:21:40.468607   11849 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml: (9.790108408s)
	I0923 10:21:40.468641   11849 addons.go:475] Verifying addon registry=true in "addons-071702"
	I0923 10:21:40.468815   11849 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (8.713176342s)
	I0923 10:21:40.468462   11849 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/deployment.yaml: (10.907306639s)
	W0923 10:21:40.468864   11849 addons.go:457] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
	stdout:
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
	serviceaccount/snapshot-controller created
	clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
	clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
	role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	deployment.apps/snapshot-controller created
	
	stderr:
	error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
	ensure CRDs are installed first
	I0923 10:21:40.468481   11849 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (10.512796463s)
	I0923 10:21:40.468882   11849 retry.go:31] will retry after 144.693801ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
	stdout:
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
	serviceaccount/snapshot-controller created
	clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
	clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
	role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	deployment.apps/snapshot-controller created
	
	stderr:
	error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
	ensure CRDs are installed first
	I0923 10:21:40.468935   11849 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/yakd-ns.yaml -f /etc/kubernetes/addons/yakd-sa.yaml -f /etc/kubernetes/addons/yakd-crb.yaml -f /etc/kubernetes/addons/yakd-svc.yaml -f /etc/kubernetes/addons/yakd-dp.yaml: (8.41592831s)
	I0923 10:21:40.468731   11849 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (9.513182182s)
	I0923 10:21:40.469036   11849 addons.go:475] Verifying addon metrics-server=true in "addons-071702"
	I0923 10:21:40.469131   11849 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/ig-namespace.yaml -f /etc/kubernetes/addons/ig-serviceaccount.yaml -f /etc/kubernetes/addons/ig-role.yaml -f /etc/kubernetes/addons/ig-rolebinding.yaml -f /etc/kubernetes/addons/ig-clusterrole.yaml -f /etc/kubernetes/addons/ig-clusterrolebinding.yaml -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-daemonset.yaml: (7.113741125s)
	I0923 10:21:40.472246   11849 out.go:177] * Verifying ingress addon...
	I0923 10:21:40.472368   11849 out.go:177] * To access YAKD - Kubernetes Dashboard, wait for Pod to be ready and run the following command:
	
		minikube -p addons-071702 service yakd-dashboard -n yakd-dashboard
	
	I0923 10:21:40.472419   11849 out.go:177] * Verifying registry addon...
	I0923 10:21:40.474710   11849 kapi.go:75] Waiting for pod with label "app.kubernetes.io/name=ingress-nginx" in ns "ingress-nginx" ...
	I0923 10:21:40.475852   11849 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=registry" in ns "kube-system" ...
	W0923 10:21:40.479129   11849 out.go:270] ! Enabling 'storage-provisioner-rancher' returned an error: running callbacks: [Error making local-path the default storage class: Error while marking storage class local-path as default: Operation cannot be fulfilled on storageclasses.storage.k8s.io "local-path": the object has been modified; please apply your changes to the latest version and try again]
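The storage-provisioner-rancher error above is an optimistic-concurrency conflict: marking local-path as the default StorageClass raced with another writer and hit a stale resourceVersion. If the addon's own retry had not resolved it, the standard manual fix is a re-patch (sketch; the annotation is the usual default-class marker):

	kubectl patch storageclass local-path -p \
	  '{"metadata":{"annotations":{"storageclass.kubernetes.io/is-default-class":"true"}}}'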
	I0923 10:21:40.479618   11849 kapi.go:86] Found 2 Pods for label selector kubernetes.io/minikube-addons=registry
	I0923 10:21:40.479640   11849 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 10:21:40.480775   11849 kapi.go:86] Found 3 Pods for label selector app.kubernetes.io/name=ingress-nginx
	I0923 10:21:40.480799   11849 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
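The kapi.go:96 lines that follow are label-selector readiness polls, one per addon, repeated roughly every 500ms. The same waits can be reproduced from the command line; a sketch, using the selectors and namespaces named in the log:

    kubectl -n kube-system wait pod --for=condition=Ready --timeout=6m \
      -l kubernetes.io/minikube-addons=registry
    kubectl -n ingress-nginx wait pod --for=condition=Ready --timeout=6m \
      -l app.kubernetes.io/name=ingress-nginx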
	I0923 10:21:40.614323   11849 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I0923 10:21:40.979199   11849 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 10:21:40.980590   11849 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 10:21:41.479825   11849 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 10:21:41.480346   11849 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 10:21:41.576244   11849 pod_ready.go:103] pod "coredns-7c65d6cfc9-hd4pw" in "kube-system" namespace has status "Ready":"False"
	I0923 10:21:41.681724   11849 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml: (7.728861681s)
	I0923 10:21:41.681763   11849 addons.go:475] Verifying addon csi-hostpath-driver=true in "addons-071702"
	I0923 10:21:41.681793   11849 ssh_runner.go:235] Completed: cat /var/lib/minikube/google_application_credentials.json: (4.595580743s)
	I0923 10:21:41.683960   11849 out.go:177]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.4.3
	I0923 10:21:41.683966   11849 out.go:177] * Verifying csi-hostpath-driver addon...
	I0923 10:21:41.685228   11849 out.go:177]   - Using image gcr.io/k8s-minikube/gcp-auth-webhook:v0.1.2
	I0923 10:21:41.685847   11849 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=csi-hostpath-driver" in ns "kube-system" ...
	I0923 10:21:41.686409   11849 addons.go:431] installing /etc/kubernetes/addons/gcp-auth-ns.yaml
	I0923 10:21:41.686422   11849 ssh_runner.go:362] scp gcp-auth/gcp-auth-ns.yaml --> /etc/kubernetes/addons/gcp-auth-ns.yaml (700 bytes)
	I0923 10:21:41.755094   11849 kapi.go:86] Found 3 Pods for label selector kubernetes.io/minikube-addons=csi-hostpath-driver
	I0923 10:21:41.755181   11849 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 10:21:41.768662   11849 addons.go:431] installing /etc/kubernetes/addons/gcp-auth-service.yaml
	I0923 10:21:41.768689   11849 ssh_runner.go:362] scp gcp-auth/gcp-auth-service.yaml --> /etc/kubernetes/addons/gcp-auth-service.yaml (788 bytes)
	I0923 10:21:41.857274   11849 addons.go:431] installing /etc/kubernetes/addons/gcp-auth-webhook.yaml
	I0923 10:21:41.857304   11849 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/gcp-auth-webhook.yaml (5421 bytes)
	I0923 10:21:41.879789   11849 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/gcp-auth-ns.yaml -f /etc/kubernetes/addons/gcp-auth-service.yaml -f /etc/kubernetes/addons/gcp-auth-webhook.yaml
	I0923 10:21:41.979930   11849 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 10:21:41.980257   11849 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 10:21:42.256561   11849 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 10:21:42.483317   11849 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 10:21:42.484096   11849 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 10:21:42.756367   11849 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 10:21:42.857652   11849 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (2.243256292s)
	I0923 10:21:42.979484   11849 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 10:21:42.979672   11849 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 10:21:43.175993   11849 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/gcp-auth-ns.yaml -f /etc/kubernetes/addons/gcp-auth-service.yaml -f /etc/kubernetes/addons/gcp-auth-webhook.yaml: (1.296149037s)
	I0923 10:21:43.177895   11849 addons.go:475] Verifying addon gcp-auth=true in "addons-071702"
	I0923 10:21:43.179435   11849 out.go:177] * Verifying gcp-auth addon...
	I0923 10:21:43.181813   11849 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=gcp-auth" in ns "gcp-auth" ...
	I0923 10:21:43.183813   11849 kapi.go:86] Found 0 Pods for label selector kubernetes.io/minikube-addons=gcp-auth
	I0923 10:21:43.189797   11849 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 10:21:43.478656   11849 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 10:21:43.478841   11849 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 10:21:43.689921   11849 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 10:21:43.980375   11849 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 10:21:43.980703   11849 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 10:21:44.072682   11849 pod_ready.go:103] pod "coredns-7c65d6cfc9-hd4pw" in "kube-system" namespace has status "Ready":"False"
	I0923 10:21:44.189845   11849 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 10:21:44.479422   11849 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 10:21:44.479807   11849 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 10:21:44.689357   11849 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 10:21:44.979240   11849 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 10:21:44.979288   11849 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 10:21:45.189640   11849 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 10:21:45.479203   11849 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 10:21:45.479398   11849 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 10:21:45.689707   11849 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 10:21:45.978915   11849 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 10:21:45.979127   11849 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 10:21:46.286773   11849 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 10:21:46.479365   11849 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 10:21:46.479522   11849 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 10:21:46.572560   11849 pod_ready.go:103] pod "coredns-7c65d6cfc9-hd4pw" in "kube-system" namespace has status "Ready":"False"
	I0923 10:21:46.689608   11849 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 10:21:46.977921   11849 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 10:21:46.979288   11849 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 10:21:47.189236   11849 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 10:21:47.480040   11849 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 10:21:47.481721   11849 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 10:21:47.690872   11849 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 10:21:47.979480   11849 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 10:21:47.980276   11849 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 10:21:48.190811   11849 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 10:21:48.479757   11849 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 10:21:48.480856   11849 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 10:21:48.573592   11849 pod_ready.go:103] pod "coredns-7c65d6cfc9-hd4pw" in "kube-system" namespace has status "Ready":"False"
	I0923 10:21:48.690447   11849 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 10:21:48.979328   11849 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 10:21:48.979646   11849 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 10:21:49.189818   11849 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 10:21:49.478231   11849 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 10:21:49.478387   11849 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 10:21:49.689776   11849 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 10:21:49.978946   11849 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 10:21:49.978984   11849 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 10:21:50.190716   11849 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 10:21:50.481026   11849 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 10:21:50.481311   11849 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 10:21:50.689651   11849 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 10:21:50.978733   11849 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 10:21:50.978977   11849 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 10:21:51.072423   11849 pod_ready.go:103] pod "coredns-7c65d6cfc9-hd4pw" in "kube-system" namespace has status "Ready":"False"
	I0923 10:21:51.189688   11849 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 10:21:51.478558   11849 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 10:21:51.478607   11849 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 10:21:51.690435   11849 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 10:21:51.978804   11849 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 10:21:51.978851   11849 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 10:21:52.219448   11849 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 10:21:52.479056   11849 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 10:21:52.479163   11849 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 10:21:52.689688   11849 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 10:21:52.978822   11849 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 10:21:52.978949   11849 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 10:21:53.190347   11849 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 10:21:53.479147   11849 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 10:21:53.479401   11849 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 10:21:53.572934   11849 pod_ready.go:103] pod "coredns-7c65d6cfc9-hd4pw" in "kube-system" namespace has status "Ready":"False"
	I0923 10:21:53.689055   11849 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 10:21:53.979477   11849 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 10:21:53.979869   11849 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 10:21:54.189674   11849 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 10:21:54.478786   11849 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 10:21:54.479114   11849 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 10:21:54.691050   11849 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 10:21:54.978825   11849 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 10:21:54.978907   11849 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 10:21:55.188938   11849 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 10:21:55.478883   11849 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 10:21:55.479379   11849 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 10:21:55.690342   11849 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 10:21:55.979126   11849 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 10:21:55.979213   11849 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 10:21:56.072783   11849 pod_ready.go:103] pod "coredns-7c65d6cfc9-hd4pw" in "kube-system" namespace has status "Ready":"False"
	I0923 10:21:56.189946   11849 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 10:21:56.478439   11849 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 10:21:56.478480   11849 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 10:21:56.718953   11849 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 10:21:56.978832   11849 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 10:21:56.978953   11849 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 10:21:57.286662   11849 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 10:21:57.478264   11849 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 10:21:57.478887   11849 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 10:21:57.689517   11849 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 10:21:57.978984   11849 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 10:21:57.979294   11849 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 10:21:58.073187   11849 pod_ready.go:103] pod "coredns-7c65d6cfc9-hd4pw" in "kube-system" namespace has status "Ready":"False"
	I0923 10:21:58.190012   11849 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 10:21:58.478939   11849 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 10:21:58.479110   11849 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 10:21:58.689803   11849 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 10:21:58.978522   11849 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 10:21:58.979007   11849 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 10:21:59.189318   11849 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 10:21:59.479083   11849 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 10:21:59.479341   11849 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 10:21:59.689902   11849 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 10:21:59.979053   11849 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 10:21:59.979267   11849 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 10:22:00.189654   11849 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 10:22:00.478476   11849 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 10:22:00.478721   11849 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 10:22:00.572699   11849 pod_ready.go:103] pod "coredns-7c65d6cfc9-hd4pw" in "kube-system" namespace has status "Ready":"False"
	I0923 10:22:00.689039   11849 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 10:22:00.978382   11849 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 10:22:00.978919   11849 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 10:22:01.189197   11849 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 10:22:01.478856   11849 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 10:22:01.479227   11849 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 10:22:01.690087   11849 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 10:22:01.979626   11849 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 10:22:01.979834   11849 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 10:22:02.190387   11849 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 10:22:02.479073   11849 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 10:22:02.479141   11849 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 10:22:02.573231   11849 pod_ready.go:103] pod "coredns-7c65d6cfc9-hd4pw" in "kube-system" namespace has status "Ready":"False"
	I0923 10:22:02.690033   11849 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 10:22:02.979020   11849 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 10:22:02.979798   11849 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 10:22:03.190117   11849 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 10:22:03.478848   11849 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 10:22:03.479507   11849 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 10:22:03.690407   11849 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 10:22:03.978722   11849 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 10:22:03.978855   11849 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 10:22:04.189823   11849 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 10:22:04.478626   11849 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 10:22:04.478873   11849 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 10:22:04.691531   11849 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 10:22:04.979126   11849 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 10:22:04.979356   11849 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 10:22:05.073187   11849 pod_ready.go:103] pod "coredns-7c65d6cfc9-hd4pw" in "kube-system" namespace has status "Ready":"False"
	I0923 10:22:05.189922   11849 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 10:22:05.479139   11849 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 10:22:05.479365   11849 kapi.go:107] duration metric: took 25.003510418s to wait for kubernetes.io/minikube-addons=registry ...
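The registry selector went Ready after 25s of polling. Which pods satisfied the wait, and where the addon's Service points, can be inspected directly; a sketch, where the Service name registry in kube-system is an assumption about the addon's manifests:

    kubectl -n kube-system get pods -o wide \
      -l kubernetes.io/minikube-addons=registry
    kubectl -n kube-system get svc registry -o wide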
	I0923 10:22:05.689255   11849 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 10:22:05.978832   11849 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 10:22:06.189713   11849 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 10:22:06.478712   11849 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 10:22:06.689519   11849 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 10:22:06.979031   11849 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 10:22:07.073645   11849 pod_ready.go:103] pod "coredns-7c65d6cfc9-hd4pw" in "kube-system" namespace has status "Ready":"False"
	I0923 10:22:07.190112   11849 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 10:22:07.487809   11849 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 10:22:07.690011   11849 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 10:22:07.979004   11849 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 10:22:08.190319   11849 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 10:22:08.478859   11849 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 10:22:08.689463   11849 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 10:22:08.979018   11849 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 10:22:09.189280   11849 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 10:22:09.479040   11849 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 10:22:09.572441   11849 pod_ready.go:103] pod "coredns-7c65d6cfc9-hd4pw" in "kube-system" namespace has status "Ready":"False"
	I0923 10:22:09.689606   11849 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 10:22:09.979374   11849 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 10:22:10.189309   11849 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 10:22:10.479017   11849 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 10:22:10.690132   11849 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 10:22:10.979292   11849 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 10:22:11.073349   11849 pod_ready.go:93] pod "coredns-7c65d6cfc9-hd4pw" in "kube-system" namespace has status "Ready":"True"
	I0923 10:22:11.073374   11849 pod_ready.go:82] duration metric: took 40.506164292s for pod "coredns-7c65d6cfc9-hd4pw" in "kube-system" namespace to be "Ready" ...
	I0923 10:22:11.073386   11849 pod_ready.go:79] waiting up to 6m0s for pod "coredns-7c65d6cfc9-szwtj" in "kube-system" namespace to be "Ready" ...
	I0923 10:22:11.076765   11849 pod_ready.go:98] error getting pod "coredns-7c65d6cfc9-szwtj" in "kube-system" namespace (skipping!): pods "coredns-7c65d6cfc9-szwtj" not found
	I0923 10:22:11.076804   11849 pod_ready.go:82] duration metric: took 3.408582ms for pod "coredns-7c65d6cfc9-szwtj" in "kube-system" namespace to be "Ready" ...
	E0923 10:22:11.076817   11849 pod_ready.go:67] WaitExtra: waitPodCondition: error getting pod "coredns-7c65d6cfc9-szwtj" in "kube-system" namespace (skipping!): pods "coredns-7c65d6cfc9-szwtj" not found
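The not-found result for coredns-7c65d6cfc9-szwtj is benign: the pod existed when the wait list was built and was deleted before its turn came (minikube typically trims CoreDNS to a single replica), so pod_ready records it as skipped rather than failed. The surviving replicas can be confirmed with:

    kubectl -n kube-system get pods -l k8s-app=kube-dns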
	I0923 10:22:11.076827   11849 pod_ready.go:79] waiting up to 6m0s for pod "etcd-addons-071702" in "kube-system" namespace to be "Ready" ...
	I0923 10:22:11.080923   11849 pod_ready.go:93] pod "etcd-addons-071702" in "kube-system" namespace has status "Ready":"True"
	I0923 10:22:11.080945   11849 pod_ready.go:82] duration metric: took 4.109957ms for pod "etcd-addons-071702" in "kube-system" namespace to be "Ready" ...
	I0923 10:22:11.080956   11849 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-addons-071702" in "kube-system" namespace to be "Ready" ...
	I0923 10:22:11.085017   11849 pod_ready.go:93] pod "kube-apiserver-addons-071702" in "kube-system" namespace has status "Ready":"True"
	I0923 10:22:11.085039   11849 pod_ready.go:82] duration metric: took 4.074001ms for pod "kube-apiserver-addons-071702" in "kube-system" namespace to be "Ready" ...
	I0923 10:22:11.085051   11849 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-addons-071702" in "kube-system" namespace to be "Ready" ...
	I0923 10:22:11.088995   11849 pod_ready.go:93] pod "kube-controller-manager-addons-071702" in "kube-system" namespace has status "Ready":"True"
	I0923 10:22:11.089016   11849 pod_ready.go:82] duration metric: took 3.956391ms for pod "kube-controller-manager-addons-071702" in "kube-system" namespace to be "Ready" ...
	I0923 10:22:11.089026   11849 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-gsgwd" in "kube-system" namespace to be "Ready" ...
	I0923 10:22:11.189665   11849 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 10:22:11.270601   11849 pod_ready.go:93] pod "kube-proxy-gsgwd" in "kube-system" namespace has status "Ready":"True"
	I0923 10:22:11.270626   11849 pod_ready.go:82] duration metric: took 181.591464ms for pod "kube-proxy-gsgwd" in "kube-system" namespace to be "Ready" ...
	I0923 10:22:11.270639   11849 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-addons-071702" in "kube-system" namespace to be "Ready" ...
	I0923 10:22:11.478791   11849 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 10:22:11.670599   11849 pod_ready.go:93] pod "kube-scheduler-addons-071702" in "kube-system" namespace has status "Ready":"True"
	I0923 10:22:11.670623   11849 pod_ready.go:82] duration metric: took 399.975445ms for pod "kube-scheduler-addons-071702" in "kube-system" namespace to be "Ready" ...
	I0923 10:22:11.670633   11849 pod_ready.go:39] duration metric: took 41.11170856s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0923 10:22:11.670656   11849 api_server.go:52] waiting for apiserver process to appear ...
	I0923 10:22:11.670731   11849 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0923 10:22:11.684118   11849 api_server.go:72] duration metric: took 42.976532631s to wait for apiserver process to appear ...
	I0923 10:22:11.684145   11849 api_server.go:88] waiting for apiserver healthz status ...
	I0923 10:22:11.684168   11849 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I0923 10:22:11.687834   11849 api_server.go:279] https://192.168.49.2:8443/healthz returned 200:
	ok
	I0923 10:22:11.688628   11849 api_server.go:141] control plane version: v1.31.1
	I0923 10:22:11.688653   11849 api_server.go:131] duration metric: took 4.499932ms to wait for apiserver health ...
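The healthz and version reads above go over the same authenticated channel kubectl already has; --raw issues a GET against an arbitrary API server path. A minimal sketch:

    kubectl get --raw=/healthz
    kubectl get --raw=/version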
	I0923 10:22:11.688661   11849 system_pods.go:43] waiting for kube-system pods to appear ...
	I0923 10:22:11.689659   11849 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 10:22:11.874566   11849 system_pods.go:59] 17 kube-system pods found
	I0923 10:22:11.874593   11849 system_pods.go:61] "coredns-7c65d6cfc9-hd4pw" [53fdec36-508d-40c2-9b22-80f6afc1976b] Running
	I0923 10:22:11.874602   11849 system_pods.go:61] "csi-hostpath-attacher-0" [da0cebec-a9ae-4226-aa37-c12c18ed4683] Pending / Ready:ContainersNotReady (containers with unready status: [csi-attacher]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-attacher])
	I0923 10:22:11.874609   11849 system_pods.go:61] "csi-hostpath-resizer-0" [58d839a9-8c23-473d-bcea-6bb514783b23] Pending / Ready:ContainersNotReady (containers with unready status: [csi-resizer]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-resizer])
	I0923 10:22:11.874616   11849 system_pods.go:61] "csi-hostpathplugin-nm6zh" [2897db1d-3abb-4194-a021-6febc9c88430] Pending / Ready:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter])
	I0923 10:22:11.874626   11849 system_pods.go:61] "etcd-addons-071702" [e3b672b2-f950-4103-bb7b-5852cbd23171] Running
	I0923 10:22:11.874630   11849 system_pods.go:61] "kube-apiserver-addons-071702" [edae2c11-e976-40e5-995a-349287adf5af] Running
	I0923 10:22:11.874637   11849 system_pods.go:61] "kube-controller-manager-addons-071702" [74585c60-486f-49ca-b8d1-3d88fafcfd85] Running
	I0923 10:22:11.874642   11849 system_pods.go:61] "kube-ingress-dns-minikube" [7a06c605-470c-4010-9782-8ddb472a082d] Running
	I0923 10:22:11.874645   11849 system_pods.go:61] "kube-proxy-gsgwd" [da9f33ce-241c-4457-87e2-b90aaf06b0ce] Running
	I0923 10:22:11.874649   11849 system_pods.go:61] "kube-scheduler-addons-071702" [d7dfada2-ae46-4f5f-bd80-4bc44bf4aa9c] Running
	I0923 10:22:11.874654   11849 system_pods.go:61] "metrics-server-84c5f94fbc-4l9jp" [04190cb8-ca8f-459e-bb88-24aa8a774d1d] Running
	I0923 10:22:11.874657   11849 system_pods.go:61] "nvidia-device-plugin-daemonset-kxghd" [959908a1-ff48-4627-ad29-d0d6134865d7] Running
	I0923 10:22:11.874662   11849 system_pods.go:61] "registry-66c9cd494c-sswjh" [42afc9e2-bf4a-4c0a-9db9-8f58e617fcbe] Running
	I0923 10:22:11.874665   11849 system_pods.go:61] "registry-proxy-w6x4v" [8964449b-425a-4614-aa7b-d6cc98a185c7] Running
	I0923 10:22:11.874696   11849 system_pods.go:61] "snapshot-controller-56fcc65765-cvdcs" [306aebc0-cf33-45f7-8669-d354b0ae713c] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I0923 10:22:11.874708   11849 system_pods.go:61] "snapshot-controller-56fcc65765-zfsbk" [6f50a1f3-ec1f-4d33-8fa9-9379bfc44a79] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I0923 10:22:11.874718   11849 system_pods.go:61] "storage-provisioner" [b9297771-204e-479f-9aa4-da05fb25f230] Running
	I0923 10:22:11.874726   11849 system_pods.go:74] duration metric: took 186.058115ms to wait for pod list to return data ...
	I0923 10:22:11.874737   11849 default_sa.go:34] waiting for default service account to be created ...
	I0923 10:22:11.979093   11849 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 10:22:12.069751   11849 default_sa.go:45] found service account: "default"
	I0923 10:22:12.069776   11849 default_sa.go:55] duration metric: took 195.032177ms for default service account to be created ...
	I0923 10:22:12.069785   11849 system_pods.go:116] waiting for k8s-apps to be running ...
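The two bookkeeping checks here, default ServiceAccount present and all kube-system workloads running, map to plain reads; a minimal equivalent:

    kubectl -n default get serviceaccount default
    kubectl -n kube-system get pods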
	I0923 10:22:12.189931   11849 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 10:22:12.277344   11849 system_pods.go:86] 17 kube-system pods found
	I0923 10:22:12.277379   11849 system_pods.go:89] "coredns-7c65d6cfc9-hd4pw" [53fdec36-508d-40c2-9b22-80f6afc1976b] Running
	I0923 10:22:12.277391   11849 system_pods.go:89] "csi-hostpath-attacher-0" [da0cebec-a9ae-4226-aa37-c12c18ed4683] Pending / Ready:ContainersNotReady (containers with unready status: [csi-attacher]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-attacher])
	I0923 10:22:12.277401   11849 system_pods.go:89] "csi-hostpath-resizer-0" [58d839a9-8c23-473d-bcea-6bb514783b23] Pending / Ready:ContainersNotReady (containers with unready status: [csi-resizer]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-resizer])
	I0923 10:22:12.277412   11849 system_pods.go:89] "csi-hostpathplugin-nm6zh" [2897db1d-3abb-4194-a021-6febc9c88430] Pending / Ready:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter])
	I0923 10:22:12.277420   11849 system_pods.go:89] "etcd-addons-071702" [e3b672b2-f950-4103-bb7b-5852cbd23171] Running
	I0923 10:22:12.277426   11849 system_pods.go:89] "kube-apiserver-addons-071702" [edae2c11-e976-40e5-995a-349287adf5af] Running
	I0923 10:22:12.277432   11849 system_pods.go:89] "kube-controller-manager-addons-071702" [74585c60-486f-49ca-b8d1-3d88fafcfd85] Running
	I0923 10:22:12.277437   11849 system_pods.go:89] "kube-ingress-dns-minikube" [7a06c605-470c-4010-9782-8ddb472a082d] Running
	I0923 10:22:12.277444   11849 system_pods.go:89] "kube-proxy-gsgwd" [da9f33ce-241c-4457-87e2-b90aaf06b0ce] Running
	I0923 10:22:12.277450   11849 system_pods.go:89] "kube-scheduler-addons-071702" [d7dfada2-ae46-4f5f-bd80-4bc44bf4aa9c] Running
	I0923 10:22:12.277459   11849 system_pods.go:89] "metrics-server-84c5f94fbc-4l9jp" [04190cb8-ca8f-459e-bb88-24aa8a774d1d] Running
	I0923 10:22:12.277465   11849 system_pods.go:89] "nvidia-device-plugin-daemonset-kxghd" [959908a1-ff48-4627-ad29-d0d6134865d7] Running
	I0923 10:22:12.277473   11849 system_pods.go:89] "registry-66c9cd494c-sswjh" [42afc9e2-bf4a-4c0a-9db9-8f58e617fcbe] Running
	I0923 10:22:12.277479   11849 system_pods.go:89] "registry-proxy-w6x4v" [8964449b-425a-4614-aa7b-d6cc98a185c7] Running
	I0923 10:22:12.277489   11849 system_pods.go:89] "snapshot-controller-56fcc65765-cvdcs" [306aebc0-cf33-45f7-8669-d354b0ae713c] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I0923 10:22:12.277501   11849 system_pods.go:89] "snapshot-controller-56fcc65765-zfsbk" [6f50a1f3-ec1f-4d33-8fa9-9379bfc44a79] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I0923 10:22:12.277507   11849 system_pods.go:89] "storage-provisioner" [b9297771-204e-479f-9aa4-da05fb25f230] Running
	I0923 10:22:12.277519   11849 system_pods.go:126] duration metric: took 207.727592ms to wait for k8s-apps to be running ...
	I0923 10:22:12.277531   11849 system_svc.go:44] waiting for kubelet service to be running ....
	I0923 10:22:12.277583   11849 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0923 10:22:12.288993   11849 system_svc.go:56] duration metric: took 11.45412ms WaitForService to wait for kubelet
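The kubelet liveness check runs systemctl over SSH inside the node. From the host, the same probe would be (a sketch, assuming the profile name used in this run):

    minikube -p addons-071702 ssh -- sudo systemctl is-active kubelet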
	I0923 10:22:12.289022   11849 kubeadm.go:582] duration metric: took 43.581437962s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0923 10:22:12.289044   11849 node_conditions.go:102] verifying NodePressure condition ...
	I0923 10:22:12.471580   11849 node_conditions.go:122] node storage ephemeral capacity is 304681132Ki
	I0923 10:22:12.471607   11849 node_conditions.go:123] node cpu capacity is 8
	I0923 10:22:12.471619   11849 node_conditions.go:105] duration metric: took 182.569018ms to run NodePressure ...
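The NodePressure verification reads the node's conditions and reported capacity (304681132Ki ephemeral storage and 8 CPUs here); the same data is visible directly with:

    kubectl get node addons-071702 -o jsonpath='{.status.capacity}{"\n"}'
    kubectl describe node addons-071702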
	I0923 10:22:12.471638   11849 start.go:241] waiting for startup goroutines ...
	I0923 10:22:12.478875   11849 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 10:22:12.690306   11849 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 10:22:12.978802   11849 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 10:22:13.289010   11849 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 10:22:13.479885   11849 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 10:22:13.787874   11849 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 10:22:13.979912   11849 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 10:22:14.191554   11849 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 10:22:14.480386   11849 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 10:22:14.689580   11849 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 10:22:14.978898   11849 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 10:22:15.189383   11849 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 10:22:15.478434   11849 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 10:22:15.689383   11849 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 10:22:15.980056   11849 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 10:22:16.190665   11849 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 10:22:16.479545   11849 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 10:22:16.689763   11849 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 10:22:16.979407   11849 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 10:22:17.189639   11849 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 10:22:17.479161   11849 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 10:22:17.689741   11849 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 10:22:17.980050   11849 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 10:22:18.189269   11849 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 10:22:18.478956   11849 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 10:22:18.690933   11849 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 10:22:18.979478   11849 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 10:22:19.189524   11849 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 10:22:19.479516   11849 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 10:22:19.689484   11849 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 10:22:19.978077   11849 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 10:22:20.189681   11849 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 10:22:20.479016   11849 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 10:22:20.690194   11849 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 10:22:20.979388   11849 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 10:22:21.189868   11849 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 10:22:21.478931   11849 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 10:22:21.689122   11849 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 10:22:21.978622   11849 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 10:22:22.190551   11849 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 10:22:22.514292   11849 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 10:22:22.688601   11849 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 10:22:22.978441   11849 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 10:22:23.189955   11849 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 10:22:23.478782   11849 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 10:22:23.689997   11849 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 10:22:23.979239   11849 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 10:22:24.190359   11849 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 10:22:24.479756   11849 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 10:22:24.689822   11849 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 10:22:24.979679   11849 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 10:22:25.190281   11849 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 10:22:25.479518   11849 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 10:22:25.689834   11849 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 10:22:25.979885   11849 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 10:22:26.255176   11849 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 10:22:26.478905   11849 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 10:22:26.690455   11849 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 10:22:26.978047   11849 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 10:22:27.190064   11849 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 10:22:27.479332   11849 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 10:22:27.689909   11849 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 10:22:27.979586   11849 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 10:22:28.189501   11849 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 10:22:28.479441   11849 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 10:22:28.688935   11849 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 10:22:28.978858   11849 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 10:22:29.189917   11849 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 10:22:29.479417   11849 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 10:22:29.689501   11849 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 10:22:29.978967   11849 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 10:22:30.189251   11849 kapi.go:107] duration metric: took 48.50339918s to wait for kubernetes.io/minikube-addons=csi-hostpath-driver ...
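csi-hostpath-driver went Ready after 48.5s. Driver registration can be confirmed independently of pod state; a sketch, where the CSIDriver object name hostpath.csi.k8s.io is an assumption about the hostpath driver's manifests:

    kubectl -n kube-system get pods \
      -l kubernetes.io/minikube-addons=csi-hostpath-driver
    kubectl get csidriver hostpath.csi.k8s.io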
	I0923 10:22:30.478541   11849 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 10:22:30.979203   11849 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 10:22:31.478960   11849 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 10:22:31.977793   11849 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 10:22:32.479138   11849 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 10:22:32.978318   11849 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 10:22:33.478355   11849 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 10:22:33.978184   11849 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 10:22:34.477850   11849 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 10:22:34.979371   11849 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 10:22:35.479043   11849 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 10:22:35.979198   11849 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 10:22:36.478911   11849 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 10:22:36.978975   11849 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 10:22:37.478550   11849 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 10:22:37.978825   11849 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 10:22:38.478222   11849 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 10:22:38.978445   11849 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 10:22:39.478449   11849 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 10:22:39.978605   11849 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 10:22:40.478382   11849 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 10:22:40.978473   11849 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 10:22:41.478525   11849 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 10:22:41.978946   11849 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 10:22:42.480557   11849 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 10:22:42.978997   11849 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 10:22:43.479821   11849 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 10:22:43.979289   11849 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 10:22:44.479331   11849 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 10:22:44.979919   11849 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 10:22:45.478836   11849 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 10:22:45.979489   11849 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 10:22:46.478958   11849 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 10:22:46.978470   11849 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 10:22:47.481100   11849 kapi.go:107] duration metric: took 1m7.006388582s to wait for app.kubernetes.io/name=ingress-nginx ...
	I0923 10:23:05.685518   11849 kapi.go:86] Found 1 Pods for label selector kubernetes.io/minikube-addons=gcp-auth
	I0923 10:23:05.685538   11849 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 10:23:06.185070   11849 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 10:23:06.685190   11849 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 10:23:07.185166   11849 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 10:23:07.685462   11849 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 10:23:08.185220   11849 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 10:23:08.684831   11849 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 10:23:09.185180   11849 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 10:23:09.684762   11849 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 10:23:10.185712   11849 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 10:23:10.684770   11849 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 10:23:11.184403   11849 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 10:23:11.684934   11849 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 10:23:12.184556   11849 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 10:23:12.685720   11849 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 10:23:13.185154   11849 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 10:23:13.685207   11849 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 10:23:14.184764   11849 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 10:23:14.684381   11849 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 10:23:15.185633   11849 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 10:23:15.685035   11849 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 10:23:16.184662   11849 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 10:23:16.685383   11849 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 10:23:17.185723   11849 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 10:23:17.685623   11849 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 10:23:18.185780   11849 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 10:23:18.684775   11849 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 10:23:19.185128   11849 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 10:23:19.685527   11849 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 10:23:20.185690   11849 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 10:23:20.685196   11849 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 10:23:21.185317   11849 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 10:23:21.685275   11849 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 10:23:22.185260   11849 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 10:23:22.684957   11849 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 10:23:23.184699   11849 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 10:23:23.684507   11849 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 10:23:24.185638   11849 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 10:23:24.684826   11849 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 10:23:25.184811   11849 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 10:23:25.685388   11849 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 10:23:26.185266   11849 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 10:23:26.685105   11849 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 10:23:27.185160   11849 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 10:23:27.685254   11849 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 10:23:28.185311   11849 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 10:23:28.685503   11849 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 10:23:29.184877   11849 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 10:23:29.685230   11849 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 10:23:30.184854   11849 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 10:23:30.684629   11849 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 10:23:31.186008   11849 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 10:23:31.684858   11849 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 10:23:32.184525   11849 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 10:23:32.685611   11849 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 10:23:33.185718   11849 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 10:23:33.684680   11849 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 10:23:34.186050   11849 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 10:23:34.684712   11849 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 10:23:35.184958   11849 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 10:23:35.685491   11849 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 10:23:36.185265   11849 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 10:23:36.685109   11849 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 10:23:37.184872   11849 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 10:23:37.684677   11849 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 10:23:38.184727   11849 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 10:23:38.684493   11849 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 10:23:39.184822   11849 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 10:23:39.684673   11849 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 10:23:40.185787   11849 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 10:23:40.684885   11849 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 10:23:41.185102   11849 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 10:23:41.684762   11849 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 10:23:42.185420   11849 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 10:23:42.685556   11849 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 10:23:43.185884   11849 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 10:23:43.685465   11849 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 10:23:44.185126   11849 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 10:23:44.685090   11849 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 10:23:45.185203   11849 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 10:23:45.685335   11849 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 10:23:46.185384   11849 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 10:23:46.685826   11849 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 10:23:47.184819   11849 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 10:23:47.684856   11849 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 10:23:48.185425   11849 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 10:23:48.685456   11849 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 10:23:49.185696   11849 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 10:23:49.685777   11849 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 10:23:50.184922   11849 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 10:23:50.684928   11849 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 10:23:51.185002   11849 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 10:23:51.685003   11849 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 10:23:52.184980   11849 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 10:23:52.685170   11849 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 10:23:53.185279   11849 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 10:23:53.685222   11849 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 10:23:54.185249   11849 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 10:23:54.685445   11849 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 10:23:55.185974   11849 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 10:23:55.685642   11849 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 10:23:56.185021   11849 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 10:23:56.684588   11849 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 10:23:57.185662   11849 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 10:23:57.684982   11849 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 10:23:58.185082   11849 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 10:23:58.684773   11849 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 10:23:59.185178   11849 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 10:23:59.684899   11849 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 10:24:00.185005   11849 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 10:24:00.684855   11849 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 10:24:01.184953   11849 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 10:24:01.684718   11849 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 10:24:02.184413   11849 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 10:24:02.685397   11849 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 10:24:03.185502   11849 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 10:24:03.685386   11849 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 10:24:04.185084   11849 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 10:24:04.684951   11849 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 10:24:05.185115   11849 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 10:24:05.685337   11849 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 10:24:06.185583   11849 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 10:24:06.685564   11849 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 10:24:07.185713   11849 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 10:24:07.684775   11849 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 10:24:08.184788   11849 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 10:24:08.685473   11849 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 10:24:09.185387   11849 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 10:24:09.685720   11849 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 10:24:10.185083   11849 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 10:24:10.685746   11849 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 10:24:11.185513   11849 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 10:24:11.686018   11849 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 10:24:12.185422   11849 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 10:24:12.698458   11849 kapi.go:107] duration metric: took 2m29.516642295s to wait for kubernetes.io/minikube-addons=gcp-auth ...
	I0923 10:24:12.714404   11849 out.go:177] * Your GCP credentials will now be mounted into every pod created in the addons-071702 cluster.
	I0923 10:24:12.716422   11849 out.go:177] * If you don't want your credentials mounted into a specific pod, add a label with the `gcp-auth-skip-secret` key to your pod configuration.
	I0923 10:24:12.718018   11849 out.go:177] * If you want existing pods to be mounted with credentials, either recreate them or rerun addons enable with --refresh.
	I0923 10:24:12.719199   11849 out.go:177] * Enabled addons: ingress-dns, storage-provisioner, volcano, nvidia-device-plugin, cloud-spanner, metrics-server, inspektor-gadget, yakd, default-storageclass, volumesnapshots, registry, csi-hostpath-driver, ingress, gcp-auth
	I0923 10:24:12.720228   11849 addons.go:510] duration metric: took 2m44.012571718s for enable addons: enabled=[ingress-dns storage-provisioner volcano nvidia-device-plugin cloud-spanner metrics-server inspektor-gadget yakd default-storageclass volumesnapshots registry csi-hostpath-driver ingress gcp-auth]
	I0923 10:24:12.720264   11849 start.go:246] waiting for cluster config update ...
	I0923 10:24:12.720290   11849 start.go:255] writing updated cluster config ...
	I0923 10:24:12.720553   11849 ssh_runner.go:195] Run: rm -f paused
	I0923 10:24:12.800417   11849 start.go:600] kubectl: 1.31.1, cluster: 1.31.1 (minor skew: 0)
	I0923 10:24:12.802276   11849 out.go:177] * Done! kubectl is now configured to use "addons-071702" cluster and "default" namespace by default
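
The three hints printed at the end of the gcp-auth wait above describe the addon's opt-out mechanism. As a minimal sketch, assuming a hypothetical pod name and the value "true" (the log only specifies the `gcp-auth-skip-secret` label key), a pod can be excluded from credential mounting like this:

    # Hypothetical pod; only the gcp-auth-skip-secret label key is taken from the log above.
    kubectl --context addons-071702 run no-gcp-creds --image=nginx \
      --labels="gcp-auth-skip-secret=true"

Pods that already exist keep whatever was mounted when they were admitted, which is why the log suggests recreating them or rerunning `addons enable` with `--refresh`.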
	
	
	==> Docker <==
	Sep 23 10:33:45 addons-071702 cri-dockerd[1611]: time="2024-09-23T10:33:45Z" level=info msg="Failed to read pod IP from plugin/docker: networkPlugin cni failed on the status hook for pod \"task-pv-pod-restore_default\": unexpected command output Device \"eth0\" does not exist.\n with error: exit status 1"
	Sep 23 10:33:45 addons-071702 dockerd[1345]: time="2024-09-23T10:33:45.333522605Z" level=info msg="ignoring event" container=df8c881ec8ffc48561b54d365867856f8920bb054556b2db735213e6e333a0d6 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Sep 23 10:33:47 addons-071702 dockerd[1345]: time="2024-09-23T10:33:47.373184251Z" level=info msg="ignoring event" container=55fce2e386d5e14f3235735c779acf3136bb7b7dddcfe4ea2222960012ad2aa3 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Sep 23 10:33:47 addons-071702 dockerd[1345]: time="2024-09-23T10:33:47.464021709Z" level=info msg="ignoring event" container=98ad8767bad5ee2a83b6c6ff9db0f79770b716871efdee395a289b952715c583 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Sep 23 10:33:47 addons-071702 dockerd[1345]: time="2024-09-23T10:33:47.467896699Z" level=info msg="ignoring event" container=49b0944ed0d4770ecfd1cc69a7283c91ef681cbca216ddbea6c3f33a32315c53 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Sep 23 10:33:47 addons-071702 dockerd[1345]: time="2024-09-23T10:33:47.468681386Z" level=info msg="ignoring event" container=143912aab3482e05f9b13b298dd2cfb4692ebc9c3bcba95420d0afff9445122c module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Sep 23 10:33:47 addons-071702 dockerd[1345]: time="2024-09-23T10:33:47.469998250Z" level=info msg="ignoring event" container=8acb1a3afff633c89788aa338731a3a82ae296593e5d006192f1d88b4aaebf42 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Sep 23 10:33:47 addons-071702 dockerd[1345]: time="2024-09-23T10:33:47.470960528Z" level=info msg="ignoring event" container=832494167f1e94451657b1cbcdc6adbb063e7462356ec4ecb93a56e618a8e814 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Sep 23 10:33:47 addons-071702 dockerd[1345]: time="2024-09-23T10:33:47.557232664Z" level=info msg="ignoring event" container=0861e1919fc9d7a4ba964afe657741b968c4de1f35a7ae4ba7d29bee56bd978e module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Sep 23 10:33:47 addons-071702 dockerd[1345]: time="2024-09-23T10:33:47.558993047Z" level=info msg="ignoring event" container=e8ea65fe812686e5c472b5b301de88e49cc936d39de6d57f4ce81c98d0db5aa6 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Sep 23 10:33:47 addons-071702 dockerd[1345]: time="2024-09-23T10:33:47.757911401Z" level=info msg="ignoring event" container=4ce9528e1ba0a4f7141362f866bf96dae2e18793585b13596d9f0235c146cf5f module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Sep 23 10:33:47 addons-071702 dockerd[1345]: time="2024-09-23T10:33:47.783743907Z" level=info msg="ignoring event" container=1d61a217146dd96ce59b1f0b283c78dbf35774a303fe3e1b52e2c584f98f5342 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Sep 23 10:33:47 addons-071702 dockerd[1345]: time="2024-09-23T10:33:47.804218123Z" level=info msg="ignoring event" container=77c68e23f2bc415e30fe4a4495130df868caffb1afa2c2b1d03e412dbd1bb3aa module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Sep 23 10:33:49 addons-071702 dockerd[1345]: time="2024-09-23T10:33:49.407282934Z" level=info msg="Attempting next endpoint for pull after error: Head \"https://gcr.io/v2/k8s-minikube/busybox/manifests/latest\": unauthorized: authentication failed" spanID=2a0776a9f89a6d1b traceID=c578c5a26e36cbdf336e6ad3df3852b7
	Sep 23 10:33:49 addons-071702 dockerd[1345]: time="2024-09-23T10:33:49.409495161Z" level=error msg="Handler for POST /v1.43/images/create returned error: Head \"https://gcr.io/v2/k8s-minikube/busybox/manifests/latest\": unauthorized: authentication failed" spanID=2a0776a9f89a6d1b traceID=c578c5a26e36cbdf336e6ad3df3852b7
	Sep 23 10:33:53 addons-071702 dockerd[1345]: time="2024-09-23T10:33:53.665531614Z" level=info msg="ignoring event" container=421c2c6c30227975648101ff3ba3e648f0c524f1c4e91dcd43f5ef08a42088fe module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Sep 23 10:33:53 addons-071702 dockerd[1345]: time="2024-09-23T10:33:53.673177747Z" level=info msg="ignoring event" container=cc06f4a274cf3542c8d279f74505145c961b003f0e22f454037d08bae50e18f1 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Sep 23 10:33:53 addons-071702 dockerd[1345]: time="2024-09-23T10:33:53.812162149Z" level=info msg="ignoring event" container=0623cb23025a8274a7d1fac5b18807d6244bddf0a6325c6324e3be9709571196 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Sep 23 10:33:53 addons-071702 dockerd[1345]: time="2024-09-23T10:33:53.841744605Z" level=info msg="ignoring event" container=5d092d0bc4cd5db062e290b3793dfe813e3a8d6355a3f0b31e53020c267696a4 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Sep 23 10:34:05 addons-071702 dockerd[1345]: time="2024-09-23T10:34:05.130509604Z" level=info msg="ignoring event" container=535067b4bd37dd82921e916b23a53df83ac3dc734ad2b703cc20af8febe0dea2 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Sep 23 10:34:05 addons-071702 dockerd[1345]: time="2024-09-23T10:34:05.655129553Z" level=info msg="ignoring event" container=5d114050ac1c887356b180e39c42b27a6b7eec948223cab236e11378c8ec2b63 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Sep 23 10:34:05 addons-071702 dockerd[1345]: time="2024-09-23T10:34:05.663697802Z" level=info msg="ignoring event" container=17261e380369bd5ff191af49686648f0b3c6f981535d76cd2f9f0d864285cb08 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Sep 23 10:34:05 addons-071702 dockerd[1345]: time="2024-09-23T10:34:05.795022012Z" level=info msg="ignoring event" container=496c7844e167ff0afc182e22f66317342db55e1269ec5830f86cdaf4fca57834 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Sep 23 10:34:05 addons-071702 cri-dockerd[1611]: time="2024-09-23T10:34:05Z" level=info msg="Failed to read pod IP from plugin/docker: networkPlugin cni failed on the status hook for pod \"registry-proxy-w6x4v_kube-system\": unexpected command output Device \"eth0\" does not exist.\n with error: exit status 1"
	Sep 23 10:34:05 addons-071702 dockerd[1345]: time="2024-09-23T10:34:05.872207774Z" level=info msg="ignoring event" container=9dc3f70eb563e7df2aa48777163281b82fa975245e704a41063c3163bf31c7be module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
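
The two dockerd entries at 10:33:49 above record an image pull being rejected: the daemon's HEAD request for the `gcr.io/k8s-minikube/busybox` "latest" manifest came back "unauthorized: authentication failed", so the pull never completed. Whether the tag is absent or anonymous access is restricted, the effect is the same. A hedged reproduction from any Docker host (the pinned tag on the second line is an assumption, not taken from this log):

    # Reproduces the rejected pull logged at 10:33:49 (docker pull defaults to :latest).
    docker pull gcr.io/k8s-minikube/busybox
    # Pinning an explicit tag is the usual workaround; 1.28.4-glibc is assumed here, verify before relying on it.
    docker pull gcr.io/k8s-minikube/busybox:1.28.4-glibc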
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                                        CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	2d033fc2cf3c6       kicbase/echo-server@sha256:127ac38a2bb9537b7f252addff209ea6801edcac8a92c8b1104dacd66a583ed6                                  27 seconds ago      Running             hello-world-app           0                   f4166ecdf6a4f       hello-world-app-55bf9c44b4-87prz
	a4dec6063579d       nginx@sha256:a5127daff3d6f4606be3100a252419bfa84fd6ee5cd74d0feaca1a5068f97dcf                                                38 seconds ago      Running             nginx                     0                   62b6d1fb7e0b4       nginx
	2186582a836b5       gcr.io/k8s-minikube/gcp-auth-webhook@sha256:e6c5b3bc32072ea370d34c27836efd11b3519d25bd444c2a8efc339cff0e20fb                 9 minutes ago       Running             gcp-auth                  0                   7572b9e72f093       gcp-auth-89d5ffd79-5njc2
	7f21c32517212       registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:a320a50cc91bd15fd2d6fa6de58bd98c1bd64b9a6f926ce23a600d87043455a3   11 minutes ago      Exited              patch                     0                   9590f220fe1ee       ingress-nginx-admission-patch-5fvqk
	6aa8dc6312669       registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:a320a50cc91bd15fd2d6fa6de58bd98c1bd64b9a6f926ce23a600d87043455a3   11 minutes ago      Exited              create                    0                   4b8559e455e03       ingress-nginx-admission-create-5xvgp
	17261e380369b       gcr.io/k8s-minikube/kube-registry-proxy@sha256:b3fa0b2df8737fdb85ad5918a7e2652527463e357afff83a5e5bb966bcedc367              12 minutes ago      Exited              registry-proxy            0                   9dc3f70eb563e       registry-proxy-w6x4v
	ca288cd1b4a28       rancher/local-path-provisioner@sha256:e34c88ae0affb1cdefbb874140d6339d4a27ec4ee420ae8199cd839997b05246                       12 minutes ago      Running             local-path-provisioner    0                   834c04c24ac98       local-path-provisioner-86d989889c-5q6hg
	283a48bd8ff1e       6e38f40d628db                                                                                                                12 minutes ago      Running             storage-provisioner       0                   e93d1d20f2606       storage-provisioner
	20ed0161c4e82       c69fa2e9cbf5f                                                                                                                12 minutes ago      Running             coredns                   0                   ead993cb3a469       coredns-7c65d6cfc9-hd4pw
	20bcbfe91581d       60c005f310ff3                                                                                                                12 minutes ago      Running             kube-proxy                0                   3394ea776aa5b       kube-proxy-gsgwd
	9e05497c00f23       6bab7719df100                                                                                                                12 minutes ago      Running             kube-apiserver            0                   238abf4e06771       kube-apiserver-addons-071702
	cb22d2aad924c       175ffd71cce3d                                                                                                                12 minutes ago      Running             kube-controller-manager   0                   1a00876a961c6       kube-controller-manager-addons-071702
	9305922fd2cd3       9aa1fad941575                                                                                                                12 minutes ago      Running             kube-scheduler            0                   7a61731b82779       kube-scheduler-addons-071702
	dd7e3397fd9bd       2e96e5913fc06                                                                                                                12 minutes ago      Running             etcd                      0                   2cdd1c6d4772a       etcd-addons-071702
	
	
	==> coredns [20ed0161c4e8] <==
	[INFO] 10.244.0.21:37527 - 27265 "A IN hello-world-app.default.svc.cluster.local.google.internal. udp 75 false 512" NXDOMAIN qr,rd,ra 75 0.004966417s
	[INFO] 10.244.0.21:34780 - 30491 "A IN hello-world-app.default.svc.cluster.local.google.internal. udp 75 false 512" NXDOMAIN qr,rd,ra 75 0.005072356s
	[INFO] 10.244.0.21:51605 - 55134 "A IN hello-world-app.default.svc.cluster.local.google.internal. udp 75 false 512" NXDOMAIN qr,rd,ra 75 0.005505481s
	[INFO] 10.244.0.21:59388 - 5211 "A IN hello-world-app.default.svc.cluster.local.google.internal. udp 75 false 512" NXDOMAIN qr,rd,ra 75 0.005587385s
	[INFO] 10.244.0.21:59720 - 43205 "A IN hello-world-app.default.svc.cluster.local.google.internal. udp 75 false 512" NXDOMAIN qr,rd,ra 75 0.004968399s
	[INFO] 10.244.0.21:51808 - 49668 "A IN hello-world-app.default.svc.cluster.local.google.internal. udp 75 false 512" NXDOMAIN qr,rd,ra 75 0.005323956s
	[INFO] 10.244.0.21:54372 - 11455 "A IN hello-world-app.default.svc.cluster.local.google.internal. udp 75 false 512" NXDOMAIN qr,rd,ra 75 0.005069541s
	[INFO] 10.244.0.21:34780 - 6185 "AAAA IN hello-world-app.default.svc.cluster.local.google.internal. udp 75 false 512" NXDOMAIN qr,rd,ra 75 0.004714439s
	[INFO] 10.244.0.21:51605 - 26189 "AAAA IN hello-world-app.default.svc.cluster.local.google.internal. udp 75 false 512" NXDOMAIN qr,rd,ra 75 0.005350006s
	[INFO] 10.244.0.21:37527 - 31143 "AAAA IN hello-world-app.default.svc.cluster.local.google.internal. udp 75 false 512" NXDOMAIN qr,rd,ra 75 0.005423282s
	[INFO] 10.244.0.21:34780 - 56585 "A IN hello-world-app.default.svc.cluster.local. udp 59 false 512" NOERROR qr,aa,rd 116 0.000085415s
	[INFO] 10.244.0.21:59720 - 31265 "AAAA IN hello-world-app.default.svc.cluster.local.google.internal. udp 75 false 512" NXDOMAIN qr,rd,ra 75 0.005331724s
	[INFO] 10.244.0.21:51978 - 48738 "AAAA IN hello-world-app.default.svc.cluster.local.google.internal. udp 75 false 512" NXDOMAIN qr,rd,ra 75 0.005650998s
	[INFO] 10.244.0.21:54372 - 35793 "AAAA IN hello-world-app.default.svc.cluster.local.google.internal. udp 75 false 512" NXDOMAIN qr,rd,ra 75 0.005350989s
	[INFO] 10.244.0.21:51605 - 22590 "A IN hello-world-app.default.svc.cluster.local. udp 59 false 512" NOERROR qr,aa,rd 116 0.000080852s
	[INFO] 10.244.0.21:59388 - 32157 "AAAA IN hello-world-app.default.svc.cluster.local.google.internal. udp 75 false 512" NXDOMAIN qr,rd,ra 75 0.004988657s
	[INFO] 10.244.0.21:43073 - 39520 "AAAA IN hello-world-app.default.svc.cluster.local.google.internal. udp 75 false 512" NXDOMAIN qr,rd,ra 75 0.005812606s
	[INFO] 10.244.0.21:51808 - 45954 "AAAA IN hello-world-app.default.svc.cluster.local.google.internal. udp 75 false 512" NXDOMAIN qr,rd,ra 75 0.005410385s
	[INFO] 10.244.0.21:51978 - 52911 "A IN hello-world-app.default.svc.cluster.local. udp 59 false 512" NOERROR qr,aa,rd 116 0.00007709s
	[INFO] 10.244.0.21:59388 - 44770 "A IN hello-world-app.default.svc.cluster.local. udp 59 false 512" NOERROR qr,aa,rd 116 0.0000632s
	[INFO] 10.244.0.21:43073 - 28370 "A IN hello-world-app.default.svc.cluster.local. udp 59 false 512" NOERROR qr,aa,rd 116 0.000074533s
	[INFO] 10.244.0.21:37527 - 50849 "A IN hello-world-app.default.svc.cluster.local. udp 59 false 512" NOERROR qr,aa,rd 116 0.000100353s
	[INFO] 10.244.0.21:54372 - 50164 "A IN hello-world-app.default.svc.cluster.local. udp 59 false 512" NOERROR qr,aa,rd 116 0.000077962s
	[INFO] 10.244.0.21:51808 - 56123 "A IN hello-world-app.default.svc.cluster.local. udp 59 false 512" NOERROR qr,aa,rd 116 0.000106503s
	[INFO] 10.244.0.21:59720 - 8447 "A IN hello-world-app.default.svc.cluster.local. udp 59 false 512" NOERROR qr,aa,rd 116 0.00006348s
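
The NXDOMAIN/NOERROR pairs above are the normal Kubernetes resolver search-path walk, not failures: with the default ndots:5, the pod's resolver first tries the name with each inherited search suffix (here the GCE-provided google.internal), collects NXDOMAIN for each, and only then does the bare cluster name answer NOERROR with the service record. A quick way to watch the same expansion from inside the cluster (the probe pod name is illustrative):

    # Hypothetical probe pod; busybox's nslookup triggers the same search-path queries coredns logged above.
    kubectl --context addons-071702 run dns-probe --rm -it --restart=Never --image=busybox -- \
      nslookup hello-world-app.default.svc.cluster.local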
	
	
	==> describe nodes <==
	Name:               addons-071702
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=addons-071702
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=f69bf2f8ed9442c9c01edbe27466c5398c68b986
	                    minikube.k8s.io/name=addons-071702
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_09_23T10_21_24_0700
	                    minikube.k8s.io/version=v1.34.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	                    topology.hostpath.csi/node=addons-071702
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/cri-dockerd.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 23 Sep 2024 10:21:20 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  addons-071702
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 23 Sep 2024 10:33:57 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 23 Sep 2024 10:33:57 +0000   Mon, 23 Sep 2024 10:21:19 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 23 Sep 2024 10:33:57 +0000   Mon, 23 Sep 2024 10:21:19 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 23 Sep 2024 10:33:57 +0000   Mon, 23 Sep 2024 10:21:19 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 23 Sep 2024 10:33:57 +0000   Mon, 23 Sep 2024 10:21:20 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.49.2
	  Hostname:    addons-071702
	Capacity:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32859312Ki
	  pods:               110
	Allocatable:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32859312Ki
	  pods:               110
	System Info:
	  Machine ID:                 10b1783ce4ab45188955f39aa1dcf347
	  System UUID:                0b8cf6c3-b2c2-417d-a323-acbca8a7fc1c
	  Boot ID:                    cfa98cdb-4c43-498d-8dd6-a23453a788b2
	  Kernel Version:             5.15.0-1069-gcp
	  OS Image:                   Ubuntu 22.04.5 LTS
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  docker://27.3.0
	  Kubelet Version:            v1.31.1
	  Kube-Proxy Version:         v1.31.1
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (12 in total)
	  Namespace                   Name                                       CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                       ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                    0 (0%)        0 (0%)      0 (0%)           0 (0%)         9m15s
	  default                     hello-world-app-55bf9c44b4-87prz           0 (0%)        0 (0%)      0 (0%)           0 (0%)         29s
	  default                     nginx                                      0 (0%)        0 (0%)      0 (0%)           0 (0%)         41s
	  gcp-auth                    gcp-auth-89d5ffd79-5njc2                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         11m
	  kube-system                 coredns-7c65d6cfc9-hd4pw                   100m (1%)     0 (0%)      70Mi (0%)        170Mi (0%)     12m
	  kube-system                 etcd-addons-071702                         100m (1%)     0 (0%)      100Mi (0%)       0 (0%)         12m
	  kube-system                 kube-apiserver-addons-071702               250m (3%)     0 (0%)      0 (0%)           0 (0%)         12m
	  kube-system                 kube-controller-manager-addons-071702      200m (2%)     0 (0%)      0 (0%)           0 (0%)         12m
	  kube-system                 kube-proxy-gsgwd                           0 (0%)        0 (0%)      0 (0%)           0 (0%)         12m
	  kube-system                 kube-scheduler-addons-071702               100m (1%)     0 (0%)      0 (0%)           0 (0%)         12m
	  kube-system                 storage-provisioner                        0 (0%)        0 (0%)      0 (0%)           0 (0%)         12m
	  local-path-storage          local-path-provisioner-86d989889c-5q6hg    0 (0%)        0 (0%)      0 (0%)           0 (0%)         12m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (9%)   0 (0%)
	  memory             170Mi (0%)  170Mi (0%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type     Reason                   Age   From             Message
	  ----     ------                   ----  ----             -------
	  Normal   Starting                 12m   kube-proxy       
	  Normal   Starting                 12m   kubelet          Starting kubelet.
	  Warning  CgroupV1                 12m   kubelet          Cgroup v1 support is in maintenance mode, please migrate to Cgroup v2.
	  Normal   NodeAllocatableEnforced  12m   kubelet          Updated Node Allocatable limit across pods
	  Normal   NodeHasSufficientMemory  12m   kubelet          Node addons-071702 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    12m   kubelet          Node addons-071702 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     12m   kubelet          Node addons-071702 status is now: NodeHasSufficientPID
	  Normal   RegisteredNode           12m   node-controller  Node addons-071702 event: Registered Node addons-071702 in Controller
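
For reference, the percentages in the Allocated resources table above are simply requests divided by the node's allocatable capacity, rounded down; a quick shell check with the values from the Capacity/Allocatable tables:

    # 750m CPU requested of 8 CPUs (8000m) allocatable -> prints 9, matching "750m (9%)"
    echo $(( 750 * 100 / 8000 ))
    # 170Mi memory requested of 32859312Ki allocatable -> prints 0, matching "170Mi (0%)"
    echo $(( 170 * 1024 * 100 / 32859312 ))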
	
	
	==> dmesg <==
	[  +0.000009] ll header: 00000000: ff ff ff ff ff ff 96 7d 3f 8f d5 f2 08 06
	[  +2.025426] IPv4: martian source 10.244.0.1 from 10.244.0.17, on dev eth0
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff 3a da b7 cd aa 75 08 06
	[  +5.411075] IPv4: martian source 10.244.0.1 from 10.244.0.19, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff 42 c7 6b a5 4c 94 08 06
	[  +0.286718] IPv4: martian source 10.244.0.1 from 10.244.0.18, on dev eth0
	[  +0.000005] ll header: 00000000: ff ff ff ff ff ff 62 13 1b ac 27 38 08 06
	[  +0.541061] IPv4: martian source 10.244.0.1 from 10.244.0.20, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff 5a 10 20 02 d8 8f 08 06
	[ +18.624270] IPv4: martian source 10.244.0.1 from 10.244.0.21, on dev eth0
	[  +0.000005] ll header: 00000000: ff ff ff ff ff ff 52 d7 61 9d 2d 0a 08 06
	[  +1.076226] IPv4: martian source 10.244.0.1 from 10.244.0.22, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff ce 46 0b eb 24 09 08 06
	[Sep23 10:23] IPv4: martian source 10.244.0.1 from 10.244.0.23, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff fa b1 04 43 d3 94 08 06
	[  +0.038260] IPv4: martian source 10.244.0.1 from 10.244.0.24, on dev eth0
	[  +0.000008] ll header: 00000000: ff ff ff ff ff ff 42 ba 0a 0f 6f 42 08 06
	[Sep23 10:24] IPv4: martian source 10.244.0.1 from 10.244.0.25, on dev eth0
	[  +0.000005] ll header: 00000000: ff ff ff ff ff ff de 42 63 c4 f5 bd 08 06
	[  +0.000420] IPv4: martian source 10.244.0.25 from 10.244.0.3, on dev eth0
	[  +0.000004] ll header: 00000000: ff ff ff ff ff ff 3e f6 8a 3b 20 b6 08 06
	[Sep23 10:33] IPv4: martian source 10.244.0.34 from 10.244.0.21, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff 52 d7 61 9d 2d 0a 08 06
	[  +0.407389] IPv4: martian source 10.244.0.21 from 10.244.0.3, on dev eth0
	[  +0.000008] ll header: 00000000: ff ff ff ff ff ff 3e f6 8a 3b 20 b6 08 06
	
	
	==> etcd [dd7e3397fd9b] <==
	{"level":"info","ts":"2024-09-23T10:21:19.470437Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aec36adc501070cc received MsgPreVoteResp from aec36adc501070cc at term 1"}
	{"level":"info","ts":"2024-09-23T10:21:19.470461Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aec36adc501070cc became candidate at term 2"}
	{"level":"info","ts":"2024-09-23T10:21:19.470476Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aec36adc501070cc received MsgVoteResp from aec36adc501070cc at term 2"}
	{"level":"info","ts":"2024-09-23T10:21:19.470496Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aec36adc501070cc became leader at term 2"}
	{"level":"info","ts":"2024-09-23T10:21:19.470509Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: aec36adc501070cc elected leader aec36adc501070cc at term 2"}
	{"level":"info","ts":"2024-09-23T10:21:19.471418Z","caller":"etcdserver/server.go:2118","msg":"published local member to cluster through raft","local-member-id":"aec36adc501070cc","local-member-attributes":"{Name:addons-071702 ClientURLs:[https://192.168.49.2:2379]}","request-path":"/0/members/aec36adc501070cc/attributes","cluster-id":"fa54960ea34d58be","publish-timeout":"7s"}
	{"level":"info","ts":"2024-09-23T10:21:19.471448Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-09-23T10:21:19.471477Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-09-23T10:21:19.471653Z","caller":"etcdserver/server.go:2629","msg":"setting up initial cluster version using v2 API","cluster-version":"3.5"}
	{"level":"info","ts":"2024-09-23T10:21:19.471802Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2024-09-23T10:21:19.471883Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2024-09-23T10:21:19.472321Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"fa54960ea34d58be","local-member-id":"aec36adc501070cc","cluster-version":"3.5"}
	{"level":"info","ts":"2024-09-23T10:21:19.472404Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2024-09-23T10:21:19.472426Z","caller":"etcdserver/server.go:2653","msg":"cluster version is updated","cluster-version":"3.5"}
	{"level":"info","ts":"2024-09-23T10:21:19.472654Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2024-09-23T10:21:19.472663Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2024-09-23T10:21:19.473806Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.49.2:2379"}
	{"level":"info","ts":"2024-09-23T10:21:19.473806Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"warn","ts":"2024-09-23T10:21:31.052775Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"201.399399ms","expected-duration":"100ms","prefix":"","request":"header:<ID:8128032086761132362 username:\"kube-apiserver-etcd-client\" auth_revision:1 > txn:<compare:<target:MOD key:\"/registry/replicasets/kube-system/coredns-7c65d6cfc9\" mod_revision:364 > success:<request_put:<key:\"/registry/replicasets/kube-system/coredns-7c65d6cfc9\" value_size:3722 >> failure:<request_range:<key:\"/registry/replicasets/kube-system/coredns-7c65d6cfc9\" > >>","response":"size:16"}
	{"level":"info","ts":"2024-09-23T10:21:31.052948Z","caller":"traceutil/trace.go:171","msg":"trace[2009601065] transaction","detail":"{read_only:false; response_revision:366; number_of_response:1; }","duration":"199.394436ms","start":"2024-09-23T10:21:30.853536Z","end":"2024-09-23T10:21:31.052930Z","steps":["trace[2009601065] 'process raft request'  (duration: 199.329553ms)"],"step_count":1}
	{"level":"info","ts":"2024-09-23T10:21:31.053009Z","caller":"traceutil/trace.go:171","msg":"trace[1502259280] transaction","detail":"{read_only:false; response_revision:365; number_of_response:1; }","duration":"301.743716ms","start":"2024-09-23T10:21:30.751239Z","end":"2024-09-23T10:21:31.052982Z","steps":["trace[1502259280] 'process raft request'  (duration: 99.62689ms)","trace[1502259280] 'compare'  (duration: 201.084055ms)"],"step_count":2}
	{"level":"warn","ts":"2024-09-23T10:21:31.053113Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-09-23T10:21:30.751220Z","time spent":"301.835015ms","remote":"127.0.0.1:36346","response type":"/etcdserverpb.KV/Txn","request count":1,"request size":3782,"response count":0,"response size":40,"request content":"compare:<target:MOD key:\"/registry/replicasets/kube-system/coredns-7c65d6cfc9\" mod_revision:364 > success:<request_put:<key:\"/registry/replicasets/kube-system/coredns-7c65d6cfc9\" value_size:3722 >> failure:<request_range:<key:\"/registry/replicasets/kube-system/coredns-7c65d6cfc9\" > >"}
	{"level":"info","ts":"2024-09-23T10:31:19.674928Z","caller":"mvcc/index.go:214","msg":"compact tree index","revision":1846}
	{"level":"info","ts":"2024-09-23T10:31:19.698713Z","caller":"mvcc/kvstore_compaction.go:69","msg":"finished scheduled compaction","compact-revision":1846,"took":"23.227145ms","hash":1388776187,"current-db-size-bytes":9011200,"current-db-size":"9.0 MB","current-db-size-in-use-bytes":4907008,"current-db-size-in-use":"4.9 MB"}
	{"level":"info","ts":"2024-09-23T10:31:19.698761Z","caller":"mvcc/hash.go:137","msg":"storing new hash","hash":1388776187,"revision":1846,"compact-revision":-1}
	
	
	==> gcp-auth [2186582a836b] <==
	2024/09/23 10:24:51 Ready to write response ...
	2024/09/23 10:24:51 Ready to marshal response ...
	2024/09/23 10:24:51 Ready to write response ...
	2024/09/23 10:32:54 Ready to marshal response ...
	2024/09/23 10:32:54 Ready to write response ...
	2024/09/23 10:32:54 Ready to marshal response ...
	2024/09/23 10:32:54 Ready to write response ...
	2024/09/23 10:32:55 Ready to marshal response ...
	2024/09/23 10:32:55 Ready to write response ...
	2024/09/23 10:33:03 Ready to marshal response ...
	2024/09/23 10:33:03 Ready to write response ...
	2024/09/23 10:33:05 Ready to marshal response ...
	2024/09/23 10:33:05 Ready to write response ...
	2024/09/23 10:33:11 Ready to marshal response ...
	2024/09/23 10:33:11 Ready to write response ...
	2024/09/23 10:33:11 Ready to marshal response ...
	2024/09/23 10:33:11 Ready to write response ...
	2024/09/23 10:33:11 Ready to marshal response ...
	2024/09/23 10:33:11 Ready to write response ...
	2024/09/23 10:33:25 Ready to marshal response ...
	2024/09/23 10:33:25 Ready to write response ...
	2024/09/23 10:33:36 Ready to marshal response ...
	2024/09/23 10:33:36 Ready to write response ...
	2024/09/23 10:33:37 Ready to marshal response ...
	2024/09/23 10:33:37 Ready to write response ...
	
	
	==> kernel <==
	 10:34:06 up 16 min,  0 users,  load average: 0.69, 0.44, 0.30
	Linux addons-071702 5.15.0-1069-gcp #77~20.04.1-Ubuntu SMP Sun Sep 1 19:39:16 UTC 2024 x86_64 x86_64 x86_64 GNU/Linux
	PRETTY_NAME="Ubuntu 22.04.5 LTS"
	
	
	==> kube-apiserver [9e05497c00f2] <==
	W0923 10:24:43.267833       1 cacher.go:171] Terminating all watchers from cacher queues.scheduling.volcano.sh
	W0923 10:24:43.274570       1 cacher.go:171] Terminating all watchers from cacher podgroups.scheduling.volcano.sh
	W0923 10:24:43.474842       1 cacher.go:171] Terminating all watchers from cacher jobs.batch.volcano.sh
	W0923 10:24:43.766929       1 cacher.go:171] Terminating all watchers from cacher jobflows.flow.volcano.sh
	W0923 10:24:44.098154       1 cacher.go:171] Terminating all watchers from cacher jobtemplates.flow.volcano.sh
	I0923 10:33:03.257309       1 controller.go:615] quota admission added evaluator for: volumesnapshots.snapshot.storage.k8s.io
	I0923 10:33:11.608625       1 alloc.go:330] "allocated clusterIPs" service="headlamp/headlamp" clusterIPs={"IPv4":"10.99.228.81"}
	I0923 10:33:20.349575       1 handler.go:286] Adding GroupVersion gadget.kinvolk.io v1alpha1 to ResourceManager
	W0923 10:33:21.466076       1 cacher.go:171] Terminating all watchers from cacher traces.gadget.kinvolk.io
	I0923 10:33:25.789109       1 controller.go:615] quota admission added evaluator for: ingresses.networking.k8s.io
	I0923 10:33:25.969294       1 alloc.go:330] "allocated clusterIPs" service="default/nginx" clusterIPs={"IPv4":"10.98.226.140"}
	I0923 10:33:37.472831       1 alloc.go:330] "allocated clusterIPs" service="default/hello-world-app" clusterIPs={"IPv4":"10.102.197.144"}
	I0923 10:33:53.515901       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0923 10:33:53.515955       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I0923 10:33:53.528086       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0923 10:33:53.528138       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I0923 10:33:53.535323       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0923 10:33:53.535364       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I0923 10:33:53.540878       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0923 10:33:53.540932       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I0923 10:33:53.568808       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0923 10:33:53.568845       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	W0923 10:33:54.535568       1 cacher.go:171] Terminating all watchers from cacher volumesnapshotclasses.snapshot.storage.k8s.io
	W0923 10:33:54.569327       1 cacher.go:171] Terminating all watchers from cacher volumesnapshots.snapshot.storage.k8s.io
	W0923 10:33:54.664783       1 cacher.go:171] Terminating all watchers from cacher volumesnapshotcontents.snapshot.storage.k8s.io
	
	
	==> kube-controller-manager [cb22d2aad924] <==
	E0923 10:33:55.574909       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W0923 10:33:55.924421       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0923 10:33:55.924467       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W0923 10:33:56.037978       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0923 10:33:56.038015       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	I0923 10:33:57.387641       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="addons-071702"
	W0923 10:33:57.473042       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0923 10:33:57.473088       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	I0923 10:33:58.028917       1 shared_informer.go:313] Waiting for caches to sync for resource quota
	I0923 10:33:58.028946       1 shared_informer.go:320] Caches are synced for resource quota
	W0923 10:33:58.248543       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0923 10:33:58.248582       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	I0923 10:33:58.459236       1 shared_informer.go:313] Waiting for caches to sync for garbage collector
	I0923 10:33:58.459275       1 shared_informer.go:320] Caches are synced for garbage collector
	W0923 10:33:59.035029       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0923 10:33:59.035072       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W0923 10:34:01.672564       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0923 10:34:01.672605       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W0923 10:34:03.188866       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0923 10:34:03.188909       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W0923 10:34:05.005340       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0923 10:34:05.005382       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	I0923 10:34:05.569373       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/registry-66c9cd494c" duration="10.518µs"
	W0923 10:34:06.141056       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0923 10:34:06.141096       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
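	Note: the repeated "failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" errors above begin immediately after the apiserver terminated the volumesnapshot watchers at 10:33:54 (see the kube-apiserver section), which is consistent with the snapshot CRDs having been deleted while the controller-manager's metadata informers still held watches on them. A minimal Go sketch of confirming that symptom with the discovery client follows; the kubeconfig path is a placeholder, and treating discovery's error as a standard NotFound is an assumption.

package main

import (
	"fmt"

	apierrors "k8s.io/apimachinery/pkg/api/errors"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	// Placeholder kubeconfig path; substitute your own.
	cfg, err := clientcmd.BuildConfigFromFlags("", "/path/to/kubeconfig")
	if err != nil {
		panic(err)
	}
	cs := kubernetes.NewForConfigOrDie(cfg)

	// "the server could not find the requested resource" usually means the
	// group/version is no longer served, e.g. after its CRDs were deleted.
	rl, err := cs.Discovery().ServerResourcesForGroupVersion("snapshot.storage.k8s.io/v1")
	if apierrors.IsNotFound(err) {
		fmt.Println("snapshot.storage.k8s.io/v1 is no longer served")
		return
	}
	if err != nil {
		panic(err)
	}
	for _, r := range rl.APIResources {
		fmt.Println("still served:", r.Name)
	}
}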
	
	
	==> kube-proxy [20bcbfe91581] <==
	I0923 10:21:30.663077       1 server_linux.go:66] "Using iptables proxy"
	I0923 10:21:31.164609       1 server.go:677] "Successfully retrieved node IP(s)" IPs=["192.168.49.2"]
	E0923 10:21:31.164683       1 server.go:234] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I0923 10:21:31.753356       1 server.go:243] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I0923 10:21:31.753422       1 server_linux.go:169] "Using iptables Proxier"
	I0923 10:21:31.759564       1 proxier.go:255] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I0923 10:21:31.760036       1 server.go:483] "Version info" version="v1.31.1"
	I0923 10:21:31.760057       1 server.go:485] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0923 10:21:31.761767       1 config.go:199] "Starting service config controller"
	I0923 10:21:31.761795       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0923 10:21:31.761834       1 config.go:105] "Starting endpoint slice config controller"
	I0923 10:21:31.761848       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0923 10:21:31.762407       1 config.go:328] "Starting node config controller"
	I0923 10:21:31.762416       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0923 10:21:31.862027       1 shared_informer.go:320] Caches are synced for endpoint slice config
	I0923 10:21:31.862093       1 shared_informer.go:320] Caches are synced for service config
	I0923 10:21:31.866339       1 shared_informer.go:320] Caches are synced for node config
	
	
	==> kube-scheduler [9305922fd2cd] <==
	E0923 10:21:20.854119       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	E0923 10:21:20.854121       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError"
	W0923 10:21:20.854168       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E0923 10:21:20.854185       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0923 10:21:20.854214       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E0923 10:21:20.854232       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError"
	W0923 10:21:20.854263       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E0923 10:21:20.854290       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W0923 10:21:20.853901       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	W0923 10:21:20.854265       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E0923 10:21:20.854351       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError"
	E0923 10:21:20.854388       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError"
	W0923 10:21:20.854309       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E0923 10:21:20.854427       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W0923 10:21:21.658399       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E0923 10:21:21.658436       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError"
	W0923 10:21:21.660295       1 reflector.go:561] runtime/asm_amd64.s:1695: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E0923 10:21:21.660323       1 reflector.go:158] "Unhandled Error" err="runtime/asm_amd64.s:1695: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError"
	W0923 10:21:21.672382       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E0923 10:21:21.672410       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError"
	W0923 10:21:21.701023       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E0923 10:21:21.701059       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W0923 10:21:21.705247       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E0923 10:21:21.705279       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	I0923 10:21:24.475376       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
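	Note: the Forbidden errors above are startup-ordering noise rather than a persistent RBAC problem: the scheduler's informers begin listing before its role bindings are visible, and the errors stop once "Caches are synced" at 10:21:24. To verify the scheduler's permissions after startup, a SubjectAccessReview expresses exactly the checks that failed above. A minimal sketch, with the kubeconfig path as a placeholder:

package main

import (
	"context"
	"fmt"

	authorizationv1 "k8s.io/api/authorization/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/path/to/kubeconfig") // placeholder
	if err != nil {
		panic(err)
	}
	cs := kubernetes.NewForConfigOrDie(cfg)

	// Ask the apiserver whether system:kube-scheduler may perform one of the
	// operations that was denied during startup above.
	sar := &authorizationv1.SubjectAccessReview{
		Spec: authorizationv1.SubjectAccessReviewSpec{
			User: "system:kube-scheduler",
			ResourceAttributes: &authorizationv1.ResourceAttributes{
				Verb:     "list",
				Group:    "storage.k8s.io",
				Resource: "csidrivers",
			},
		},
	}
	resp, err := cs.AuthorizationV1().SubjectAccessReviews().Create(context.TODO(), sar, metav1.CreateOptions{})
	if err != nil {
		panic(err)
	}
	fmt.Printf("allowed=%v denied=%v reason=%q\n", resp.Status.Allowed, resp.Status.Denied, resp.Status.Reason)
}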
	
	
	==> kubelet <==
	Sep 23 10:33:54 addons-071702 kubelet[2456]: I0923 10:33:54.589026    2456 scope.go:117] "RemoveContainer" containerID="cc06f4a274cf3542c8d279f74505145c961b003f0e22f454037d08bae50e18f1"
	Sep 23 10:33:54 addons-071702 kubelet[2456]: I0923 10:33:54.603241    2456 scope.go:117] "RemoveContainer" containerID="cc06f4a274cf3542c8d279f74505145c961b003f0e22f454037d08bae50e18f1"
	Sep 23 10:33:54 addons-071702 kubelet[2456]: E0923 10:33:54.603941    2456 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = Unknown desc = Error response from daemon: No such container: cc06f4a274cf3542c8d279f74505145c961b003f0e22f454037d08bae50e18f1" containerID="cc06f4a274cf3542c8d279f74505145c961b003f0e22f454037d08bae50e18f1"
	Sep 23 10:33:54 addons-071702 kubelet[2456]: I0923 10:33:54.603971    2456 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"docker","ID":"cc06f4a274cf3542c8d279f74505145c961b003f0e22f454037d08bae50e18f1"} err="failed to get container status \"cc06f4a274cf3542c8d279f74505145c961b003f0e22f454037d08bae50e18f1\": rpc error: code = Unknown desc = Error response from daemon: No such container: cc06f4a274cf3542c8d279f74505145c961b003f0e22f454037d08bae50e18f1"
	Sep 23 10:33:54 addons-071702 kubelet[2456]: I0923 10:33:54.603991    2456 scope.go:117] "RemoveContainer" containerID="421c2c6c30227975648101ff3ba3e648f0c524f1c4e91dcd43f5ef08a42088fe"
	Sep 23 10:33:54 addons-071702 kubelet[2456]: I0923 10:33:54.614440    2456 scope.go:117] "RemoveContainer" containerID="421c2c6c30227975648101ff3ba3e648f0c524f1c4e91dcd43f5ef08a42088fe"
	Sep 23 10:33:54 addons-071702 kubelet[2456]: E0923 10:33:54.615018    2456 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = Unknown desc = Error response from daemon: No such container: 421c2c6c30227975648101ff3ba3e648f0c524f1c4e91dcd43f5ef08a42088fe" containerID="421c2c6c30227975648101ff3ba3e648f0c524f1c4e91dcd43f5ef08a42088fe"
	Sep 23 10:33:54 addons-071702 kubelet[2456]: I0923 10:33:54.615054    2456 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"docker","ID":"421c2c6c30227975648101ff3ba3e648f0c524f1c4e91dcd43f5ef08a42088fe"} err="failed to get container status \"421c2c6c30227975648101ff3ba3e648f0c524f1c4e91dcd43f5ef08a42088fe\": rpc error: code = Unknown desc = Error response from daemon: No such container: 421c2c6c30227975648101ff3ba3e648f0c524f1c4e91dcd43f5ef08a42088fe"
	Sep 23 10:33:55 addons-071702 kubelet[2456]: I0923 10:33:55.268031    2456 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="306aebc0-cf33-45f7-8669-d354b0ae713c" path="/var/lib/kubelet/pods/306aebc0-cf33-45f7-8669-d354b0ae713c/volumes"
	Sep 23 10:33:55 addons-071702 kubelet[2456]: I0923 10:33:55.268529    2456 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6f50a1f3-ec1f-4d33-8fa9-9379bfc44a79" path="/var/lib/kubelet/pods/6f50a1f3-ec1f-4d33-8fa9-9379bfc44a79/volumes"
	Sep 23 10:34:00 addons-071702 kubelet[2456]: E0923 10:34:00.263113    2456 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"busybox\" with ImagePullBackOff: \"Back-off pulling image \\\"gcr.io/k8s-minikube/busybox:1.28.4-glibc\\\"\"" pod="default/busybox" podUID="56edf1cc-2b21-496a-9452-47198379ea3f"
	Sep 23 10:34:01 addons-071702 kubelet[2456]: E0923 10:34:01.263186    2456 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"registry-test\" with ImagePullBackOff: \"Back-off pulling image \\\"gcr.io/k8s-minikube/busybox\\\"\"" pod="default/registry-test" podUID="6b3b199e-5f82-4373-9012-03e4aeba41a7"
	Sep 23 10:34:05 addons-071702 kubelet[2456]: I0923 10:34:05.321838    2456 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"gcp-creds\" (UniqueName: \"kubernetes.io/host-path/6b3b199e-5f82-4373-9012-03e4aeba41a7-gcp-creds\") pod \"6b3b199e-5f82-4373-9012-03e4aeba41a7\" (UID: \"6b3b199e-5f82-4373-9012-03e4aeba41a7\") "
	Sep 23 10:34:05 addons-071702 kubelet[2456]: I0923 10:34:05.321890    2456 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-bbx2p\" (UniqueName: \"kubernetes.io/projected/6b3b199e-5f82-4373-9012-03e4aeba41a7-kube-api-access-bbx2p\") pod \"6b3b199e-5f82-4373-9012-03e4aeba41a7\" (UID: \"6b3b199e-5f82-4373-9012-03e4aeba41a7\") "
	Sep 23 10:34:05 addons-071702 kubelet[2456]: I0923 10:34:05.321969    2456 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/6b3b199e-5f82-4373-9012-03e4aeba41a7-gcp-creds" (OuterVolumeSpecName: "gcp-creds") pod "6b3b199e-5f82-4373-9012-03e4aeba41a7" (UID: "6b3b199e-5f82-4373-9012-03e4aeba41a7"). InnerVolumeSpecName "gcp-creds". PluginName "kubernetes.io/host-path", VolumeGidValue ""
	Sep 23 10:34:05 addons-071702 kubelet[2456]: I0923 10:34:05.324243    2456 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6b3b199e-5f82-4373-9012-03e4aeba41a7-kube-api-access-bbx2p" (OuterVolumeSpecName: "kube-api-access-bbx2p") pod "6b3b199e-5f82-4373-9012-03e4aeba41a7" (UID: "6b3b199e-5f82-4373-9012-03e4aeba41a7"). InnerVolumeSpecName "kube-api-access-bbx2p". PluginName "kubernetes.io/projected", VolumeGidValue ""
	Sep 23 10:34:05 addons-071702 kubelet[2456]: I0923 10:34:05.422087    2456 reconciler_common.go:288] "Volume detached for volume \"gcp-creds\" (UniqueName: \"kubernetes.io/host-path/6b3b199e-5f82-4373-9012-03e4aeba41a7-gcp-creds\") on node \"addons-071702\" DevicePath \"\""
	Sep 23 10:34:05 addons-071702 kubelet[2456]: I0923 10:34:05.422117    2456 reconciler_common.go:288] "Volume detached for volume \"kube-api-access-bbx2p\" (UniqueName: \"kubernetes.io/projected/6b3b199e-5f82-4373-9012-03e4aeba41a7-kube-api-access-bbx2p\") on node \"addons-071702\" DevicePath \"\""
	Sep 23 10:34:05 addons-071702 kubelet[2456]: I0923 10:34:05.873229    2456 scope.go:117] "RemoveContainer" containerID="5d114050ac1c887356b180e39c42b27a6b7eec948223cab236e11378c8ec2b63"
	Sep 23 10:34:06 addons-071702 kubelet[2456]: I0923 10:34:06.025337    2456 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9p2b4\" (UniqueName: \"kubernetes.io/projected/8964449b-425a-4614-aa7b-d6cc98a185c7-kube-api-access-9p2b4\") pod \"8964449b-425a-4614-aa7b-d6cc98a185c7\" (UID: \"8964449b-425a-4614-aa7b-d6cc98a185c7\") "
	Sep 23 10:34:06 addons-071702 kubelet[2456]: I0923 10:34:06.025389    2456 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-wntlf\" (UniqueName: \"kubernetes.io/projected/42afc9e2-bf4a-4c0a-9db9-8f58e617fcbe-kube-api-access-wntlf\") pod \"42afc9e2-bf4a-4c0a-9db9-8f58e617fcbe\" (UID: \"42afc9e2-bf4a-4c0a-9db9-8f58e617fcbe\") "
	Sep 23 10:34:06 addons-071702 kubelet[2456]: I0923 10:34:06.027227    2456 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8964449b-425a-4614-aa7b-d6cc98a185c7-kube-api-access-9p2b4" (OuterVolumeSpecName: "kube-api-access-9p2b4") pod "8964449b-425a-4614-aa7b-d6cc98a185c7" (UID: "8964449b-425a-4614-aa7b-d6cc98a185c7"). InnerVolumeSpecName "kube-api-access-9p2b4". PluginName "kubernetes.io/projected", VolumeGidValue ""
	Sep 23 10:34:06 addons-071702 kubelet[2456]: I0923 10:34:06.027300    2456 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/42afc9e2-bf4a-4c0a-9db9-8f58e617fcbe-kube-api-access-wntlf" (OuterVolumeSpecName: "kube-api-access-wntlf") pod "42afc9e2-bf4a-4c0a-9db9-8f58e617fcbe" (UID: "42afc9e2-bf4a-4c0a-9db9-8f58e617fcbe"). InnerVolumeSpecName "kube-api-access-wntlf". PluginName "kubernetes.io/projected", VolumeGidValue ""
	Sep 23 10:34:06 addons-071702 kubelet[2456]: I0923 10:34:06.125614    2456 reconciler_common.go:288] "Volume detached for volume \"kube-api-access-9p2b4\" (UniqueName: \"kubernetes.io/projected/8964449b-425a-4614-aa7b-d6cc98a185c7-kube-api-access-9p2b4\") on node \"addons-071702\" DevicePath \"\""
	Sep 23 10:34:06 addons-071702 kubelet[2456]: I0923 10:34:06.125654    2456 reconciler_common.go:288] "Volume detached for volume \"kube-api-access-wntlf\" (UniqueName: \"kubernetes.io/projected/42afc9e2-bf4a-4c0a-9db9-8f58e617fcbe-kube-api-access-wntlf\") on node \"addons-071702\" DevicePath \"\""
	
	
	==> storage-provisioner [283a48bd8ff1] <==
	I0923 10:21:36.055606       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I0923 10:21:36.152850       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I0923 10:21:36.152906       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I0923 10:21:36.162733       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I0923 10:21:36.164081       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_addons-071702_3a8d257a-89b6-4281-b346-48781c7e48dc!
	I0923 10:21:36.164921       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"01b33793-dc9e-4264-879d-944e72e95cff", APIVersion:"v1", ResourceVersion:"577", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' addons-071702_3a8d257a-89b6-4281-b346-48781c7e48dc became leader
	I0923 10:21:36.269929       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_addons-071702_3a8d257a-89b6-4281-b346-48781c7e48dc!
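	Note: the provisioner acquires the kube-system/k8s.io-minikube-hostpath lock before starting its controller, and the 10:21:36 event shows it using the older Endpoints-based lock. Below is a minimal sketch of the same acquire-then-work pattern using client-go's current Lease lock; the timings are illustrative defaults, not the provisioner's actual settings.

package main

import (
	"context"
	"os"
	"time"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/rest"
	"k8s.io/client-go/tools/leaderelection"
	"k8s.io/client-go/tools/leaderelection/resourcelock"
)

func main() {
	// In-cluster config, as a pod like the provisioner would use.
	cfg, err := rest.InClusterConfig()
	if err != nil {
		panic(err)
	}
	cs := kubernetes.NewForConfigOrDie(cfg)
	id, _ := os.Hostname() // identity shown in the "became leader" event

	lock := &resourcelock.LeaseLock{
		LeaseMeta:  metav1.ObjectMeta{Name: "k8s.io-minikube-hostpath", Namespace: "kube-system"},
		Client:     cs.CoordinationV1(),
		LockConfig: resourcelock.ResourceLockConfig{Identity: id},
	}
	leaderelection.RunOrDie(context.Background(), leaderelection.LeaderElectionConfig{
		Lock:          lock,
		LeaseDuration: 15 * time.Second, // illustrative defaults
		RenewDeadline: 10 * time.Second,
		RetryPeriod:   2 * time.Second,
		Callbacks: leaderelection.LeaderCallbacks{
			OnStartedLeading: func(ctx context.Context) {
				// "Starting provisioner controller ..." would happen here.
				<-ctx.Done()
			},
			OnStoppedLeading: func() { os.Exit(1) },
		},
	})
}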
	

-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p addons-071702 -n addons-071702
helpers_test.go:261: (dbg) Run:  kubectl --context addons-071702 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:272: non-running pods: busybox
helpers_test.go:274: ======> post-mortem[TestAddons/parallel/Registry]: describe non-running pods <======
helpers_test.go:277: (dbg) Run:  kubectl --context addons-071702 describe pod busybox
helpers_test.go:282: (dbg) kubectl --context addons-071702 describe pod busybox:

-- stdout --
	Name:             busybox
	Namespace:        default
	Priority:         0
	Service Account:  default
	Node:             addons-071702/192.168.49.2
	Start Time:       Mon, 23 Sep 2024 10:24:51 +0000
	Labels:           integration-test=busybox
	Annotations:      <none>
	Status:           Pending
	IP:               10.244.0.27
	IPs:
	  IP:  10.244.0.27
	Containers:
	  busybox:
	    Container ID:  
	    Image:         gcr.io/k8s-minikube/busybox:1.28.4-glibc
	    Image ID:      
	    Port:          <none>
	    Host Port:     <none>
	    Command:
	      sleep
	      3600
	    State:          Waiting
	      Reason:       ImagePullBackOff
	    Ready:          False
	    Restart Count:  0
	    Environment:
	      GOOGLE_APPLICATION_CREDENTIALS:  /google-app-creds.json
	      PROJECT_ID:                      this_is_fake
	      GCP_PROJECT:                     this_is_fake
	      GCLOUD_PROJECT:                  this_is_fake
	      GOOGLE_CLOUD_PROJECT:            this_is_fake
	      CLOUDSDK_CORE_PROJECT:           this_is_fake
	    Mounts:
	      /google-app-creds.json from gcp-creds (ro)
	      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-7j7xd (ro)
	Conditions:
	  Type                        Status
	  PodReadyToStartContainers   True 
	  Initialized                 True 
	  Ready                       False 
	  ContainersReady             False 
	  PodScheduled                True 
	Volumes:
	  kube-api-access-7j7xd:
	    Type:                    Projected (a volume that contains injected data from multiple sources)
	    TokenExpirationSeconds:  3607
	    ConfigMapName:           kube-root-ca.crt
	    ConfigMapOptional:       <nil>
	    DownwardAPI:             true
	  gcp-creds:
	    Type:          HostPath (bare host directory volume)
	    Path:          /var/lib/minikube/google_application_credentials.json
	    HostPathType:  File
	QoS Class:         BestEffort
	Node-Selectors:    <none>
	Tolerations:       node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
	                   node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
	Events:
	  Type     Reason     Age                     From               Message
	  ----     ------     ----                    ----               -------
	  Normal   Scheduled  9m16s                   default-scheduler  Successfully assigned default/busybox to addons-071702
	  Normal   Pulling    7m48s (x4 over 9m15s)   kubelet            Pulling image "gcr.io/k8s-minikube/busybox:1.28.4-glibc"
	  Warning  Failed     7m48s (x4 over 9m15s)   kubelet            Failed to pull image "gcr.io/k8s-minikube/busybox:1.28.4-glibc": Error response from daemon: Head "https://gcr.io/v2/k8s-minikube/busybox/manifests/1.28.4-glibc": unauthorized: authentication failed
	  Warning  Failed     7m48s (x4 over 9m15s)   kubelet            Error: ErrImagePull
	  Warning  Failed     7m35s (x6 over 9m15s)   kubelet            Error: ImagePullBackOff
	  Normal   BackOff    4m14s (x21 over 9m15s)  kubelet            Back-off pulling image "gcr.io/k8s-minikube/busybox:1.28.4-glibc"

-- /stdout --
helpers_test.go:285: <<< TestAddons/parallel/Registry FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
--- FAIL: TestAddons/parallel/Registry (73.43s)
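Note: the kubelet log above (10:34:00-10:34:01) and the busybox describe output show the test's pods stuck in ImagePullBackOff, with the describe events reporting "unauthorized: authentication failed" pulling gcr.io/k8s-minikube/busybox:1.28.4-glibc; the registry-test probe pod therefore never got a running container, suggesting an image-pull auth problem against gcr.io rather than a fault in the registry addon itself. A triage script could surface that waiting reason directly; a minimal client-go sketch follows (kubeconfig path is a placeholder, and the pod may already be gone by the time such a check runs):

package main

import (
	"context"
	"fmt"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/path/to/kubeconfig") // placeholder
	if err != nil {
		panic(err)
	}
	cs := kubernetes.NewForConfigOrDie(cfg)

	pod, err := cs.CoreV1().Pods("default").Get(context.TODO(), "registry-test", metav1.GetOptions{})
	if err != nil {
		panic(err) // the pod is short-lived, so this may 404 after the test
	}
	for _, st := range pod.Status.ContainerStatuses {
		if w := st.State.Waiting; w != nil {
			// For this run the reason was ImagePullBackOff pulling
			// gcr.io/k8s-minikube/busybox.
			fmt.Printf("container %s waiting: %s: %s\n", st.Name, w.Reason, w.Message)
		}
	}
}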


Test pass (321/342)

Order  Passed test  Duration (s)
3 TestDownloadOnly/v1.20.0/json-events 7.91
4 TestDownloadOnly/v1.20.0/preload-exists 0
8 TestDownloadOnly/v1.20.0/LogsDuration 0.06
9 TestDownloadOnly/v1.20.0/DeleteAll 0.19
10 TestDownloadOnly/v1.20.0/DeleteAlwaysSucceeds 0.12
12 TestDownloadOnly/v1.31.1/json-events 4.42
13 TestDownloadOnly/v1.31.1/preload-exists 0
17 TestDownloadOnly/v1.31.1/LogsDuration 0.05
18 TestDownloadOnly/v1.31.1/DeleteAll 0.18
19 TestDownloadOnly/v1.31.1/DeleteAlwaysSucceeds 0.12
20 TestDownloadOnlyKic 0.96
21 TestBinaryMirror 0.73
22 TestOffline 48.58
25 TestAddons/PreSetup/EnablingAddonOnNonExistingCluster 0.05
26 TestAddons/PreSetup/DisablingAddonOnNonExistingCluster 0.05
27 TestAddons/Setup 207.47
29 TestAddons/serial/Volcano 38.7
31 TestAddons/serial/GCPAuth/Namespaces 0.11
34 TestAddons/parallel/Ingress 21.24
35 TestAddons/parallel/InspektorGadget 10.58
36 TestAddons/parallel/MetricsServer 5.54
38 TestAddons/parallel/CSI 59.95
39 TestAddons/parallel/Headlamp 18.21
40 TestAddons/parallel/CloudSpanner 5.65
41 TestAddons/parallel/LocalPath 10.02
42 TestAddons/parallel/NvidiaDevicePlugin 5.39
43 TestAddons/parallel/Yakd 11.58
44 TestAddons/StoppedEnableDisable 10.94
45 TestCertOptions 29.91
46 TestCertExpiration 230.62
47 TestDockerFlags 23.84
48 TestForceSystemdFlag 32.9
49 TestForceSystemdEnv 27.44
51 TestKVMDriverInstallOrUpdate 5.47
55 TestErrorSpam/setup 21.05
56 TestErrorSpam/start 0.54
57 TestErrorSpam/status 0.83
58 TestErrorSpam/pause 1.12
59 TestErrorSpam/unpause 1.39
60 TestErrorSpam/stop 10.8
63 TestFunctional/serial/CopySyncFile 0
64 TestFunctional/serial/StartWithProxy 33.85
65 TestFunctional/serial/AuditLog 0
66 TestFunctional/serial/SoftStart 29.53
67 TestFunctional/serial/KubeContext 0.04
68 TestFunctional/serial/KubectlGetPods 0.07
71 TestFunctional/serial/CacheCmd/cache/add_remote 2.35
72 TestFunctional/serial/CacheCmd/cache/add_local 1.29
73 TestFunctional/serial/CacheCmd/cache/CacheDelete 0.05
74 TestFunctional/serial/CacheCmd/cache/list 0.04
75 TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node 0.28
76 TestFunctional/serial/CacheCmd/cache/cache_reload 1.29
77 TestFunctional/serial/CacheCmd/cache/delete 0.09
78 TestFunctional/serial/MinikubeKubectlCmd 0.1
79 TestFunctional/serial/MinikubeKubectlCmdDirectly 0.09
80 TestFunctional/serial/ExtraConfig 40.51
81 TestFunctional/serial/ComponentHealth 0.06
82 TestFunctional/serial/LogsCmd 0.93
83 TestFunctional/serial/LogsFileCmd 0.95
84 TestFunctional/serial/InvalidService 4.1
86 TestFunctional/parallel/ConfigCmd 0.35
87 TestFunctional/parallel/DashboardCmd 10.31
88 TestFunctional/parallel/DryRun 0.34
89 TestFunctional/parallel/InternationalLanguage 0.18
90 TestFunctional/parallel/StatusCmd 0.91
94 TestFunctional/parallel/ServiceCmdConnect 11.48
95 TestFunctional/parallel/AddonsCmd 0.13
96 TestFunctional/parallel/PersistentVolumeClaim 30.85
98 TestFunctional/parallel/SSHCmd 0.64
99 TestFunctional/parallel/CpCmd 1.7
100 TestFunctional/parallel/MySQL 23.31
101 TestFunctional/parallel/FileSync 0.25
102 TestFunctional/parallel/CertSync 1.75
106 TestFunctional/parallel/NodeLabels 0.07
108 TestFunctional/parallel/NonActiveRuntimeDisabled 0.24
110 TestFunctional/parallel/License 0.27
112 TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel 0.44
113 TestFunctional/parallel/TunnelCmd/serial/StartTunnel 0
115 TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup 10.22
116 TestFunctional/parallel/ServiceCmd/DeployApp 11.15
117 TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP 0.06
118 TestFunctional/parallel/TunnelCmd/serial/AccessDirect 0
122 TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel 0.11
123 TestFunctional/parallel/ProfileCmd/profile_not_create 0.38
124 TestFunctional/parallel/ProfileCmd/profile_list 0.35
125 TestFunctional/parallel/ProfileCmd/profile_json_output 0.34
126 TestFunctional/parallel/MountCmd/any-port 6.54
127 TestFunctional/parallel/ServiceCmd/List 0.49
128 TestFunctional/parallel/ServiceCmd/JSONOutput 0.49
129 TestFunctional/parallel/ServiceCmd/HTTPS 0.36
130 TestFunctional/parallel/ServiceCmd/Format 0.35
131 TestFunctional/parallel/ServiceCmd/URL 0.33
132 TestFunctional/parallel/DockerEnv/bash 1.19
133 TestFunctional/parallel/Version/short 0.05
134 TestFunctional/parallel/Version/components 0.44
135 TestFunctional/parallel/ImageCommands/ImageListShort 0.19
136 TestFunctional/parallel/ImageCommands/ImageListTable 0.19
137 TestFunctional/parallel/ImageCommands/ImageListJson 0.2
138 TestFunctional/parallel/ImageCommands/ImageListYaml 0.2
139 TestFunctional/parallel/ImageCommands/ImageBuild 3.56
140 TestFunctional/parallel/ImageCommands/Setup 1.54
141 TestFunctional/parallel/ImageCommands/ImageLoadDaemon 1.26
142 TestFunctional/parallel/MountCmd/specific-port 1.8
143 TestFunctional/parallel/ImageCommands/ImageReloadDaemon 0.88
144 TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon 1.58
145 TestFunctional/parallel/MountCmd/VerifyCleanup 1.12
146 TestFunctional/parallel/UpdateContextCmd/no_changes 0.13
147 TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster 0.11
148 TestFunctional/parallel/UpdateContextCmd/no_clusters 0.13
149 TestFunctional/parallel/ImageCommands/ImageSaveToFile 0.37
150 TestFunctional/parallel/ImageCommands/ImageRemove 0.51
151 TestFunctional/parallel/ImageCommands/ImageLoadFromFile 0.56
152 TestFunctional/parallel/ImageCommands/ImageSaveDaemon 0.53
153 TestFunctional/delete_echo-server_images 0.04
154 TestFunctional/delete_my-image_image 0.01
155 TestFunctional/delete_minikube_cached_images 0.01
159 TestMultiControlPlane/serial/StartCluster 89.56
160 TestMultiControlPlane/serial/DeployApp 35.56
161 TestMultiControlPlane/serial/PingHostFromPods 0.98
162 TestMultiControlPlane/serial/AddWorkerNode 23.43
163 TestMultiControlPlane/serial/NodeLabels 0.06
164 TestMultiControlPlane/serial/HAppyAfterClusterStart 0.83
165 TestMultiControlPlane/serial/CopyFile 15.27
166 TestMultiControlPlane/serial/StopSecondaryNode 11.36
167 TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop 0.66
168 TestMultiControlPlane/serial/RestartSecondaryNode 35.59
169 TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart 0.99
170 TestMultiControlPlane/serial/RestartClusterKeepsNodes 200.67
171 TestMultiControlPlane/serial/DeleteSecondaryNode 9.24
172 TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete 0.64
173 TestMultiControlPlane/serial/StopCluster 32.5
174 TestMultiControlPlane/serial/RestartCluster 57.87
175 TestMultiControlPlane/serial/DegradedAfterClusterRestart 0.87
176 TestMultiControlPlane/serial/AddSecondaryNode 44.88
177 TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd 0.8
180 TestImageBuild/serial/Setup 23.82
181 TestImageBuild/serial/NormalBuild 1.71
182 TestImageBuild/serial/BuildWithBuildArg 0.9
183 TestImageBuild/serial/BuildWithDockerIgnore 0.66
184 TestImageBuild/serial/BuildWithSpecifiedDockerfile 0.77
188 TestJSONOutput/start/Command 36.83
189 TestJSONOutput/start/Audit 0
191 TestJSONOutput/start/parallel/DistinctCurrentSteps 0
192 TestJSONOutput/start/parallel/IncreasingCurrentSteps 0
194 TestJSONOutput/pause/Command 0.53
195 TestJSONOutput/pause/Audit 0
197 TestJSONOutput/pause/parallel/DistinctCurrentSteps 0
198 TestJSONOutput/pause/parallel/IncreasingCurrentSteps 0
200 TestJSONOutput/unpause/Command 0.41
201 TestJSONOutput/unpause/Audit 0
203 TestJSONOutput/unpause/parallel/DistinctCurrentSteps 0
204 TestJSONOutput/unpause/parallel/IncreasingCurrentSteps 0
206 TestJSONOutput/stop/Command 10.78
207 TestJSONOutput/stop/Audit 0
209 TestJSONOutput/stop/parallel/DistinctCurrentSteps 0
210 TestJSONOutput/stop/parallel/IncreasingCurrentSteps 0
211 TestErrorJSONOutput 0.19
213 TestKicCustomNetwork/create_custom_network 25.76
214 TestKicCustomNetwork/use_default_bridge_network 26.06
215 TestKicExistingNetwork 24.65
216 TestKicCustomSubnet 22.81
217 TestKicStaticIP 26.97
218 TestMainNoArgs 0.04
219 TestMinikubeProfile 49.02
222 TestMountStart/serial/StartWithMountFirst 6.64
223 TestMountStart/serial/VerifyMountFirst 0.24
224 TestMountStart/serial/StartWithMountSecond 9.43
225 TestMountStart/serial/VerifyMountSecond 0.23
226 TestMountStart/serial/DeleteFirst 1.44
227 TestMountStart/serial/VerifyMountPostDelete 0.23
228 TestMountStart/serial/Stop 1.16
229 TestMountStart/serial/RestartStopped 7.76
230 TestMountStart/serial/VerifyMountPostStop 0.23
233 TestMultiNode/serial/FreshStart2Nodes 59.96
234 TestMultiNode/serial/DeployApp2Nodes 35.98
235 TestMultiNode/serial/PingHostFrom2Pods 0.67
236 TestMultiNode/serial/AddNode 14.29
237 TestMultiNode/serial/MultiNodeLabels 0.07
238 TestMultiNode/serial/ProfileList 0.63
239 TestMultiNode/serial/CopyFile 8.71
240 TestMultiNode/serial/StopNode 2.08
241 TestMultiNode/serial/StartAfterStop 9.53
242 TestMultiNode/serial/RestartKeepsNodes 102.89
243 TestMultiNode/serial/DeleteNode 5.14
244 TestMultiNode/serial/StopMultiNode 21.23
245 TestMultiNode/serial/RestartMultiNode 51.24
246 TestMultiNode/serial/ValidateNameConflict 26.31
251 TestPreload 80.91
253 TestScheduledStopUnix 96.61
254 TestSkaffold 103.38
256 TestInsufficientStorage 12.39
257 TestRunningBinaryUpgrade 72.02
259 TestKubernetesUpgrade 337.08
260 TestMissingContainerUpgrade 145.63
265 TestNoKubernetes/serial/StartNoK8sWithVersion 0.08
274 TestNoKubernetes/serial/StartWithK8s 30.07
275 TestNoKubernetes/serial/StartWithStopK8s 16.41
276 TestNoKubernetes/serial/Start 10.68
277 TestStoppedBinaryUpgrade/Setup 0.45
278 TestStoppedBinaryUpgrade/Upgrade 118.68
279 TestNoKubernetes/serial/VerifyK8sNotRunning 0.29
280 TestNoKubernetes/serial/ProfileList 0.95
281 TestNoKubernetes/serial/Stop 1.17
282 TestNoKubernetes/serial/StartNoArgs 6.81
283 TestNoKubernetes/serial/VerifyK8sNotRunningSecond 0.23
284 TestStoppedBinaryUpgrade/MinikubeLogs 1.18
293 TestPause/serial/Start 61.22
294 TestNetworkPlugins/group/auto/Start 76.69
295 TestNetworkPlugins/group/kindnet/Start 43.66
296 TestPause/serial/SecondStartNoReconfiguration 33.41
297 TestNetworkPlugins/group/auto/KubeletFlags 0.42
298 TestNetworkPlugins/group/auto/NetCatPod 10.32
299 TestNetworkPlugins/group/kindnet/ControllerPod 6
300 TestNetworkPlugins/group/auto/DNS 0.13
301 TestNetworkPlugins/group/auto/Localhost 0.1
302 TestNetworkPlugins/group/auto/HairPin 0.1
303 TestNetworkPlugins/group/kindnet/KubeletFlags 0.25
304 TestNetworkPlugins/group/kindnet/NetCatPod 9.2
305 TestPause/serial/Pause 0.55
306 TestPause/serial/VerifyStatus 0.3
307 TestPause/serial/Unpause 0.48
308 TestPause/serial/PauseAgain 0.62
309 TestPause/serial/DeletePaused 2.07
310 TestPause/serial/VerifyDeletedResources 0.8
311 TestNetworkPlugins/group/kindnet/DNS 0.15
312 TestNetworkPlugins/group/kindnet/Localhost 0.13
313 TestNetworkPlugins/group/kindnet/HairPin 0.13
314 TestNetworkPlugins/group/calico/Start 73.45
315 TestNetworkPlugins/group/custom-flannel/Start 44.1
316 TestNetworkPlugins/group/false/Start 74.27
317 TestNetworkPlugins/group/custom-flannel/KubeletFlags 0.31
318 TestNetworkPlugins/group/custom-flannel/NetCatPod 10.2
319 TestNetworkPlugins/group/custom-flannel/DNS 0.14
320 TestNetworkPlugins/group/custom-flannel/Localhost 0.12
321 TestNetworkPlugins/group/custom-flannel/HairPin 0.13
322 TestNetworkPlugins/group/enable-default-cni/Start 66.88
323 TestNetworkPlugins/group/calico/ControllerPod 6.01
324 TestNetworkPlugins/group/calico/KubeletFlags 0.26
325 TestNetworkPlugins/group/calico/NetCatPod 9.2
326 TestNetworkPlugins/group/bridge/Start 38.72
327 TestNetworkPlugins/group/calico/DNS 0.13
328 TestNetworkPlugins/group/calico/Localhost 0.11
329 TestNetworkPlugins/group/calico/HairPin 0.11
330 TestNetworkPlugins/group/false/KubeletFlags 0.3
331 TestNetworkPlugins/group/false/NetCatPod 10.23
332 TestNetworkPlugins/group/false/DNS 0.24
333 TestNetworkPlugins/group/false/Localhost 0.15
334 TestNetworkPlugins/group/false/HairPin 0.22
335 TestNetworkPlugins/group/flannel/Start 44.2
336 TestNetworkPlugins/group/bridge/KubeletFlags 0.31
337 TestNetworkPlugins/group/bridge/NetCatPod 10.24
338 TestNetworkPlugins/group/kubenet/Start 67.8
339 TestNetworkPlugins/group/bridge/DNS 0.15
340 TestNetworkPlugins/group/bridge/Localhost 0.11
341 TestNetworkPlugins/group/bridge/HairPin 0.1
342 TestNetworkPlugins/group/enable-default-cni/KubeletFlags 0.34
343 TestNetworkPlugins/group/enable-default-cni/NetCatPod 10.22
344 TestNetworkPlugins/group/enable-default-cni/DNS 0.19
345 TestNetworkPlugins/group/enable-default-cni/Localhost 0.2
346 TestNetworkPlugins/group/enable-default-cni/HairPin 0.15
348 TestStartStop/group/old-k8s-version/serial/FirstStart 104.4
349 TestNetworkPlugins/group/flannel/ControllerPod 6.01
350 TestNetworkPlugins/group/flannel/KubeletFlags 0.28
351 TestNetworkPlugins/group/flannel/NetCatPod 8.19
352 TestNetworkPlugins/group/flannel/DNS 0.15
353 TestNetworkPlugins/group/flannel/Localhost 0.13
354 TestNetworkPlugins/group/flannel/HairPin 0.12
356 TestStartStop/group/no-preload/serial/FirstStart 41.96
358 TestStartStop/group/embed-certs/serial/FirstStart 37.45
359 TestNetworkPlugins/group/kubenet/KubeletFlags 0.34
360 TestNetworkPlugins/group/kubenet/NetCatPod 11.23
361 TestNetworkPlugins/group/kubenet/DNS 0.23
362 TestNetworkPlugins/group/kubenet/Localhost 0.23
363 TestNetworkPlugins/group/kubenet/HairPin 0.17
364 TestStartStop/group/no-preload/serial/DeployApp 9.24
365 TestStartStop/group/no-preload/serial/EnableAddonWhileActive 1.03
367 TestStartStop/group/default-k8s-diff-port/serial/FirstStart 66.91
368 TestStartStop/group/no-preload/serial/Stop 10.78
369 TestStartStop/group/embed-certs/serial/DeployApp 9.23
370 TestStartStop/group/no-preload/serial/EnableAddonAfterStop 0.27
371 TestStartStop/group/no-preload/serial/SecondStart 263.37
372 TestStartStop/group/embed-certs/serial/EnableAddonWhileActive 0.88
373 TestStartStop/group/embed-certs/serial/Stop 10.9
374 TestStartStop/group/embed-certs/serial/EnableAddonAfterStop 0.24
375 TestStartStop/group/embed-certs/serial/SecondStart 263.02
376 TestStartStop/group/old-k8s-version/serial/DeployApp 9.38
377 TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive 0.79
378 TestStartStop/group/old-k8s-version/serial/Stop 10.94
379 TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop 0.18
380 TestStartStop/group/old-k8s-version/serial/SecondStart 143.88
381 TestStartStop/group/default-k8s-diff-port/serial/DeployApp 9.46
382 TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive 0.91
383 TestStartStop/group/default-k8s-diff-port/serial/Stop 10.74
384 TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop 0.16
385 TestStartStop/group/default-k8s-diff-port/serial/SecondStart 262.62
386 TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop 6.01
387 TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop 5.07
388 TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages 0.21
389 TestStartStop/group/old-k8s-version/serial/Pause 2.37
391 TestStartStop/group/newest-cni/serial/FirstStart 26.27
392 TestStartStop/group/newest-cni/serial/DeployApp 0
393 TestStartStop/group/newest-cni/serial/EnableAddonWhileActive 1.11
394 TestStartStop/group/newest-cni/serial/Stop 10.74
395 TestStartStop/group/newest-cni/serial/EnableAddonAfterStop 0.25
396 TestStartStop/group/newest-cni/serial/SecondStart 14.99
397 TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop 0
398 TestStartStop/group/newest-cni/serial/AddonExistsAfterStop 0
399 TestStartStop/group/newest-cni/serial/VerifyKubernetesImages 0.23
400 TestStartStop/group/newest-cni/serial/Pause 2.49
401 TestStartStop/group/no-preload/serial/UserAppExistsAfterStop 6.01
402 TestStartStop/group/no-preload/serial/AddonExistsAfterStop 5.07
403 TestStartStop/group/no-preload/serial/VerifyKubernetesImages 0.22
404 TestStartStop/group/no-preload/serial/Pause 2.38
405 TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop 6.01
406 TestStartStop/group/embed-certs/serial/AddonExistsAfterStop 5.07
407 TestStartStop/group/embed-certs/serial/VerifyKubernetesImages 0.21
408 TestStartStop/group/embed-certs/serial/Pause 2.31
409 TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop 6
410 TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop 5.07
411 TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages 0.2
412 TestStartStop/group/default-k8s-diff-port/serial/Pause 2.35
TestDownloadOnly/v1.20.0/json-events (7.91s)

=== RUN   TestDownloadOnly/v1.20.0/json-events
aaa_download_only_test.go:81: (dbg) Run:  out/minikube-linux-amd64 start -o=json --download-only -p download-only-946094 --force --alsologtostderr --kubernetes-version=v1.20.0 --container-runtime=docker --driver=docker  --container-runtime=docker
aaa_download_only_test.go:81: (dbg) Done: out/minikube-linux-amd64 start -o=json --download-only -p download-only-946094 --force --alsologtostderr --kubernetes-version=v1.20.0 --container-runtime=docker --driver=docker  --container-runtime=docker: (7.910841099s)
--- PASS: TestDownloadOnly/v1.20.0/json-events (7.91s)
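Note: with -o=json, minikube reports progress as a stream of JSON events on stdout, which is what this subtest consumes. A minimal sketch that runs the same command and decodes the stream follows; the one-JSON-object-per-line framing and the "type" field are assumptions about the event format rather than guarantees.

package main

import (
	"bufio"
	"encoding/json"
	"fmt"
	"os/exec"
)

func main() {
	// Same invocation the test runs (trimmed), relying on -o=json output.
	cmd := exec.Command("out/minikube-linux-amd64", "start", "-o=json",
		"--download-only", "-p", "download-only-946094", "--force",
		"--kubernetes-version=v1.20.0", "--driver=docker")
	stdout, err := cmd.StdoutPipe()
	if err != nil {
		panic(err)
	}
	if err := cmd.Start(); err != nil {
		panic(err)
	}
	sc := bufio.NewScanner(stdout)
	sc.Buffer(make([]byte, 1<<20), 1<<20) // individual events can be long
	for sc.Scan() {
		var ev map[string]any
		if json.Unmarshal(sc.Bytes(), &ev) != nil {
			continue // tolerate any non-JSON lines
		}
		fmt.Println("event:", ev["type"]) // "type" field is an assumption
	}
	if err := cmd.Wait(); err != nil {
		panic(err)
	}
}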

TestDownloadOnly/v1.20.0/preload-exists (0s)

=== RUN   TestDownloadOnly/v1.20.0/preload-exists
I0923 10:20:38.225257   10524 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime docker
I0923 10:20:38.225355   10524 preload.go:146] Found local preload: /home/jenkins/minikube-integration/19689-3716/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-docker-overlay2-amd64.tar.lz4
--- PASS: TestDownloadOnly/v1.20.0/preload-exists (0.00s)
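Note: this subtest only asserts that the tarball cached by the previous step exists at the path logged above; the download URL in the LogsDuration output below also carries an md5 checksum (9a82241e9b8b4ad2b5cca73108f2c7a3). A standalone sketch of the combined exists-and-verify check, with the path and checksum taken from these logs:

package main

import (
	"crypto/md5"
	"encoding/hex"
	"fmt"
	"io"
	"os"
)

func main() {
	// Cache path and md5 are taken verbatim from the logs in this report.
	const path = "/home/jenkins/minikube-integration/19689-3716/.minikube/cache/preloaded-tarball/" +
		"preloaded-images-k8s-v18-v1.20.0-docker-overlay2-amd64.tar.lz4"
	const wantMD5 = "9a82241e9b8b4ad2b5cca73108f2c7a3" // from the download URL's checksum query

	f, err := os.Open(path)
	if os.IsNotExist(err) {
		fmt.Println("preload missing; it would be downloaded")
		return
	}
	if err != nil {
		panic(err)
	}
	defer f.Close()

	h := md5.New()
	if _, err := io.Copy(h, f); err != nil {
		panic(err)
	}
	if got := hex.EncodeToString(h.Sum(nil)); got != wantMD5 {
		fmt.Printf("checksum mismatch: got %s\n", got)
		return
	}
	fmt.Println("preload exists and checksum matches")
}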

TestDownloadOnly/v1.20.0/LogsDuration (0.06s)

=== RUN   TestDownloadOnly/v1.20.0/LogsDuration
aaa_download_only_test.go:184: (dbg) Run:  out/minikube-linux-amd64 logs -p download-only-946094
aaa_download_only_test.go:184: (dbg) Non-zero exit: out/minikube-linux-amd64 logs -p download-only-946094: exit status 85 (55.456176ms)

-- stdout --
	
	==> Audit <==
	|---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	| Command |              Args              |       Profile        |  User   | Version |     Start Time      | End Time |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	| start   | -o=json --download-only        | download-only-946094 | jenkins | v1.34.0 | 23 Sep 24 10:20 UTC |          |
	|         | -p download-only-946094        |                      |         |         |                     |          |
	|         | --force --alsologtostderr      |                      |         |         |                     |          |
	|         | --kubernetes-version=v1.20.0   |                      |         |         |                     |          |
	|         | --container-runtime=docker     |                      |         |         |                     |          |
	|         | --driver=docker                |                      |         |         |                     |          |
	|         | --container-runtime=docker     |                      |         |         |                     |          |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	
	
	==> Last Start <==
	Log file created at: 2024/09/23 10:20:30
	Running on machine: ubuntu-20-agent-2
	Binary: Built with gc go1.23.0 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0923 10:20:30.350101   10536 out.go:345] Setting OutFile to fd 1 ...
	I0923 10:20:30.350366   10536 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0923 10:20:30.350376   10536 out.go:358] Setting ErrFile to fd 2...
	I0923 10:20:30.350383   10536 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0923 10:20:30.350553   10536 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19689-3716/.minikube/bin
	W0923 10:20:30.350727   10536 root.go:314] Error reading config file at /home/jenkins/minikube-integration/19689-3716/.minikube/config/config.json: open /home/jenkins/minikube-integration/19689-3716/.minikube/config/config.json: no such file or directory
	I0923 10:20:30.351310   10536 out.go:352] Setting JSON to true
	I0923 10:20:30.352143   10536 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-2","uptime":179,"bootTime":1727086651,"procs":180,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1069-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0923 10:20:30.352238   10536 start.go:139] virtualization: kvm guest
	I0923 10:20:30.354749   10536 out.go:97] [download-only-946094] minikube v1.34.0 on Ubuntu 20.04 (kvm/amd64)
	W0923 10:20:30.354868   10536 preload.go:293] Failed to list preload files: open /home/jenkins/minikube-integration/19689-3716/.minikube/cache/preloaded-tarball: no such file or directory
	I0923 10:20:30.354915   10536 notify.go:220] Checking for updates...
	I0923 10:20:30.356296   10536 out.go:169] MINIKUBE_LOCATION=19689
	I0923 10:20:30.357690   10536 out.go:169] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0923 10:20:30.358958   10536 out.go:169] KUBECONFIG=/home/jenkins/minikube-integration/19689-3716/kubeconfig
	I0923 10:20:30.360283   10536 out.go:169] MINIKUBE_HOME=/home/jenkins/minikube-integration/19689-3716/.minikube
	I0923 10:20:30.361632   10536 out.go:169] MINIKUBE_BIN=out/minikube-linux-amd64
	W0923 10:20:30.363996   10536 out.go:321] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I0923 10:20:30.364219   10536 driver.go:394] Setting default libvirt URI to qemu:///system
	I0923 10:20:30.386725   10536 docker.go:123] docker version: linux-27.3.1:Docker Engine - Community
	I0923 10:20:30.386794   10536 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0923 10:20:30.762234   10536 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:30 OomKillDisable:true NGoroutines:53 SystemTime:2024-09-23 10:20:30.753328814 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1069-gcp OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33647935488 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-2 Labels:[] ExperimentalBuild:false ServerVersion:27.3.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:7f7fdf5fed64eb6a7caf99b3e12efcf9d60e311c Expected:7f7fdf5fed64eb6a7caf99b3e12efcf9d60e311c} RuncCommit:{ID:v1.1.14-0-g2c9f560 Expected:v1.1.14-0-g2c9f560} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:[WARNING: bridge-nf-call-iptables is disabled WARNING: bridge-nf-call-ip6tables is disabled] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.17.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.29.7] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I0923 10:20:30.762380   10536 docker.go:318] overlay module found
	I0923 10:20:30.764156   10536 out.go:97] Using the docker driver based on user configuration
	I0923 10:20:30.764186   10536 start.go:297] selected driver: docker
	I0923 10:20:30.764192   10536 start.go:901] validating driver "docker" against <nil>
	I0923 10:20:30.764278   10536 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0923 10:20:30.809861   10536 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:30 OomKillDisable:true NGoroutines:53 SystemTime:2024-09-23 10:20:30.801324343 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1069-gcp OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33647935488 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-2 Labels:[] ExperimentalBuild:false ServerVersion:27.3.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:7f7fdf5fed64eb6a7caf99b3e12efcf9d60e311c Expected:7f7fdf5fed64eb6a7caf99b3e12efcf9d60e311c} RuncCommit:{ID:v1.1.14-0-g2c9f560 Expected:v1.1.14-0-g2c9f560} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:[WARNING: bridge-nf-call-iptables is disabled WARNING: bridge-nf-call-ip6tables is disabled] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.17.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.29.7] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I0923 10:20:30.810038   10536 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0923 10:20:30.810559   10536 start_flags.go:393] Using suggested 8000MB memory alloc based on sys=32089MB, container=32089MB
	I0923 10:20:30.810746   10536 start_flags.go:929] Wait components to verify : map[apiserver:true system_pods:true]
	I0923 10:20:30.812635   10536 out.go:169] Using Docker driver with root privileges
	I0923 10:20:30.813953   10536 cni.go:84] Creating CNI manager for ""
	I0923 10:20:30.814025   10536 cni.go:162] CNI unnecessary in this configuration, recommending no CNI
	I0923 10:20:30.814081   10536 start.go:340] cluster config:
	{Name:download-only-946094 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726784731-19672@sha256:7f8c62ddb0100a5b958dd19c5b5478b8c7ef13da9a0a4d6c7d18f43544e0dbed Memory:8000 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:download-only-946094 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0923 10:20:30.815463   10536 out.go:97] Starting "download-only-946094" primary control-plane node in "download-only-946094" cluster
	I0923 10:20:30.815484   10536 cache.go:121] Beginning downloading kic base image for docker with docker
	I0923 10:20:30.816811   10536 out.go:97] Pulling base image v0.0.45-1726784731-19672 ...
	I0923 10:20:30.816832   10536 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime docker
	I0923 10:20:30.816967   10536 image.go:79] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726784731-19672@sha256:7f8c62ddb0100a5b958dd19c5b5478b8c7ef13da9a0a4d6c7d18f43544e0dbed in local docker daemon
	I0923 10:20:30.833622   10536 cache.go:149] Downloading gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726784731-19672@sha256:7f8c62ddb0100a5b958dd19c5b5478b8c7ef13da9a0a4d6c7d18f43544e0dbed to local cache
	I0923 10:20:30.833796   10536 image.go:63] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726784731-19672@sha256:7f8c62ddb0100a5b958dd19c5b5478b8c7ef13da9a0a4d6c7d18f43544e0dbed in local cache directory
	I0923 10:20:30.833882   10536 image.go:148] Writing gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726784731-19672@sha256:7f8c62ddb0100a5b958dd19c5b5478b8c7ef13da9a0a4d6c7d18f43544e0dbed to local cache
	I0923 10:20:30.838691   10536 preload.go:118] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.20.0/preloaded-images-k8s-v18-v1.20.0-docker-overlay2-amd64.tar.lz4
	I0923 10:20:30.838720   10536 cache.go:56] Caching tarball of preloaded images
	I0923 10:20:30.838818   10536 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime docker
	I0923 10:20:30.841331   10536 out.go:97] Downloading Kubernetes v1.20.0 preload ...
	I0923 10:20:30.841362   10536 preload.go:236] getting checksum for preloaded-images-k8s-v18-v1.20.0-docker-overlay2-amd64.tar.lz4 ...
	I0923 10:20:30.866173   10536 download.go:107] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.20.0/preloaded-images-k8s-v18-v1.20.0-docker-overlay2-amd64.tar.lz4?checksum=md5:9a82241e9b8b4ad2b5cca73108f2c7a3 -> /home/jenkins/minikube-integration/19689-3716/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-docker-overlay2-amd64.tar.lz4
	I0923 10:20:33.302658   10536 preload.go:247] saving checksum for preloaded-images-k8s-v18-v1.20.0-docker-overlay2-amd64.tar.lz4 ...
	I0923 10:20:33.302775   10536 preload.go:254] verifying checksum of /home/jenkins/minikube-integration/19689-3716/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-docker-overlay2-amd64.tar.lz4 ...
	I0923 10:20:34.104700   10536 cache.go:59] Finished verifying existence of preloaded tar for v1.20.0 on docker
	I0923 10:20:34.105042   10536 profile.go:143] Saving config to /home/jenkins/minikube-integration/19689-3716/.minikube/profiles/download-only-946094/config.json ...
	I0923 10:20:34.105074   10536 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19689-3716/.minikube/profiles/download-only-946094/config.json: {Name:mkd9a262e8b706985bc756252e519b791e421923 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0923 10:20:34.105237   10536 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime docker
	I0923 10:20:34.105424   10536 download.go:107] Downloading: https://dl.k8s.io/release/v1.20.0/bin/linux/amd64/kubectl?checksum=file:https://dl.k8s.io/release/v1.20.0/bin/linux/amd64/kubectl.sha256 -> /home/jenkins/minikube-integration/19689-3716/.minikube/cache/linux/amd64/v1.20.0/kubectl
	
	
	* The control-plane node download-only-946094 host does not exist
	  To start a cluster, run: "minikube start -p download-only-946094"

-- /stdout --
aaa_download_only_test.go:185: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.20.0/LogsDuration (0.06s)
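
For reference, the artifacts this download-only flow fetched can be inspected straight from the cache; a minimal check using the MINIKUBE_HOME layout shown in the log above (paths are the job's defaults, adjust if yours differ):

# preloaded image tarball, downloaded and checksum-verified above
ls -lh "$MINIKUBE_HOME/cache/preloaded-tarball/"
# per-version kubectl binary cached next to it
ls -lh "$MINIKUBE_HOME/cache/linux/amd64/v1.20.0/"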

TestDownloadOnly/v1.20.0/DeleteAll (0.19s)

=== RUN   TestDownloadOnly/v1.20.0/DeleteAll
aaa_download_only_test.go:197: (dbg) Run:  out/minikube-linux-amd64 delete --all
--- PASS: TestDownloadOnly/v1.20.0/DeleteAll (0.19s)

TestDownloadOnly/v1.20.0/DeleteAlwaysSucceeds (0.12s)

=== RUN   TestDownloadOnly/v1.20.0/DeleteAlwaysSucceeds
aaa_download_only_test.go:208: (dbg) Run:  out/minikube-linux-amd64 delete -p download-only-946094
--- PASS: TestDownloadOnly/v1.20.0/DeleteAlwaysSucceeds (0.12s)

TestDownloadOnly/v1.31.1/json-events (4.42s)

=== RUN   TestDownloadOnly/v1.31.1/json-events
aaa_download_only_test.go:81: (dbg) Run:  out/minikube-linux-amd64 start -o=json --download-only -p download-only-303816 --force --alsologtostderr --kubernetes-version=v1.31.1 --container-runtime=docker --driver=docker  --container-runtime=docker
aaa_download_only_test.go:81: (dbg) Done: out/minikube-linux-amd64 start -o=json --download-only -p download-only-303816 --force --alsologtostderr --kubernetes-version=v1.31.1 --container-runtime=docker --driver=docker  --container-runtime=docker: (4.423883413s)
--- PASS: TestDownloadOnly/v1.31.1/json-events (4.42s)

TestDownloadOnly/v1.31.1/preload-exists (0s)

=== RUN   TestDownloadOnly/v1.31.1/preload-exists
I0923 10:20:43.017572   10524 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime docker
I0923 10:20:43.017621   10524 preload.go:146] Found local preload: /home/jenkins/minikube-integration/19689-3716/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-docker-overlay2-amd64.tar.lz4
--- PASS: TestDownloadOnly/v1.31.1/preload-exists (0.00s)

TestDownloadOnly/v1.31.1/LogsDuration (0.05s)

=== RUN   TestDownloadOnly/v1.31.1/LogsDuration
aaa_download_only_test.go:184: (dbg) Run:  out/minikube-linux-amd64 logs -p download-only-303816
aaa_download_only_test.go:184: (dbg) Non-zero exit: out/minikube-linux-amd64 logs -p download-only-303816: exit status 85 (53.428813ms)

-- stdout --
	
	==> Audit <==
	|---------|--------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| Command |              Args              |       Profile        |  User   | Version |     Start Time      |      End Time       |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| start   | -o=json --download-only        | download-only-946094 | jenkins | v1.34.0 | 23 Sep 24 10:20 UTC |                     |
	|         | -p download-only-946094        |                      |         |         |                     |                     |
	|         | --force --alsologtostderr      |                      |         |         |                     |                     |
	|         | --kubernetes-version=v1.20.0   |                      |         |         |                     |                     |
	|         | --container-runtime=docker     |                      |         |         |                     |                     |
	|         | --driver=docker                |                      |         |         |                     |                     |
	|         | --container-runtime=docker     |                      |         |         |                     |                     |
	| delete  | --all                          | minikube             | jenkins | v1.34.0 | 23 Sep 24 10:20 UTC | 23 Sep 24 10:20 UTC |
	| delete  | -p download-only-946094        | download-only-946094 | jenkins | v1.34.0 | 23 Sep 24 10:20 UTC | 23 Sep 24 10:20 UTC |
	| start   | -o=json --download-only        | download-only-303816 | jenkins | v1.34.0 | 23 Sep 24 10:20 UTC |                     |
	|         | -p download-only-303816        |                      |         |         |                     |                     |
	|         | --force --alsologtostderr      |                      |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.1   |                      |         |         |                     |                     |
	|         | --container-runtime=docker     |                      |         |         |                     |                     |
	|         | --driver=docker                |                      |         |         |                     |                     |
	|         | --container-runtime=docker     |                      |         |         |                     |                     |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/09/23 10:20:38
	Running on machine: ubuntu-20-agent-2
	Binary: Built with gc go1.23.0 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0923 10:20:38.630730   10894 out.go:345] Setting OutFile to fd 1 ...
	I0923 10:20:38.630974   10894 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0923 10:20:38.630983   10894 out.go:358] Setting ErrFile to fd 2...
	I0923 10:20:38.630987   10894 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0923 10:20:38.631157   10894 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19689-3716/.minikube/bin
	I0923 10:20:38.631682   10894 out.go:352] Setting JSON to true
	I0923 10:20:38.632490   10894 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-2","uptime":188,"bootTime":1727086651,"procs":178,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1069-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0923 10:20:38.632586   10894 start.go:139] virtualization: kvm guest
	I0923 10:20:38.634593   10894 out.go:97] [download-only-303816] minikube v1.34.0 on Ubuntu 20.04 (kvm/amd64)
	I0923 10:20:38.634740   10894 notify.go:220] Checking for updates...
	I0923 10:20:38.636000   10894 out.go:169] MINIKUBE_LOCATION=19689
	I0923 10:20:38.637362   10894 out.go:169] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0923 10:20:38.638584   10894 out.go:169] KUBECONFIG=/home/jenkins/minikube-integration/19689-3716/kubeconfig
	I0923 10:20:38.639780   10894 out.go:169] MINIKUBE_HOME=/home/jenkins/minikube-integration/19689-3716/.minikube
	I0923 10:20:38.640832   10894 out.go:169] MINIKUBE_BIN=out/minikube-linux-amd64
	W0923 10:20:38.642884   10894 out.go:321] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I0923 10:20:38.643069   10894 driver.go:394] Setting default libvirt URI to qemu:///system
	I0923 10:20:38.664915   10894 docker.go:123] docker version: linux-27.3.1:Docker Engine - Community
	I0923 10:20:38.664980   10894 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0923 10:20:38.710071   10894 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:30 OomKillDisable:true NGoroutines:46 SystemTime:2024-09-23 10:20:38.701220192 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1069-gcp OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:x
86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33647935488 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-2 Labels:[] ExperimentalBuild:false ServerVersion:27.3.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:7f7fdf5fed64eb6a7caf99b3e12efcf9d60e311c Expected:7f7fdf5fed64eb6a7caf99b3e12efcf9d60e311c} RuncCommit:{ID:v1.1.14-0-g2c9f560 Expected:v1.1.14-0-g2c9f560} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:[WARNING: bri
dge-nf-call-iptables is disabled WARNING: bridge-nf-call-ip6tables is disabled] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.17.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.29.7] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I0923 10:20:38.710163   10894 docker.go:318] overlay module found
	I0923 10:20:38.711634   10894 out.go:97] Using the docker driver based on user configuration
	I0923 10:20:38.711661   10894 start.go:297] selected driver: docker
	I0923 10:20:38.711668   10894 start.go:901] validating driver "docker" against <nil>
	I0923 10:20:38.711742   10894 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0923 10:20:38.755505   10894 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:30 OomKillDisable:true NGoroutines:46 SystemTime:2024-09-23 10:20:38.747462952 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1069-gcp OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:x
86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33647935488 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-2 Labels:[] ExperimentalBuild:false ServerVersion:27.3.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:7f7fdf5fed64eb6a7caf99b3e12efcf9d60e311c Expected:7f7fdf5fed64eb6a7caf99b3e12efcf9d60e311c} RuncCommit:{ID:v1.1.14-0-g2c9f560 Expected:v1.1.14-0-g2c9f560} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:[WARNING: bri
dge-nf-call-iptables is disabled WARNING: bridge-nf-call-ip6tables is disabled] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.17.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.29.7] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I0923 10:20:38.755646   10894 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0923 10:20:38.756139   10894 start_flags.go:393] Using suggested 8000MB memory alloc based on sys=32089MB, container=32089MB
	I0923 10:20:38.756268   10894 start_flags.go:929] Wait components to verify : map[apiserver:true system_pods:true]
	I0923 10:20:38.757957   10894 out.go:169] Using Docker driver with root privileges
	
	
	* The control-plane node download-only-303816 host does not exist
	  To start a cluster, run: "minikube start -p download-only-303816"

-- /stdout --
aaa_download_only_test.go:185: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.31.1/LogsDuration (0.05s)
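
Note that exit status 85 is the expected outcome here: a --download-only profile never boots a host, so `minikube logs` has nothing to collect (hence the "host does not exist" hint above). One way to confirm the host state, assuming the profile has not been deleted yet:

out/minikube-linux-amd64 status -p download-only-303816 || echo "status exited $?"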

TestDownloadOnly/v1.31.1/DeleteAll (0.18s)

=== RUN   TestDownloadOnly/v1.31.1/DeleteAll
aaa_download_only_test.go:197: (dbg) Run:  out/minikube-linux-amd64 delete --all
--- PASS: TestDownloadOnly/v1.31.1/DeleteAll (0.18s)

TestDownloadOnly/v1.31.1/DeleteAlwaysSucceeds (0.12s)

=== RUN   TestDownloadOnly/v1.31.1/DeleteAlwaysSucceeds
aaa_download_only_test.go:208: (dbg) Run:  out/minikube-linux-amd64 delete -p download-only-303816
--- PASS: TestDownloadOnly/v1.31.1/DeleteAlwaysSucceeds (0.12s)

TestDownloadOnlyKic (0.96s)

=== RUN   TestDownloadOnlyKic
aaa_download_only_test.go:232: (dbg) Run:  out/minikube-linux-amd64 start --download-only -p download-docker-381840 --alsologtostderr --driver=docker  --container-runtime=docker
helpers_test.go:175: Cleaning up "download-docker-381840" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p download-docker-381840
--- PASS: TestDownloadOnlyKic (0.96s)

TestBinaryMirror (0.73s)

=== RUN   TestBinaryMirror
I0923 10:20:44.563257   10524 binary.go:74] Not caching binary, using https://dl.k8s.io/release/v1.31.1/bin/linux/amd64/kubectl?checksum=file:https://dl.k8s.io/release/v1.31.1/bin/linux/amd64/kubectl.sha256
aaa_download_only_test.go:314: (dbg) Run:  out/minikube-linux-amd64 start --download-only -p binary-mirror-885431 --alsologtostderr --binary-mirror http://127.0.0.1:41549 --driver=docker  --container-runtime=docker
helpers_test.go:175: Cleaning up "binary-mirror-885431" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p binary-mirror-885431
--- PASS: TestBinaryMirror (0.73s)
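
The binary-mirror flow can be reproduced by pointing --binary-mirror at any HTTP server that mirrors the dl.k8s.io release layout; a sketch, where the local server and port are illustrative stand-ins for the one the test spins up:

# serve a directory laid out like dl.k8s.io/release/<version>/bin/linux/amd64/...
python3 -m http.server 41549 --bind 127.0.0.1 &
out/minikube-linux-amd64 start --download-only -p binary-mirror-demo \
    --binary-mirror http://127.0.0.1:41549 --driver=docker --container-runtime=docker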

TestOffline (48.58s)

=== RUN   TestOffline
=== PAUSE TestOffline

=== CONT  TestOffline
aab_offline_test.go:55: (dbg) Run:  out/minikube-linux-amd64 start -p offline-docker-849011 --alsologtostderr -v=1 --memory=2048 --wait=true --driver=docker  --container-runtime=docker
aab_offline_test.go:55: (dbg) Done: out/minikube-linux-amd64 start -p offline-docker-849011 --alsologtostderr -v=1 --memory=2048 --wait=true --driver=docker  --container-runtime=docker: (46.384566373s)
helpers_test.go:175: Cleaning up "offline-docker-849011" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p offline-docker-849011
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p offline-docker-849011: (2.193638695s)
--- PASS: TestOffline (48.58s)
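
A rough way to repeat the offline start by hand, assuming the preload and kicbase image are already cached from an earlier run; routing traffic through a dead proxy is an assumption used here to simulate lost connectivity, not necessarily what the harness does:

HTTP_PROXY=http://127.0.0.1:1 HTTPS_PROXY=http://127.0.0.1:1 \
    out/minikube-linux-amd64 start -p offline-demo --memory=2048 --wait=true \
    --driver=docker --container-runtime=docker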

TestAddons/PreSetup/EnablingAddonOnNonExistingCluster (0.05s)

=== RUN   TestAddons/PreSetup/EnablingAddonOnNonExistingCluster
=== PAUSE TestAddons/PreSetup/EnablingAddonOnNonExistingCluster

=== CONT  TestAddons/PreSetup/EnablingAddonOnNonExistingCluster
addons_test.go:975: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p addons-071702
addons_test.go:975: (dbg) Non-zero exit: out/minikube-linux-amd64 addons enable dashboard -p addons-071702: exit status 85 (53.634622ms)

-- stdout --
	* Profile "addons-071702" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p addons-071702"

-- /stdout --
--- PASS: TestAddons/PreSetup/EnablingAddonOnNonExistingCluster (0.05s)

TestAddons/PreSetup/DisablingAddonOnNonExistingCluster (0.05s)

=== RUN   TestAddons/PreSetup/DisablingAddonOnNonExistingCluster
=== PAUSE TestAddons/PreSetup/DisablingAddonOnNonExistingCluster

=== CONT  TestAddons/PreSetup/DisablingAddonOnNonExistingCluster
addons_test.go:986: (dbg) Run:  out/minikube-linux-amd64 addons disable dashboard -p addons-071702
addons_test.go:986: (dbg) Non-zero exit: out/minikube-linux-amd64 addons disable dashboard -p addons-071702: exit status 85 (48.410854ms)

-- stdout --
	* Profile "addons-071702" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p addons-071702"

-- /stdout --
--- PASS: TestAddons/PreSetup/DisablingAddonOnNonExistingCluster (0.05s)

TestAddons/Setup (207.47s)

=== RUN   TestAddons/Setup
addons_test.go:107: (dbg) Run:  out/minikube-linux-amd64 start -p addons-071702 --wait=true --memory=4000 --alsologtostderr --addons=registry --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=storage-provisioner-rancher --addons=nvidia-device-plugin --addons=yakd --addons=volcano --driver=docker  --container-runtime=docker --addons=ingress --addons=ingress-dns
addons_test.go:107: (dbg) Done: out/minikube-linux-amd64 start -p addons-071702 --wait=true --memory=4000 --alsologtostderr --addons=registry --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=storage-provisioner-rancher --addons=nvidia-device-plugin --addons=yakd --addons=volcano --driver=docker  --container-runtime=docker --addons=ingress --addons=ingress-dns: (3m27.466734402s)
--- PASS: TestAddons/Setup (207.47s)
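
After a setup like this one, the enabled addon set and overall workload health can be spot-checked with:

out/minikube-linux-amd64 -p addons-071702 addons list
kubectl --context addons-071702 get pods --all-namespaces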

TestAddons/serial/Volcano (38.7s)

=== RUN   TestAddons/serial/Volcano
addons_test.go:851: volcano-controller stabilized in 12.454678ms
addons_test.go:843: volcano-admission stabilized in 12.491896ms
addons_test.go:835: volcano-scheduler stabilized in 12.550056ms
addons_test.go:857: (dbg) TestAddons/serial/Volcano: waiting 6m0s for pods matching "app=volcano-scheduler" in namespace "volcano-system" ...
helpers_test.go:344: "volcano-scheduler-576bc46687-649bx" [4bd5a85b-ae27-407b-8f54-3f9d23e75fdf] Running
addons_test.go:857: (dbg) TestAddons/serial/Volcano: app=volcano-scheduler healthy within 5.002981519s
addons_test.go:861: (dbg) TestAddons/serial/Volcano: waiting 6m0s for pods matching "app=volcano-admission" in namespace "volcano-system" ...
helpers_test.go:344: "volcano-admission-77d7d48b68-6nltj" [4d8288c0-35c9-45e8-99ab-bc1131cebfcb] Running
addons_test.go:861: (dbg) TestAddons/serial/Volcano: app=volcano-admission healthy within 5.003787501s
addons_test.go:865: (dbg) TestAddons/serial/Volcano: waiting 6m0s for pods matching "app=volcano-controller" in namespace "volcano-system" ...
helpers_test.go:344: "volcano-controllers-56675bb4d5-p6dkj" [ae0a743a-6ea9-4a29-b0fd-a019a34f57fc] Running
addons_test.go:865: (dbg) TestAddons/serial/Volcano: app=volcano-controller healthy within 5.003846793s
addons_test.go:870: (dbg) Run:  kubectl --context addons-071702 delete -n volcano-system job volcano-admission-init
addons_test.go:876: (dbg) Run:  kubectl --context addons-071702 create -f testdata/vcjob.yaml
addons_test.go:884: (dbg) Run:  kubectl --context addons-071702 get vcjob -n my-volcano
addons_test.go:902: (dbg) TestAddons/serial/Volcano: waiting 3m0s for pods matching "volcano.sh/job-name=test-job" in namespace "my-volcano" ...
helpers_test.go:344: "test-job-nginx-0" [1a19c28f-458e-4758-8e0b-9f2843161910] Pending
helpers_test.go:344: "test-job-nginx-0" [1a19c28f-458e-4758-8e0b-9f2843161910] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
helpers_test.go:344: "test-job-nginx-0" [1a19c28f-458e-4758-8e0b-9f2843161910] Running
addons_test.go:902: (dbg) TestAddons/serial/Volcano: volcano.sh/job-name=test-job healthy within 13.003703114s
addons_test.go:906: (dbg) Run:  out/minikube-linux-amd64 -p addons-071702 addons disable volcano --alsologtostderr -v=1
addons_test.go:906: (dbg) Done: out/minikube-linux-amd64 -p addons-071702 addons disable volcano --alsologtostderr -v=1: (10.363580064s)
--- PASS: TestAddons/serial/Volcano (38.70s)
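
The testdata/vcjob.yaml manifest itself is not reproduced in the log; a minimal Volcano job of the same general shape, written against the stock batch.volcano.sh/v1alpha1 schema (an illustrative sketch, not the test's actual file):

kubectl --context addons-071702 create -f - <<'EOF'
apiVersion: batch.volcano.sh/v1alpha1
kind: Job
metadata:
  name: test-job
  namespace: my-volcano
spec:
  schedulerName: volcano   # hand the pod group to the volcano scheduler
  minAvailable: 1
  tasks:
    - replicas: 1
      name: nginx
      template:
        spec:
          restartPolicy: Never
          containers:
            - name: nginx
              image: nginx
EOF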

TestAddons/serial/GCPAuth/Namespaces (0.11s)

=== RUN   TestAddons/serial/GCPAuth/Namespaces
addons_test.go:594: (dbg) Run:  kubectl --context addons-071702 create ns new-namespace
addons_test.go:608: (dbg) Run:  kubectl --context addons-071702 get secret gcp-auth -n new-namespace
--- PASS: TestAddons/serial/GCPAuth/Namespaces (0.11s)

TestAddons/parallel/Ingress (21.24s)

=== RUN   TestAddons/parallel/Ingress
=== PAUSE TestAddons/parallel/Ingress

=== CONT  TestAddons/parallel/Ingress
addons_test.go:205: (dbg) Run:  kubectl --context addons-071702 wait --for=condition=ready --namespace=ingress-nginx pod --selector=app.kubernetes.io/component=controller --timeout=90s
addons_test.go:230: (dbg) Run:  kubectl --context addons-071702 replace --force -f testdata/nginx-ingress-v1.yaml
addons_test.go:243: (dbg) Run:  kubectl --context addons-071702 replace --force -f testdata/nginx-pod-svc.yaml
addons_test.go:248: (dbg) TestAddons/parallel/Ingress: waiting 8m0s for pods matching "run=nginx" in namespace "default" ...
helpers_test.go:344: "nginx" [a57d56d1-658d-447b-8673-7b746d6a3539] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
helpers_test.go:344: "nginx" [a57d56d1-658d-447b-8673-7b746d6a3539] Running
addons_test.go:248: (dbg) TestAddons/parallel/Ingress: run=nginx healthy within 11.003811866s
I0923 10:33:36.980826   10524 kapi.go:150] Service nginx in namespace default found.
addons_test.go:260: (dbg) Run:  out/minikube-linux-amd64 -p addons-071702 ssh "curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'"
addons_test.go:284: (dbg) Run:  kubectl --context addons-071702 replace --force -f testdata/ingress-dns-example-v1.yaml
addons_test.go:289: (dbg) Run:  out/minikube-linux-amd64 -p addons-071702 ip
addons_test.go:295: (dbg) Run:  nslookup hello-john.test 192.168.49.2
addons_test.go:304: (dbg) Run:  out/minikube-linux-amd64 -p addons-071702 addons disable ingress-dns --alsologtostderr -v=1
addons_test.go:304: (dbg) Done: out/minikube-linux-amd64 -p addons-071702 addons disable ingress-dns --alsologtostderr -v=1: (1.233258218s)
addons_test.go:309: (dbg) Run:  out/minikube-linux-amd64 -p addons-071702 addons disable ingress --alsologtostderr -v=1
addons_test.go:309: (dbg) Done: out/minikube-linux-amd64 -p addons-071702 addons disable ingress --alsologtostderr -v=1: (7.570652527s)
--- PASS: TestAddons/parallel/Ingress (21.24s)
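
The checks above condense into a two-line manual smoke test of ingress plus ingress-dns; the host header and record name come from the test's manifests:

out/minikube-linux-amd64 -p addons-071702 ssh "curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'"
nslookup hello-john.test "$(out/minikube-linux-amd64 -p addons-071702 ip)"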

TestAddons/parallel/InspektorGadget (10.58s)

=== RUN   TestAddons/parallel/InspektorGadget
=== PAUSE TestAddons/parallel/InspektorGadget

=== CONT  TestAddons/parallel/InspektorGadget
addons_test.go:786: (dbg) TestAddons/parallel/InspektorGadget: waiting 8m0s for pods matching "k8s-app=gadget" in namespace "gadget" ...
helpers_test.go:344: "gadget-gc6ph" [3b63da9e-ce0c-4afa-a6fa-326c409bae6e] Running / Ready:ContainersNotReady (containers with unready status: [gadget]) / ContainersReady:ContainersNotReady (containers with unready status: [gadget])
addons_test.go:786: (dbg) TestAddons/parallel/InspektorGadget: k8s-app=gadget healthy within 5.00467985s
addons_test.go:789: (dbg) Run:  out/minikube-linux-amd64 addons disable inspektor-gadget -p addons-071702
addons_test.go:789: (dbg) Done: out/minikube-linux-amd64 addons disable inspektor-gadget -p addons-071702: (5.574788882s)
--- PASS: TestAddons/parallel/InspektorGadget (10.58s)

TestAddons/parallel/MetricsServer (5.54s)

=== RUN   TestAddons/parallel/MetricsServer
=== PAUSE TestAddons/parallel/MetricsServer

=== CONT  TestAddons/parallel/MetricsServer
addons_test.go:405: metrics-server stabilized in 2.033597ms
addons_test.go:407: (dbg) TestAddons/parallel/MetricsServer: waiting 6m0s for pods matching "k8s-app=metrics-server" in namespace "kube-system" ...
helpers_test.go:344: "metrics-server-84c5f94fbc-4l9jp" [04190cb8-ca8f-459e-bb88-24aa8a774d1d] Running
addons_test.go:407: (dbg) TestAddons/parallel/MetricsServer: k8s-app=metrics-server healthy within 5.003349918s
addons_test.go:413: (dbg) Run:  kubectl --context addons-071702 top pods -n kube-system
addons_test.go:430: (dbg) Run:  out/minikube-linux-amd64 -p addons-071702 addons disable metrics-server --alsologtostderr -v=1
--- PASS: TestAddons/parallel/MetricsServer (5.54s)
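
`kubectl top` only succeeds once metrics-server is up and scraping, so the assertion above boils down to:

kubectl --context addons-071702 top pods -n kube-system
kubectl --context addons-071702 top nodes   # served by the same metrics API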

TestAddons/parallel/CSI (59.95s)

=== RUN   TestAddons/parallel/CSI
=== PAUSE TestAddons/parallel/CSI

=== CONT  TestAddons/parallel/CSI
addons_test.go:505: csi-hostpath-driver pods stabilized in 4.812497ms
addons_test.go:508: (dbg) Run:  kubectl --context addons-071702 create -f testdata/csi-hostpath-driver/pvc.yaml
addons_test.go:513: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pvc "hpvc" in namespace "default" ...
helpers_test.go:394: (dbg) Run:  kubectl --context addons-071702 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-071702 get pvc hpvc -o jsonpath={.status.phase} -n default
addons_test.go:518: (dbg) Run:  kubectl --context addons-071702 create -f testdata/csi-hostpath-driver/pv-pod.yaml
addons_test.go:523: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pods matching "app=task-pv-pod" in namespace "default" ...
helpers_test.go:344: "task-pv-pod" [4922ee72-29f7-44c1-93e8-bfca01ceea6a] Pending
helpers_test.go:344: "task-pv-pod" [4922ee72-29f7-44c1-93e8-bfca01ceea6a] Pending / Ready:ContainersNotReady (containers with unready status: [task-pv-container]) / ContainersReady:ContainersNotReady (containers with unready status: [task-pv-container])
helpers_test.go:344: "task-pv-pod" [4922ee72-29f7-44c1-93e8-bfca01ceea6a] Running
addons_test.go:523: (dbg) TestAddons/parallel/CSI: app=task-pv-pod healthy within 8.002890754s
addons_test.go:528: (dbg) Run:  kubectl --context addons-071702 create -f testdata/csi-hostpath-driver/snapshot.yaml
addons_test.go:533: (dbg) TestAddons/parallel/CSI: waiting 6m0s for volume snapshot "new-snapshot-demo" in namespace "default" ...
helpers_test.go:419: (dbg) Run:  kubectl --context addons-071702 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
helpers_test.go:419: (dbg) Run:  kubectl --context addons-071702 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
addons_test.go:538: (dbg) Run:  kubectl --context addons-071702 delete pod task-pv-pod
addons_test.go:538: (dbg) Done: kubectl --context addons-071702 delete pod task-pv-pod: (1.343829993s)
addons_test.go:544: (dbg) Run:  kubectl --context addons-071702 delete pvc hpvc
addons_test.go:550: (dbg) Run:  kubectl --context addons-071702 create -f testdata/csi-hostpath-driver/pvc-restore.yaml
addons_test.go:555: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pvc "hpvc-restore" in namespace "default" ...
helpers_test.go:394: (dbg) Run:  kubectl --context addons-071702 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
[... identical phase poll repeated 31 times in total while waiting for "hpvc-restore" ...]
addons_test.go:560: (dbg) Run:  kubectl --context addons-071702 create -f testdata/csi-hostpath-driver/pv-pod-restore.yaml
addons_test.go:565: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pods matching "app=task-pv-pod-restore" in namespace "default" ...
helpers_test.go:344: "task-pv-pod-restore" [4711bdb5-f150-43f2-915b-ccd667e511a7] Pending
helpers_test.go:344: "task-pv-pod-restore" [4711bdb5-f150-43f2-915b-ccd667e511a7] Pending / Ready:ContainersNotReady (containers with unready status: [task-pv-container]) / ContainersReady:ContainersNotReady (containers with unready status: [task-pv-container])
helpers_test.go:344: "task-pv-pod-restore" [4711bdb5-f150-43f2-915b-ccd667e511a7] Running
addons_test.go:565: (dbg) TestAddons/parallel/CSI: app=task-pv-pod-restore healthy within 9.004347247s
addons_test.go:570: (dbg) Run:  kubectl --context addons-071702 delete pod task-pv-pod-restore
addons_test.go:570: (dbg) Done: kubectl --context addons-071702 delete pod task-pv-pod-restore: (1.284340357s)
addons_test.go:574: (dbg) Run:  kubectl --context addons-071702 delete pvc hpvc-restore
addons_test.go:578: (dbg) Run:  kubectl --context addons-071702 delete volumesnapshot new-snapshot-demo
addons_test.go:582: (dbg) Run:  out/minikube-linux-amd64 -p addons-071702 addons disable csi-hostpath-driver --alsologtostderr -v=1
addons_test.go:582: (dbg) Done: out/minikube-linux-amd64 -p addons-071702 addons disable csi-hostpath-driver --alsologtostderr -v=1: (6.46977993s)
addons_test.go:586: (dbg) Run:  out/minikube-linux-amd64 -p addons-071702 addons disable volumesnapshots --alsologtostderr -v=1
--- PASS: TestAddons/parallel/CSI (59.95s)
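
The snapshot-and-restore round trip hinges on two objects the log only references as testdata; a sketch of their shape, applied in sequence as the test does (the storage class and snapshot class names are assumptions based on csi-hostpath-driver defaults, not copies of the test's files):

kubectl --context addons-071702 create -f - <<'EOF'
apiVersion: snapshot.storage.k8s.io/v1
kind: VolumeSnapshot
metadata:
  name: new-snapshot-demo
spec:
  volumeSnapshotClassName: csi-hostpath-snapclass   # assumed addon default
  source:
    persistentVolumeClaimName: hpvc
EOF
kubectl --context addons-071702 create -f - <<'EOF'
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: hpvc-restore
spec:
  storageClassName: csi-hostpath-sc   # assumed addon default
  accessModes: [ReadWriteOnce]
  resources:
    requests:
      storage: 1Gi
  dataSource:                          # restore from the snapshot above
    apiGroup: snapshot.storage.k8s.io
    kind: VolumeSnapshot
    name: new-snapshot-demo
EOF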

TestAddons/parallel/Headlamp (18.21s)

=== RUN   TestAddons/parallel/Headlamp
=== PAUSE TestAddons/parallel/Headlamp

=== CONT  TestAddons/parallel/Headlamp
addons_test.go:768: (dbg) Run:  out/minikube-linux-amd64 addons enable headlamp -p addons-071702 --alsologtostderr -v=1
addons_test.go:773: (dbg) TestAddons/parallel/Headlamp: waiting 8m0s for pods matching "app.kubernetes.io/name=headlamp" in namespace "headlamp" ...
helpers_test.go:344: "headlamp-7b5c95b59d-6kwlm" [845b9b99-5269-45f7-a153-dd904526dbef] Pending / Ready:ContainersNotReady (containers with unready status: [headlamp]) / ContainersReady:ContainersNotReady (containers with unready status: [headlamp])
helpers_test.go:344: "headlamp-7b5c95b59d-6kwlm" [845b9b99-5269-45f7-a153-dd904526dbef] Running
addons_test.go:773: (dbg) TestAddons/parallel/Headlamp: app.kubernetes.io/name=headlamp healthy within 12.003347747s
addons_test.go:777: (dbg) Run:  out/minikube-linux-amd64 -p addons-071702 addons disable headlamp --alsologtostderr -v=1
addons_test.go:777: (dbg) Done: out/minikube-linux-amd64 -p addons-071702 addons disable headlamp --alsologtostderr -v=1: (5.557189693s)
--- PASS: TestAddons/parallel/Headlamp (18.21s)

TestAddons/parallel/CloudSpanner (5.65s)

=== RUN   TestAddons/parallel/CloudSpanner
=== PAUSE TestAddons/parallel/CloudSpanner

=== CONT  TestAddons/parallel/CloudSpanner
addons_test.go:805: (dbg) TestAddons/parallel/CloudSpanner: waiting 6m0s for pods matching "app=cloud-spanner-emulator" in namespace "default" ...
helpers_test.go:344: "cloud-spanner-emulator-5b584cc74-ngq6z" [97bd0c1b-3299-4919-ab41-c429b75845d6] Running
addons_test.go:805: (dbg) TestAddons/parallel/CloudSpanner: app=cloud-spanner-emulator healthy within 5.040052472s
addons_test.go:808: (dbg) Run:  out/minikube-linux-amd64 addons disable cloud-spanner -p addons-071702
--- PASS: TestAddons/parallel/CloudSpanner (5.65s)

TestAddons/parallel/LocalPath (10.02s)

=== RUN   TestAddons/parallel/LocalPath
=== PAUSE TestAddons/parallel/LocalPath

=== CONT  TestAddons/parallel/LocalPath
addons_test.go:920: (dbg) Run:  kubectl --context addons-071702 apply -f testdata/storage-provisioner-rancher/pvc.yaml
addons_test.go:926: (dbg) Run:  kubectl --context addons-071702 apply -f testdata/storage-provisioner-rancher/pod.yaml
addons_test.go:930: (dbg) TestAddons/parallel/LocalPath: waiting 5m0s for pvc "test-pvc" in namespace "default" ...
helpers_test.go:394: (dbg) Run:  kubectl --context addons-071702 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-071702 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-071702 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-071702 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-071702 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-071702 get pvc test-pvc -o jsonpath={.status.phase} -n default
addons_test.go:933: (dbg) TestAddons/parallel/LocalPath: waiting 3m0s for pods matching "run=test-local-path" in namespace "default" ...
helpers_test.go:344: "test-local-path" [88d9a0a5-b054-41f9-9ba1-58949f74d877] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:344: "test-local-path" [88d9a0a5-b054-41f9-9ba1-58949f74d877] Pending / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
helpers_test.go:344: "test-local-path" [88d9a0a5-b054-41f9-9ba1-58949f74d877] Succeeded / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
addons_test.go:933: (dbg) TestAddons/parallel/LocalPath: run=test-local-path healthy within 4.003191599s
addons_test.go:938: (dbg) Run:  kubectl --context addons-071702 get pvc test-pvc -o=json
addons_test.go:947: (dbg) Run:  out/minikube-linux-amd64 -p addons-071702 ssh "cat /opt/local-path-provisioner/pvc-1f21215f-8da6-4e9e-aa33-1db8504ddfb9_default_test-pvc/file1"
addons_test.go:959: (dbg) Run:  kubectl --context addons-071702 delete pod test-local-path
addons_test.go:963: (dbg) Run:  kubectl --context addons-071702 delete pvc test-pvc
addons_test.go:967: (dbg) Run:  out/minikube-linux-amd64 -p addons-071702 addons disable storage-provisioner-rancher --alsologtostderr -v=1
--- PASS: TestAddons/parallel/LocalPath (10.02s)
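
The Pending stretch above is expected: the rancher local-path provisioner defaults to WaitForFirstConsumer binding, so the PVC cannot bind until its pod is scheduled. A PVC of the shape involved (the storage class name is the provisioner's usual default, assumed here):

kubectl --context addons-071702 create -f - <<'EOF'
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: test-pvc
spec:
  storageClassName: local-path   # assumed provisioner default
  accessModes: [ReadWriteOnce]
  resources:
    requests:
      storage: 128Mi
EOF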

TestAddons/parallel/NvidiaDevicePlugin (5.39s)

=== RUN   TestAddons/parallel/NvidiaDevicePlugin
=== PAUSE TestAddons/parallel/NvidiaDevicePlugin

=== CONT  TestAddons/parallel/NvidiaDevicePlugin
addons_test.go:999: (dbg) TestAddons/parallel/NvidiaDevicePlugin: waiting 6m0s for pods matching "name=nvidia-device-plugin-ds" in namespace "kube-system" ...
helpers_test.go:344: "nvidia-device-plugin-daemonset-kxghd" [959908a1-ff48-4627-ad29-d0d6134865d7] Running
addons_test.go:999: (dbg) TestAddons/parallel/NvidiaDevicePlugin: name=nvidia-device-plugin-ds healthy within 5.003378535s
addons_test.go:1002: (dbg) Run:  out/minikube-linux-amd64 addons disable nvidia-device-plugin -p addons-071702
--- PASS: TestAddons/parallel/NvidiaDevicePlugin (5.39s)

TestAddons/parallel/Yakd (11.58s)

=== RUN   TestAddons/parallel/Yakd
=== PAUSE TestAddons/parallel/Yakd

=== CONT  TestAddons/parallel/Yakd
addons_test.go:1010: (dbg) TestAddons/parallel/Yakd: waiting 2m0s for pods matching "app.kubernetes.io/name=yakd-dashboard" in namespace "yakd-dashboard" ...
I0923 10:32:53.895668   10524 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=csi-hostpath-driver" in ns "kube-system" ...
helpers_test.go:344: "yakd-dashboard-67d98fc6b-xxnbx" [45513163-ebe4-4813-b7b0-aad09336b590] Running
I0923 10:32:53.900420   10524 kapi.go:86] Found 3 Pods for label selector kubernetes.io/minikube-addons=csi-hostpath-driver
I0923 10:32:53.900444   10524 kapi.go:107] duration metric: took 4.803035ms to wait for kubernetes.io/minikube-addons=csi-hostpath-driver ...
addons_test.go:1010: (dbg) TestAddons/parallel/Yakd: app.kubernetes.io/name=yakd-dashboard healthy within 6.003970811s
addons_test.go:1014: (dbg) Run:  out/minikube-linux-amd64 -p addons-071702 addons disable yakd --alsologtostderr -v=1
addons_test.go:1014: (dbg) Done: out/minikube-linux-amd64 -p addons-071702 addons disable yakd --alsologtostderr -v=1: (5.570319935s)
--- PASS: TestAddons/parallel/Yakd (11.58s)

TestAddons/StoppedEnableDisable (10.94s)

=== RUN   TestAddons/StoppedEnableDisable
addons_test.go:170: (dbg) Run:  out/minikube-linux-amd64 stop -p addons-071702
addons_test.go:170: (dbg) Done: out/minikube-linux-amd64 stop -p addons-071702: (10.707585488s)
addons_test.go:174: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p addons-071702
addons_test.go:178: (dbg) Run:  out/minikube-linux-amd64 addons disable dashboard -p addons-071702
addons_test.go:183: (dbg) Run:  out/minikube-linux-amd64 addons disable gvisor -p addons-071702
--- PASS: TestAddons/StoppedEnableDisable (10.94s)

TestCertOptions (29.91s)

=== RUN   TestCertOptions
=== PAUSE TestCertOptions

=== CONT  TestCertOptions
cert_options_test.go:49: (dbg) Run:  out/minikube-linux-amd64 start -p cert-options-869215 --memory=2048 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=docker  --container-runtime=docker
cert_options_test.go:49: (dbg) Done: out/minikube-linux-amd64 start -p cert-options-869215 --memory=2048 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=docker  --container-runtime=docker: (25.722802387s)
cert_options_test.go:60: (dbg) Run:  out/minikube-linux-amd64 -p cert-options-869215 ssh "openssl x509 -text -noout -in /var/lib/minikube/certs/apiserver.crt"
cert_options_test.go:88: (dbg) Run:  kubectl --context cert-options-869215 config view
cert_options_test.go:100: (dbg) Run:  out/minikube-linux-amd64 ssh -p cert-options-869215 -- "sudo cat /etc/kubernetes/admin.conf"
helpers_test.go:175: Cleaning up "cert-options-869215" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p cert-options-869215
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p cert-options-869215: (3.583524727s)
--- PASS: TestCertOptions (29.91s)
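
The openssl probe above dumps the entire certificate; to eyeball just the fields the --apiserver-* flags control, the subjectAltName extension can be printed on its own (assumes openssl 1.1.1+ in the node image for -ext):

out/minikube-linux-amd64 -p cert-options-869215 ssh \
    "openssl x509 -noout -ext subjectAltName -in /var/lib/minikube/certs/apiserver.crt"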

TestCertExpiration (230.62s)

=== RUN   TestCertExpiration
=== PAUSE TestCertExpiration

=== CONT  TestCertExpiration
cert_options_test.go:123: (dbg) Run:  out/minikube-linux-amd64 start -p cert-expiration-285699 --memory=2048 --cert-expiration=3m --driver=docker  --container-runtime=docker
cert_options_test.go:123: (dbg) Done: out/minikube-linux-amd64 start -p cert-expiration-285699 --memory=2048 --cert-expiration=3m --driver=docker  --container-runtime=docker: (27.770265389s)
cert_options_test.go:131: (dbg) Run:  out/minikube-linux-amd64 start -p cert-expiration-285699 --memory=2048 --cert-expiration=8760h --driver=docker  --container-runtime=docker
cert_options_test.go:131: (dbg) Done: out/minikube-linux-amd64 start -p cert-expiration-285699 --memory=2048 --cert-expiration=8760h --driver=docker  --container-runtime=docker: (20.749492324s)
helpers_test.go:175: Cleaning up "cert-expiration-285699" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p cert-expiration-285699
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p cert-expiration-285699: (2.099568783s)
--- PASS: TestCertExpiration (230.62s)
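
The two starts above first issue certificates with a 3-minute lifetime, then, once that window has lapsed, restart with --cert-expiration=8760h to force regeneration. The resulting validity window can be read back with:

out/minikube-linux-amd64 -p cert-expiration-285699 ssh \
    "openssl x509 -noout -dates -in /var/lib/minikube/certs/apiserver.crt"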

TestDockerFlags (23.84s)

=== RUN   TestDockerFlags
=== PAUSE TestDockerFlags

=== CONT  TestDockerFlags
docker_test.go:51: (dbg) Run:  out/minikube-linux-amd64 start -p docker-flags-695356 --cache-images=false --memory=2048 --install-addons=false --wait=false --docker-env=FOO=BAR --docker-env=BAZ=BAT --docker-opt=debug --docker-opt=icc=true --alsologtostderr -v=5 --driver=docker  --container-runtime=docker
docker_test.go:51: (dbg) Done: out/minikube-linux-amd64 start -p docker-flags-695356 --cache-images=false --memory=2048 --install-addons=false --wait=false --docker-env=FOO=BAR --docker-env=BAZ=BAT --docker-opt=debug --docker-opt=icc=true --alsologtostderr -v=5 --driver=docker  --container-runtime=docker: (21.16619958s)
docker_test.go:56: (dbg) Run:  out/minikube-linux-amd64 -p docker-flags-695356 ssh "sudo systemctl show docker --property=Environment --no-pager"
docker_test.go:67: (dbg) Run:  out/minikube-linux-amd64 -p docker-flags-695356 ssh "sudo systemctl show docker --property=ExecStart --no-pager"
helpers_test.go:175: Cleaning up "docker-flags-695356" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p docker-flags-695356
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p docker-flags-695356: (2.085290891s)
--- PASS: TestDockerFlags (23.84s)
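
The two `systemctl show` probes above verify that --docker-env values surface in the docker unit's Environment and --docker-opt values in its ExecStart. A sketch of that style of assertion (assumed shape, not the docker_test.go source; profile name reused from this run):

	package main

	import (
		"log"
		"os/exec"
		"strings"
	)

	func main() {
		out, err := exec.Command("out/minikube-linux-amd64", "-p", "docker-flags-695356",
			"ssh", "sudo systemctl show docker --property=Environment --no-pager").CombinedOutput()
		if err != nil {
			log.Fatal(err)
		}
		// Each --docker-env pair should appear verbatim in the unit's Environment.
		for _, want := range []string{"FOO=BAR", "BAZ=BAT"} {
			if !strings.Contains(string(out), want) {
				log.Fatalf("expected %q in docker Environment, got: %s", want, out)
			}
		}
	}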

TestForceSystemdFlag (32.9s)
=== RUN   TestForceSystemdFlag
=== PAUSE TestForceSystemdFlag
=== CONT  TestForceSystemdFlag
docker_test.go:91: (dbg) Run:  out/minikube-linux-amd64 start -p force-systemd-flag-899327 --memory=2048 --force-systemd --alsologtostderr -v=5 --driver=docker  --container-runtime=docker
docker_test.go:91: (dbg) Done: out/minikube-linux-amd64 start -p force-systemd-flag-899327 --memory=2048 --force-systemd --alsologtostderr -v=5 --driver=docker  --container-runtime=docker: (30.345850316s)
docker_test.go:110: (dbg) Run:  out/minikube-linux-amd64 -p force-systemd-flag-899327 ssh "docker info --format {{.CgroupDriver}}"
helpers_test.go:175: Cleaning up "force-systemd-flag-899327" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p force-systemd-flag-899327
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p force-systemd-flag-899327: (2.152187414s)
--- PASS: TestForceSystemdFlag (32.90s)
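
--force-systemd is verified by asking the Docker daemon inside the node for its cgroup driver; TestForceSystemdEnv below performs the same probe after setting the environment variable instead of the flag. A sketch of the assertion (illustrative, not the harness source):

	package main

	import (
		"log"
		"os/exec"
		"strings"
	)

	func main() {
		out, err := exec.Command("out/minikube-linux-amd64", "-p", "force-systemd-flag-899327",
			"ssh", "docker info --format {{.CgroupDriver}}").Output()
		if err != nil {
			log.Fatal(err)
		}
		// With --force-systemd the daemon should report "systemd", not "cgroupfs".
		if driver := strings.TrimSpace(string(out)); driver != "systemd" {
			log.Fatalf("cgroup driver = %q, want systemd", driver)
		}
	}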

TestForceSystemdEnv (27.44s)
=== RUN   TestForceSystemdEnv
=== PAUSE TestForceSystemdEnv
=== CONT  TestForceSystemdEnv
docker_test.go:155: (dbg) Run:  out/minikube-linux-amd64 start -p force-systemd-env-773761 --memory=2048 --alsologtostderr -v=5 --driver=docker  --container-runtime=docker
docker_test.go:155: (dbg) Done: out/minikube-linux-amd64 start -p force-systemd-env-773761 --memory=2048 --alsologtostderr -v=5 --driver=docker  --container-runtime=docker: (25.031032343s)
docker_test.go:110: (dbg) Run:  out/minikube-linux-amd64 -p force-systemd-env-773761 ssh "docker info --format {{.CgroupDriver}}"
helpers_test.go:175: Cleaning up "force-systemd-env-773761" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p force-systemd-env-773761
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p force-systemd-env-773761: (2.100405336s)
--- PASS: TestForceSystemdEnv (27.44s)

TestKVMDriverInstallOrUpdate (5.47s)
=== RUN   TestKVMDriverInstallOrUpdate
=== PAUSE TestKVMDriverInstallOrUpdate
=== CONT  TestKVMDriverInstallOrUpdate
I0923 11:05:23.686013   10524 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
I0923 11:05:23.686157   10524 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/workspace/Docker_Linux_integration/testdata/kvm2-driver-without-version:/home/jenkins/workspace/Docker_Linux_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
W0923 11:05:23.730343   10524 install.go:62] docker-machine-driver-kvm2: exit status 1
W0923 11:05:23.730798   10524 out.go:174] [unset outFile]: * Downloading driver docker-machine-driver-kvm2:
I0923 11:05:23.731048   10524 download.go:107] Downloading: https://github.com/kubernetes/minikube/releases/download/v1.3.0/docker-machine-driver-kvm2-amd64?checksum=file:https://github.com/kubernetes/minikube/releases/download/v1.3.0/docker-machine-driver-kvm2-amd64.sha256 -> /tmp/TestKVMDriverInstallOrUpdate3267974760/001/docker-machine-driver-kvm2
I0923 11:05:24.188678   10524 driver.go:46] failed to download arch specific driver: getter: &{Ctx:context.Background Src:https://github.com/kubernetes/minikube/releases/download/v1.3.0/docker-machine-driver-kvm2-amd64?checksum=file:https://github.com/kubernetes/minikube/releases/download/v1.3.0/docker-machine-driver-kvm2-amd64.sha256 Dst:/tmp/TestKVMDriverInstallOrUpdate3267974760/001/docker-machine-driver-kvm2.download Pwd: Mode:2 Umask:---------- Detectors:[0x4665640 0x4665640 0x4665640 0x4665640 0x4665640 0x4665640 0x4665640] Decompressors:map[bz2:0xc000800aa0 gz:0xc000800aa8 tar:0xc000800a00 tar.bz2:0xc000800a10 tar.gz:0xc000800a20 tar.xz:0xc000800a60 tar.zst:0xc000800a80 tbz2:0xc000800a10 tgz:0xc000800a20 txz:0xc000800a60 tzst:0xc000800a80 xz:0xc000800b20 zip:0xc000800b40 zst:0xc000800b28] Getters:map[file:0xc00079eca0 http:0xc00091a140 https:0xc00091a190] Dir:false ProgressListener:<nil> Insecure:false DisableSymlinks:false Options:[]}: invalid checksum: Error downloading checksum file: bad response code: 404. trying to get the common version
I0923 11:05:24.188727   10524 download.go:107] Downloading: https://github.com/kubernetes/minikube/releases/download/v1.3.0/docker-machine-driver-kvm2?checksum=file:https://github.com/kubernetes/minikube/releases/download/v1.3.0/docker-machine-driver-kvm2.sha256 -> /tmp/TestKVMDriverInstallOrUpdate3267974760/001/docker-machine-driver-kvm2
I0923 11:05:26.595845   10524 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
I0923 11:05:26.595928   10524 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/workspace/Docker_Linux_integration/testdata/kvm2-driver-older-version:/home/jenkins/workspace/Docker_Linux_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
I0923 11:05:26.623815   10524 install.go:137] /home/jenkins/workspace/Docker_Linux_integration/testdata/kvm2-driver-older-version/docker-machine-driver-kvm2 version is 1.1.1
W0923 11:05:26.623846   10524 install.go:62] docker-machine-driver-kvm2: docker-machine-driver-kvm2 is version 1.1.1, want 1.3.0
W0923 11:05:26.623904   10524 out.go:174] [unset outFile]: * Downloading driver docker-machine-driver-kvm2:
I0923 11:05:26.623931   10524 download.go:107] Downloading: https://github.com/kubernetes/minikube/releases/download/v1.3.0/docker-machine-driver-kvm2-amd64?checksum=file:https://github.com/kubernetes/minikube/releases/download/v1.3.0/docker-machine-driver-kvm2-amd64.sha256 -> /tmp/TestKVMDriverInstallOrUpdate3267974760/002/docker-machine-driver-kvm2
I0923 11:05:26.972258   10524 driver.go:46] failed to download arch specific driver: getter: &{Ctx:context.Background Src:https://github.com/kubernetes/minikube/releases/download/v1.3.0/docker-machine-driver-kvm2-amd64?checksum=file:https://github.com/kubernetes/minikube/releases/download/v1.3.0/docker-machine-driver-kvm2-amd64.sha256 Dst:/tmp/TestKVMDriverInstallOrUpdate3267974760/002/docker-machine-driver-kvm2.download Pwd: Mode:2 Umask:---------- Detectors:[0x4665640 0x4665640 0x4665640 0x4665640 0x4665640 0x4665640 0x4665640] Decompressors:map[bz2:0xc000800aa0 gz:0xc000800aa8 tar:0xc000800a00 tar.bz2:0xc000800a10 tar.gz:0xc000800a20 tar.xz:0xc000800a60 tar.zst:0xc000800a80 tbz2:0xc000800a10 tgz:0xc000800a20 txz:0xc000800a60 tzst:0xc000800a80 xz:0xc000800b20 zip:0xc000800b40 zst:0xc000800b28] Getters:map[file:0xc0020c45b0 http:0xc00205c280 https:0xc00205c2d0] Dir:false ProgressListener:<nil> Insecure:false DisableSymlinks:false Options:[]}: invalid checksum: Error downloading checksum file: bad response code: 404. trying to get the common version
I0923 11:05:26.972295   10524 download.go:107] Downloading: https://github.com/kubernetes/minikube/releases/download/v1.3.0/docker-machine-driver-kvm2?checksum=file:https://github.com/kubernetes/minikube/releases/download/v1.3.0/docker-machine-driver-kvm2.sha256 -> /tmp/TestKVMDriverInstallOrUpdate3267974760/002/docker-machine-driver-kvm2
--- PASS: TestKVMDriverInstallOrUpdate (5.47s)
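
The two 404-then-retry pairs above are the installer's download fallback: the v1.3.0 release has no -amd64.sha256 checksum file, so the arch-specific fetch fails and the code retries the unsuffixed "common" asset name. A sketch of that flow, assuming hashicorp/go-getter (the Detectors/Decompressors dump in the log is go-getter client state):

	package main

	import (
		"log"

		getter "github.com/hashicorp/go-getter"
	)

	func main() {
		base := "https://github.com/kubernetes/minikube/releases/download/v1.3.0/docker-machine-driver-kvm2"
		dst := "/tmp/docker-machine-driver-kvm2"
		// Try the arch-suffixed asset with its checksum file first.
		arch := base + "-amd64?checksum=file:" + base + "-amd64.sha256"
		if err := getter.GetFile(dst, arch); err != nil {
			// The checksum file 404s for this release: fall back to the common name.
			if err := getter.GetFile(dst, base+"?checksum=file:"+base+".sha256"); err != nil {
				log.Fatal(err)
			}
		}
	}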

TestErrorSpam/setup (21.05s)
=== RUN   TestErrorSpam/setup
error_spam_test.go:81: (dbg) Run:  out/minikube-linux-amd64 start -p nospam-373706 -n=1 --memory=2250 --wait=false --log_dir=/tmp/nospam-373706 --driver=docker  --container-runtime=docker
error_spam_test.go:81: (dbg) Done: out/minikube-linux-amd64 start -p nospam-373706 -n=1 --memory=2250 --wait=false --log_dir=/tmp/nospam-373706 --driver=docker  --container-runtime=docker: (21.047182938s)
--- PASS: TestErrorSpam/setup (21.05s)

TestErrorSpam/start (0.54s)
=== RUN   TestErrorSpam/start
error_spam_test.go:216: Cleaning up 1 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-373706 --log_dir /tmp/nospam-373706 start --dry-run
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-373706 --log_dir /tmp/nospam-373706 start --dry-run
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-amd64 -p nospam-373706 --log_dir /tmp/nospam-373706 start --dry-run
--- PASS: TestErrorSpam/start (0.54s)

TestErrorSpam/status (0.83s)
=== RUN   TestErrorSpam/status
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-373706 --log_dir /tmp/nospam-373706 status
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-373706 --log_dir /tmp/nospam-373706 status
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-amd64 -p nospam-373706 --log_dir /tmp/nospam-373706 status
--- PASS: TestErrorSpam/status (0.83s)

TestErrorSpam/pause (1.12s)
=== RUN   TestErrorSpam/pause
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-373706 --log_dir /tmp/nospam-373706 pause
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-373706 --log_dir /tmp/nospam-373706 pause
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-amd64 -p nospam-373706 --log_dir /tmp/nospam-373706 pause
--- PASS: TestErrorSpam/pause (1.12s)

TestErrorSpam/unpause (1.39s)
=== RUN   TestErrorSpam/unpause
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-373706 --log_dir /tmp/nospam-373706 unpause
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-373706 --log_dir /tmp/nospam-373706 unpause
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-amd64 -p nospam-373706 --log_dir /tmp/nospam-373706 unpause
--- PASS: TestErrorSpam/unpause (1.39s)

TestErrorSpam/stop (10.8s)
=== RUN   TestErrorSpam/stop
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-373706 --log_dir /tmp/nospam-373706 stop
error_spam_test.go:159: (dbg) Done: out/minikube-linux-amd64 -p nospam-373706 --log_dir /tmp/nospam-373706 stop: (10.633158082s)
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-373706 --log_dir /tmp/nospam-373706 stop
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-amd64 -p nospam-373706 --log_dir /tmp/nospam-373706 stop
--- PASS: TestErrorSpam/stop (10.80s)
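
Each nospam subtest runs the same command repeatedly and scans the captured output for unexpected warning or error lines. A sketch of such a scan, assuming minikube's "! " warning and "X " error prefixes (the latter is visible in the SVC_UNREACHABLE output later in this report); this is illustrative, not error_spam_test.go itself:

	package main

	import (
		"fmt"
		"strings"
	)

	// unexpectedLines returns output lines that look like warnings or errors.
	func unexpectedLines(output string) []string {
		var bad []string
		for _, line := range strings.Split(output, "\n") {
			trimmed := strings.TrimSpace(line)
			if strings.HasPrefix(trimmed, "! ") || strings.HasPrefix(trimmed, "X ") {
				bad = append(bad, trimmed)
			}
		}
		return bad
	}

	func main() {
		sample := "* Stopping node \"nospam-373706\"  ...\nX Exiting due to GUEST_STATUS: example"
		fmt.Println(unexpectedLines(sample)) // [X Exiting due to GUEST_STATUS: example]
	}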

TestFunctional/serial/CopySyncFile (0s)
=== RUN   TestFunctional/serial/CopySyncFile
functional_test.go:1855: local sync path: /home/jenkins/minikube-integration/19689-3716/.minikube/files/etc/test/nested/copy/10524/hosts
--- PASS: TestFunctional/serial/CopySyncFile (0.00s)

TestFunctional/serial/StartWithProxy (33.85s)
=== RUN   TestFunctional/serial/StartWithProxy
functional_test.go:2234: (dbg) Run:  out/minikube-linux-amd64 start -p functional-001676 --memory=4000 --apiserver-port=8441 --wait=all --driver=docker  --container-runtime=docker
functional_test.go:2234: (dbg) Done: out/minikube-linux-amd64 start -p functional-001676 --memory=4000 --apiserver-port=8441 --wait=all --driver=docker  --container-runtime=docker: (33.84808492s)
--- PASS: TestFunctional/serial/StartWithProxy (33.85s)

TestFunctional/serial/AuditLog (0s)
=== RUN   TestFunctional/serial/AuditLog
--- PASS: TestFunctional/serial/AuditLog (0.00s)

TestFunctional/serial/SoftStart (29.53s)
=== RUN   TestFunctional/serial/SoftStart
I0923 10:35:32.043276   10524 config.go:182] Loaded profile config "functional-001676": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.31.1
functional_test.go:659: (dbg) Run:  out/minikube-linux-amd64 start -p functional-001676 --alsologtostderr -v=8
functional_test.go:659: (dbg) Done: out/minikube-linux-amd64 start -p functional-001676 --alsologtostderr -v=8: (29.525854126s)
functional_test.go:663: soft start took 29.526655971s for "functional-001676" cluster.
I0923 10:36:01.569481   10524 config.go:182] Loaded profile config "functional-001676": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.31.1
--- PASS: TestFunctional/serial/SoftStart (29.53s)

TestFunctional/serial/KubeContext (0.04s)
=== RUN   TestFunctional/serial/KubeContext
functional_test.go:681: (dbg) Run:  kubectl config current-context
--- PASS: TestFunctional/serial/KubeContext (0.04s)

TestFunctional/serial/KubectlGetPods (0.07s)
=== RUN   TestFunctional/serial/KubectlGetPods
functional_test.go:696: (dbg) Run:  kubectl --context functional-001676 get po -A
--- PASS: TestFunctional/serial/KubectlGetPods (0.07s)

TestFunctional/serial/CacheCmd/cache/add_remote (2.35s)
=== RUN   TestFunctional/serial/CacheCmd/cache/add_remote
functional_test.go:1049: (dbg) Run:  out/minikube-linux-amd64 -p functional-001676 cache add registry.k8s.io/pause:3.1
functional_test.go:1049: (dbg) Run:  out/minikube-linux-amd64 -p functional-001676 cache add registry.k8s.io/pause:3.3
functional_test.go:1049: (dbg) Run:  out/minikube-linux-amd64 -p functional-001676 cache add registry.k8s.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/add_remote (2.35s)

TestFunctional/serial/CacheCmd/cache/add_local (1.29s)
=== RUN   TestFunctional/serial/CacheCmd/cache/add_local
functional_test.go:1077: (dbg) Run:  docker build -t minikube-local-cache-test:functional-001676 /tmp/TestFunctionalserialCacheCmdcacheadd_local4016149979/001
functional_test.go:1089: (dbg) Run:  out/minikube-linux-amd64 -p functional-001676 cache add minikube-local-cache-test:functional-001676
functional_test.go:1094: (dbg) Run:  out/minikube-linux-amd64 -p functional-001676 cache delete minikube-local-cache-test:functional-001676
functional_test.go:1083: (dbg) Run:  docker rmi minikube-local-cache-test:functional-001676
--- PASS: TestFunctional/serial/CacheCmd/cache/add_local (1.29s)

TestFunctional/serial/CacheCmd/cache/CacheDelete (0.05s)
=== RUN   TestFunctional/serial/CacheCmd/cache/CacheDelete
functional_test.go:1102: (dbg) Run:  out/minikube-linux-amd64 cache delete registry.k8s.io/pause:3.3
--- PASS: TestFunctional/serial/CacheCmd/cache/CacheDelete (0.05s)

TestFunctional/serial/CacheCmd/cache/list (0.04s)
=== RUN   TestFunctional/serial/CacheCmd/cache/list
functional_test.go:1110: (dbg) Run:  out/minikube-linux-amd64 cache list
--- PASS: TestFunctional/serial/CacheCmd/cache/list (0.04s)

TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node (0.28s)
=== RUN   TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node
functional_test.go:1124: (dbg) Run:  out/minikube-linux-amd64 -p functional-001676 ssh sudo crictl images
--- PASS: TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node (0.28s)

TestFunctional/serial/CacheCmd/cache/cache_reload (1.29s)
=== RUN   TestFunctional/serial/CacheCmd/cache/cache_reload
functional_test.go:1147: (dbg) Run:  out/minikube-linux-amd64 -p functional-001676 ssh sudo docker rmi registry.k8s.io/pause:latest
functional_test.go:1153: (dbg) Run:  out/minikube-linux-amd64 -p functional-001676 ssh sudo crictl inspecti registry.k8s.io/pause:latest
functional_test.go:1153: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-001676 ssh sudo crictl inspecti registry.k8s.io/pause:latest: exit status 1 (255.946682ms)

-- stdout --
	FATA[0000] no such image "registry.k8s.io/pause:latest" present 

-- /stdout --
** stderr ** 
	ssh: Process exited with status 1

** /stderr **
functional_test.go:1158: (dbg) Run:  out/minikube-linux-amd64 -p functional-001676 cache reload
functional_test.go:1163: (dbg) Run:  out/minikube-linux-amd64 -p functional-001676 ssh sudo crictl inspecti registry.k8s.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/cache_reload (1.29s)
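
The passing sequence above is a full round trip: remove the image on the node, confirm `crictl inspecti` now fails, run `cache reload`, and confirm the image is back. A compact sketch of the same cycle via os/exec (profile name reused from this run; error handling is minimal):

	package main

	import (
		"log"
		"os/exec"
	)

	func run(args ...string) error {
		return exec.Command("out/minikube-linux-amd64", args...).Run()
	}

	func main() {
		p := "functional-001676"
		if err := run("-p", p, "ssh", "sudo docker rmi registry.k8s.io/pause:latest"); err != nil {
			log.Fatal(err)
		}
		// The image must be gone before the reload for the test to mean anything.
		if run("-p", p, "ssh", "sudo crictl inspecti registry.k8s.io/pause:latest") == nil {
			log.Fatal("image should be absent before reload")
		}
		if err := run("-p", p, "cache", "reload"); err != nil {
			log.Fatal(err)
		}
		if err := run("-p", p, "ssh", "sudo crictl inspecti registry.k8s.io/pause:latest"); err != nil {
			log.Fatal("image still missing after cache reload: ", err)
		}
	}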

TestFunctional/serial/CacheCmd/cache/delete (0.09s)
=== RUN   TestFunctional/serial/CacheCmd/cache/delete
functional_test.go:1172: (dbg) Run:  out/minikube-linux-amd64 cache delete registry.k8s.io/pause:3.1
functional_test.go:1172: (dbg) Run:  out/minikube-linux-amd64 cache delete registry.k8s.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/delete (0.09s)

TestFunctional/serial/MinikubeKubectlCmd (0.1s)
=== RUN   TestFunctional/serial/MinikubeKubectlCmd
functional_test.go:716: (dbg) Run:  out/minikube-linux-amd64 -p functional-001676 kubectl -- --context functional-001676 get pods
--- PASS: TestFunctional/serial/MinikubeKubectlCmd (0.10s)

TestFunctional/serial/MinikubeKubectlCmdDirectly (0.09s)
=== RUN   TestFunctional/serial/MinikubeKubectlCmdDirectly
functional_test.go:741: (dbg) Run:  out/kubectl --context functional-001676 get pods
--- PASS: TestFunctional/serial/MinikubeKubectlCmdDirectly (0.09s)

TestFunctional/serial/ExtraConfig (40.51s)
=== RUN   TestFunctional/serial/ExtraConfig
functional_test.go:757: (dbg) Run:  out/minikube-linux-amd64 start -p functional-001676 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all
functional_test.go:757: (dbg) Done: out/minikube-linux-amd64 start -p functional-001676 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all: (40.512745378s)
functional_test.go:761: restart took 40.512864608s for "functional-001676" cluster.
I0923 10:36:47.770770   10524 config.go:182] Loaded profile config "functional-001676": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.31.1
--- PASS: TestFunctional/serial/ExtraConfig (40.51s)

TestFunctional/serial/ComponentHealth (0.06s)
=== RUN   TestFunctional/serial/ComponentHealth
functional_test.go:810: (dbg) Run:  kubectl --context functional-001676 get po -l tier=control-plane -n kube-system -o=json
functional_test.go:825: etcd phase: Running
functional_test.go:835: etcd status: Ready
functional_test.go:825: kube-apiserver phase: Running
functional_test.go:835: kube-apiserver status: Ready
functional_test.go:825: kube-controller-manager phase: Running
functional_test.go:835: kube-controller-manager status: Ready
functional_test.go:825: kube-scheduler phase: Running
functional_test.go:835: kube-scheduler status: Ready
--- PASS: TestFunctional/serial/ComponentHealth (0.06s)
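
The phase/status pairs above come from decoding the control-plane pod list. A sketch of that check (the JSON shape is standard kubectl output; this is not the functional_test.go source):

	package main

	import (
		"encoding/json"
		"fmt"
		"log"
		"os/exec"
	)

	func main() {
		out, err := exec.Command("kubectl", "--context", "functional-001676", "get", "po",
			"-l", "tier=control-plane", "-n", "kube-system", "-o=json").Output()
		if err != nil {
			log.Fatal(err)
		}
		var list struct {
			Items []struct {
				Metadata struct {
					Name string `json:"name"`
				} `json:"metadata"`
				Status struct {
					Phase      string `json:"phase"`
					Conditions []struct {
						Type   string `json:"type"`
						Status string `json:"status"`
					} `json:"conditions"`
				} `json:"status"`
			} `json:"items"`
		}
		if err := json.Unmarshal(out, &list); err != nil {
			log.Fatal(err)
		}
		for _, p := range list.Items {
			ready := false
			for _, c := range p.Status.Conditions {
				if c.Type == "Ready" && c.Status == "True" {
					ready = true
				}
			}
			fmt.Printf("%s phase: %s, ready: %v\n", p.Metadata.Name, p.Status.Phase, ready)
		}
	}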

TestFunctional/serial/LogsCmd (0.93s)
=== RUN   TestFunctional/serial/LogsCmd
functional_test.go:1236: (dbg) Run:  out/minikube-linux-amd64 -p functional-001676 logs
--- PASS: TestFunctional/serial/LogsCmd (0.93s)

TestFunctional/serial/LogsFileCmd (0.95s)
=== RUN   TestFunctional/serial/LogsFileCmd
functional_test.go:1250: (dbg) Run:  out/minikube-linux-amd64 -p functional-001676 logs --file /tmp/TestFunctionalserialLogsFileCmd2216696667/001/logs.txt
--- PASS: TestFunctional/serial/LogsFileCmd (0.95s)

TestFunctional/serial/InvalidService (4.1s)
=== RUN   TestFunctional/serial/InvalidService
functional_test.go:2321: (dbg) Run:  kubectl --context functional-001676 apply -f testdata/invalidsvc.yaml
functional_test.go:2335: (dbg) Run:  out/minikube-linux-amd64 service invalid-svc -p functional-001676
functional_test.go:2335: (dbg) Non-zero exit: out/minikube-linux-amd64 service invalid-svc -p functional-001676: exit status 115 (304.698429ms)
-- stdout --
	|-----------|-------------|-------------|---------------------------|
	| NAMESPACE |    NAME     | TARGET PORT |            URL            |
	|-----------|-------------|-------------|---------------------------|
	| default   | invalid-svc |          80 | http://192.168.49.2:30826 |
	|-----------|-------------|-------------|---------------------------|
	
	
-- /stdout --
** stderr ** 
	X Exiting due to SVC_UNREACHABLE: service not available: no running pod for service invalid-svc found
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_service_96b204199e3191fa1740d4430b018a3c8028d52d_0.log                 │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
** /stderr **
functional_test.go:2327: (dbg) Run:  kubectl --context functional-001676 delete -f testdata/invalidsvc.yaml
--- PASS: TestFunctional/serial/InvalidService (4.10s)

TestFunctional/parallel/ConfigCmd (0.35s)
=== RUN   TestFunctional/parallel/ConfigCmd
=== PAUSE TestFunctional/parallel/ConfigCmd
=== CONT  TestFunctional/parallel/ConfigCmd
functional_test.go:1199: (dbg) Run:  out/minikube-linux-amd64 -p functional-001676 config unset cpus
functional_test.go:1199: (dbg) Run:  out/minikube-linux-amd64 -p functional-001676 config get cpus
functional_test.go:1199: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-001676 config get cpus: exit status 14 (50.568707ms)
** stderr ** 
	Error: specified key could not be found in config
** /stderr **
functional_test.go:1199: (dbg) Run:  out/minikube-linux-amd64 -p functional-001676 config set cpus 2
functional_test.go:1199: (dbg) Run:  out/minikube-linux-amd64 -p functional-001676 config get cpus
functional_test.go:1199: (dbg) Run:  out/minikube-linux-amd64 -p functional-001676 config unset cpus
functional_test.go:1199: (dbg) Run:  out/minikube-linux-amd64 -p functional-001676 config get cpus
functional_test.go:1199: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-001676 config get cpus: exit status 14 (72.015593ms)
** stderr ** 
	Error: specified key could not be found in config
** /stderr **
--- PASS: TestFunctional/parallel/ConfigCmd (0.35s)

TestFunctional/parallel/DashboardCmd (10.31s)
=== RUN   TestFunctional/parallel/DashboardCmd
=== PAUSE TestFunctional/parallel/DashboardCmd
=== CONT  TestFunctional/parallel/DashboardCmd
functional_test.go:905: (dbg) daemon: [out/minikube-linux-amd64 dashboard --url --port 36195 -p functional-001676 --alsologtostderr -v=1]
functional_test.go:910: (dbg) stopping [out/minikube-linux-amd64 dashboard --url --port 36195 -p functional-001676 --alsologtostderr -v=1] ...
helpers_test.go:508: unable to kill pid 63687: os: process already finished
--- PASS: TestFunctional/parallel/DashboardCmd (10.31s)

TestFunctional/parallel/DryRun (0.34s)
=== RUN   TestFunctional/parallel/DryRun
=== PAUSE TestFunctional/parallel/DryRun
=== CONT  TestFunctional/parallel/DryRun
functional_test.go:974: (dbg) Run:  out/minikube-linux-amd64 start -p functional-001676 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=docker
functional_test.go:974: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p functional-001676 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=docker: exit status 23 (151.85127ms)
-- stdout --
	* [functional-001676] minikube v1.34.0 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=19689
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/19689-3716/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/19689-3716/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the docker driver based on existing profile
	
	
-- /stdout --
** stderr ** 
	I0923 10:37:08.176286   62746 out.go:345] Setting OutFile to fd 1 ...
	I0923 10:37:08.176384   62746 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0923 10:37:08.176393   62746 out.go:358] Setting ErrFile to fd 2...
	I0923 10:37:08.176397   62746 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0923 10:37:08.176554   62746 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19689-3716/.minikube/bin
	I0923 10:37:08.177028   62746 out.go:352] Setting JSON to false
	I0923 10:37:08.178275   62746 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-2","uptime":1177,"bootTime":1727086651,"procs":435,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1069-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0923 10:37:08.178364   62746 start.go:139] virtualization: kvm guest
	I0923 10:37:08.180515   62746 out.go:177] * [functional-001676] minikube v1.34.0 on Ubuntu 20.04 (kvm/amd64)
	I0923 10:37:08.181976   62746 notify.go:220] Checking for updates...
	I0923 10:37:08.181986   62746 out.go:177]   - MINIKUBE_LOCATION=19689
	I0923 10:37:08.183648   62746 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0923 10:37:08.185264   62746 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19689-3716/kubeconfig
	I0923 10:37:08.186558   62746 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19689-3716/.minikube
	I0923 10:37:08.187855   62746 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0923 10:37:08.189357   62746 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0923 10:37:08.191409   62746 config.go:182] Loaded profile config "functional-001676": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I0923 10:37:08.192079   62746 driver.go:394] Setting default libvirt URI to qemu:///system
	I0923 10:37:08.218775   62746 docker.go:123] docker version: linux-27.3.1:Docker Engine - Community
	I0923 10:37:08.218908   62746 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0923 10:37:08.274351   62746 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:2 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:34 OomKillDisable:true NGoroutines:54 SystemTime:2024-09-23 10:37:08.264401523 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1069-gcp OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33647935488 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-2 Labels:[] ExperimentalBuild:false ServerVersion:27.3.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:7f7fdf5fed64eb6a7caf99b3e12efcf9d60e311c Expected:7f7fdf5fed64eb6a7caf99b3e12efcf9d60e311c} RuncCommit:{ID:v1.1.14-0-g2c9f560 Expected:v1.1.14-0-g2c9f560} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:[WARNING: bridge-nf-call-iptables is disabled WARNING: bridge-nf-call-ip6tables is disabled] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.17.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.29.7] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I0923 10:37:08.274490   62746 docker.go:318] overlay module found
	I0923 10:37:08.276211   62746 out.go:177] * Using the docker driver based on existing profile
	I0923 10:37:08.277404   62746 start.go:297] selected driver: docker
	I0923 10:37:08.277423   62746 start.go:901] validating driver "docker" against &{Name:functional-001676 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726784731-19672@sha256:7f8c62ddb0100a5b958dd19c5b5478b8c7ef13da9a0a4d6c7d18f43544e0dbed Memory:4000 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:functional-001676 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0923 10:37:08.277533   62746 start.go:912] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0923 10:37:08.280028   62746 out.go:201] 
	W0923 10:37:08.281549   62746 out.go:270] X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	I0923 10:37:08.282930   62746 out.go:201] 
** /stderr **
functional_test.go:991: (dbg) Run:  out/minikube-linux-amd64 start -p functional-001676 --dry-run --alsologtostderr -v=1 --driver=docker  --container-runtime=docker
--- PASS: TestFunctional/parallel/DryRun (0.34s)
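
The dry run exercises minikube's memory floor: 250MB is rejected against the 1800MB usable minimum with exit status 23. A simplified stand-in for that validation (constants taken from the error above; not minikube's own code):

	package main

	import (
		"fmt"
		"os"
	)

	func main() {
		const minUsableMB = 1800
		requestedMB := 250 // from --memory 250MB
		if requestedMB < minUsableMB {
			// Message and exit status mirror the dry-run output above.
			fmt.Fprintf(os.Stderr,
				"X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation %dMiB is less than the usable minimum of %dMB\n",
				requestedMB, minUsableMB)
			os.Exit(23)
		}
	}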

TestFunctional/parallel/InternationalLanguage (0.18s)
=== RUN   TestFunctional/parallel/InternationalLanguage
=== PAUSE TestFunctional/parallel/InternationalLanguage
=== CONT  TestFunctional/parallel/InternationalLanguage
functional_test.go:1020: (dbg) Run:  out/minikube-linux-amd64 start -p functional-001676 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=docker
functional_test.go:1020: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p functional-001676 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=docker: exit status 23 (180.720762ms)
-- stdout --
	* [functional-001676] minikube v1.34.0 sur Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=19689
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/19689-3716/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/19689-3716/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Utilisation du pilote docker basé sur le profil existant
	
	
-- /stdout --
** stderr ** 
	I0923 10:37:08.000715   62584 out.go:345] Setting OutFile to fd 1 ...
	I0923 10:37:08.000816   62584 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0923 10:37:08.000827   62584 out.go:358] Setting ErrFile to fd 2...
	I0923 10:37:08.000832   62584 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0923 10:37:08.001152   62584 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19689-3716/.minikube/bin
	I0923 10:37:08.001837   62584 out.go:352] Setting JSON to false
	I0923 10:37:08.003180   62584 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-2","uptime":1177,"bootTime":1727086651,"procs":440,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1069-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0923 10:37:08.003294   62584 start.go:139] virtualization: kvm guest
	I0923 10:37:08.005785   62584 out.go:177] * [functional-001676] minikube v1.34.0 sur Ubuntu 20.04 (kvm/amd64)
	I0923 10:37:08.007365   62584 out.go:177]   - MINIKUBE_LOCATION=19689
	I0923 10:37:08.007373   62584 notify.go:220] Checking for updates...
	I0923 10:37:08.009948   62584 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0923 10:37:08.011315   62584 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19689-3716/kubeconfig
	I0923 10:37:08.012610   62584 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19689-3716/.minikube
	I0923 10:37:08.014003   62584 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0923 10:37:08.015265   62584 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0923 10:37:08.017362   62584 config.go:182] Loaded profile config "functional-001676": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I0923 10:37:08.018137   62584 driver.go:394] Setting default libvirt URI to qemu:///system
	I0923 10:37:08.049582   62584 docker.go:123] docker version: linux-27.3.1:Docker Engine - Community
	I0923 10:37:08.049688   62584 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0923 10:37:08.122284   62584 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:2 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:33 OomKillDisable:true NGoroutines:53 SystemTime:2024-09-23 10:37:08.111222496 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1069-gcp OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33647935488 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-2 Labels:[] ExperimentalBuild:false ServerVersion:27.3.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:7f7fdf5fed64eb6a7caf99b3e12efcf9d60e311c Expected:7f7fdf5fed64eb6a7caf99b3e12efcf9d60e311c} RuncCommit:{ID:v1.1.14-0-g2c9f560 Expected:v1.1.14-0-g2c9f560} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:[WARNING: bridge-nf-call-iptables is disabled WARNING: bridge-nf-call-ip6tables is disabled] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.17.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.29.7] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I0923 10:37:08.122382   62584 docker.go:318] overlay module found
	I0923 10:37:08.124003   62584 out.go:177] * Utilisation du pilote docker basé sur le profil existant
	I0923 10:37:08.125332   62584 start.go:297] selected driver: docker
	I0923 10:37:08.125347   62584 start.go:901] validating driver "docker" against &{Name:functional-001676 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726784731-19672@sha256:7f8c62ddb0100a5b958dd19c5b5478b8c7ef13da9a0a4d6c7d18f43544e0dbed Memory:4000 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:functional-001676 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0923 10:37:08.125421   62584 start.go:912] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0923 10:37:08.127605   62584 out.go:201] 
	W0923 10:37:08.128931   62584 out.go:270] X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	I0923 10:37:08.130312   62584 out.go:201] 
** /stderr **
--- PASS: TestFunctional/parallel/InternationalLanguage (0.18s)

TestFunctional/parallel/StatusCmd (0.91s)
=== RUN   TestFunctional/parallel/StatusCmd
=== PAUSE TestFunctional/parallel/StatusCmd
=== CONT  TestFunctional/parallel/StatusCmd
functional_test.go:854: (dbg) Run:  out/minikube-linux-amd64 -p functional-001676 status
functional_test.go:860: (dbg) Run:  out/minikube-linux-amd64 -p functional-001676 status -f host:{{.Host}},kublet:{{.Kubelet}},apiserver:{{.APIServer}},kubeconfig:{{.Kubeconfig}}
functional_test.go:872: (dbg) Run:  out/minikube-linux-amd64 -p functional-001676 status -o json
--- PASS: TestFunctional/parallel/StatusCmd (0.91s)

TestFunctional/parallel/ServiceCmdConnect (11.48s)
=== RUN   TestFunctional/parallel/ServiceCmdConnect
=== PAUSE TestFunctional/parallel/ServiceCmdConnect
=== CONT  TestFunctional/parallel/ServiceCmdConnect
functional_test.go:1629: (dbg) Run:  kubectl --context functional-001676 create deployment hello-node-connect --image=registry.k8s.io/echoserver:1.8
functional_test.go:1635: (dbg) Run:  kubectl --context functional-001676 expose deployment hello-node-connect --type=NodePort --port=8080
functional_test.go:1640: (dbg) TestFunctional/parallel/ServiceCmdConnect: waiting 10m0s for pods matching "app=hello-node-connect" in namespace "default" ...
helpers_test.go:344: "hello-node-connect-67bdd5bbb4-g5bnv" [bf766e62-2119-44dd-b124-1c41e105e9ec] Pending / Ready:ContainersNotReady (containers with unready status: [echoserver]) / ContainersReady:ContainersNotReady (containers with unready status: [echoserver])
helpers_test.go:344: "hello-node-connect-67bdd5bbb4-g5bnv" [bf766e62-2119-44dd-b124-1c41e105e9ec] Running
functional_test.go:1640: (dbg) TestFunctional/parallel/ServiceCmdConnect: app=hello-node-connect healthy within 11.00431338s
functional_test.go:1649: (dbg) Run:  out/minikube-linux-amd64 -p functional-001676 service hello-node-connect --url
functional_test.go:1655: found endpoint for hello-node-connect: http://192.168.49.2:31656
functional_test.go:1675: http://192.168.49.2:31656: success! body:

Hostname: hello-node-connect-67bdd5bbb4-g5bnv

Pod Information:
	-no pod information available-

Server values:
	server_version=nginx: 1.13.3 - lua: 10008

Request Information:
	client_address=10.244.0.1
	method=GET
	real path=/
	query=
	request_version=1.1
	request_uri=http://192.168.49.2:8080/

Request Headers:
	accept-encoding=gzip
	host=192.168.49.2:31656
	user-agent=Go-http-client/1.1

Request Body:
	-no body in request-

--- PASS: TestFunctional/parallel/ServiceCmdConnect (11.48s)
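
The success body above is the stock echoserver response fetched from the NodePort URL. A sketch of the fetch-and-verify step (the URL is the ephemeral endpoint from this run and is only valid while that cluster exists):

	package main

	import (
		"io"
		"log"
		"net/http"
		"strings"
	)

	func main() {
		// Endpoint printed by `service hello-node-connect --url` in this run.
		resp, err := http.Get("http://192.168.49.2:31656")
		if err != nil {
			log.Fatal(err)
		}
		defer resp.Body.Close()
		body, err := io.ReadAll(resp.Body)
		if err != nil {
			log.Fatal(err)
		}
		// The echoserver reports the serving pod's hostname in its body.
		if !strings.Contains(string(body), "Hostname: hello-node-connect") {
			log.Fatalf("unexpected echoserver body: %s", body)
		}
	}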

TestFunctional/parallel/AddonsCmd (0.13s)
=== RUN   TestFunctional/parallel/AddonsCmd
=== PAUSE TestFunctional/parallel/AddonsCmd
=== CONT  TestFunctional/parallel/AddonsCmd
functional_test.go:1690: (dbg) Run:  out/minikube-linux-amd64 -p functional-001676 addons list
functional_test.go:1702: (dbg) Run:  out/minikube-linux-amd64 -p functional-001676 addons list -o json
--- PASS: TestFunctional/parallel/AddonsCmd (0.13s)

TestFunctional/parallel/PersistentVolumeClaim (30.85s)
=== RUN   TestFunctional/parallel/PersistentVolumeClaim
=== PAUSE TestFunctional/parallel/PersistentVolumeClaim
=== CONT  TestFunctional/parallel/PersistentVolumeClaim
functional_test_pvc_test.go:44: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 4m0s for pods matching "integration-test=storage-provisioner" in namespace "kube-system" ...
helpers_test.go:344: "storage-provisioner" [33d86e38-f1aa-4e99-b44b-bc909ff3a2cd] Running
functional_test_pvc_test.go:44: (dbg) TestFunctional/parallel/PersistentVolumeClaim: integration-test=storage-provisioner healthy within 5.082432585s
functional_test_pvc_test.go:49: (dbg) Run:  kubectl --context functional-001676 get storageclass -o=json
functional_test_pvc_test.go:69: (dbg) Run:  kubectl --context functional-001676 apply -f testdata/storage-provisioner/pvc.yaml
functional_test_pvc_test.go:76: (dbg) Run:  kubectl --context functional-001676 get pvc myclaim -o=json
functional_test_pvc_test.go:125: (dbg) Run:  kubectl --context functional-001676 apply -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 3m0s for pods matching "test=storage-provisioner" in namespace "default" ...
helpers_test.go:344: "sp-pod" [a4c941eb-ebf0-4b92-80d2-d0a051b9c280] Pending
helpers_test.go:344: "sp-pod" [a4c941eb-ebf0-4b92-80d2-d0a051b9c280] Pending / Ready:ContainersNotReady (containers with unready status: [myfrontend]) / ContainersReady:ContainersNotReady (containers with unready status: [myfrontend])
helpers_test.go:344: "sp-pod" [a4c941eb-ebf0-4b92-80d2-d0a051b9c280] Running
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: test=storage-provisioner healthy within 11.004135735s
functional_test_pvc_test.go:100: (dbg) Run:  kubectl --context functional-001676 exec sp-pod -- touch /tmp/mount/foo
functional_test_pvc_test.go:106: (dbg) Run:  kubectl --context functional-001676 delete -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:125: (dbg) Run:  kubectl --context functional-001676 apply -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 3m0s for pods matching "test=storage-provisioner" in namespace "default" ...
helpers_test.go:344: "sp-pod" [6153546e-fde9-43bb-8f7d-b5de668ebc72] Pending
helpers_test.go:344: "sp-pod" [6153546e-fde9-43bb-8f7d-b5de668ebc72] Pending / Ready:ContainersNotReady (containers with unready status: [myfrontend]) / ContainersReady:ContainersNotReady (containers with unready status: [myfrontend])
helpers_test.go:344: "sp-pod" [6153546e-fde9-43bb-8f7d-b5de668ebc72] Running
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: test=storage-provisioner healthy within 13.003902039s
functional_test_pvc_test.go:114: (dbg) Run:  kubectl --context functional-001676 exec sp-pod -- ls /tmp/mount
--- PASS: TestFunctional/parallel/PersistentVolumeClaim (30.85s)
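
The persistence check above drives everything by shelling out to kubectl, as these functional tests do throughout; a minimal sketch of that exec pattern with the same context and pod name (a hypothetical helper, not the harness's actual code):

package main

import (
	"fmt"
	"os/exec"
)

// kubectlExec runs a command inside a pod, mirroring the
// "kubectl --context functional-001676 exec sp-pod -- ..." calls above.
func kubectlExec(kctx, pod string, args ...string) (string, error) {
	base := []string{"--context", kctx, "exec", pod, "--"}
	out, err := exec.Command("kubectl", append(base, args...)...).CombinedOutput()
	return string(out), err
}

func main() {
	// Write a marker file on the PVC-backed mount, then list it back;
	// the file should survive a pod delete/recreate, which is the point
	// of the second "apply -f testdata/storage-provisioner/pod.yaml".
	if _, err := kubectlExec("functional-001676", "sp-pod", "touch", "/tmp/mount/foo"); err != nil {
		panic(err)
	}
	out, err := kubectlExec("functional-001676", "sp-pod", "ls", "/tmp/mount")
	fmt.Println(out, err)
}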

TestFunctional/parallel/SSHCmd (0.64s)

=== RUN   TestFunctional/parallel/SSHCmd
=== PAUSE TestFunctional/parallel/SSHCmd

=== CONT  TestFunctional/parallel/SSHCmd
functional_test.go:1725: (dbg) Run:  out/minikube-linux-amd64 -p functional-001676 ssh "echo hello"
functional_test.go:1742: (dbg) Run:  out/minikube-linux-amd64 -p functional-001676 ssh "cat /etc/hostname"
--- PASS: TestFunctional/parallel/SSHCmd (0.64s)

TestFunctional/parallel/CpCmd (1.7s)

=== RUN   TestFunctional/parallel/CpCmd
=== PAUSE TestFunctional/parallel/CpCmd

=== CONT  TestFunctional/parallel/CpCmd
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p functional-001676 cp testdata/cp-test.txt /home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p functional-001676 ssh -n functional-001676 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p functional-001676 cp functional-001676:/home/docker/cp-test.txt /tmp/TestFunctionalparallelCpCmd324710965/001/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p functional-001676 ssh -n functional-001676 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p functional-001676 cp testdata/cp-test.txt /tmp/does/not/exist/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p functional-001676 ssh -n functional-001676 "sudo cat /tmp/does/not/exist/cp-test.txt"
--- PASS: TestFunctional/parallel/CpCmd (1.70s)

TestFunctional/parallel/MySQL (23.31s)

=== RUN   TestFunctional/parallel/MySQL
=== PAUSE TestFunctional/parallel/MySQL

=== CONT  TestFunctional/parallel/MySQL
functional_test.go:1793: (dbg) Run:  kubectl --context functional-001676 replace --force -f testdata/mysql.yaml
functional_test.go:1799: (dbg) TestFunctional/parallel/MySQL: waiting 10m0s for pods matching "app=mysql" in namespace "default" ...
helpers_test.go:344: "mysql-6cdb49bbb-qd8lt" [a19bc509-6d51-46fb-ba73-bc7dbcfeae55] Pending / Ready:ContainersNotReady (containers with unready status: [mysql]) / ContainersReady:ContainersNotReady (containers with unready status: [mysql])
helpers_test.go:344: "mysql-6cdb49bbb-qd8lt" [a19bc509-6d51-46fb-ba73-bc7dbcfeae55] Running
functional_test.go:1799: (dbg) TestFunctional/parallel/MySQL: app=mysql healthy within 20.00366085s
functional_test.go:1807: (dbg) Run:  kubectl --context functional-001676 exec mysql-6cdb49bbb-qd8lt -- mysql -ppassword -e "show databases;"
functional_test.go:1807: (dbg) Non-zero exit: kubectl --context functional-001676 exec mysql-6cdb49bbb-qd8lt -- mysql -ppassword -e "show databases;": exit status 1 (110.721864ms)

** stderr ** 
	mysql: [Warning] Using a password on the command line interface can be insecure.
	ERROR 2002 (HY000): Can't connect to local MySQL server through socket '/var/run/mysqld/mysqld.sock' (2)
	command terminated with exit code 1

** /stderr **
I0923 10:37:35.902070   10524 retry.go:31] will retry after 988.734198ms: exit status 1
functional_test.go:1807: (dbg) Run:  kubectl --context functional-001676 exec mysql-6cdb49bbb-qd8lt -- mysql -ppassword -e "show databases;"
functional_test.go:1807: (dbg) Non-zero exit: kubectl --context functional-001676 exec mysql-6cdb49bbb-qd8lt -- mysql -ppassword -e "show databases;": exit status 1 (100.255756ms)

** stderr ** 
	mysql: [Warning] Using a password on the command line interface can be insecure.
	ERROR 2002 (HY000): Can't connect to local MySQL server through socket '/var/run/mysqld/mysqld.sock' (2)
	command terminated with exit code 1

** /stderr **
I0923 10:37:36.992258   10524 retry.go:31] will retry after 1.817856233s: exit status 1
functional_test.go:1807: (dbg) Run:  kubectl --context functional-001676 exec mysql-6cdb49bbb-qd8lt -- mysql -ppassword -e "show databases;"
--- PASS: TestFunctional/parallel/MySQL (23.31s)
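
The retry.go:31 lines show the harness polling until mysqld inside the pod starts accepting connections, waiting a little longer after each failure; a minimal sketch of that retry-with-backoff shape (a hypothetical helper, not minikube's actual retry package):

package main

import (
	"errors"
	"fmt"
	"math/rand"
	"time"
)

// retryWithBackoff retries fn with a randomized, growing delay, which is
// the shape of the "will retry after 988.734198ms" lines in the log.
func retryWithBackoff(attempts int, base time.Duration, fn func() error) error {
	var err error
	for i := 0; i < attempts; i++ {
		if err = fn(); err == nil {
			return nil
		}
		delay := base*time.Duration(1<<i) + time.Duration(rand.Int63n(int64(base)))
		fmt.Printf("will retry after %v: %v\n", delay, err)
		time.Sleep(delay)
	}
	return err
}

func main() {
	calls := 0
	err := retryWithBackoff(5, time.Second, func() error {
		calls++
		if calls < 3 {
			// Stand-in for the ERROR 2002 seen above while mysqld boots.
			return errors.New("can't connect to local MySQL server yet")
		}
		return nil
	})
	fmt.Println("done:", err)
}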

TestFunctional/parallel/FileSync (0.25s)

=== RUN   TestFunctional/parallel/FileSync
=== PAUSE TestFunctional/parallel/FileSync

=== CONT  TestFunctional/parallel/FileSync
functional_test.go:1929: Checking for existence of /etc/test/nested/copy/10524/hosts within VM
functional_test.go:1931: (dbg) Run:  out/minikube-linux-amd64 -p functional-001676 ssh "sudo cat /etc/test/nested/copy/10524/hosts"
functional_test.go:1936: file sync test content: Test file for checking file sync process
--- PASS: TestFunctional/parallel/FileSync (0.25s)

TestFunctional/parallel/CertSync (1.75s)

=== RUN   TestFunctional/parallel/CertSync
=== PAUSE TestFunctional/parallel/CertSync

=== CONT  TestFunctional/parallel/CertSync
functional_test.go:1972: Checking for existence of /etc/ssl/certs/10524.pem within VM
functional_test.go:1973: (dbg) Run:  out/minikube-linux-amd64 -p functional-001676 ssh "sudo cat /etc/ssl/certs/10524.pem"
functional_test.go:1972: Checking for existence of /usr/share/ca-certificates/10524.pem within VM
functional_test.go:1973: (dbg) Run:  out/minikube-linux-amd64 -p functional-001676 ssh "sudo cat /usr/share/ca-certificates/10524.pem"
functional_test.go:1972: Checking for existence of /etc/ssl/certs/51391683.0 within VM
functional_test.go:1973: (dbg) Run:  out/minikube-linux-amd64 -p functional-001676 ssh "sudo cat /etc/ssl/certs/51391683.0"
functional_test.go:1999: Checking for existence of /etc/ssl/certs/105242.pem within VM
functional_test.go:2000: (dbg) Run:  out/minikube-linux-amd64 -p functional-001676 ssh "sudo cat /etc/ssl/certs/105242.pem"
functional_test.go:1999: Checking for existence of /usr/share/ca-certificates/105242.pem within VM
functional_test.go:2000: (dbg) Run:  out/minikube-linux-amd64 -p functional-001676 ssh "sudo cat /usr/share/ca-certificates/105242.pem"
functional_test.go:1999: Checking for existence of /etc/ssl/certs/3ec20f2e.0 within VM
functional_test.go:2000: (dbg) Run:  out/minikube-linux-amd64 -p functional-001676 ssh "sudo cat /etc/ssl/certs/3ec20f2e.0"
--- PASS: TestFunctional/parallel/CertSync (1.75s)
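
The .pem paths above are test certificates synced into the VM, and the .0 names look like OpenSSL-style subject-hash aliases for the same files; a minimal Go sketch that checks such a file is a parseable certificate (path reused from the log; any PEM certificate works):

package main

import (
	"crypto/x509"
	"encoding/pem"
	"fmt"
	"os"
)

func main() {
	// One of the files the test cats over SSH above.
	data, err := os.ReadFile("/etc/ssl/certs/10524.pem")
	if err != nil {
		panic(err)
	}
	block, _ := pem.Decode(data)
	if block == nil || block.Type != "CERTIFICATE" {
		panic("not a PEM certificate")
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		panic(err)
	}
	fmt.Println("subject:", cert.Subject)
}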

TestFunctional/parallel/NodeLabels (0.07s)

=== RUN   TestFunctional/parallel/NodeLabels
=== PAUSE TestFunctional/parallel/NodeLabels

=== CONT  TestFunctional/parallel/NodeLabels
functional_test.go:219: (dbg) Run:  kubectl --context functional-001676 get nodes --output=go-template "--template='{{range $k, $v := (index .items 0).metadata.labels}}{{$k}} {{end}}'"
--- PASS: TestFunctional/parallel/NodeLabels (0.07s)
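
The go-template in that kubectl call ranges over the node's label map and prints the keys; the same template body can be exercised locally with text/template (the sample labels here are made up, a real node carries many more):

package main

import (
	"os"
	"text/template"
)

func main() {
	// Same template body as the kubectl call above, applied to a plain
	// map instead of a live node object.
	const tmpl = `{{range $k, $v := .}}{{$k}} {{end}}`
	labels := map[string]string{
		"kubernetes.io/hostname": "functional-001676",
		"kubernetes.io/os":       "linux",
	}
	t := template.Must(template.New("labels").Parse(tmpl))
	if err := t.Execute(os.Stdout, labels); err != nil {
		panic(err)
	}
}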

TestFunctional/parallel/NonActiveRuntimeDisabled (0.24s)

=== RUN   TestFunctional/parallel/NonActiveRuntimeDisabled
=== PAUSE TestFunctional/parallel/NonActiveRuntimeDisabled

=== CONT  TestFunctional/parallel/NonActiveRuntimeDisabled
functional_test.go:2027: (dbg) Run:  out/minikube-linux-amd64 -p functional-001676 ssh "sudo systemctl is-active crio"
functional_test.go:2027: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-001676 ssh "sudo systemctl is-active crio": exit status 1 (241.345336ms)

-- stdout --
	inactive

-- /stdout --
** stderr ** 
	ssh: Process exited with status 3

** /stderr **
--- PASS: TestFunctional/parallel/NonActiveRuntimeDisabled (0.24s)
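
systemctl is-active exits 0 only when the unit is active, so the exit status 3 with "inactive" on stdout is exactly what this test expects from the disabled runtime; a short Go sketch of reading both the state string and the exit code:

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

func main() {
	// "systemctl is-active" prints the unit state; a non-active unit
	// yields a non-zero exit (status 3 in the log above), which Go
	// surfaces as *exec.ExitError while still capturing stdout.
	out, err := exec.Command("systemctl", "is-active", "crio").Output()
	state := strings.TrimSpace(string(out))
	if ee, ok := err.(*exec.ExitError); ok {
		fmt.Printf("state=%q exit=%d\n", state, ee.ExitCode())
		return
	}
	if err != nil {
		panic(err) // systemctl itself failed to run
	}
	fmt.Printf("state=%q (unit is active)\n", state)
}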

TestFunctional/parallel/License (0.27s)

=== RUN   TestFunctional/parallel/License
=== PAUSE TestFunctional/parallel/License

=== CONT  TestFunctional/parallel/License
functional_test.go:2288: (dbg) Run:  out/minikube-linux-amd64 license
--- PASS: TestFunctional/parallel/License (0.27s)

TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel (0.44s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel
functional_test_tunnel_test.go:154: (dbg) daemon: [out/minikube-linux-amd64 -p functional-001676 tunnel --alsologtostderr]
functional_test_tunnel_test.go:154: (dbg) daemon: [out/minikube-linux-amd64 -p functional-001676 tunnel --alsologtostderr]
functional_test_tunnel_test.go:194: (dbg) stopping [out/minikube-linux-amd64 -p functional-001676 tunnel --alsologtostderr] ...
helpers_test.go:508: unable to kill pid 58734: os: process already finished
functional_test_tunnel_test.go:194: (dbg) stopping [out/minikube-linux-amd64 -p functional-001676 tunnel --alsologtostderr] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
--- PASS: TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel (0.44s)

TestFunctional/parallel/TunnelCmd/serial/StartTunnel (0s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/StartTunnel
functional_test_tunnel_test.go:129: (dbg) daemon: [out/minikube-linux-amd64 -p functional-001676 tunnel --alsologtostderr]
--- PASS: TestFunctional/parallel/TunnelCmd/serial/StartTunnel (0.00s)

TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup (10.22s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup
functional_test_tunnel_test.go:212: (dbg) Run:  kubectl --context functional-001676 apply -f testdata/testsvc.yaml
functional_test_tunnel_test.go:216: (dbg) TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup: waiting 4m0s for pods matching "run=nginx-svc" in namespace "default" ...
helpers_test.go:344: "nginx-svc" [28eae327-e907-4a74-a1ab-55d6cc4b3d6a] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
helpers_test.go:344: "nginx-svc" [28eae327-e907-4a74-a1ab-55d6cc4b3d6a] Running
functional_test_tunnel_test.go:216: (dbg) TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup: run=nginx-svc healthy within 10.003959896s
I0923 10:37:04.817807   10524 kapi.go:150] Service nginx-svc in namespace default found.
--- PASS: TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup (10.22s)

TestFunctional/parallel/ServiceCmd/DeployApp (11.15s)

=== RUN   TestFunctional/parallel/ServiceCmd/DeployApp
functional_test.go:1439: (dbg) Run:  kubectl --context functional-001676 create deployment hello-node --image=registry.k8s.io/echoserver:1.8
functional_test.go:1445: (dbg) Run:  kubectl --context functional-001676 expose deployment hello-node --type=NodePort --port=8080
functional_test.go:1450: (dbg) TestFunctional/parallel/ServiceCmd/DeployApp: waiting 10m0s for pods matching "app=hello-node" in namespace "default" ...
helpers_test.go:344: "hello-node-6b9f76b5c7-4vf56" [f685680a-d3db-4024-9f65-32e77b939618] Pending / Ready:ContainersNotReady (containers with unready status: [echoserver]) / ContainersReady:ContainersNotReady (containers with unready status: [echoserver])
helpers_test.go:344: "hello-node-6b9f76b5c7-4vf56" [f685680a-d3db-4024-9f65-32e77b939618] Running
functional_test.go:1450: (dbg) TestFunctional/parallel/ServiceCmd/DeployApp: app=hello-node healthy within 11.003508671s
--- PASS: TestFunctional/parallel/ServiceCmd/DeployApp (11.15s)

TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP (0.06s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP
functional_test_tunnel_test.go:234: (dbg) Run:  kubectl --context functional-001676 get svc nginx-svc -o jsonpath={.status.loadBalancer.ingress[0].ip}
--- PASS: TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP (0.06s)

TestFunctional/parallel/TunnelCmd/serial/AccessDirect (0s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/AccessDirect
functional_test_tunnel_test.go:299: tunnel at http://10.102.230.194 is working!
--- PASS: TestFunctional/parallel/TunnelCmd/serial/AccessDirect (0.00s)

TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel (0.11s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel
functional_test_tunnel_test.go:434: (dbg) stopping [out/minikube-linux-amd64 -p functional-001676 tunnel --alsologtostderr] ...
--- PASS: TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel (0.11s)

TestFunctional/parallel/ProfileCmd/profile_not_create (0.38s)

=== RUN   TestFunctional/parallel/ProfileCmd/profile_not_create
functional_test.go:1270: (dbg) Run:  out/minikube-linux-amd64 profile lis
functional_test.go:1275: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestFunctional/parallel/ProfileCmd/profile_not_create (0.38s)

TestFunctional/parallel/ProfileCmd/profile_list (0.35s)

=== RUN   TestFunctional/parallel/ProfileCmd/profile_list
functional_test.go:1310: (dbg) Run:  out/minikube-linux-amd64 profile list
functional_test.go:1315: Took "305.124953ms" to run "out/minikube-linux-amd64 profile list"
functional_test.go:1324: (dbg) Run:  out/minikube-linux-amd64 profile list -l
functional_test.go:1329: Took "43.18019ms" to run "out/minikube-linux-amd64 profile list -l"
--- PASS: TestFunctional/parallel/ProfileCmd/profile_list (0.35s)

TestFunctional/parallel/ProfileCmd/profile_json_output (0.34s)

=== RUN   TestFunctional/parallel/ProfileCmd/profile_json_output
functional_test.go:1361: (dbg) Run:  out/minikube-linux-amd64 profile list -o json
functional_test.go:1366: Took "299.31803ms" to run "out/minikube-linux-amd64 profile list -o json"
functional_test.go:1374: (dbg) Run:  out/minikube-linux-amd64 profile list -o json --light
functional_test.go:1379: Took "44.364461ms" to run "out/minikube-linux-amd64 profile list -o json --light"
--- PASS: TestFunctional/parallel/ProfileCmd/profile_json_output (0.34s)

TestFunctional/parallel/MountCmd/any-port (6.54s)

=== RUN   TestFunctional/parallel/MountCmd/any-port
functional_test_mount_test.go:73: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-001676 /tmp/TestFunctionalparallelMountCmdany-port1227984276/001:/mount-9p --alsologtostderr -v=1]
functional_test_mount_test.go:107: wrote "test-1727087826053901626" to /tmp/TestFunctionalparallelMountCmdany-port1227984276/001/created-by-test
functional_test_mount_test.go:107: wrote "test-1727087826053901626" to /tmp/TestFunctionalparallelMountCmdany-port1227984276/001/created-by-test-removed-by-pod
functional_test_mount_test.go:107: wrote "test-1727087826053901626" to /tmp/TestFunctionalparallelMountCmdany-port1227984276/001/test-1727087826053901626
functional_test_mount_test.go:115: (dbg) Run:  out/minikube-linux-amd64 -p functional-001676 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:115: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-001676 ssh "findmnt -T /mount-9p | grep 9p": exit status 1 (245.925915ms)

** stderr ** 
	ssh: Process exited with status 1

** /stderr **
I0923 10:37:06.300149   10524 retry.go:31] will retry after 330.479917ms: exit status 1
functional_test_mount_test.go:115: (dbg) Run:  out/minikube-linux-amd64 -p functional-001676 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:129: (dbg) Run:  out/minikube-linux-amd64 -p functional-001676 ssh -- ls -la /mount-9p
functional_test_mount_test.go:133: guest mount directory contents
total 2
-rw-r--r-- 1 docker docker 24 Sep 23 10:37 created-by-test
-rw-r--r-- 1 docker docker 24 Sep 23 10:37 created-by-test-removed-by-pod
-rw-r--r-- 1 docker docker 24 Sep 23 10:37 test-1727087826053901626
functional_test_mount_test.go:137: (dbg) Run:  out/minikube-linux-amd64 -p functional-001676 ssh cat /mount-9p/test-1727087826053901626
functional_test_mount_test.go:148: (dbg) Run:  kubectl --context functional-001676 replace --force -f testdata/busybox-mount-test.yaml
functional_test_mount_test.go:153: (dbg) TestFunctional/parallel/MountCmd/any-port: waiting 4m0s for pods matching "integration-test=busybox-mount" in namespace "default" ...
helpers_test.go:344: "busybox-mount" [a55bce5f-4c0a-4008-a024-bb7c8b27a6e2] Pending
helpers_test.go:344: "busybox-mount" [a55bce5f-4c0a-4008-a024-bb7c8b27a6e2] Pending / Ready:ContainersNotReady (containers with unready status: [mount-munger]) / ContainersReady:ContainersNotReady (containers with unready status: [mount-munger])
helpers_test.go:344: "busybox-mount" [a55bce5f-4c0a-4008-a024-bb7c8b27a6e2] Pending / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
helpers_test.go:344: "busybox-mount" [a55bce5f-4c0a-4008-a024-bb7c8b27a6e2] Succeeded / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
functional_test_mount_test.go:153: (dbg) TestFunctional/parallel/MountCmd/any-port: integration-test=busybox-mount healthy within 4.003804909s
functional_test_mount_test.go:169: (dbg) Run:  kubectl --context functional-001676 logs busybox-mount
functional_test_mount_test.go:181: (dbg) Run:  out/minikube-linux-amd64 -p functional-001676 ssh stat /mount-9p/created-by-test
functional_test_mount_test.go:181: (dbg) Run:  out/minikube-linux-amd64 -p functional-001676 ssh stat /mount-9p/created-by-pod
functional_test_mount_test.go:90: (dbg) Run:  out/minikube-linux-amd64 -p functional-001676 ssh "sudo umount -f /mount-9p"
functional_test_mount_test.go:94: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-001676 /tmp/TestFunctionalparallelMountCmdany-port1227984276/001:/mount-9p --alsologtostderr -v=1] ...
--- PASS: TestFunctional/parallel/MountCmd/any-port (6.54s)
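
The mount test starts minikube mount as a long-lived daemon and then polls findmnt over SSH until the 9p filesystem shows up, which is why the first findmnt attempt above fails and is retried; a sketch of that start-then-poll shape (hypothetical host path, and the generic minikube binary in place of the CI build at out/minikube-linux-amd64):

package main

import (
	"fmt"
	"os/exec"
	"time"
)

func main() {
	// Start the mount daemon; it keeps running until killed, like the
	// "(dbg) daemon:" lines in the log.
	mount := exec.Command("minikube", "mount", "-p", "functional-001676",
		"/tmp/mount-src:/mount-9p")
	if err := mount.Start(); err != nil {
		panic(err)
	}
	defer mount.Process.Kill()

	// Poll until the 9p mount appears inside the guest.
	for i := 0; i < 10; i++ {
		out, err := exec.Command("minikube", "-p", "functional-001676",
			"ssh", "findmnt -T /mount-9p").CombinedOutput()
		if err == nil {
			fmt.Print(string(out))
			return
		}
		time.Sleep(500 * time.Millisecond)
	}
	panic("mount never appeared")
}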

TestFunctional/parallel/ServiceCmd/List (0.49s)

=== RUN   TestFunctional/parallel/ServiceCmd/List
functional_test.go:1459: (dbg) Run:  out/minikube-linux-amd64 -p functional-001676 service list
--- PASS: TestFunctional/parallel/ServiceCmd/List (0.49s)

TestFunctional/parallel/ServiceCmd/JSONOutput (0.49s)

=== RUN   TestFunctional/parallel/ServiceCmd/JSONOutput
functional_test.go:1489: (dbg) Run:  out/minikube-linux-amd64 -p functional-001676 service list -o json
functional_test.go:1494: Took "492.671173ms" to run "out/minikube-linux-amd64 -p functional-001676 service list -o json"
--- PASS: TestFunctional/parallel/ServiceCmd/JSONOutput (0.49s)

TestFunctional/parallel/ServiceCmd/HTTPS (0.36s)

=== RUN   TestFunctional/parallel/ServiceCmd/HTTPS
functional_test.go:1509: (dbg) Run:  out/minikube-linux-amd64 -p functional-001676 service --namespace=default --https --url hello-node
functional_test.go:1522: found endpoint: https://192.168.49.2:32759
--- PASS: TestFunctional/parallel/ServiceCmd/HTTPS (0.36s)

TestFunctional/parallel/ServiceCmd/Format (0.35s)

=== RUN   TestFunctional/parallel/ServiceCmd/Format
functional_test.go:1540: (dbg) Run:  out/minikube-linux-amd64 -p functional-001676 service hello-node --url --format={{.IP}}
--- PASS: TestFunctional/parallel/ServiceCmd/Format (0.35s)

TestFunctional/parallel/ServiceCmd/URL (0.33s)

=== RUN   TestFunctional/parallel/ServiceCmd/URL
functional_test.go:1559: (dbg) Run:  out/minikube-linux-amd64 -p functional-001676 service hello-node --url
functional_test.go:1565: found endpoint for hello-node: http://192.168.49.2:32759
--- PASS: TestFunctional/parallel/ServiceCmd/URL (0.33s)

TestFunctional/parallel/DockerEnv/bash (1.19s)

=== RUN   TestFunctional/parallel/DockerEnv/bash
functional_test.go:499: (dbg) Run:  /bin/bash -c "eval $(out/minikube-linux-amd64 -p functional-001676 docker-env) && out/minikube-linux-amd64 status -p functional-001676"
functional_test.go:522: (dbg) Run:  /bin/bash -c "eval $(out/minikube-linux-amd64 -p functional-001676 docker-env) && docker images"
--- PASS: TestFunctional/parallel/DockerEnv/bash (1.19s)
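
docker-env prints shell exports that point the local docker CLI at the cluster's Docker daemon, which is what the eval in the bash invocation above does; the same effect from Go, with placeholder values since the log does not show the actual docker-env output:

package main

import (
	"fmt"
	"os"
	"os/exec"
)

func main() {
	cmd := exec.Command("docker", "images")
	// Placeholder values: substitute whatever `minikube docker-env`
	// actually prints for your cluster.
	cmd.Env = append(os.Environ(),
		"DOCKER_TLS_VERIFY=1",
		"DOCKER_HOST=tcp://192.168.49.2:2376",
		"DOCKER_CERT_PATH=/home/user/.minikube/certs",
	)
	out, err := cmd.CombinedOutput()
	fmt.Println(string(out))
	if err != nil {
		panic(err)
	}
}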

TestFunctional/parallel/Version/short (0.05s)

=== RUN   TestFunctional/parallel/Version/short
=== PAUSE TestFunctional/parallel/Version/short

=== CONT  TestFunctional/parallel/Version/short
functional_test.go:2256: (dbg) Run:  out/minikube-linux-amd64 -p functional-001676 version --short
--- PASS: TestFunctional/parallel/Version/short (0.05s)

TestFunctional/parallel/Version/components (0.44s)

=== RUN   TestFunctional/parallel/Version/components
=== PAUSE TestFunctional/parallel/Version/components

=== CONT  TestFunctional/parallel/Version/components
functional_test.go:2270: (dbg) Run:  out/minikube-linux-amd64 -p functional-001676 version -o=json --components
--- PASS: TestFunctional/parallel/Version/components (0.44s)

TestFunctional/parallel/ImageCommands/ImageListShort (0.19s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageListShort
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListShort

=== CONT  TestFunctional/parallel/ImageCommands/ImageListShort
functional_test.go:261: (dbg) Run:  out/minikube-linux-amd64 -p functional-001676 image ls --format short --alsologtostderr
functional_test.go:266: (dbg) Stdout: out/minikube-linux-amd64 -p functional-001676 image ls --format short --alsologtostderr:
registry.k8s.io/pause:latest
registry.k8s.io/pause:3.3
registry.k8s.io/pause:3.10
registry.k8s.io/pause:3.1
registry.k8s.io/kube-scheduler:v1.31.1
registry.k8s.io/kube-proxy:v1.31.1
registry.k8s.io/kube-controller-manager:v1.31.1
registry.k8s.io/kube-apiserver:v1.31.1
registry.k8s.io/etcd:3.5.15-0
registry.k8s.io/echoserver:1.8
registry.k8s.io/coredns/coredns:v1.11.3
gcr.io/k8s-minikube/storage-provisioner:v5
gcr.io/k8s-minikube/busybox:1.28.4-glibc
docker.io/library/nginx:latest
docker.io/library/nginx:alpine
docker.io/library/minikube-local-cache-test:functional-001676
docker.io/kubernetesui/metrics-scraper:<none>
docker.io/kubernetesui/dashboard:<none>
docker.io/kicbase/echo-server:functional-001676
functional_test.go:269: (dbg) Stderr: out/minikube-linux-amd64 -p functional-001676 image ls --format short --alsologtostderr:
I0923 10:37:18.620759   67187 out.go:345] Setting OutFile to fd 1 ...
I0923 10:37:18.621010   67187 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0923 10:37:18.621019   67187 out.go:358] Setting ErrFile to fd 2...
I0923 10:37:18.621022   67187 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0923 10:37:18.621195   67187 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19689-3716/.minikube/bin
I0923 10:37:18.621764   67187 config.go:182] Loaded profile config "functional-001676": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.31.1
I0923 10:37:18.621853   67187 config.go:182] Loaded profile config "functional-001676": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.31.1
I0923 10:37:18.622222   67187 cli_runner.go:164] Run: docker container inspect functional-001676 --format={{.State.Status}}
I0923 10:37:18.638594   67187 ssh_runner.go:195] Run: systemctl --version
I0923 10:37:18.638640   67187 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-001676
I0923 10:37:18.655541   67187 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32778 SSHKeyPath:/home/jenkins/minikube-integration/19689-3716/.minikube/machines/functional-001676/id_rsa Username:docker}
I0923 10:37:18.746830   67187 ssh_runner.go:195] Run: docker images --no-trunc --format "{{json .}}"
--- PASS: TestFunctional/parallel/ImageCommands/ImageListShort (0.19s)

TestFunctional/parallel/ImageCommands/ImageListTable (0.19s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageListTable
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListTable

=== CONT  TestFunctional/parallel/ImageCommands/ImageListTable
functional_test.go:261: (dbg) Run:  out/minikube-linux-amd64 -p functional-001676 image ls --format table --alsologtostderr
functional_test.go:266: (dbg) Stdout: out/minikube-linux-amd64 -p functional-001676 image ls --format table --alsologtostderr:
|---------------------------------------------|-------------------|---------------|--------|
|                    Image                    |        Tag        |   Image ID    |  Size  |
|---------------------------------------------|-------------------|---------------|--------|
| registry.k8s.io/kube-proxy                  | v1.31.1           | 60c005f310ff3 | 91.5MB |
| docker.io/kubernetesui/metrics-scraper      | <none>            | 115053965e86b | 43.8MB |
| registry.k8s.io/pause                       | 3.1               | da86e6ba6ca19 | 742kB  |
| docker.io/library/minikube-local-cache-test | functional-001676 | 04a38bf971375 | 30B    |
| registry.k8s.io/kube-controller-manager     | v1.31.1           | 175ffd71cce3d | 88.4MB |
| registry.k8s.io/kube-scheduler              | v1.31.1           | 9aa1fad941575 | 67.4MB |
| registry.k8s.io/kube-apiserver              | v1.31.1           | 6bab7719df100 | 94.2MB |
| docker.io/library/nginx                     | latest            | 39286ab8a5e14 | 188MB  |
| registry.k8s.io/etcd                        | 3.5.15-0          | 2e96e5913fc06 | 148MB  |
| docker.io/kubernetesui/dashboard            | <none>            | 07655ddf2eebe | 246MB  |
| gcr.io/k8s-minikube/busybox                 | 1.28.4-glibc      | 56cc512116c8f | 4.4MB  |
| docker.io/library/nginx                     | alpine            | c7b4f26a7d93f | 43.2MB |
| registry.k8s.io/coredns/coredns             | v1.11.3           | c69fa2e9cbf5f | 61.8MB |
| registry.k8s.io/pause                       | 3.10              | 873ed75102791 | 736kB  |
| docker.io/kicbase/echo-server               | functional-001676 | 9056ab77afb8e | 4.94MB |
| gcr.io/k8s-minikube/storage-provisioner     | v5                | 6e38f40d628db | 31.5MB |
| registry.k8s.io/pause                       | 3.3               | 0184c1613d929 | 683kB  |
| registry.k8s.io/echoserver                  | 1.8               | 82e4c8a736a4f | 95.4MB |
| registry.k8s.io/pause                       | latest            | 350b164e7ae1d | 240kB  |
|---------------------------------------------|-------------------|---------------|--------|
functional_test.go:269: (dbg) Stderr: out/minikube-linux-amd64 -p functional-001676 image ls --format table --alsologtostderr:
I0923 10:37:19.208808   67467 out.go:345] Setting OutFile to fd 1 ...
I0923 10:37:19.209041   67467 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0923 10:37:19.209050   67467 out.go:358] Setting ErrFile to fd 2...
I0923 10:37:19.209055   67467 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0923 10:37:19.209219   67467 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19689-3716/.minikube/bin
I0923 10:37:19.209797   67467 config.go:182] Loaded profile config "functional-001676": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.31.1
I0923 10:37:19.209890   67467 config.go:182] Loaded profile config "functional-001676": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.31.1
I0923 10:37:19.210267   67467 cli_runner.go:164] Run: docker container inspect functional-001676 --format={{.State.Status}}
I0923 10:37:19.228144   67467 ssh_runner.go:195] Run: systemctl --version
I0923 10:37:19.228209   67467 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-001676
I0923 10:37:19.245303   67467 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32778 SSHKeyPath:/home/jenkins/minikube-integration/19689-3716/.minikube/machines/functional-001676/id_rsa Username:docker}
I0923 10:37:19.334964   67467 ssh_runner.go:195] Run: docker images --no-trunc --format "{{json .}}"
--- PASS: TestFunctional/parallel/ImageCommands/ImageListTable (0.19s)

TestFunctional/parallel/ImageCommands/ImageListJson (0.2s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageListJson
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListJson

=== CONT  TestFunctional/parallel/ImageCommands/ImageListJson
functional_test.go:261: (dbg) Run:  out/minikube-linux-amd64 -p functional-001676 image ls --format json --alsologtostderr
functional_test.go:266: (dbg) Stdout: out/minikube-linux-amd64 -p functional-001676 image ls --format json --alsologtostderr:
[{"id":"6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee","repoDigests":[],"repoTags":["registry.k8s.io/kube-apiserver:v1.31.1"],"size":"94200000"},{"id":"c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6","repoDigests":[],"repoTags":["registry.k8s.io/coredns/coredns:v1.11.3"],"size":"61800000"},{"id":"60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561","repoDigests":[],"repoTags":["registry.k8s.io/kube-proxy:v1.31.1"],"size":"91500000"},{"id":"c7b4f26a7d93f4f1f276c51adb03ef0df54a82de89f254a9aec5c18bf0e45ee9","repoDigests":[],"repoTags":["docker.io/library/nginx:alpine"],"size":"43200000"},{"id":"0184c1613d92931126feb4c548e5da11015513b9e4c104e7305ee8b53b50a9da","repoDigests":[],"repoTags":["registry.k8s.io/pause:3.3"],"size":"683000"},{"id":"56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c","repoDigests":[],"repoTags":["gcr.io/k8s-minikube/busybox:1.28.4-glibc"],"size":"4400000"},{"id":"da86e6ba6ca197bf6bc5e9d900febd906b133eaa4750e6bed647b0f
be50ed43e","repoDigests":[],"repoTags":["registry.k8s.io/pause:3.1"],"size":"742000"},{"id":"175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1","repoDigests":[],"repoTags":["registry.k8s.io/kube-controller-manager:v1.31.1"],"size":"88400000"},{"id":"9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b","repoDigests":[],"repoTags":["registry.k8s.io/kube-scheduler:v1.31.1"],"size":"67400000"},{"id":"873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136","repoDigests":[],"repoTags":["registry.k8s.io/pause:3.10"],"size":"736000"},{"id":"115053965e86b2df4d78af78d7951b8644839d20a03820c6df59a261103315f7","repoDigests":[],"repoTags":["docker.io/kubernetesui/metrics-scraper:\u003cnone\u003e"],"size":"43800000"},{"id":"6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562","repoDigests":[],"repoTags":["gcr.io/k8s-minikube/storage-provisioner:v5"],"size":"31500000"},{"id":"350b164e7ae1dcddeffadd65c76226c9b6dc5553f5179153fb0e36b78f2a5e06","repoDigests":[],"repoTags":["regi
stry.k8s.io/pause:latest"],"size":"240000"},{"id":"04a38bf971375da06098ad3e20626e14475ff2961203e3bdab61647c0911c27c","repoDigests":[],"repoTags":["docker.io/library/minikube-local-cache-test:functional-001676"],"size":"30"},{"id":"2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4","repoDigests":[],"repoTags":["registry.k8s.io/etcd:3.5.15-0"],"size":"148000000"},{"id":"9056ab77afb8e18e04303f11000a9d31b3f16b74c59475b899ae1b342d328d30","repoDigests":[],"repoTags":["docker.io/kicbase/echo-server:functional-001676"],"size":"4940000"},{"id":"82e4c8a736a4fcf22b5ef9f6a4ff6207064c7187d7694bf97bd561605a538410","repoDigests":[],"repoTags":["registry.k8s.io/echoserver:1.8"],"size":"95400000"},{"id":"39286ab8a5e14aeaf5fdd6e2fac76e0c8d31a0c07224f0ee5e6be502f12e93f3","repoDigests":[],"repoTags":["docker.io/library/nginx:latest"],"size":"188000000"},{"id":"07655ddf2eebe5d250f7a72c25f638b27126805d61779741b4e62e69ba080558","repoDigests":[],"repoTags":["docker.io/kubernetesui/dashboard:\u003cnone\u003e"],"size":"
246000000"}]
functional_test.go:269: (dbg) Stderr: out/minikube-linux-amd64 -p functional-001676 image ls --format json --alsologtostderr:
I0923 10:37:19.013476   67380 out.go:345] Setting OutFile to fd 1 ...
I0923 10:37:19.013576   67380 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0923 10:37:19.013585   67380 out.go:358] Setting ErrFile to fd 2...
I0923 10:37:19.013589   67380 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0923 10:37:19.013816   67380 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19689-3716/.minikube/bin
I0923 10:37:19.014378   67380 config.go:182] Loaded profile config "functional-001676": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.31.1
I0923 10:37:19.014490   67380 config.go:182] Loaded profile config "functional-001676": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.31.1
I0923 10:37:19.014904   67380 cli_runner.go:164] Run: docker container inspect functional-001676 --format={{.State.Status}}
I0923 10:37:19.032205   67380 ssh_runner.go:195] Run: systemctl --version
I0923 10:37:19.032272   67380 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-001676
I0923 10:37:19.053042   67380 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32778 SSHKeyPath:/home/jenkins/minikube-integration/19689-3716/.minikube/machines/functional-001676/id_rsa Username:docker}
I0923 10:37:19.142514   67380 ssh_runner.go:195] Run: docker images --no-trunc --format "{{json .}}"
--- PASS: TestFunctional/parallel/ImageCommands/ImageListJson (0.20s)
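
The stdout above is a single JSON array of image records; a minimal Go sketch of decoding it, with struct fields taken from the output itself and one record from the log as sample input:

package main

import (
	"encoding/json"
	"fmt"
)

// image mirrors the objects in the `image ls --format json` output above.
type image struct {
	ID          string   `json:"id"`
	RepoDigests []string `json:"repoDigests"`
	RepoTags    []string `json:"repoTags"`
	Size        string   `json:"size"`
}

func main() {
	// One record copied verbatim from the log output.
	raw := `[{"id":"873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136","repoDigests":[],"repoTags":["registry.k8s.io/pause:3.10"],"size":"736000"}]`
	var images []image
	if err := json.Unmarshal([]byte(raw), &images); err != nil {
		panic(err)
	}
	for _, im := range images {
		fmt.Printf("%-40s %s bytes\n", im.RepoTags[0], im.Size)
	}
}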

TestFunctional/parallel/ImageCommands/ImageListYaml (0.2s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageListYaml
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListYaml

=== CONT  TestFunctional/parallel/ImageCommands/ImageListYaml
functional_test.go:261: (dbg) Run:  out/minikube-linux-amd64 -p functional-001676 image ls --format yaml --alsologtostderr
functional_test.go:266: (dbg) Stdout: out/minikube-linux-amd64 -p functional-001676 image ls --format yaml --alsologtostderr:
- id: 39286ab8a5e14aeaf5fdd6e2fac76e0c8d31a0c07224f0ee5e6be502f12e93f3
repoDigests: []
repoTags:
- docker.io/library/nginx:latest
size: "188000000"
- id: 9056ab77afb8e18e04303f11000a9d31b3f16b74c59475b899ae1b342d328d30
repoDigests: []
repoTags:
- docker.io/kicbase/echo-server:functional-001676
size: "4940000"
- id: 115053965e86b2df4d78af78d7951b8644839d20a03820c6df59a261103315f7
repoDigests: []
repoTags:
- docker.io/kubernetesui/metrics-scraper:<none>
size: "43800000"
- id: 6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562
repoDigests: []
repoTags:
- gcr.io/k8s-minikube/storage-provisioner:v5
size: "31500000"
- id: 56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c
repoDigests: []
repoTags:
- gcr.io/k8s-minikube/busybox:1.28.4-glibc
size: "4400000"
- id: 04a38bf971375da06098ad3e20626e14475ff2961203e3bdab61647c0911c27c
repoDigests: []
repoTags:
- docker.io/library/minikube-local-cache-test:functional-001676
size: "30"
- id: 9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b
repoDigests: []
repoTags:
- registry.k8s.io/kube-scheduler:v1.31.1
size: "67400000"
- id: 07655ddf2eebe5d250f7a72c25f638b27126805d61779741b4e62e69ba080558
repoDigests: []
repoTags:
- docker.io/kubernetesui/dashboard:<none>
size: "246000000"
- id: 0184c1613d92931126feb4c548e5da11015513b9e4c104e7305ee8b53b50a9da
repoDigests: []
repoTags:
- registry.k8s.io/pause:3.3
size: "683000"
- id: 175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1
repoDigests: []
repoTags:
- registry.k8s.io/kube-controller-manager:v1.31.1
size: "88400000"
- id: c7b4f26a7d93f4f1f276c51adb03ef0df54a82de89f254a9aec5c18bf0e45ee9
repoDigests: []
repoTags:
- docker.io/library/nginx:alpine
size: "43200000"
- id: c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6
repoDigests: []
repoTags:
- registry.k8s.io/coredns/coredns:v1.11.3
size: "61800000"
- id: 2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4
repoDigests: []
repoTags:
- registry.k8s.io/etcd:3.5.15-0
size: "148000000"
- id: 873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136
repoDigests: []
repoTags:
- registry.k8s.io/pause:3.10
size: "736000"
- id: 82e4c8a736a4fcf22b5ef9f6a4ff6207064c7187d7694bf97bd561605a538410
repoDigests: []
repoTags:
- registry.k8s.io/echoserver:1.8
size: "95400000"
- id: 6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee
repoDigests: []
repoTags:
- registry.k8s.io/kube-apiserver:v1.31.1
size: "94200000"
- id: 60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561
repoDigests: []
repoTags:
- registry.k8s.io/kube-proxy:v1.31.1
size: "91500000"
- id: da86e6ba6ca197bf6bc5e9d900febd906b133eaa4750e6bed647b0fbe50ed43e
repoDigests: []
repoTags:
- registry.k8s.io/pause:3.1
size: "742000"
- id: 350b164e7ae1dcddeffadd65c76226c9b6dc5553f5179153fb0e36b78f2a5e06
repoDigests: []
repoTags:
- registry.k8s.io/pause:latest
size: "240000"

functional_test.go:269: (dbg) Stderr: out/minikube-linux-amd64 -p functional-001676 image ls --format yaml --alsologtostderr:
I0923 10:37:18.816266   67237 out.go:345] Setting OutFile to fd 1 ...
I0923 10:37:18.816394   67237 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0923 10:37:18.816405   67237 out.go:358] Setting ErrFile to fd 2...
I0923 10:37:18.816410   67237 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0923 10:37:18.816719   67237 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19689-3716/.minikube/bin
I0923 10:37:18.817574   67237 config.go:182] Loaded profile config "functional-001676": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.31.1
I0923 10:37:18.817722   67237 config.go:182] Loaded profile config "functional-001676": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.31.1
I0923 10:37:18.818267   67237 cli_runner.go:164] Run: docker container inspect functional-001676 --format={{.State.Status}}
I0923 10:37:18.836911   67237 ssh_runner.go:195] Run: systemctl --version
I0923 10:37:18.836961   67237 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-001676
I0923 10:37:18.853578   67237 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32778 SSHKeyPath:/home/jenkins/minikube-integration/19689-3716/.minikube/machines/functional-001676/id_rsa Username:docker}
I0923 10:37:18.946777   67237 ssh_runner.go:195] Run: docker images --no-trunc --format "{{json .}}"
--- PASS: TestFunctional/parallel/ImageCommands/ImageListYaml (0.20s)

TestFunctional/parallel/ImageCommands/ImageBuild (3.56s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageBuild
=== PAUSE TestFunctional/parallel/ImageCommands/ImageBuild

=== CONT  TestFunctional/parallel/ImageCommands/ImageBuild
functional_test.go:308: (dbg) Run:  out/minikube-linux-amd64 -p functional-001676 ssh pgrep buildkitd
functional_test.go:308: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-001676 ssh pgrep buildkitd: exit status 1 (245.89794ms)

** stderr ** 
	ssh: Process exited with status 1

** /stderr **
functional_test.go:315: (dbg) Run:  out/minikube-linux-amd64 -p functional-001676 image build -t localhost/my-image:functional-001676 testdata/build --alsologtostderr
functional_test.go:315: (dbg) Done: out/minikube-linux-amd64 -p functional-001676 image build -t localhost/my-image:functional-001676 testdata/build --alsologtostderr: (3.097496539s)
functional_test.go:323: (dbg) Stderr: out/minikube-linux-amd64 -p functional-001676 image build -t localhost/my-image:functional-001676 testdata/build --alsologtostderr:
I0923 10:37:19.075033   67406 out.go:345] Setting OutFile to fd 1 ...
I0923 10:37:19.075362   67406 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0923 10:37:19.075376   67406 out.go:358] Setting ErrFile to fd 2...
I0923 10:37:19.075382   67406 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0923 10:37:19.075650   67406 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19689-3716/.minikube/bin
I0923 10:37:19.076250   67406 config.go:182] Loaded profile config "functional-001676": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.31.1
I0923 10:37:19.076741   67406 config.go:182] Loaded profile config "functional-001676": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.31.1
I0923 10:37:19.077140   67406 cli_runner.go:164] Run: docker container inspect functional-001676 --format={{.State.Status}}
I0923 10:37:19.094510   67406 ssh_runner.go:195] Run: systemctl --version
I0923 10:37:19.094569   67406 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-001676
I0923 10:37:19.111599   67406 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32778 SSHKeyPath:/home/jenkins/minikube-integration/19689-3716/.minikube/machines/functional-001676/id_rsa Username:docker}
I0923 10:37:19.203512   67406 build_images.go:161] Building image from path: /tmp/build.3325866825.tar
I0923 10:37:19.203560   67406 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/build
I0923 10:37:19.212206   67406 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/build/build.3325866825.tar
I0923 10:37:19.215415   67406 ssh_runner.go:352] existence check for /var/lib/minikube/build/build.3325866825.tar: stat -c "%s %y" /var/lib/minikube/build/build.3325866825.tar: Process exited with status 1
stdout:

stderr:
stat: cannot statx '/var/lib/minikube/build/build.3325866825.tar': No such file or directory
I0923 10:37:19.215442   67406 ssh_runner.go:362] scp /tmp/build.3325866825.tar --> /var/lib/minikube/build/build.3325866825.tar (3072 bytes)
I0923 10:37:19.237559   67406 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/build/build.3325866825
I0923 10:37:19.245829   67406 ssh_runner.go:195] Run: sudo tar -C /var/lib/minikube/build/build.3325866825 -xf /var/lib/minikube/build/build.3325866825.tar
I0923 10:37:19.253739   67406 docker.go:360] Building image: /var/lib/minikube/build/build.3325866825
I0923 10:37:19.253793   67406 ssh_runner.go:195] Run: docker build -t localhost/my-image:functional-001676 /var/lib/minikube/build/build.3325866825
#0 building with "default" instance using docker driver

#1 [internal] load build definition from Dockerfile
#1 transferring dockerfile: 97B done
#1 DONE 0.0s

#2 [internal] load metadata for gcr.io/k8s-minikube/busybox:latest
#2 DONE 0.9s

#3 [internal] load .dockerignore
#3 transferring context: 2B done
#3 DONE 0.0s

#4 [internal] load build context
#4 transferring context: 62B done
#4 DONE 0.0s

#5 [1/3] FROM gcr.io/k8s-minikube/busybox:latest@sha256:ca5ae90100d50772da31f3b5016209e25ad61972404e2ccd83d44f10dee7e79b
#5 resolve gcr.io/k8s-minikube/busybox:latest@sha256:ca5ae90100d50772da31f3b5016209e25ad61972404e2ccd83d44f10dee7e79b 0.0s done
#5 sha256:ca5ae90100d50772da31f3b5016209e25ad61972404e2ccd83d44f10dee7e79b 770B / 770B done
#5 sha256:62ffc2ed7554e4c6d360bce40bbcf196573dd27c4ce080641a2c59867e732dee 527B / 527B done
#5 sha256:beae173ccac6ad749f76713cf4440fe3d21d1043fe616dfbe30775815d1d0f6a 1.46kB / 1.46kB done
#5 sha256:5cc84ad355aaa64f46ea9c7bbcc319a9d808ab15088a27209c9e70ef86e5a2aa 0B / 772.79kB 0.2s
#5 sha256:5cc84ad355aaa64f46ea9c7bbcc319a9d808ab15088a27209c9e70ef86e5a2aa 772.79kB / 772.79kB 0.6s done
#5 extracting sha256:5cc84ad355aaa64f46ea9c7bbcc319a9d808ab15088a27209c9e70ef86e5a2aa 0.0s done
#5 DONE 0.7s

#6 [2/3] RUN true
#6 DONE 0.9s

#7 [3/3] ADD content.txt /
#7 DONE 0.0s

#8 exporting to image
#8 exporting layers 0.0s done
#8 writing image sha256:c0a5c60d0640c304b0c9a0be4dde7ea21096ed170c920f6e6b000b7ba84a0d86 done
#8 naming to localhost/my-image:functional-001676 done
#8 DONE 0.0s
I0923 10:37:22.102451   67406 ssh_runner.go:235] Completed: docker build -t localhost/my-image:functional-001676 /var/lib/minikube/build/build.3325866825: (2.848626012s)
I0923 10:37:22.102545   67406 ssh_runner.go:195] Run: sudo rm -rf /var/lib/minikube/build/build.3325866825
I0923 10:37:22.112567   67406 ssh_runner.go:195] Run: sudo rm -f /var/lib/minikube/build/build.3325866825.tar
I0923 10:37:22.121259   67406 build_images.go:217] Built localhost/my-image:functional-001676 from /tmp/build.3325866825.tar
I0923 10:37:22.121290   67406 build_images.go:133] succeeded building to: functional-001676
I0923 10:37:22.121296   67406 build_images.go:134] failed building to: 
functional_test.go:451: (dbg) Run:  out/minikube-linux-amd64 -p functional-001676 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageBuild (3.56s)
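
The numbered build steps above (#5 FROM busybox, #6 RUN true, #7 ADD content.txt) imply a Dockerfile along these lines; this is inferred from the step log only, and the actual testdata/build Dockerfile may differ:

FROM gcr.io/k8s-minikube/busybox:latest
RUN true
ADD content.txt /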

TestFunctional/parallel/ImageCommands/Setup (1.54s)

=== RUN   TestFunctional/parallel/ImageCommands/Setup
functional_test.go:342: (dbg) Run:  docker pull kicbase/echo-server:1.0
functional_test.go:342: (dbg) Done: docker pull kicbase/echo-server:1.0: (1.516376369s)
functional_test.go:347: (dbg) Run:  docker tag kicbase/echo-server:1.0 kicbase/echo-server:functional-001676
--- PASS: TestFunctional/parallel/ImageCommands/Setup (1.54s)

TestFunctional/parallel/ImageCommands/ImageLoadDaemon (1.26s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageLoadDaemon
functional_test.go:355: (dbg) Run:  out/minikube-linux-amd64 -p functional-001676 image load --daemon kicbase/echo-server:functional-001676 --alsologtostderr
functional_test.go:355: (dbg) Done: out/minikube-linux-amd64 -p functional-001676 image load --daemon kicbase/echo-server:functional-001676 --alsologtostderr: (1.034528429s)
functional_test.go:451: (dbg) Run:  out/minikube-linux-amd64 -p functional-001676 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageLoadDaemon (1.26s)

TestFunctional/parallel/MountCmd/specific-port (1.8s)

=== RUN   TestFunctional/parallel/MountCmd/specific-port
functional_test_mount_test.go:213: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-001676 /tmp/TestFunctionalparallelMountCmdspecific-port2338337770/001:/mount-9p --alsologtostderr -v=1 --port 46464]
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-linux-amd64 -p functional-001676 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:243: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-001676 ssh "findmnt -T /mount-9p | grep 9p": exit status 1 (321.711048ms)

** stderr ** 
	ssh: Process exited with status 1

** /stderr **
I0923 10:37:12.920621   10524 retry.go:31] will retry after 407.928249ms: exit status 1
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-linux-amd64 -p functional-001676 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:257: (dbg) Run:  out/minikube-linux-amd64 -p functional-001676 ssh -- ls -la /mount-9p
functional_test_mount_test.go:261: guest mount directory contents
total 0
functional_test_mount_test.go:263: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-001676 /tmp/TestFunctionalparallelMountCmdspecific-port2338337770/001:/mount-9p --alsologtostderr -v=1 --port 46464] ...
functional_test_mount_test.go:264: reading mount text
functional_test_mount_test.go:278: done reading mount text
functional_test_mount_test.go:230: (dbg) Run:  out/minikube-linux-amd64 -p functional-001676 ssh "sudo umount -f /mount-9p"
functional_test_mount_test.go:230: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-001676 ssh "sudo umount -f /mount-9p": exit status 1 (268.651848ms)

-- stdout --
	umount: /mount-9p: not mounted.

-- /stdout --
** stderr ** 
	ssh: Process exited with status 32

** /stderr **
functional_test_mount_test.go:232: "out/minikube-linux-amd64 -p functional-001676 ssh \"sudo umount -f /mount-9p\"": exit status 1
functional_test_mount_test.go:234: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-001676 /tmp/TestFunctionalparallelMountCmdspecific-port2338337770/001:/mount-9p --alsologtostderr -v=1 --port 46464] ...
--- PASS: TestFunctional/parallel/MountCmd/specific-port (1.80s)
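
Note: the same 9p check can be reproduced by hand with the commands the test drives (the host directory is an assumption; the test used a generated temp dir):

    out/minikube-linux-amd64 mount -p functional-001676 /tmp/mnt:/mount-9p --port 46464 --alsologtostderr -v=1 &
    # the first findmnt may race the 9p server coming up, hence the single retry in the log:
    out/minikube-linux-amd64 -p functional-001676 ssh "findmnt -T /mount-9p | grep 9p"
    out/minikube-linux-amd64 -p functional-001676 ssh -- ls -la /mount-9p
    kill %1
    # with the mount process gone, a forced unmount fails with "not mounted" (umount
    # exit status 32), which is exactly the Non-zero exit recorded above:
    out/minikube-linux-amd64 -p functional-001676 ssh "sudo umount -f /mount-9p"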

TestFunctional/parallel/ImageCommands/ImageReloadDaemon (0.88s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageReloadDaemon
functional_test.go:365: (dbg) Run:  out/minikube-linux-amd64 -p functional-001676 image load --daemon kicbase/echo-server:functional-001676 --alsologtostderr
functional_test.go:451: (dbg) Run:  out/minikube-linux-amd64 -p functional-001676 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageReloadDaemon (0.88s)

TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon (1.58s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon
functional_test.go:235: (dbg) Run:  docker pull kicbase/echo-server:latest
functional_test.go:240: (dbg) Run:  docker tag kicbase/echo-server:latest kicbase/echo-server:functional-001676
functional_test.go:245: (dbg) Run:  out/minikube-linux-amd64 -p functional-001676 image load --daemon kicbase/echo-server:functional-001676 --alsologtostderr
functional_test.go:451: (dbg) Run:  out/minikube-linux-amd64 -p functional-001676 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon (1.58s)

TestFunctional/parallel/MountCmd/VerifyCleanup (1.12s)

=== RUN   TestFunctional/parallel/MountCmd/VerifyCleanup
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-001676 /tmp/TestFunctionalparallelMountCmdVerifyCleanup575117155/001:/mount1 --alsologtostderr -v=1]
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-001676 /tmp/TestFunctionalparallelMountCmdVerifyCleanup575117155/001:/mount2 --alsologtostderr -v=1]
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-001676 /tmp/TestFunctionalparallelMountCmdVerifyCleanup575117155/001:/mount3 --alsologtostderr -v=1]
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-amd64 -p functional-001676 ssh "findmnt -T" /mount1
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-amd64 -p functional-001676 ssh "findmnt -T" /mount2
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-amd64 -p functional-001676 ssh "findmnt -T" /mount3
functional_test_mount_test.go:370: (dbg) Run:  out/minikube-linux-amd64 mount -p functional-001676 --kill=true
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-001676 /tmp/TestFunctionalparallelMountCmdVerifyCleanup575117155/001:/mount1 --alsologtostderr -v=1] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-001676 /tmp/TestFunctionalparallelMountCmdVerifyCleanup575117155/001:/mount2 --alsologtostderr -v=1] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-001676 /tmp/TestFunctionalparallelMountCmdVerifyCleanup575117155/001:/mount3 --alsologtostderr -v=1] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
--- PASS: TestFunctional/parallel/MountCmd/VerifyCleanup (1.12s)
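
Note: what this test verifies is minikube's mount --kill cleanup, which tears down every mount helper for the profile at once; a sketch, with /tmp/src standing in for the generated temp dir:

    for m in /mount1 /mount2 /mount3; do
      out/minikube-linux-amd64 mount -p functional-001676 /tmp/src:$m --alsologtostderr -v=1 &
    done
    out/minikube-linux-amd64 -p functional-001676 ssh "findmnt -T" /mount1
    # one kill takes down all three background mount processes:
    out/minikube-linux-amd64 mount -p functional-001676 --kill=true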

TestFunctional/parallel/UpdateContextCmd/no_changes (0.13s)

=== RUN   TestFunctional/parallel/UpdateContextCmd/no_changes
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_changes

=== CONT  TestFunctional/parallel/UpdateContextCmd/no_changes
functional_test.go:2119: (dbg) Run:  out/minikube-linux-amd64 -p functional-001676 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_changes (0.13s)

TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster (0.11s)

=== RUN   TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster

=== CONT  TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
functional_test.go:2119: (dbg) Run:  out/minikube-linux-amd64 -p functional-001676 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster (0.11s)

TestFunctional/parallel/UpdateContextCmd/no_clusters (0.13s)

=== RUN   TestFunctional/parallel/UpdateContextCmd/no_clusters
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_clusters

=== CONT  TestFunctional/parallel/UpdateContextCmd/no_clusters
functional_test.go:2119: (dbg) Run:  out/minikube-linux-amd64 -p functional-001676 update-context --alsologtostderr -v=2
2024/09/23 10:37:18 [DEBUG] GET http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_clusters (0.13s)

TestFunctional/parallel/ImageCommands/ImageSaveToFile (0.37s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageSaveToFile
functional_test.go:380: (dbg) Run:  out/minikube-linux-amd64 -p functional-001676 image save kicbase/echo-server:functional-001676 /home/jenkins/workspace/Docker_Linux_integration/echo-server-save.tar --alsologtostderr
--- PASS: TestFunctional/parallel/ImageCommands/ImageSaveToFile (0.37s)

TestFunctional/parallel/ImageCommands/ImageRemove (0.51s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageRemove
functional_test.go:392: (dbg) Run:  out/minikube-linux-amd64 -p functional-001676 image rm kicbase/echo-server:functional-001676 --alsologtostderr
functional_test.go:451: (dbg) Run:  out/minikube-linux-amd64 -p functional-001676 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageRemove (0.51s)

TestFunctional/parallel/ImageCommands/ImageLoadFromFile (0.56s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageLoadFromFile
functional_test.go:409: (dbg) Run:  out/minikube-linux-amd64 -p functional-001676 image load /home/jenkins/workspace/Docker_Linux_integration/echo-server-save.tar --alsologtostderr
functional_test.go:451: (dbg) Run:  out/minikube-linux-amd64 -p functional-001676 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageLoadFromFile (0.56s)
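
Note: together with ImageSaveToFile and ImageRemove above, this completes a save/remove/reload round trip. The equivalent manual sequence (with /tmp substituted for the Jenkins workspace path):

    out/minikube-linux-amd64 -p functional-001676 image save kicbase/echo-server:functional-001676 /tmp/echo-server-save.tar
    out/minikube-linux-amd64 -p functional-001676 image rm kicbase/echo-server:functional-001676
    out/minikube-linux-amd64 -p functional-001676 image load /tmp/echo-server-save.tar
    out/minikube-linux-amd64 -p functional-001676 image ls    # the tag should be listed again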

TestFunctional/parallel/ImageCommands/ImageSaveDaemon (0.53s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageSaveDaemon
functional_test.go:419: (dbg) Run:  docker rmi kicbase/echo-server:functional-001676
functional_test.go:424: (dbg) Run:  out/minikube-linux-amd64 -p functional-001676 image save --daemon kicbase/echo-server:functional-001676 --alsologtostderr
functional_test.go:432: (dbg) Run:  docker image inspect kicbase/echo-server:functional-001676
--- PASS: TestFunctional/parallel/ImageCommands/ImageSaveDaemon (0.53s)

TestFunctional/delete_echo-server_images (0.04s)

=== RUN   TestFunctional/delete_echo-server_images
functional_test.go:190: (dbg) Run:  docker rmi -f kicbase/echo-server:1.0
functional_test.go:190: (dbg) Run:  docker rmi -f kicbase/echo-server:functional-001676
--- PASS: TestFunctional/delete_echo-server_images (0.04s)

TestFunctional/delete_my-image_image (0.01s)

=== RUN   TestFunctional/delete_my-image_image
functional_test.go:198: (dbg) Run:  docker rmi -f localhost/my-image:functional-001676
--- PASS: TestFunctional/delete_my-image_image (0.01s)

TestFunctional/delete_minikube_cached_images (0.01s)

=== RUN   TestFunctional/delete_minikube_cached_images
functional_test.go:206: (dbg) Run:  docker rmi -f minikube-local-cache-test:functional-001676
--- PASS: TestFunctional/delete_minikube_cached_images (0.01s)

TestMultiControlPlane/serial/StartCluster (89.56s)

=== RUN   TestMultiControlPlane/serial/StartCluster
ha_test.go:101: (dbg) Run:  out/minikube-linux-amd64 start -p ha-363563 --wait=true --memory=2200 --ha -v=7 --alsologtostderr --driver=docker  --container-runtime=docker
ha_test.go:101: (dbg) Done: out/minikube-linux-amd64 start -p ha-363563 --wait=true --memory=2200 --ha -v=7 --alsologtostderr --driver=docker  --container-runtime=docker: (1m28.906673449s)
ha_test.go:107: (dbg) Run:  out/minikube-linux-amd64 -p ha-363563 status -v=7 --alsologtostderr
--- PASS: TestMultiControlPlane/serial/StartCluster (89.56s)
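
Note: the --ha flag is what makes this a multi-control-plane cluster; minikube's HA mode provisions three control-plane nodes by default, which matches the m02/m03 control planes exercised in the tests below. The invocation, verbatim from the log:

    out/minikube-linux-amd64 start -p ha-363563 --wait=true --memory=2200 --ha -v=7 --alsologtostderr --driver=docker --container-runtime=docker
    out/minikube-linux-amd64 -p ha-363563 status -v=7 --alsologtostderr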

TestMultiControlPlane/serial/DeployApp (35.56s)

=== RUN   TestMultiControlPlane/serial/DeployApp
ha_test.go:128: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-363563 -- apply -f ./testdata/ha/ha-pod-dns-test.yaml
ha_test.go:133: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-363563 -- rollout status deployment/busybox
E0923 10:39:12.814831   10524 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19689-3716/.minikube/profiles/addons-071702/client.crt: no such file or directory" logger="UnhandledError"
E0923 10:39:12.821202   10524 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19689-3716/.minikube/profiles/addons-071702/client.crt: no such file or directory" logger="UnhandledError"
E0923 10:39:12.832511   10524 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19689-3716/.minikube/profiles/addons-071702/client.crt: no such file or directory" logger="UnhandledError"
E0923 10:39:12.853899   10524 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19689-3716/.minikube/profiles/addons-071702/client.crt: no such file or directory" logger="UnhandledError"
E0923 10:39:12.895313   10524 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19689-3716/.minikube/profiles/addons-071702/client.crt: no such file or directory" logger="UnhandledError"
E0923 10:39:12.976763   10524 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19689-3716/.minikube/profiles/addons-071702/client.crt: no such file or directory" logger="UnhandledError"
E0923 10:39:13.138207   10524 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19689-3716/.minikube/profiles/addons-071702/client.crt: no such file or directory" logger="UnhandledError"
E0923 10:39:13.459895   10524 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19689-3716/.minikube/profiles/addons-071702/client.crt: no such file or directory" logger="UnhandledError"
ha_test.go:133: (dbg) Done: out/minikube-linux-amd64 kubectl -p ha-363563 -- rollout status deployment/busybox: (2.608006272s)
ha_test.go:140: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-363563 -- get pods -o jsonpath='{.items[*].status.podIP}'
E0923 10:39:14.101423   10524 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19689-3716/.minikube/profiles/addons-071702/client.crt: no such file or directory" logger="UnhandledError"
ha_test.go:149: expected 3 Pod IPs but got 2 (may be temporary), output: "\n-- stdout --\n\t'10.244.1.2 10.244.1.3'\n\n-- /stdout --"
I0923 10:39:14.210803   10524 retry.go:31] will retry after 679.088157ms: expected 3 Pod IPs but got 2 (may be temporary), output: "\n-- stdout --\n\t'10.244.1.2 10.244.1.3'\n\n-- /stdout --"
ha_test.go:140: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-363563 -- get pods -o jsonpath='{.items[*].status.podIP}'
ha_test.go:149: expected 3 Pod IPs but got 2 (may be temporary), output: "\n-- stdout --\n\t'10.244.1.2 10.244.1.3'\n\n-- /stdout --"
I0923 10:39:14.995735   10524 retry.go:31] will retry after 1.343896792s: expected 3 Pod IPs but got 2 (may be temporary), output: "\n-- stdout --\n\t'10.244.1.2 10.244.1.3'\n\n-- /stdout --"
E0923 10:39:15.383205   10524 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19689-3716/.minikube/profiles/addons-071702/client.crt: no such file or directory" logger="UnhandledError"
ha_test.go:140: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-363563 -- get pods -o jsonpath='{.items[*].status.podIP}'
ha_test.go:149: expected 3 Pod IPs but got 2 (may be temporary), output: "\n-- stdout --\n\t'10.244.1.2 10.244.1.3'\n\n-- /stdout --"
I0923 10:39:16.447910   10524 retry.go:31] will retry after 2.758169875s: expected 3 Pod IPs but got 2 (may be temporary), output: "\n-- stdout --\n\t'10.244.1.2 10.244.1.3'\n\n-- /stdout --"
E0923 10:39:17.944566   10524 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19689-3716/.minikube/profiles/addons-071702/client.crt: no such file or directory" logger="UnhandledError"
ha_test.go:140: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-363563 -- get pods -o jsonpath='{.items[*].status.podIP}'
ha_test.go:149: expected 3 Pod IPs but got 2 (may be temporary), output: "\n-- stdout --\n\t'10.244.1.2 10.244.1.3'\n\n-- /stdout --"
I0923 10:39:19.316052   10524 retry.go:31] will retry after 3.078085041s: expected 3 Pod IPs but got 2 (may be temporary), output: "\n-- stdout --\n\t'10.244.1.2 10.244.1.3'\n\n-- /stdout --"
ha_test.go:140: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-363563 -- get pods -o jsonpath='{.items[*].status.podIP}'
ha_test.go:149: expected 3 Pod IPs but got 2 (may be temporary), output: "\n-- stdout --\n\t'10.244.1.2 10.244.1.3'\n\n-- /stdout --"
I0923 10:39:22.504228   10524 retry.go:31] will retry after 6.785192974s: expected 3 Pod IPs but got 2 (may be temporary), output: "\n-- stdout --\n\t'10.244.1.2 10.244.1.3'\n\n-- /stdout --"
E0923 10:39:23.066015   10524 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19689-3716/.minikube/profiles/addons-071702/client.crt: no such file or directory" logger="UnhandledError"
ha_test.go:140: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-363563 -- get pods -o jsonpath='{.items[*].status.podIP}'
ha_test.go:149: expected 3 Pod IPs but got 2 (may be temporary), output: "\n-- stdout --\n\t'10.244.1.2 10.244.1.3'\n\n-- /stdout --"
I0923 10:39:29.401103   10524 retry.go:31] will retry after 7.002491079s: expected 3 Pod IPs but got 2 (may be temporary), output: "\n-- stdout --\n\t'10.244.1.2 10.244.1.3'\n\n-- /stdout --"
E0923 10:39:33.307426   10524 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19689-3716/.minikube/profiles/addons-071702/client.crt: no such file or directory" logger="UnhandledError"
ha_test.go:140: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-363563 -- get pods -o jsonpath='{.items[*].status.podIP}'
ha_test.go:149: expected 3 Pod IPs but got 2 (may be temporary), output: "\n-- stdout --\n\t'10.244.1.2 10.244.1.3'\n\n-- /stdout --"
I0923 10:39:36.511360   10524 retry.go:31] will retry after 8.695813533s: expected 3 Pod IPs but got 2 (may be temporary), output: "\n-- stdout --\n\t'10.244.1.2 10.244.1.3'\n\n-- /stdout --"
ha_test.go:140: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-363563 -- get pods -o jsonpath='{.items[*].status.podIP}'
ha_test.go:163: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-363563 -- get pods -o jsonpath='{.items[*].metadata.name}'
ha_test.go:171: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-363563 -- exec busybox-7dff88458-7zwtb -- nslookup kubernetes.io
ha_test.go:171: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-363563 -- exec busybox-7dff88458-l4k7d -- nslookup kubernetes.io
ha_test.go:171: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-363563 -- exec busybox-7dff88458-zkq6g -- nslookup kubernetes.io
ha_test.go:181: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-363563 -- exec busybox-7dff88458-7zwtb -- nslookup kubernetes.default
ha_test.go:181: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-363563 -- exec busybox-7dff88458-l4k7d -- nslookup kubernetes.default
ha_test.go:181: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-363563 -- exec busybox-7dff88458-zkq6g -- nslookup kubernetes.default
ha_test.go:189: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-363563 -- exec busybox-7dff88458-7zwtb -- nslookup kubernetes.default.svc.cluster.local
ha_test.go:189: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-363563 -- exec busybox-7dff88458-l4k7d -- nslookup kubernetes.default.svc.cluster.local
ha_test.go:189: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-363563 -- exec busybox-7dff88458-zkq6g -- nslookup kubernetes.default.svc.cluster.local
--- PASS: TestMultiControlPlane/serial/DeployApp (35.56s)
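
Note: the retries above are the test polling until all three busybox replicas report a pod IP; the third replica is slower to get an IP, so the loop backs off and retries rather than failing. Condensed:

    kubectl --context ha-363563 apply -f ./testdata/ha/ha-pod-dns-test.yaml
    kubectl --context ha-363563 rollout status deployment/busybox
    # repeat until three IPs appear (the log shows roughly 30s of backoff-and-retry):
    kubectl --context ha-363563 get pods -o jsonpath='{.items[*].status.podIP}'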

TestMultiControlPlane/serial/PingHostFromPods (0.98s)

=== RUN   TestMultiControlPlane/serial/PingHostFromPods
ha_test.go:199: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-363563 -- get pods -o jsonpath='{.items[*].metadata.name}'
ha_test.go:207: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-363563 -- exec busybox-7dff88458-7zwtb -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
ha_test.go:218: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-363563 -- exec busybox-7dff88458-7zwtb -- sh -c "ping -c 1 192.168.49.1"
ha_test.go:207: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-363563 -- exec busybox-7dff88458-l4k7d -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
ha_test.go:218: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-363563 -- exec busybox-7dff88458-l4k7d -- sh -c "ping -c 1 192.168.49.1"
ha_test.go:207: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-363563 -- exec busybox-7dff88458-zkq6g -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
ha_test.go:218: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-363563 -- exec busybox-7dff88458-zkq6g -- sh -c "ping -c 1 192.168.49.1"
--- PASS: TestMultiControlPlane/serial/PingHostFromPods (0.98s)
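
Note: each pod resolves host.minikube.internal and pings the address it gets back (192.168.49.1 here, the host side of the cluster's Docker network). Condensed for one pod:

    GW=$(kubectl --context ha-363563 exec busybox-7dff88458-7zwtb -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3")
    kubectl --context ha-363563 exec busybox-7dff88458-7zwtb -- sh -c "ping -c 1 $GW"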

TestMultiControlPlane/serial/AddWorkerNode (23.43s)

=== RUN   TestMultiControlPlane/serial/AddWorkerNode
ha_test.go:228: (dbg) Run:  out/minikube-linux-amd64 node add -p ha-363563 -v=7 --alsologtostderr
E0923 10:39:53.789529   10524 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19689-3716/.minikube/profiles/addons-071702/client.crt: no such file or directory" logger="UnhandledError"
ha_test.go:228: (dbg) Done: out/minikube-linux-amd64 node add -p ha-363563 -v=7 --alsologtostderr: (22.629051903s)
ha_test.go:234: (dbg) Run:  out/minikube-linux-amd64 -p ha-363563 status -v=7 --alsologtostderr
--- PASS: TestMultiControlPlane/serial/AddWorkerNode (23.43s)

TestMultiControlPlane/serial/NodeLabels (0.06s)

=== RUN   TestMultiControlPlane/serial/NodeLabels
ha_test.go:255: (dbg) Run:  kubectl --context ha-363563 get nodes -o "jsonpath=[{range .items[*]}{.metadata.labels},{end}]"
--- PASS: TestMultiControlPlane/serial/NodeLabels (0.06s)

TestMultiControlPlane/serial/HAppyAfterClusterStart (0.83s)

=== RUN   TestMultiControlPlane/serial/HAppyAfterClusterStart
ha_test.go:281: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiControlPlane/serial/HAppyAfterClusterStart (0.83s)

TestMultiControlPlane/serial/CopyFile (15.27s)

=== RUN   TestMultiControlPlane/serial/CopyFile
ha_test.go:326: (dbg) Run:  out/minikube-linux-amd64 -p ha-363563 status --output json -v=7 --alsologtostderr
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-363563 cp testdata/cp-test.txt ha-363563:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-363563 ssh -n ha-363563 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-363563 cp ha-363563:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile4683997/001/cp-test_ha-363563.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-363563 ssh -n ha-363563 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-363563 cp ha-363563:/home/docker/cp-test.txt ha-363563-m02:/home/docker/cp-test_ha-363563_ha-363563-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-363563 ssh -n ha-363563 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-363563 ssh -n ha-363563-m02 "sudo cat /home/docker/cp-test_ha-363563_ha-363563-m02.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-363563 cp ha-363563:/home/docker/cp-test.txt ha-363563-m03:/home/docker/cp-test_ha-363563_ha-363563-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-363563 ssh -n ha-363563 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-363563 ssh -n ha-363563-m03 "sudo cat /home/docker/cp-test_ha-363563_ha-363563-m03.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-363563 cp ha-363563:/home/docker/cp-test.txt ha-363563-m04:/home/docker/cp-test_ha-363563_ha-363563-m04.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-363563 ssh -n ha-363563 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-363563 ssh -n ha-363563-m04 "sudo cat /home/docker/cp-test_ha-363563_ha-363563-m04.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-363563 cp testdata/cp-test.txt ha-363563-m02:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-363563 ssh -n ha-363563-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-363563 cp ha-363563-m02:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile4683997/001/cp-test_ha-363563-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-363563 ssh -n ha-363563-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-363563 cp ha-363563-m02:/home/docker/cp-test.txt ha-363563:/home/docker/cp-test_ha-363563-m02_ha-363563.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-363563 ssh -n ha-363563-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-363563 ssh -n ha-363563 "sudo cat /home/docker/cp-test_ha-363563-m02_ha-363563.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-363563 cp ha-363563-m02:/home/docker/cp-test.txt ha-363563-m03:/home/docker/cp-test_ha-363563-m02_ha-363563-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-363563 ssh -n ha-363563-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-363563 ssh -n ha-363563-m03 "sudo cat /home/docker/cp-test_ha-363563-m02_ha-363563-m03.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-363563 cp ha-363563-m02:/home/docker/cp-test.txt ha-363563-m04:/home/docker/cp-test_ha-363563-m02_ha-363563-m04.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-363563 ssh -n ha-363563-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-363563 ssh -n ha-363563-m04 "sudo cat /home/docker/cp-test_ha-363563-m02_ha-363563-m04.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-363563 cp testdata/cp-test.txt ha-363563-m03:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-363563 ssh -n ha-363563-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-363563 cp ha-363563-m03:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile4683997/001/cp-test_ha-363563-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-363563 ssh -n ha-363563-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-363563 cp ha-363563-m03:/home/docker/cp-test.txt ha-363563:/home/docker/cp-test_ha-363563-m03_ha-363563.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-363563 ssh -n ha-363563-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-363563 ssh -n ha-363563 "sudo cat /home/docker/cp-test_ha-363563-m03_ha-363563.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-363563 cp ha-363563-m03:/home/docker/cp-test.txt ha-363563-m02:/home/docker/cp-test_ha-363563-m03_ha-363563-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-363563 ssh -n ha-363563-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-363563 ssh -n ha-363563-m02 "sudo cat /home/docker/cp-test_ha-363563-m03_ha-363563-m02.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-363563 cp ha-363563-m03:/home/docker/cp-test.txt ha-363563-m04:/home/docker/cp-test_ha-363563-m03_ha-363563-m04.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-363563 ssh -n ha-363563-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-363563 ssh -n ha-363563-m04 "sudo cat /home/docker/cp-test_ha-363563-m03_ha-363563-m04.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-363563 cp testdata/cp-test.txt ha-363563-m04:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-363563 ssh -n ha-363563-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-363563 cp ha-363563-m04:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile4683997/001/cp-test_ha-363563-m04.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-363563 ssh -n ha-363563-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-363563 cp ha-363563-m04:/home/docker/cp-test.txt ha-363563:/home/docker/cp-test_ha-363563-m04_ha-363563.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-363563 ssh -n ha-363563-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-363563 ssh -n ha-363563 "sudo cat /home/docker/cp-test_ha-363563-m04_ha-363563.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-363563 cp ha-363563-m04:/home/docker/cp-test.txt ha-363563-m02:/home/docker/cp-test_ha-363563-m04_ha-363563-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-363563 ssh -n ha-363563-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-363563 ssh -n ha-363563-m02 "sudo cat /home/docker/cp-test_ha-363563-m04_ha-363563-m02.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-363563 cp ha-363563-m04:/home/docker/cp-test.txt ha-363563-m03:/home/docker/cp-test_ha-363563-m04_ha-363563-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-363563 ssh -n ha-363563-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-363563 ssh -n ha-363563-m03 "sudo cat /home/docker/cp-test_ha-363563-m04_ha-363563-m03.txt"
--- PASS: TestMultiControlPlane/serial/CopyFile (15.27s)
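
Note: the long sequence above is one pattern applied to every node pair: cp the file in, then ssh and cat it back to verify. For a single node it reduces to:

    out/minikube-linux-amd64 -p ha-363563 cp testdata/cp-test.txt ha-363563-m02:/home/docker/cp-test.txt
    out/minikube-linux-amd64 -p ha-363563 ssh -n ha-363563-m02 "sudo cat /home/docker/cp-test.txt"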

TestMultiControlPlane/serial/StopSecondaryNode (11.36s)

=== RUN   TestMultiControlPlane/serial/StopSecondaryNode
ha_test.go:363: (dbg) Run:  out/minikube-linux-amd64 -p ha-363563 node stop m02 -v=7 --alsologtostderr
E0923 10:40:34.750834   10524 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19689-3716/.minikube/profiles/addons-071702/client.crt: no such file or directory" logger="UnhandledError"
ha_test.go:363: (dbg) Done: out/minikube-linux-amd64 -p ha-363563 node stop m02 -v=7 --alsologtostderr: (10.715050639s)
ha_test.go:369: (dbg) Run:  out/minikube-linux-amd64 -p ha-363563 status -v=7 --alsologtostderr
ha_test.go:369: (dbg) Non-zero exit: out/minikube-linux-amd64 -p ha-363563 status -v=7 --alsologtostderr: exit status 7 (639.827462ms)

-- stdout --
	ha-363563
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-363563-m02
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	ha-363563-m03
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-363563-m04
	type: Worker
	host: Running
	kubelet: Running
	

-- /stdout --
** stderr ** 
	I0923 10:40:38.219933   95665 out.go:345] Setting OutFile to fd 1 ...
	I0923 10:40:38.220028   95665 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0923 10:40:38.220035   95665 out.go:358] Setting ErrFile to fd 2...
	I0923 10:40:38.220039   95665 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0923 10:40:38.220184   95665 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19689-3716/.minikube/bin
	I0923 10:40:38.220332   95665 out.go:352] Setting JSON to false
	I0923 10:40:38.220368   95665 mustload.go:65] Loading cluster: ha-363563
	I0923 10:40:38.220459   95665 notify.go:220] Checking for updates...
	I0923 10:40:38.220789   95665 config.go:182] Loaded profile config "ha-363563": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I0923 10:40:38.220808   95665 status.go:174] checking status of ha-363563 ...
	I0923 10:40:38.221206   95665 cli_runner.go:164] Run: docker container inspect ha-363563 --format={{.State.Status}}
	I0923 10:40:38.238521   95665 status.go:364] ha-363563 host status = "Running" (err=<nil>)
	I0923 10:40:38.238547   95665 host.go:66] Checking if "ha-363563" exists ...
	I0923 10:40:38.238871   95665 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-363563
	I0923 10:40:38.255938   95665 host.go:66] Checking if "ha-363563" exists ...
	I0923 10:40:38.256239   95665 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0923 10:40:38.256282   95665 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-363563
	I0923 10:40:38.276266   95665 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32783 SSHKeyPath:/home/jenkins/minikube-integration/19689-3716/.minikube/machines/ha-363563/id_rsa Username:docker}
	I0923 10:40:38.376631   95665 ssh_runner.go:195] Run: systemctl --version
	I0923 10:40:38.380680   95665 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0923 10:40:38.391657   95665 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0923 10:40:38.437369   95665 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:4 ContainersRunning:3 ContainersPaused:0 ContainersStopped:1 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:51 OomKillDisable:true NGoroutines:72 SystemTime:2024-09-23 10:40:38.428792784 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1069-gcp OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33647935488 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-2 Labels:[] ExperimentalBuild:false ServerVersion:27.3.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:7f7fdf5fed64eb6a7caf99b3e12efcf9d60e311c Expected:7f7fdf5fed64eb6a7caf99b3e12efcf9d60e311c} RuncCommit:{ID:v1.1.14-0-g2c9f560 Expected:v1.1.14-0-g2c9f560} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:[WARNING: bridge-nf-call-iptables is disabled WARNING: bridge-nf-call-ip6tables is disabled] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.17.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.29.7] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I0923 10:40:38.437924   95665 kubeconfig.go:125] found "ha-363563" server: "https://192.168.49.254:8443"
	I0923 10:40:38.437956   95665 api_server.go:166] Checking apiserver status ...
	I0923 10:40:38.437994   95665 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0923 10:40:38.448796   95665 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/2397/cgroup
	I0923 10:40:38.457139   95665 api_server.go:182] apiserver freezer: "4:freezer:/docker/05bcff25d342d955917e7212071a0f1774500c85c4cc711e04b1fe17b029eb40/kubepods/burstable/podb12831a5f24211f45b15d52bfaf97341/51a22942ff3189a6bbbe0560207a5af9bb13a7f542e389e3c642fecc69e5fb7b"
	I0923 10:40:38.457190   95665 ssh_runner.go:195] Run: sudo cat /sys/fs/cgroup/freezer/docker/05bcff25d342d955917e7212071a0f1774500c85c4cc711e04b1fe17b029eb40/kubepods/burstable/podb12831a5f24211f45b15d52bfaf97341/51a22942ff3189a6bbbe0560207a5af9bb13a7f542e389e3c642fecc69e5fb7b/freezer.state
	I0923 10:40:38.464632   95665 api_server.go:204] freezer state: "THAWED"
	I0923 10:40:38.464658   95665 api_server.go:253] Checking apiserver healthz at https://192.168.49.254:8443/healthz ...
	I0923 10:40:38.468167   95665 api_server.go:279] https://192.168.49.254:8443/healthz returned 200:
	ok
	I0923 10:40:38.468187   95665 status.go:456] ha-363563 apiserver status = Running (err=<nil>)
	I0923 10:40:38.468198   95665 status.go:176] ha-363563 status: &{Name:ha-363563 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0923 10:40:38.468216   95665 status.go:174] checking status of ha-363563-m02 ...
	I0923 10:40:38.468500   95665 cli_runner.go:164] Run: docker container inspect ha-363563-m02 --format={{.State.Status}}
	I0923 10:40:38.486048   95665 status.go:364] ha-363563-m02 host status = "Stopped" (err=<nil>)
	I0923 10:40:38.486065   95665 status.go:377] host is not running, skipping remaining checks
	I0923 10:40:38.486070   95665 status.go:176] ha-363563-m02 status: &{Name:ha-363563-m02 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0923 10:40:38.486089   95665 status.go:174] checking status of ha-363563-m03 ...
	I0923 10:40:38.486301   95665 cli_runner.go:164] Run: docker container inspect ha-363563-m03 --format={{.State.Status}}
	I0923 10:40:38.502645   95665 status.go:364] ha-363563-m03 host status = "Running" (err=<nil>)
	I0923 10:40:38.502663   95665 host.go:66] Checking if "ha-363563-m03" exists ...
	I0923 10:40:38.502945   95665 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-363563-m03
	I0923 10:40:38.518148   95665 host.go:66] Checking if "ha-363563-m03" exists ...
	I0923 10:40:38.518369   95665 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0923 10:40:38.518401   95665 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-363563-m03
	I0923 10:40:38.534259   95665 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32793 SSHKeyPath:/home/jenkins/minikube-integration/19689-3716/.minikube/machines/ha-363563-m03/id_rsa Username:docker}
	I0923 10:40:38.623349   95665 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0923 10:40:38.633598   95665 kubeconfig.go:125] found "ha-363563" server: "https://192.168.49.254:8443"
	I0923 10:40:38.633632   95665 api_server.go:166] Checking apiserver status ...
	I0923 10:40:38.633665   95665 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0923 10:40:38.643383   95665 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/2267/cgroup
	I0923 10:40:38.651347   95665 api_server.go:182] apiserver freezer: "4:freezer:/docker/a33b07b89b858d66d51b0a0a47ba41f4a61ce6174f66fe144e77f43c460eab8b/kubepods/burstable/podadfc70cf7f73006ff1b2ea6e10c06b62/be19e733ed078a4800f1db73677937153a9bba72e9abb766a0738190463099fa"
	I0923 10:40:38.651393   95665 ssh_runner.go:195] Run: sudo cat /sys/fs/cgroup/freezer/docker/a33b07b89b858d66d51b0a0a47ba41f4a61ce6174f66fe144e77f43c460eab8b/kubepods/burstable/podadfc70cf7f73006ff1b2ea6e10c06b62/be19e733ed078a4800f1db73677937153a9bba72e9abb766a0738190463099fa/freezer.state
	I0923 10:40:38.658731   95665 api_server.go:204] freezer state: "THAWED"
	I0923 10:40:38.658753   95665 api_server.go:253] Checking apiserver healthz at https://192.168.49.254:8443/healthz ...
	I0923 10:40:38.662196   95665 api_server.go:279] https://192.168.49.254:8443/healthz returned 200:
	ok
	I0923 10:40:38.662219   95665 status.go:456] ha-363563-m03 apiserver status = Running (err=<nil>)
	I0923 10:40:38.662230   95665 status.go:176] ha-363563-m03 status: &{Name:ha-363563-m03 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0923 10:40:38.662252   95665 status.go:174] checking status of ha-363563-m04 ...
	I0923 10:40:38.662467   95665 cli_runner.go:164] Run: docker container inspect ha-363563-m04 --format={{.State.Status}}
	I0923 10:40:38.680224   95665 status.go:364] ha-363563-m04 host status = "Running" (err=<nil>)
	I0923 10:40:38.680246   95665 host.go:66] Checking if "ha-363563-m04" exists ...
	I0923 10:40:38.681030   95665 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-363563-m04
	I0923 10:40:38.697402   95665 host.go:66] Checking if "ha-363563-m04" exists ...
	I0923 10:40:38.697713   95665 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0923 10:40:38.697748   95665 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-363563-m04
	I0923 10:40:38.714429   95665 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32798 SSHKeyPath:/home/jenkins/minikube-integration/19689-3716/.minikube/machines/ha-363563-m04/id_rsa Username:docker}
	I0923 10:40:38.807419   95665 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0923 10:40:38.817362   95665 status.go:176] ha-363563-m04 status: &{Name:ha-363563-m04 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
--- PASS: TestMultiControlPlane/serial/StopSecondaryNode (11.36s)
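
Note: with m02 stopped, status deliberately exits with code 7 while still printing per-node state, so the Non-zero exit above is the expected outcome, not a failure. By hand:

    out/minikube-linux-amd64 -p ha-363563 node stop m02 -v=7 --alsologtostderr
    out/minikube-linux-amd64 -p ha-363563 status -v=7 --alsologtostderr; echo "status exit: $?"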

TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop (0.66s)

=== RUN   TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop
ha_test.go:390: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop (0.66s)

TestMultiControlPlane/serial/RestartSecondaryNode (35.59s)

=== RUN   TestMultiControlPlane/serial/RestartSecondaryNode
ha_test.go:420: (dbg) Run:  out/minikube-linux-amd64 -p ha-363563 node start m02 -v=7 --alsologtostderr
ha_test.go:420: (dbg) Done: out/minikube-linux-amd64 -p ha-363563 node start m02 -v=7 --alsologtostderr: (34.662949387s)
ha_test.go:428: (dbg) Run:  out/minikube-linux-amd64 -p ha-363563 status -v=7 --alsologtostderr
ha_test.go:448: (dbg) Run:  kubectl get nodes
--- PASS: TestMultiControlPlane/serial/RestartSecondaryNode (35.59s)

TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart (0.99s)

=== RUN   TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart
ha_test.go:281: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart (0.99s)

TestMultiControlPlane/serial/RestartClusterKeepsNodes (200.67s)

=== RUN   TestMultiControlPlane/serial/RestartClusterKeepsNodes
ha_test.go:456: (dbg) Run:  out/minikube-linux-amd64 node list -p ha-363563 -v=7 --alsologtostderr
ha_test.go:462: (dbg) Run:  out/minikube-linux-amd64 stop -p ha-363563 -v=7 --alsologtostderr
ha_test.go:462: (dbg) Done: out/minikube-linux-amd64 stop -p ha-363563 -v=7 --alsologtostderr: (33.807084507s)
ha_test.go:467: (dbg) Run:  out/minikube-linux-amd64 start -p ha-363563 --wait=true -v=7 --alsologtostderr
E0923 10:41:54.599534   10524 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19689-3716/.minikube/profiles/functional-001676/client.crt: no such file or directory" logger="UnhandledError"
E0923 10:41:54.605891   10524 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19689-3716/.minikube/profiles/functional-001676/client.crt: no such file or directory" logger="UnhandledError"
E0923 10:41:54.618058   10524 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19689-3716/.minikube/profiles/functional-001676/client.crt: no such file or directory" logger="UnhandledError"
E0923 10:41:54.639417   10524 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19689-3716/.minikube/profiles/functional-001676/client.crt: no such file or directory" logger="UnhandledError"
E0923 10:41:54.680792   10524 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19689-3716/.minikube/profiles/functional-001676/client.crt: no such file or directory" logger="UnhandledError"
E0923 10:41:54.762550   10524 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19689-3716/.minikube/profiles/functional-001676/client.crt: no such file or directory" logger="UnhandledError"
E0923 10:41:54.924293   10524 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19689-3716/.minikube/profiles/functional-001676/client.crt: no such file or directory" logger="UnhandledError"
E0923 10:41:55.245523   10524 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19689-3716/.minikube/profiles/functional-001676/client.crt: no such file or directory" logger="UnhandledError"
E0923 10:41:55.887590   10524 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19689-3716/.minikube/profiles/functional-001676/client.crt: no such file or directory" logger="UnhandledError"
E0923 10:41:56.673159   10524 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19689-3716/.minikube/profiles/addons-071702/client.crt: no such file or directory" logger="UnhandledError"
E0923 10:41:57.169151   10524 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19689-3716/.minikube/profiles/functional-001676/client.crt: no such file or directory" logger="UnhandledError"
E0923 10:41:59.730810   10524 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19689-3716/.minikube/profiles/functional-001676/client.crt: no such file or directory" logger="UnhandledError"
E0923 10:42:04.853093   10524 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19689-3716/.minikube/profiles/functional-001676/client.crt: no such file or directory" logger="UnhandledError"
E0923 10:42:15.094389   10524 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19689-3716/.minikube/profiles/functional-001676/client.crt: no such file or directory" logger="UnhandledError"
E0923 10:42:35.576773   10524 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19689-3716/.minikube/profiles/functional-001676/client.crt: no such file or directory" logger="UnhandledError"
E0923 10:43:16.538162   10524 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19689-3716/.minikube/profiles/functional-001676/client.crt: no such file or directory" logger="UnhandledError"
E0923 10:44:12.815274   10524 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19689-3716/.minikube/profiles/addons-071702/client.crt: no such file or directory" logger="UnhandledError"
ha_test.go:467: (dbg) Done: out/minikube-linux-amd64 start -p ha-363563 --wait=true -v=7 --alsologtostderr: (2m46.775637147s)
ha_test.go:472: (dbg) Run:  out/minikube-linux-amd64 node list -p ha-363563
--- PASS: TestMultiControlPlane/serial/RestartClusterKeepsNodes (200.67s)

TestMultiControlPlane/serial/DeleteSecondaryNode (9.24s)

=== RUN   TestMultiControlPlane/serial/DeleteSecondaryNode
ha_test.go:487: (dbg) Run:  out/minikube-linux-amd64 -p ha-363563 node delete m03 -v=7 --alsologtostderr
E0923 10:44:38.459909   10524 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19689-3716/.minikube/profiles/functional-001676/client.crt: no such file or directory" logger="UnhandledError"
E0923 10:44:40.515423   10524 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19689-3716/.minikube/profiles/addons-071702/client.crt: no such file or directory" logger="UnhandledError"
ha_test.go:487: (dbg) Done: out/minikube-linux-amd64 -p ha-363563 node delete m03 -v=7 --alsologtostderr: (8.488285242s)
ha_test.go:493: (dbg) Run:  out/minikube-linux-amd64 -p ha-363563 status -v=7 --alsologtostderr
ha_test.go:511: (dbg) Run:  kubectl get nodes
ha_test.go:519: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiControlPlane/serial/DeleteSecondaryNode (9.24s)

TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete (0.64s)

=== RUN   TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete
ha_test.go:390: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete (0.64s)

TestMultiControlPlane/serial/StopCluster (32.5s)

=== RUN   TestMultiControlPlane/serial/StopCluster
ha_test.go:531: (dbg) Run:  out/minikube-linux-amd64 -p ha-363563 stop -v=7 --alsologtostderr
ha_test.go:531: (dbg) Done: out/minikube-linux-amd64 -p ha-363563 stop -v=7 --alsologtostderr: (32.400206719s)
ha_test.go:537: (dbg) Run:  out/minikube-linux-amd64 -p ha-363563 status -v=7 --alsologtostderr
ha_test.go:537: (dbg) Non-zero exit: out/minikube-linux-amd64 -p ha-363563 status -v=7 --alsologtostderr: exit status 7 (101.730772ms)

-- stdout --
	ha-363563
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	ha-363563-m02
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	ha-363563-m04
	type: Worker
	host: Stopped
	kubelet: Stopped
	
-- /stdout --
** stderr ** 
	I0923 10:45:19.041469  125929 out.go:345] Setting OutFile to fd 1 ...
	I0923 10:45:19.041609  125929 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0923 10:45:19.041619  125929 out.go:358] Setting ErrFile to fd 2...
	I0923 10:45:19.041624  125929 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0923 10:45:19.041825  125929 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19689-3716/.minikube/bin
	I0923 10:45:19.042101  125929 out.go:352] Setting JSON to false
	I0923 10:45:19.042134  125929 mustload.go:65] Loading cluster: ha-363563
	I0923 10:45:19.042237  125929 notify.go:220] Checking for updates...
	I0923 10:45:19.042606  125929 config.go:182] Loaded profile config "ha-363563": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I0923 10:45:19.042627  125929 status.go:174] checking status of ha-363563 ...
	I0923 10:45:19.043078  125929 cli_runner.go:164] Run: docker container inspect ha-363563 --format={{.State.Status}}
	I0923 10:45:19.060858  125929 status.go:364] ha-363563 host status = "Stopped" (err=<nil>)
	I0923 10:45:19.060885  125929 status.go:377] host is not running, skipping remaining checks
	I0923 10:45:19.060892  125929 status.go:176] ha-363563 status: &{Name:ha-363563 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0923 10:45:19.060916  125929 status.go:174] checking status of ha-363563-m02 ...
	I0923 10:45:19.061190  125929 cli_runner.go:164] Run: docker container inspect ha-363563-m02 --format={{.State.Status}}
	I0923 10:45:19.077686  125929 status.go:364] ha-363563-m02 host status = "Stopped" (err=<nil>)
	I0923 10:45:19.077711  125929 status.go:377] host is not running, skipping remaining checks
	I0923 10:45:19.077719  125929 status.go:176] ha-363563-m02 status: &{Name:ha-363563-m02 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0923 10:45:19.077757  125929 status.go:174] checking status of ha-363563-m04 ...
	I0923 10:45:19.078113  125929 cli_runner.go:164] Run: docker container inspect ha-363563-m04 --format={{.State.Status}}
	I0923 10:45:19.098269  125929 status.go:364] ha-363563-m04 host status = "Stopped" (err=<nil>)
	I0923 10:45:19.098310  125929 status.go:377] host is not running, skipping remaining checks
	I0923 10:45:19.098319  125929 status.go:176] ha-363563-m04 status: &{Name:ha-363563-m04 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}
** /stderr **
--- PASS: TestMultiControlPlane/serial/StopCluster (32.50s)
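
The exit status 7 above is the expected result here, not a failure: minikube's status help (an upstream claim, not shown in this log) describes the exit code as component health encoded bit by bit, 1 for the host, 2 for the cluster, 4 for Kubernetes, so 7 right after a stop means all three are down. A small illustrative decoder, assuming that encoding:

	// Illustrative decoder for the exit status seen above. Assumption: the
	// exit code is a bitmask (1 = host NOK, 2 = cluster NOK, 4 = kubernetes
	// NOK), per minikube's status help text.
	package main

	import "fmt"

	func decodeStatusExit(code int) {
		fmt.Printf("host NOK:       %v\n", code&1 != 0)
		fmt.Printf("cluster NOK:    %v\n", code&2 != 0)
		fmt.Printf("kubernetes NOK: %v\n", code&4 != 0)
	}

	func main() {
		decodeStatusExit(7) // prints true / true / true
	}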

TestMultiControlPlane/serial/RestartCluster (57.87s)

=== RUN   TestMultiControlPlane/serial/RestartCluster
ha_test.go:560: (dbg) Run:  out/minikube-linux-amd64 start -p ha-363563 --wait=true -v=7 --alsologtostderr --driver=docker  --container-runtime=docker
ha_test.go:560: (dbg) Done: out/minikube-linux-amd64 start -p ha-363563 --wait=true -v=7 --alsologtostderr --driver=docker  --container-runtime=docker: (56.831872997s)
ha_test.go:566: (dbg) Run:  out/minikube-linux-amd64 -p ha-363563 status -v=7 --alsologtostderr
ha_test.go:584: (dbg) Run:  kubectl get nodes
ha_test.go:592: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiControlPlane/serial/RestartCluster (57.87s)

TestMultiControlPlane/serial/DegradedAfterClusterRestart (0.87s)

=== RUN   TestMultiControlPlane/serial/DegradedAfterClusterRestart
ha_test.go:390: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiControlPlane/serial/DegradedAfterClusterRestart (0.87s)

TestMultiControlPlane/serial/AddSecondaryNode (44.88s)

=== RUN   TestMultiControlPlane/serial/AddSecondaryNode
ha_test.go:605: (dbg) Run:  out/minikube-linux-amd64 node add -p ha-363563 --control-plane -v=7 --alsologtostderr
E0923 10:46:54.599521   10524 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19689-3716/.minikube/profiles/functional-001676/client.crt: no such file or directory" logger="UnhandledError"
ha_test.go:605: (dbg) Done: out/minikube-linux-amd64 node add -p ha-363563 --control-plane -v=7 --alsologtostderr: (44.069133767s)
ha_test.go:611: (dbg) Run:  out/minikube-linux-amd64 -p ha-363563 status -v=7 --alsologtostderr
--- PASS: TestMultiControlPlane/serial/AddSecondaryNode (44.88s)

TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd (0.8s)

=== RUN   TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd
ha_test.go:281: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd (0.80s)

TestImageBuild/serial/Setup (23.82s)

=== RUN   TestImageBuild/serial/Setup
image_test.go:69: (dbg) Run:  out/minikube-linux-amd64 start -p image-375904 --driver=docker  --container-runtime=docker
E0923 10:47:22.301243   10524 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19689-3716/.minikube/profiles/functional-001676/client.crt: no such file or directory" logger="UnhandledError"
image_test.go:69: (dbg) Done: out/minikube-linux-amd64 start -p image-375904 --driver=docker  --container-runtime=docker: (23.817688059s)
--- PASS: TestImageBuild/serial/Setup (23.82s)

TestImageBuild/serial/NormalBuild (1.71s)

=== RUN   TestImageBuild/serial/NormalBuild
image_test.go:78: (dbg) Run:  out/minikube-linux-amd64 image build -t aaa:latest ./testdata/image-build/test-normal -p image-375904
image_test.go:78: (dbg) Done: out/minikube-linux-amd64 image build -t aaa:latest ./testdata/image-build/test-normal -p image-375904: (1.714247211s)
--- PASS: TestImageBuild/serial/NormalBuild (1.71s)

TestImageBuild/serial/BuildWithBuildArg (0.9s)

=== RUN   TestImageBuild/serial/BuildWithBuildArg
image_test.go:99: (dbg) Run:  out/minikube-linux-amd64 image build -t aaa:latest --build-opt=build-arg=ENV_A=test_env_str --build-opt=no-cache ./testdata/image-build/test-arg -p image-375904
--- PASS: TestImageBuild/serial/BuildWithBuildArg (0.90s)

TestImageBuild/serial/BuildWithDockerIgnore (0.66s)

=== RUN   TestImageBuild/serial/BuildWithDockerIgnore
image_test.go:133: (dbg) Run:  out/minikube-linux-amd64 image build -t aaa:latest ./testdata/image-build/test-normal --build-opt=no-cache -p image-375904
--- PASS: TestImageBuild/serial/BuildWithDockerIgnore (0.66s)

TestImageBuild/serial/BuildWithSpecifiedDockerfile (0.77s)

=== RUN   TestImageBuild/serial/BuildWithSpecifiedDockerfile
image_test.go:88: (dbg) Run:  out/minikube-linux-amd64 image build -t aaa:latest -f inner/Dockerfile ./testdata/image-build/test-f -p image-375904
--- PASS: TestImageBuild/serial/BuildWithSpecifiedDockerfile (0.77s)

TestJSONOutput/start/Command (36.83s)

=== RUN   TestJSONOutput/start/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-amd64 start -p json-output-123185 --output=json --user=testUser --memory=2200 --wait=true --driver=docker  --container-runtime=docker
json_output_test.go:63: (dbg) Done: out/minikube-linux-amd64 start -p json-output-123185 --output=json --user=testUser --memory=2200 --wait=true --driver=docker  --container-runtime=docker: (36.829995173s)
--- PASS: TestJSONOutput/start/Command (36.83s)

TestJSONOutput/start/Audit (0s)

=== RUN   TestJSONOutput/start/Audit
--- PASS: TestJSONOutput/start/Audit (0.00s)

TestJSONOutput/start/parallel/DistinctCurrentSteps (0s)

=== RUN   TestJSONOutput/start/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/start/parallel/DistinctCurrentSteps
=== CONT  TestJSONOutput/start/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/start/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/start/parallel/IncreasingCurrentSteps (0s)

=== RUN   TestJSONOutput/start/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/start/parallel/IncreasingCurrentSteps
=== CONT  TestJSONOutput/start/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/start/parallel/IncreasingCurrentSteps (0.00s)

TestJSONOutput/pause/Command (0.53s)

=== RUN   TestJSONOutput/pause/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-amd64 pause -p json-output-123185 --output=json --user=testUser
--- PASS: TestJSONOutput/pause/Command (0.53s)

TestJSONOutput/pause/Audit (0s)

=== RUN   TestJSONOutput/pause/Audit
--- PASS: TestJSONOutput/pause/Audit (0.00s)

TestJSONOutput/pause/parallel/DistinctCurrentSteps (0s)

=== RUN   TestJSONOutput/pause/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/pause/parallel/DistinctCurrentSteps
=== CONT  TestJSONOutput/pause/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/pause/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/pause/parallel/IncreasingCurrentSteps (0s)

=== RUN   TestJSONOutput/pause/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/pause/parallel/IncreasingCurrentSteps
=== CONT  TestJSONOutput/pause/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/pause/parallel/IncreasingCurrentSteps (0.00s)

TestJSONOutput/unpause/Command (0.41s)

=== RUN   TestJSONOutput/unpause/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-amd64 unpause -p json-output-123185 --output=json --user=testUser
--- PASS: TestJSONOutput/unpause/Command (0.41s)

TestJSONOutput/unpause/Audit (0s)

=== RUN   TestJSONOutput/unpause/Audit
--- PASS: TestJSONOutput/unpause/Audit (0.00s)

TestJSONOutput/unpause/parallel/DistinctCurrentSteps (0s)

=== RUN   TestJSONOutput/unpause/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/unpause/parallel/DistinctCurrentSteps
=== CONT  TestJSONOutput/unpause/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/unpause/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/unpause/parallel/IncreasingCurrentSteps (0s)

=== RUN   TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
=== CONT  TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/unpause/parallel/IncreasingCurrentSteps (0.00s)

TestJSONOutput/stop/Command (10.78s)

=== RUN   TestJSONOutput/stop/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-amd64 stop -p json-output-123185 --output=json --user=testUser
json_output_test.go:63: (dbg) Done: out/minikube-linux-amd64 stop -p json-output-123185 --output=json --user=testUser: (10.778934799s)
--- PASS: TestJSONOutput/stop/Command (10.78s)

TestJSONOutput/stop/Audit (0s)

=== RUN   TestJSONOutput/stop/Audit
--- PASS: TestJSONOutput/stop/Audit (0.00s)

TestJSONOutput/stop/parallel/DistinctCurrentSteps (0s)

=== RUN   TestJSONOutput/stop/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/stop/parallel/DistinctCurrentSteps
=== CONT  TestJSONOutput/stop/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/stop/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/stop/parallel/IncreasingCurrentSteps (0s)

=== RUN   TestJSONOutput/stop/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/stop/parallel/IncreasingCurrentSteps
=== CONT  TestJSONOutput/stop/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/stop/parallel/IncreasingCurrentSteps (0.00s)

TestErrorJSONOutput (0.19s)

=== RUN   TestErrorJSONOutput
json_output_test.go:160: (dbg) Run:  out/minikube-linux-amd64 start -p json-output-error-519959 --memory=2200 --output=json --wait=true --driver=fail
json_output_test.go:160: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p json-output-error-519959 --memory=2200 --output=json --wait=true --driver=fail: exit status 56 (59.772825ms)
-- stdout --
	{"specversion":"1.0","id":"56b3f63f-7b51-42e5-a6e8-cac3a96e4627","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"0","message":"[json-output-error-519959] minikube v1.34.0 on Ubuntu 20.04 (kvm/amd64)","name":"Initial Minikube Setup","totalsteps":"19"}}
	{"specversion":"1.0","id":"7069c6b9-d3d5-4ae5-8518-78a27e609ea9","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_LOCATION=19689"}}
	{"specversion":"1.0","id":"ef5ad27c-80b5-413e-acf7-4b2c3e08f5fa","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true"}}
	{"specversion":"1.0","id":"82d71cff-087e-4368-aaa0-bb36db67cb33","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"KUBECONFIG=/home/jenkins/minikube-integration/19689-3716/kubeconfig"}}
	{"specversion":"1.0","id":"d7b1621a-46f7-4ea1-a410-af311b661c79","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_HOME=/home/jenkins/minikube-integration/19689-3716/.minikube"}}
	{"specversion":"1.0","id":"df0e3680-d299-4d80-a7d9-20daf8fe2322","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_BIN=out/minikube-linux-amd64"}}
	{"specversion":"1.0","id":"d6149d4d-7b38-4964-836e-1f2850a3f5ed","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_FORCE_SYSTEMD="}}
	{"specversion":"1.0","id":"ee91d73d-e581-4a2c-9a9e-630c6c14647b","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"advice":"","exitcode":"56","issues":"","message":"The driver 'fail' is not supported on linux/amd64","name":"DRV_UNSUPPORTED_OS","url":""}}
-- /stdout --
helpers_test.go:175: Cleaning up "json-output-error-519959" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p json-output-error-519959
--- PASS: TestErrorJSONOutput (0.19s)
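
Each line in the stdout block above is a CloudEvents v1.0 envelope; the final io.k8s.sigs.minikube.error event carries the DRV_UNSUPPORTED_OS name and exit code 56 that the test asserts on. A minimal decoding sketch, with an illustrative struct rather than minikube's own type:

	// Minimal sketch for decoding one line of the --output=json stream; the
	// struct is illustrative, not minikube's own type. "data" is a flat
	// string map whose keys vary by event type (step, info, error, ...).
	package main

	import (
		"encoding/json"
		"fmt"
	)

	type cloudEvent struct {
		SpecVersion     string            `json:"specversion"`
		ID              string            `json:"id"`
		Source          string            `json:"source"`
		Type            string            `json:"type"`
		DataContentType string            `json:"datacontenttype"`
		Data            map[string]string `json:"data"`
	}

	func main() {
		// The error event from the stdout block above, verbatim.
		line := `{"specversion":"1.0","id":"ee91d73d-e581-4a2c-9a9e-630c6c14647b","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"advice":"","exitcode":"56","issues":"","message":"The driver 'fail' is not supported on linux/amd64","name":"DRV_UNSUPPORTED_OS","url":""}}`
		var ev cloudEvent
		if err := json.Unmarshal([]byte(line), &ev); err != nil {
			panic(err)
		}
		fmt.Println(ev.Type, ev.Data["name"], ev.Data["exitcode"])
	}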

TestKicCustomNetwork/create_custom_network (25.76s)

=== RUN   TestKicCustomNetwork/create_custom_network
kic_custom_network_test.go:57: (dbg) Run:  out/minikube-linux-amd64 start -p docker-network-417216 --network=
kic_custom_network_test.go:57: (dbg) Done: out/minikube-linux-amd64 start -p docker-network-417216 --network=: (23.711131516s)
kic_custom_network_test.go:150: (dbg) Run:  docker network ls --format {{.Name}}
helpers_test.go:175: Cleaning up "docker-network-417216" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p docker-network-417216
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p docker-network-417216: (2.032066261s)
--- PASS: TestKicCustomNetwork/create_custom_network (25.76s)

TestKicCustomNetwork/use_default_bridge_network (26.06s)

=== RUN   TestKicCustomNetwork/use_default_bridge_network
kic_custom_network_test.go:57: (dbg) Run:  out/minikube-linux-amd64 start -p docker-network-093841 --network=bridge
E0923 10:49:12.814881   10524 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19689-3716/.minikube/profiles/addons-071702/client.crt: no such file or directory" logger="UnhandledError"
kic_custom_network_test.go:57: (dbg) Done: out/minikube-linux-amd64 start -p docker-network-093841 --network=bridge: (24.157901775s)
kic_custom_network_test.go:150: (dbg) Run:  docker network ls --format {{.Name}}
helpers_test.go:175: Cleaning up "docker-network-093841" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p docker-network-093841
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p docker-network-093841: (1.884053963s)
--- PASS: TestKicCustomNetwork/use_default_bridge_network (26.06s)

TestKicExistingNetwork (24.65s)

=== RUN   TestKicExistingNetwork
I0923 10:49:23.042281   10524 cli_runner.go:164] Run: docker network inspect existing-network --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
W0923 10:49:23.059324   10524 cli_runner.go:211] docker network inspect existing-network --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
I0923 10:49:23.059411   10524 network_create.go:284] running [docker network inspect existing-network] to gather additional debugging logs...
I0923 10:49:23.059434   10524 cli_runner.go:164] Run: docker network inspect existing-network
W0923 10:49:23.074373   10524 cli_runner.go:211] docker network inspect existing-network returned with exit code 1
I0923 10:49:23.074400   10524 network_create.go:287] error running [docker network inspect existing-network]: docker network inspect existing-network: exit status 1
stdout:
[]
stderr:
Error response from daemon: network existing-network not found
I0923 10:49:23.074413   10524 network_create.go:289] output of [docker network inspect existing-network]: -- stdout --
[]
-- /stdout --
** stderr ** 
Error response from daemon: network existing-network not found
** /stderr **
I0923 10:49:23.074549   10524 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
I0923 10:49:23.090082   10524 network.go:211] skipping subnet 192.168.49.0/24 that is taken: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName:br-b975ebfa2481 IfaceIPv4:192.168.49.1 IfaceMTU:1500 IfaceMAC:02:42:32:62:df:73} reservation:<nil>}
I0923 10:49:23.090486   10524 network.go:206] using free private subnet 192.168.58.0/24: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc00149fcc0}
I0923 10:49:23.090507   10524 network_create.go:124] attempt to create docker network existing-network 192.168.58.0/24 with gateway 192.168.58.1 and MTU of 1500 ...
I0923 10:49:23.090542   10524 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.58.0/24 --gateway=192.168.58.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=existing-network existing-network
I0923 10:49:23.148519   10524 network_create.go:108] docker network existing-network 192.168.58.0/24 created
kic_custom_network_test.go:150: (dbg) Run:  docker network ls --format {{.Name}}
kic_custom_network_test.go:93: (dbg) Run:  out/minikube-linux-amd64 start -p existing-network-540618 --network=existing-network
kic_custom_network_test.go:93: (dbg) Done: out/minikube-linux-amd64 start -p existing-network-540618 --network=existing-network: (22.636944161s)
helpers_test.go:175: Cleaning up "existing-network-540618" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p existing-network-540618
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p existing-network-540618: (1.873322187s)
I0923 10:49:47.675213   10524 cli_runner.go:164] Run: docker network ls --filter=label=existing-network --format {{.Name}}
--- PASS: TestKicExistingNetwork (24.65s)
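
The trace above shows the subnet probe: 192.168.49.0/24 is skipped because it is already attached to interface br-b975ebfa2481, the first free candidate 192.168.58.0/24 is chosen, and the bridge network is created with that range. A simplified sketch of that first-free-/24 scan, under the assumptions noted in the comments:

	// Simplified sketch, assuming only local interface addresses need
	// checking (minikube also consults existing docker networks).
	// subnetTaken is an illustrative helper, not minikube's network package.
	package main

	import (
		"fmt"
		"net"
	)

	func subnetTaken(candidate *net.IPNet) bool {
		addrs, err := net.InterfaceAddrs()
		if err != nil {
			return true // be conservative on error
		}
		for _, a := range addrs {
			ipn, ok := a.(*net.IPNet)
			if ok && (candidate.Contains(ipn.IP) || ipn.Contains(candidate.IP)) {
				return true
			}
		}
		return false
	}

	func main() {
		// Stepping the third octet by 9 reproduces the .49 / .58 / .67
		// subnets that appear throughout this report.
		for third := 49; third < 256; third += 9 {
			_, candidate, _ := net.ParseCIDR(fmt.Sprintf("192.168.%d.0/24", third))
			if !subnetTaken(candidate) {
				fmt.Println("using free private subnet:", candidate)
				return
			}
		}
		fmt.Println("no free /24 found")
	}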

TestKicCustomSubnet (22.81s)

=== RUN   TestKicCustomSubnet
kic_custom_network_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p custom-subnet-959124 --subnet=192.168.60.0/24
kic_custom_network_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p custom-subnet-959124 --subnet=192.168.60.0/24: (20.739801693s)
kic_custom_network_test.go:161: (dbg) Run:  docker network inspect custom-subnet-959124 --format "{{(index .IPAM.Config 0).Subnet}}"
helpers_test.go:175: Cleaning up "custom-subnet-959124" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p custom-subnet-959124
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p custom-subnet-959124: (2.048916971s)
--- PASS: TestKicCustomSubnet (22.81s)

TestKicStaticIP (26.97s)

=== RUN   TestKicStaticIP
kic_custom_network_test.go:132: (dbg) Run:  out/minikube-linux-amd64 start -p static-ip-722418 --static-ip=192.168.200.200
kic_custom_network_test.go:132: (dbg) Done: out/minikube-linux-amd64 start -p static-ip-722418 --static-ip=192.168.200.200: (24.828276955s)
kic_custom_network_test.go:138: (dbg) Run:  out/minikube-linux-amd64 -p static-ip-722418 ip
helpers_test.go:175: Cleaning up "static-ip-722418" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p static-ip-722418
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p static-ip-722418: (2.029112605s)
--- PASS: TestKicStaticIP (26.97s)

TestMainNoArgs (0.04s)

=== RUN   TestMainNoArgs
main_test.go:68: (dbg) Run:  out/minikube-linux-amd64
--- PASS: TestMainNoArgs (0.04s)

TestMinikubeProfile (49.02s)

=== RUN   TestMinikubeProfile
minikube_profile_test.go:44: (dbg) Run:  out/minikube-linux-amd64 start -p first-967131 --driver=docker  --container-runtime=docker
minikube_profile_test.go:44: (dbg) Done: out/minikube-linux-amd64 start -p first-967131 --driver=docker  --container-runtime=docker: (20.43876814s)
minikube_profile_test.go:44: (dbg) Run:  out/minikube-linux-amd64 start -p second-976963 --driver=docker  --container-runtime=docker
minikube_profile_test.go:44: (dbg) Done: out/minikube-linux-amd64 start -p second-976963 --driver=docker  --container-runtime=docker: (23.573697868s)
minikube_profile_test.go:51: (dbg) Run:  out/minikube-linux-amd64 profile first-967131
minikube_profile_test.go:55: (dbg) Run:  out/minikube-linux-amd64 profile list -ojson
minikube_profile_test.go:51: (dbg) Run:  out/minikube-linux-amd64 profile second-976963
minikube_profile_test.go:55: (dbg) Run:  out/minikube-linux-amd64 profile list -ojson
helpers_test.go:175: Cleaning up "second-976963" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p second-976963
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p second-976963: (1.93491506s)
helpers_test.go:175: Cleaning up "first-967131" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p first-967131
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p first-967131: (1.978907028s)
--- PASS: TestMinikubeProfile (49.02s)

TestMountStart/serial/StartWithMountFirst (6.64s)

=== RUN   TestMountStart/serial/StartWithMountFirst
mount_start_test.go:98: (dbg) Run:  out/minikube-linux-amd64 start -p mount-start-1-887419 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=docker  --container-runtime=docker
mount_start_test.go:98: (dbg) Done: out/minikube-linux-amd64 start -p mount-start-1-887419 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=docker  --container-runtime=docker: (5.639938339s)
--- PASS: TestMountStart/serial/StartWithMountFirst (6.64s)

TestMountStart/serial/VerifyMountFirst (0.24s)

=== RUN   TestMountStart/serial/VerifyMountFirst
mount_start_test.go:114: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-1-887419 ssh -- ls /minikube-host
--- PASS: TestMountStart/serial/VerifyMountFirst (0.24s)

TestMountStart/serial/StartWithMountSecond (9.43s)

=== RUN   TestMountStart/serial/StartWithMountSecond
mount_start_test.go:98: (dbg) Run:  out/minikube-linux-amd64 start -p mount-start-2-899382 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46465 --mount-uid 0 --no-kubernetes --driver=docker  --container-runtime=docker
mount_start_test.go:98: (dbg) Done: out/minikube-linux-amd64 start -p mount-start-2-899382 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46465 --mount-uid 0 --no-kubernetes --driver=docker  --container-runtime=docker: (8.429002433s)
--- PASS: TestMountStart/serial/StartWithMountSecond (9.43s)

TestMountStart/serial/VerifyMountSecond (0.23s)

=== RUN   TestMountStart/serial/VerifyMountSecond
mount_start_test.go:114: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-899382 ssh -- ls /minikube-host
--- PASS: TestMountStart/serial/VerifyMountSecond (0.23s)

TestMountStart/serial/DeleteFirst (1.44s)

=== RUN   TestMountStart/serial/DeleteFirst
pause_test.go:132: (dbg) Run:  out/minikube-linux-amd64 delete -p mount-start-1-887419 --alsologtostderr -v=5
pause_test.go:132: (dbg) Done: out/minikube-linux-amd64 delete -p mount-start-1-887419 --alsologtostderr -v=5: (1.436973093s)
--- PASS: TestMountStart/serial/DeleteFirst (1.44s)

TestMountStart/serial/VerifyMountPostDelete (0.23s)

=== RUN   TestMountStart/serial/VerifyMountPostDelete
mount_start_test.go:114: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-899382 ssh -- ls /minikube-host
--- PASS: TestMountStart/serial/VerifyMountPostDelete (0.23s)

TestMountStart/serial/Stop (1.16s)

=== RUN   TestMountStart/serial/Stop
mount_start_test.go:155: (dbg) Run:  out/minikube-linux-amd64 stop -p mount-start-2-899382
mount_start_test.go:155: (dbg) Done: out/minikube-linux-amd64 stop -p mount-start-2-899382: (1.163813017s)
--- PASS: TestMountStart/serial/Stop (1.16s)

TestMountStart/serial/RestartStopped (7.76s)

=== RUN   TestMountStart/serial/RestartStopped
mount_start_test.go:166: (dbg) Run:  out/minikube-linux-amd64 start -p mount-start-2-899382
mount_start_test.go:166: (dbg) Done: out/minikube-linux-amd64 start -p mount-start-2-899382: (6.759087668s)
--- PASS: TestMountStart/serial/RestartStopped (7.76s)

TestMountStart/serial/VerifyMountPostStop (0.23s)

=== RUN   TestMountStart/serial/VerifyMountPostStop
mount_start_test.go:114: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-899382 ssh -- ls /minikube-host
--- PASS: TestMountStart/serial/VerifyMountPostStop (0.23s)

TestMultiNode/serial/FreshStart2Nodes (59.96s)

=== RUN   TestMultiNode/serial/FreshStart2Nodes
multinode_test.go:96: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-261976 --wait=true --memory=2200 --nodes=2 -v=8 --alsologtostderr --driver=docker  --container-runtime=docker
multinode_test.go:96: (dbg) Done: out/minikube-linux-amd64 start -p multinode-261976 --wait=true --memory=2200 --nodes=2 -v=8 --alsologtostderr --driver=docker  --container-runtime=docker: (59.449711272s)
multinode_test.go:102: (dbg) Run:  out/minikube-linux-amd64 -p multinode-261976 status --alsologtostderr
--- PASS: TestMultiNode/serial/FreshStart2Nodes (59.96s)

TestMultiNode/serial/DeployApp2Nodes (35.98s)

=== RUN   TestMultiNode/serial/DeployApp2Nodes
multinode_test.go:493: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-261976 -- apply -f ./testdata/multinodes/multinode-pod-dns-test.yaml
multinode_test.go:498: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-261976 -- rollout status deployment/busybox
multinode_test.go:498: (dbg) Done: out/minikube-linux-amd64 kubectl -p multinode-261976 -- rollout status deployment/busybox: (2.476671177s)
multinode_test.go:505: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-261976 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:514: expected 2 Pod IPs but got 1 (may be temporary), output: "\n-- stdout --\n\t'10.244.0.3'\n\n-- /stdout --"
I0923 10:52:58.197878   10524 retry.go:31] will retry after 724.305013ms: expected 2 Pod IPs but got 1 (may be temporary), output: "\n-- stdout --\n\t'10.244.0.3'\n\n-- /stdout --"
multinode_test.go:505: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-261976 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:514: expected 2 Pod IPs but got 1 (may be temporary), output: "\n-- stdout --\n\t'10.244.0.3'\n\n-- /stdout --"
I0923 10:52:59.028848   10524 retry.go:31] will retry after 1.101451991s: expected 2 Pod IPs but got 1 (may be temporary), output: "\n-- stdout --\n\t'10.244.0.3'\n\n-- /stdout --"
multinode_test.go:505: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-261976 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:514: expected 2 Pod IPs but got 1 (may be temporary), output: "\n-- stdout --\n\t'10.244.0.3'\n\n-- /stdout --"
I0923 10:53:00.232503   10524 retry.go:31] will retry after 2.381084785s: expected 2 Pod IPs but got 1 (may be temporary), output: "\n-- stdout --\n\t'10.244.0.3'\n\n-- /stdout --"
multinode_test.go:505: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-261976 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:514: expected 2 Pod IPs but got 1 (may be temporary), output: "\n-- stdout --\n\t'10.244.0.3'\n\n-- /stdout --"
I0923 10:53:02.717387   10524 retry.go:31] will retry after 2.078785287s: expected 2 Pod IPs but got 1 (may be temporary), output: "\n-- stdout --\n\t'10.244.0.3'\n\n-- /stdout --"
multinode_test.go:505: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-261976 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:514: expected 2 Pod IPs but got 1 (may be temporary), output: "\n-- stdout --\n\t'10.244.0.3'\n\n-- /stdout --"
I0923 10:53:04.900766   10524 retry.go:31] will retry after 4.255348308s: expected 2 Pod IPs but got 1 (may be temporary), output: "\n-- stdout --\n\t'10.244.0.3'\n\n-- /stdout --"
multinode_test.go:505: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-261976 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:514: expected 2 Pod IPs but got 1 (may be temporary), output: "\n-- stdout --\n\t'10.244.0.3'\n\n-- /stdout --"
I0923 10:53:09.262962   10524 retry.go:31] will retry after 10.623489495s: expected 2 Pod IPs but got 1 (may be temporary), output: "\n-- stdout --\n\t'10.244.0.3'\n\n-- /stdout --"
multinode_test.go:505: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-261976 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:514: expected 2 Pod IPs but got 1 (may be temporary), output: "\n-- stdout --\n\t'10.244.0.3'\n\n-- /stdout --"
I0923 10:53:19.995178   10524 retry.go:31] will retry after 10.231386802s: expected 2 Pod IPs but got 1 (may be temporary), output: "\n-- stdout --\n\t'10.244.0.3'\n\n-- /stdout --"
multinode_test.go:505: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-261976 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:528: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-261976 -- get pods -o jsonpath='{.items[*].metadata.name}'
multinode_test.go:536: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-261976 -- exec busybox-7dff88458-5w62z -- nslookup kubernetes.io
multinode_test.go:536: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-261976 -- exec busybox-7dff88458-bw2mb -- nslookup kubernetes.io
multinode_test.go:546: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-261976 -- exec busybox-7dff88458-5w62z -- nslookup kubernetes.default
multinode_test.go:546: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-261976 -- exec busybox-7dff88458-bw2mb -- nslookup kubernetes.default
multinode_test.go:554: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-261976 -- exec busybox-7dff88458-5w62z -- nslookup kubernetes.default.svc.cluster.local
multinode_test.go:554: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-261976 -- exec busybox-7dff88458-bw2mb -- nslookup kubernetes.default.svc.cluster.local
--- PASS: TestMultiNode/serial/DeployApp2Nodes (35.98s)
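
The retry loop above (retry.go: "will retry after ...") polls the pod IPs with roughly doubling, jittered delays until both busybox replicas report an address. A generic sketch of that pattern, not minikube's pkg/util/retry:

	// Generic reimplementation of the retry pattern in the log: re-run a
	// check with roughly doubling, jittered delays until it passes or the
	// deadline expires.
	package main

	import (
		"errors"
		"fmt"
		"math/rand"
		"time"
	)

	func retryWithBackoff(f func() error, deadline time.Duration) error {
		delay := 500 * time.Millisecond
		end := time.Now().Add(deadline)
		for {
			err := f()
			if err == nil {
				return nil
			}
			if time.Now().After(end) {
				return fmt.Errorf("deadline exceeded: %w", err)
			}
			// Jitter keeps parallel tests from retrying in lockstep.
			sleep := delay + time.Duration(rand.Int63n(int64(delay)))
			fmt.Printf("will retry after %v: %v\n", sleep, err)
			time.Sleep(sleep)
			delay *= 2
		}
	}

	func main() {
		attempts := 0
		_ = retryWithBackoff(func() error {
			attempts++
			if attempts < 4 {
				return errors.New("expected 2 Pod IPs but got 1 (may be temporary)")
			}
			return nil
		}, 30*time.Second)
	}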

TestMultiNode/serial/PingHostFrom2Pods (0.67s)

=== RUN   TestMultiNode/serial/PingHostFrom2Pods
multinode_test.go:564: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-261976 -- get pods -o jsonpath='{.items[*].metadata.name}'
multinode_test.go:572: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-261976 -- exec busybox-7dff88458-5w62z -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
multinode_test.go:583: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-261976 -- exec busybox-7dff88458-5w62z -- sh -c "ping -c 1 192.168.67.1"
multinode_test.go:572: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-261976 -- exec busybox-7dff88458-bw2mb -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
multinode_test.go:583: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-261976 -- exec busybox-7dff88458-bw2mb -- sh -c "ping -c 1 192.168.67.1"
--- PASS: TestMultiNode/serial/PingHostFrom2Pods (0.67s)
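
The shell pipeline above extracts the host gateway address: line 5 of busybox nslookup output holds "Address 1: <ip> <name>", so awk 'NR==5' plus cut -d' ' -f3 yields the IP (192.168.67.1 here) that the pod then pings. The same parse in Go, under the format assumption noted in the comments:

	// The sample output below is an assumption about busybox nslookup's
	// format; the log only confirms the extracted IP (192.168.67.1) that the
	// pod goes on to ping.
	package main

	import (
		"fmt"
		"strings"
	)

	func hostIPFromNslookup(out string) string {
		lines := strings.Split(out, "\n")
		if len(lines) < 5 {
			return ""
		}
		// awk 'NR==5' picks line 5; cut -d' ' -f3 picks the third
		// single-space-separated field.
		fields := strings.Split(lines[4], " ")
		if len(fields) < 3 {
			return ""
		}
		return fields[2]
	}

	func main() {
		sample := "Server:    10.96.0.10\n" +
			"Address 1: 10.96.0.10 kube-dns.kube-system.svc.cluster.local\n" +
			"\n" +
			"Name:      host.minikube.internal\n" +
			"Address 1: 192.168.67.1 host.minikube.internal\n"
		fmt.Println(hostIPFromNslookup(sample)) // 192.168.67.1
	}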

TestMultiNode/serial/AddNode (14.29s)

=== RUN   TestMultiNode/serial/AddNode
multinode_test.go:121: (dbg) Run:  out/minikube-linux-amd64 node add -p multinode-261976 -v 3 --alsologtostderr
multinode_test.go:121: (dbg) Done: out/minikube-linux-amd64 node add -p multinode-261976 -v 3 --alsologtostderr: (13.637572189s)
multinode_test.go:127: (dbg) Run:  out/minikube-linux-amd64 -p multinode-261976 status --alsologtostderr
--- PASS: TestMultiNode/serial/AddNode (14.29s)

TestMultiNode/serial/MultiNodeLabels (0.07s)

=== RUN   TestMultiNode/serial/MultiNodeLabels
multinode_test.go:221: (dbg) Run:  kubectl --context multinode-261976 get nodes -o "jsonpath=[{range .items[*]}{.metadata.labels},{end}]"
--- PASS: TestMultiNode/serial/MultiNodeLabels (0.07s)

TestMultiNode/serial/ProfileList (0.63s)

=== RUN   TestMultiNode/serial/ProfileList
multinode_test.go:143: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiNode/serial/ProfileList (0.63s)

TestMultiNode/serial/CopyFile (8.71s)

=== RUN   TestMultiNode/serial/CopyFile
multinode_test.go:184: (dbg) Run:  out/minikube-linux-amd64 -p multinode-261976 status --output json --alsologtostderr
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-261976 cp testdata/cp-test.txt multinode-261976:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-261976 ssh -n multinode-261976 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-261976 cp multinode-261976:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile141972160/001/cp-test_multinode-261976.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-261976 ssh -n multinode-261976 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-261976 cp multinode-261976:/home/docker/cp-test.txt multinode-261976-m02:/home/docker/cp-test_multinode-261976_multinode-261976-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-261976 ssh -n multinode-261976 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-261976 ssh -n multinode-261976-m02 "sudo cat /home/docker/cp-test_multinode-261976_multinode-261976-m02.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-261976 cp multinode-261976:/home/docker/cp-test.txt multinode-261976-m03:/home/docker/cp-test_multinode-261976_multinode-261976-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-261976 ssh -n multinode-261976 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-261976 ssh -n multinode-261976-m03 "sudo cat /home/docker/cp-test_multinode-261976_multinode-261976-m03.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-261976 cp testdata/cp-test.txt multinode-261976-m02:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-261976 ssh -n multinode-261976-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-261976 cp multinode-261976-m02:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile141972160/001/cp-test_multinode-261976-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-261976 ssh -n multinode-261976-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-261976 cp multinode-261976-m02:/home/docker/cp-test.txt multinode-261976:/home/docker/cp-test_multinode-261976-m02_multinode-261976.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-261976 ssh -n multinode-261976-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-261976 ssh -n multinode-261976 "sudo cat /home/docker/cp-test_multinode-261976-m02_multinode-261976.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-261976 cp multinode-261976-m02:/home/docker/cp-test.txt multinode-261976-m03:/home/docker/cp-test_multinode-261976-m02_multinode-261976-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-261976 ssh -n multinode-261976-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-261976 ssh -n multinode-261976-m03 "sudo cat /home/docker/cp-test_multinode-261976-m02_multinode-261976-m03.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-261976 cp testdata/cp-test.txt multinode-261976-m03:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-261976 ssh -n multinode-261976-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-261976 cp multinode-261976-m03:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile141972160/001/cp-test_multinode-261976-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-261976 ssh -n multinode-261976-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-261976 cp multinode-261976-m03:/home/docker/cp-test.txt multinode-261976:/home/docker/cp-test_multinode-261976-m03_multinode-261976.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-261976 ssh -n multinode-261976-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-261976 ssh -n multinode-261976 "sudo cat /home/docker/cp-test_multinode-261976-m03_multinode-261976.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-261976 cp multinode-261976-m03:/home/docker/cp-test.txt multinode-261976-m02:/home/docker/cp-test_multinode-261976-m03_multinode-261976-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-261976 ssh -n multinode-261976-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-261976 ssh -n multinode-261976-m02 "sudo cat /home/docker/cp-test_multinode-261976-m03_multinode-261976-m02.txt"
--- PASS: TestMultiNode/serial/CopyFile (8.71s)
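
The block above is a full copy matrix: the host seeds every node with testdata/cp-test.txt, each node's copy is exported back, and every ordered node pair exchanges the file, with every hop verified via ssh ... sudo cat. A sketch that regenerates the command matrix (profile and node names follow the log; the loop itself is illustrative):

	// Illustrative loop that regenerates the copy matrix above: host to node,
	// node to host, and node to node for every ordered pair.
	package main

	import "fmt"

	func main() {
		profile := "multinode-261976"
		nodes := []string{"multinode-261976", "multinode-261976-m02", "multinode-261976-m03"}
		for _, src := range nodes {
			fmt.Printf("minikube -p %s cp testdata/cp-test.txt %s:/home/docker/cp-test.txt\n", profile, src)
			fmt.Printf("minikube -p %s cp %s:/home/docker/cp-test.txt /tmp/cp-test_%s.txt\n", profile, src, src)
			for _, dst := range nodes {
				if dst == src {
					continue
				}
				// Each hop is then verified with `ssh -n <dst> sudo cat <file>`.
				fmt.Printf("minikube -p %s cp %s:/home/docker/cp-test.txt %s:/home/docker/cp-test_%s_%s.txt\n",
					profile, src, dst, src, dst)
			}
		}
	}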

TestMultiNode/serial/StopNode (2.08s)

=== RUN   TestMultiNode/serial/StopNode
multinode_test.go:248: (dbg) Run:  out/minikube-linux-amd64 -p multinode-261976 node stop m03
multinode_test.go:248: (dbg) Done: out/minikube-linux-amd64 -p multinode-261976 node stop m03: (1.165646917s)
multinode_test.go:254: (dbg) Run:  out/minikube-linux-amd64 -p multinode-261976 status
multinode_test.go:254: (dbg) Non-zero exit: out/minikube-linux-amd64 -p multinode-261976 status: exit status 7 (454.422348ms)
-- stdout --
	multinode-261976
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	multinode-261976-m02
	type: Worker
	host: Running
	kubelet: Running
	
	multinode-261976-m03
	type: Worker
	host: Stopped
	kubelet: Stopped
	
-- /stdout --
multinode_test.go:261: (dbg) Run:  out/minikube-linux-amd64 -p multinode-261976 status --alsologtostderr
multinode_test.go:261: (dbg) Non-zero exit: out/minikube-linux-amd64 -p multinode-261976 status --alsologtostderr: exit status 7 (455.372738ms)
-- stdout --
	multinode-261976
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	multinode-261976-m02
	type: Worker
	host: Running
	kubelet: Running
	
	multinode-261976-m03
	type: Worker
	host: Stopped
	kubelet: Stopped
	
-- /stdout --
** stderr ** 
	I0923 10:53:57.458189  211528 out.go:345] Setting OutFile to fd 1 ...
	I0923 10:53:57.458285  211528 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0923 10:53:57.458292  211528 out.go:358] Setting ErrFile to fd 2...
	I0923 10:53:57.458306  211528 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0923 10:53:57.458458  211528 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19689-3716/.minikube/bin
	I0923 10:53:57.458612  211528 out.go:352] Setting JSON to false
	I0923 10:53:57.458641  211528 mustload.go:65] Loading cluster: multinode-261976
	I0923 10:53:57.458744  211528 notify.go:220] Checking for updates...
	I0923 10:53:57.459073  211528 config.go:182] Loaded profile config "multinode-261976": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I0923 10:53:57.459093  211528 status.go:174] checking status of multinode-261976 ...
	I0923 10:53:57.459483  211528 cli_runner.go:164] Run: docker container inspect multinode-261976 --format={{.State.Status}}
	I0923 10:53:57.477598  211528 status.go:364] multinode-261976 host status = "Running" (err=<nil>)
	I0923 10:53:57.477627  211528 host.go:66] Checking if "multinode-261976" exists ...
	I0923 10:53:57.477885  211528 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-261976
	I0923 10:53:57.495408  211528 host.go:66] Checking if "multinode-261976" exists ...
	I0923 10:53:57.495726  211528 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0923 10:53:57.495762  211528 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-261976
	I0923 10:53:57.513671  211528 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32908 SSHKeyPath:/home/jenkins/minikube-integration/19689-3716/.minikube/machines/multinode-261976/id_rsa Username:docker}
	I0923 10:53:57.603909  211528 ssh_runner.go:195] Run: systemctl --version
	I0923 10:53:57.608587  211528 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0923 10:53:57.618623  211528 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0923 10:53:57.670650  211528 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:3 ContainersRunning:2 ContainersPaused:0 ContainersStopped:1 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:42 OomKillDisable:true NGoroutines:62 SystemTime:2024-09-23 10:53:57.661409426 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1069-gcp OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33647935488 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-2 Labels:[] ExperimentalBuild:false ServerVersion:27.3.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:7f7fdf5fed64eb6a7caf99b3e12efcf9d60e311c Expected:7f7fdf5fed64eb6a7caf99b3e12efcf9d60e311c} RuncCommit:{ID:v1.1.14-0-g2c9f560 Expected:v1.1.14-0-g2c9f560} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:[WARNING: bridge-nf-call-iptables is disabled WARNING: bridge-nf-call-ip6tables is disabled] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.17.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.29.7] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I0923 10:53:57.671255  211528 kubeconfig.go:125] found "multinode-261976" server: "https://192.168.67.2:8443"
	I0923 10:53:57.671282  211528 api_server.go:166] Checking apiserver status ...
	I0923 10:53:57.671324  211528 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0923 10:53:57.681660  211528 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/2342/cgroup
	I0923 10:53:57.689810  211528 api_server.go:182] apiserver freezer: "4:freezer:/docker/df9af2170a230203b99d42c73a2645cecda419714b8e7e464395562e751dffea/kubepods/burstable/pod1e08a4d81325d9ba09bac691077386c9/7017349b198a8a853e1f71b948e5936bc2c240801bbfeeaf0a26221ee41537cb"
	I0923 10:53:57.689872  211528 ssh_runner.go:195] Run: sudo cat /sys/fs/cgroup/freezer/docker/df9af2170a230203b99d42c73a2645cecda419714b8e7e464395562e751dffea/kubepods/burstable/pod1e08a4d81325d9ba09bac691077386c9/7017349b198a8a853e1f71b948e5936bc2c240801bbfeeaf0a26221ee41537cb/freezer.state
	I0923 10:53:57.697142  211528 api_server.go:204] freezer state: "THAWED"
	I0923 10:53:57.697163  211528 api_server.go:253] Checking apiserver healthz at https://192.168.67.2:8443/healthz ...
	I0923 10:53:57.702508  211528 api_server.go:279] https://192.168.67.2:8443/healthz returned 200:
	ok
	I0923 10:53:57.702530  211528 status.go:456] multinode-261976 apiserver status = Running (err=<nil>)
	I0923 10:53:57.702539  211528 status.go:176] multinode-261976 status: &{Name:multinode-261976 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0923 10:53:57.702555  211528 status.go:174] checking status of multinode-261976-m02 ...
	I0923 10:53:57.702826  211528 cli_runner.go:164] Run: docker container inspect multinode-261976-m02 --format={{.State.Status}}
	I0923 10:53:57.721904  211528 status.go:364] multinode-261976-m02 host status = "Running" (err=<nil>)
	I0923 10:53:57.721927  211528 host.go:66] Checking if "multinode-261976-m02" exists ...
	I0923 10:53:57.722159  211528 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-261976-m02
	I0923 10:53:57.738308  211528 host.go:66] Checking if "multinode-261976-m02" exists ...
	I0923 10:53:57.738576  211528 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0923 10:53:57.738608  211528 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-261976-m02
	I0923 10:53:57.754558  211528 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32913 SSHKeyPath:/home/jenkins/minikube-integration/19689-3716/.minikube/machines/multinode-261976-m02/id_rsa Username:docker}
	I0923 10:53:57.843355  211528 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0923 10:53:57.853495  211528 status.go:176] multinode-261976-m02 status: &{Name:multinode-261976-m02 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}
	I0923 10:53:57.853536  211528 status.go:174] checking status of multinode-261976-m03 ...
	I0923 10:53:57.853869  211528 cli_runner.go:164] Run: docker container inspect multinode-261976-m03 --format={{.State.Status}}
	I0923 10:53:57.870262  211528 status.go:364] multinode-261976-m03 host status = "Stopped" (err=<nil>)
	I0923 10:53:57.870288  211528 status.go:377] host is not running, skipping remaining checks
	I0923 10:53:57.870296  211528 status.go:176] multinode-261976-m03 status: &{Name:multinode-261976-m03 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
--- PASS: TestMultiNode/serial/StopNode (2.08s)
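
Note: the stderr trace above records the full probe sequence `minikube status` runs per node: inspect the backing Docker container, SSH in to check the kubelet unit, then hit the apiserver's /healthz endpoint. A minimal shell sketch of the same checks, reusing the profile name and apiserver address from this run (a reproduction aid, not the test's own code):

	# Container state backing the node (expect "running")
	docker container inspect multinode-261976 --format '{{.State.Status}}'
	# Kubelet check inside the node, exactly as the log runs it (exit 0 = active)
	minikube ssh -p multinode-261976 "sudo systemctl is-active --quiet service kubelet"
	# Apiserver health probe; address taken from the log above
	curl -fsSk https://192.168.67.2:8443/healthz    # prints "ok" when healthy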

                                                
                                    
TestMultiNode/serial/StartAfterStop (9.53s)

                                                
                                                
=== RUN   TestMultiNode/serial/StartAfterStop
multinode_test.go:282: (dbg) Run:  out/minikube-linux-amd64 -p multinode-261976 node start m03 -v=7 --alsologtostderr
multinode_test.go:282: (dbg) Done: out/minikube-linux-amd64 -p multinode-261976 node start m03 -v=7 --alsologtostderr: (8.898288754s)
multinode_test.go:290: (dbg) Run:  out/minikube-linux-amd64 -p multinode-261976 status -v=7 --alsologtostderr
multinode_test.go:306: (dbg) Run:  kubectl get nodes
--- PASS: TestMultiNode/serial/StartAfterStop (9.53s)

                                                
                                    
TestMultiNode/serial/RestartKeepsNodes (102.89s)

                                                
                                                
=== RUN   TestMultiNode/serial/RestartKeepsNodes
multinode_test.go:314: (dbg) Run:  out/minikube-linux-amd64 node list -p multinode-261976
multinode_test.go:321: (dbg) Run:  out/minikube-linux-amd64 stop -p multinode-261976
E0923 10:54:12.815409   10524 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19689-3716/.minikube/profiles/addons-071702/client.crt: no such file or directory" logger="UnhandledError"
multinode_test.go:321: (dbg) Done: out/minikube-linux-amd64 stop -p multinode-261976: (22.334419179s)
multinode_test.go:326: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-261976 --wait=true -v=8 --alsologtostderr
E0923 10:55:35.876796   10524 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19689-3716/.minikube/profiles/addons-071702/client.crt: no such file or directory" logger="UnhandledError"
multinode_test.go:326: (dbg) Done: out/minikube-linux-amd64 start -p multinode-261976 --wait=true -v=8 --alsologtostderr: (1m20.461280497s)
multinode_test.go:331: (dbg) Run:  out/minikube-linux-amd64 node list -p multinode-261976
--- PASS: TestMultiNode/serial/RestartKeepsNodes (102.89s)
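
Note: this test asserts that a full stop/start cycle preserves the node set. The same flow by hand, with the profile name from this run (sketch only):

	minikube node list -p multinode-261976           # record the current nodes
	minikube stop -p multinode-261976                # stop every node
	minikube start -p multinode-261976 --wait=true   # restart and wait for all components
	minikube node list -p multinode-261976           # expect an identical node list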

                                                
                                    
TestMultiNode/serial/DeleteNode (5.14s)

                                                
                                                
=== RUN   TestMultiNode/serial/DeleteNode
multinode_test.go:416: (dbg) Run:  out/minikube-linux-amd64 -p multinode-261976 node delete m03
multinode_test.go:416: (dbg) Done: out/minikube-linux-amd64 -p multinode-261976 node delete m03: (4.581770909s)
multinode_test.go:422: (dbg) Run:  out/minikube-linux-amd64 -p multinode-261976 status --alsologtostderr
multinode_test.go:436: (dbg) Run:  kubectl get nodes
multinode_test.go:444: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiNode/serial/DeleteNode (5.14s)
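
Note: the go-template above prints each node's Ready condition. A JSONPath form of the same query that some may find easier to read (a hedged alternative, not what the test executes):

	kubectl get nodes -o jsonpath='{range .items[*]}{.metadata.name}{"\t"}{.status.conditions[?(@.type=="Ready")].status}{"\n"}{end}'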

                                                
                                    
TestMultiNode/serial/StopMultiNode (21.23s)

                                                
                                                
=== RUN   TestMultiNode/serial/StopMultiNode
multinode_test.go:345: (dbg) Run:  out/minikube-linux-amd64 -p multinode-261976 stop
multinode_test.go:345: (dbg) Done: out/minikube-linux-amd64 -p multinode-261976 stop: (21.063260155s)
multinode_test.go:351: (dbg) Run:  out/minikube-linux-amd64 -p multinode-261976 status
multinode_test.go:351: (dbg) Non-zero exit: out/minikube-linux-amd64 -p multinode-261976 status: exit status 7 (85.323832ms)

                                                
                                                
-- stdout --
	multinode-261976
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	multinode-261976-m02
	type: Worker
	host: Stopped
	kubelet: Stopped
	

                                                
                                                
-- /stdout --
multinode_test.go:358: (dbg) Run:  out/minikube-linux-amd64 -p multinode-261976 status --alsologtostderr
multinode_test.go:358: (dbg) Non-zero exit: out/minikube-linux-amd64 -p multinode-261976 status --alsologtostderr: exit status 7 (79.770893ms)

                                                
                                                
-- stdout --
	multinode-261976
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	multinode-261976-m02
	type: Worker
	host: Stopped
	kubelet: Stopped
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0923 10:56:16.624300  226943 out.go:345] Setting OutFile to fd 1 ...
	I0923 10:56:16.624559  226943 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0923 10:56:16.624569  226943 out.go:358] Setting ErrFile to fd 2...
	I0923 10:56:16.624573  226943 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0923 10:56:16.624787  226943 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19689-3716/.minikube/bin
	I0923 10:56:16.624994  226943 out.go:352] Setting JSON to false
	I0923 10:56:16.625026  226943 mustload.go:65] Loading cluster: multinode-261976
	I0923 10:56:16.625150  226943 notify.go:220] Checking for updates...
	I0923 10:56:16.625500  226943 config.go:182] Loaded profile config "multinode-261976": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I0923 10:56:16.625519  226943 status.go:174] checking status of multinode-261976 ...
	I0923 10:56:16.625984  226943 cli_runner.go:164] Run: docker container inspect multinode-261976 --format={{.State.Status}}
	I0923 10:56:16.645752  226943 status.go:364] multinode-261976 host status = "Stopped" (err=<nil>)
	I0923 10:56:16.645799  226943 status.go:377] host is not running, skipping remaining checks
	I0923 10:56:16.645808  226943 status.go:176] multinode-261976 status: &{Name:multinode-261976 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0923 10:56:16.645846  226943 status.go:174] checking status of multinode-261976-m02 ...
	I0923 10:56:16.646223  226943 cli_runner.go:164] Run: docker container inspect multinode-261976-m02 --format={{.State.Status}}
	I0923 10:56:16.662247  226943 status.go:364] multinode-261976-m02 host status = "Stopped" (err=<nil>)
	I0923 10:56:16.662272  226943 status.go:377] host is not running, skipping remaining checks
	I0923 10:56:16.662280  226943 status.go:176] multinode-261976-m02 status: &{Name:multinode-261976-m02 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
--- PASS: TestMultiNode/serial/StopMultiNode (21.23s)
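
Note: with all nodes stopped, `minikube status` exits 7 instead of 0, so scripts can branch on the exit code without parsing output. A minimal sketch (the 7-means-stopped reading is taken from this run's output; consult minikube's exit-code documentation for the full table):

	if out/minikube-linux-amd64 -p multinode-261976 status >/dev/null; then
	    echo "cluster fully running"
	else
	    rc=$?                                    # $? still holds the status command's exit code here
	    echo "cluster not running (exit $rc)"    # 7 in this run: host stopped
	fi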

                                                
                                    
TestMultiNode/serial/RestartMultiNode (51.24s)

                                                
                                                
=== RUN   TestMultiNode/serial/RestartMultiNode
multinode_test.go:376: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-261976 --wait=true -v=8 --alsologtostderr --driver=docker  --container-runtime=docker
E0923 10:56:54.599996   10524 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19689-3716/.minikube/profiles/functional-001676/client.crt: no such file or directory" logger="UnhandledError"
multinode_test.go:376: (dbg) Done: out/minikube-linux-amd64 start -p multinode-261976 --wait=true -v=8 --alsologtostderr --driver=docker  --container-runtime=docker: (50.695325823s)
multinode_test.go:382: (dbg) Run:  out/minikube-linux-amd64 -p multinode-261976 status --alsologtostderr
multinode_test.go:396: (dbg) Run:  kubectl get nodes
multinode_test.go:404: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiNode/serial/RestartMultiNode (51.24s)

                                                
                                    
TestMultiNode/serial/ValidateNameConflict (26.31s)

                                                
                                                
=== RUN   TestMultiNode/serial/ValidateNameConflict
multinode_test.go:455: (dbg) Run:  out/minikube-linux-amd64 node list -p multinode-261976
multinode_test.go:464: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-261976-m02 --driver=docker  --container-runtime=docker
multinode_test.go:464: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p multinode-261976-m02 --driver=docker  --container-runtime=docker: exit status 14 (61.123553ms)

                                                
                                                
-- stdout --
	* [multinode-261976-m02] minikube v1.34.0 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=19689
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/19689-3716/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/19689-3716/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	! Profile name 'multinode-261976-m02' is duplicated with machine name 'multinode-261976-m02' in profile 'multinode-261976'
	X Exiting due to MK_USAGE: Profile name should be unique

                                                
                                                
** /stderr **
multinode_test.go:472: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-261976-m03 --driver=docker  --container-runtime=docker
multinode_test.go:472: (dbg) Done: out/minikube-linux-amd64 start -p multinode-261976-m03 --driver=docker  --container-runtime=docker: (23.939516983s)
multinode_test.go:479: (dbg) Run:  out/minikube-linux-amd64 node add -p multinode-261976
multinode_test.go:479: (dbg) Non-zero exit: out/minikube-linux-amd64 node add -p multinode-261976: exit status 80 (256.447384ms)

                                                
                                                
-- stdout --
	* Adding node m03 to cluster multinode-261976 as [worker]
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to GUEST_NODE_ADD: failed to add node: Node multinode-261976-m03 already exists in multinode-261976-m03 profile
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_node_040ea7097fd6ed71e65be9a474587f81f0ccd21d_0.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
multinode_test.go:484: (dbg) Run:  out/minikube-linux-amd64 delete -p multinode-261976-m03
multinode_test.go:484: (dbg) Done: out/minikube-linux-amd64 delete -p multinode-261976-m03: (2.011498862s)
--- PASS: TestMultiNode/serial/ValidateNameConflict (26.31s)
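
Note: profile names must be unique across both profiles and the machine names inside them, which is why `multinode-261976-m02` is rejected: it already names the second machine of profile `multinode-261976`. To see which names are taken before picking one (sketch):

	minikube profile list                  # table of existing profiles and their nodes
	minikube profile list --output=json   # machine names are visible in the JSON form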

                                                
                                    
TestPreload (80.91s)

                                                
                                                
=== RUN   TestPreload
preload_test.go:44: (dbg) Run:  out/minikube-linux-amd64 start -p test-preload-236927 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=docker --kubernetes-version=v1.24.4
E0923 10:58:17.663264   10524 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19689-3716/.minikube/profiles/functional-001676/client.crt: no such file or directory" logger="UnhandledError"
preload_test.go:44: (dbg) Done: out/minikube-linux-amd64 start -p test-preload-236927 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=docker --kubernetes-version=v1.24.4: (50.128539645s)
preload_test.go:52: (dbg) Run:  out/minikube-linux-amd64 -p test-preload-236927 image pull gcr.io/k8s-minikube/busybox
preload_test.go:52: (dbg) Done: out/minikube-linux-amd64 -p test-preload-236927 image pull gcr.io/k8s-minikube/busybox: (1.395718129s)
preload_test.go:58: (dbg) Run:  out/minikube-linux-amd64 stop -p test-preload-236927
preload_test.go:58: (dbg) Done: out/minikube-linux-amd64 stop -p test-preload-236927: (10.566786128s)
preload_test.go:66: (dbg) Run:  out/minikube-linux-amd64 start -p test-preload-236927 --memory=2200 --alsologtostderr -v=1 --wait=true --driver=docker  --container-runtime=docker
preload_test.go:66: (dbg) Done: out/minikube-linux-amd64 start -p test-preload-236927 --memory=2200 --alsologtostderr -v=1 --wait=true --driver=docker  --container-runtime=docker: (16.46638949s)
preload_test.go:71: (dbg) Run:  out/minikube-linux-amd64 -p test-preload-236927 image list
helpers_test.go:175: Cleaning up "test-preload-236927" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p test-preload-236927
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p test-preload-236927: (2.13427897s)
--- PASS: TestPreload (80.91s)
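
Note: the preload test provisions an older Kubernetes with `--preload=false`, pulls an extra image, stops, restarts on the current default (preloaded) version, and checks the manually pulled image survived the restart. The same scenario by hand, with a hypothetical profile name `preload-demo` standing in for the generated one:

	minikube start -p preload-demo --preload=false --kubernetes-version=v1.24.4 --driver=docker --container-runtime=docker
	minikube -p preload-demo image pull gcr.io/k8s-minikube/busybox
	minikube stop -p preload-demo
	minikube start -p preload-demo                           # restart; preload applies now
	minikube -p preload-demo image list | grep busybox       # the pulled image should persist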

                                                
                                    
TestScheduledStopUnix (96.61s)

                                                
                                                
=== RUN   TestScheduledStopUnix
scheduled_stop_test.go:128: (dbg) Run:  out/minikube-linux-amd64 start -p scheduled-stop-819056 --memory=2048 --driver=docker  --container-runtime=docker
E0923 10:59:12.814810   10524 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19689-3716/.minikube/profiles/addons-071702/client.crt: no such file or directory" logger="UnhandledError"
scheduled_stop_test.go:128: (dbg) Done: out/minikube-linux-amd64 start -p scheduled-stop-819056 --memory=2048 --driver=docker  --container-runtime=docker: (23.801616036s)
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-amd64 stop -p scheduled-stop-819056 --schedule 5m
scheduled_stop_test.go:191: (dbg) Run:  out/minikube-linux-amd64 status --format={{.TimeToStop}} -p scheduled-stop-819056 -n scheduled-stop-819056
scheduled_stop_test.go:169: signal error was:  <nil>
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-amd64 stop -p scheduled-stop-819056 --schedule 15s
scheduled_stop_test.go:169: signal error was:  os: process already finished
I0923 10:59:22.988295   10524 retry.go:31] will retry after 81.943µs: open /home/jenkins/minikube-integration/19689-3716/.minikube/profiles/scheduled-stop-819056/pid: no such file or directory
I0923 10:59:22.989460   10524 retry.go:31] will retry after 177.448µs: open /home/jenkins/minikube-integration/19689-3716/.minikube/profiles/scheduled-stop-819056/pid: no such file or directory
I0923 10:59:22.990601   10524 retry.go:31] will retry after 191.557µs: open /home/jenkins/minikube-integration/19689-3716/.minikube/profiles/scheduled-stop-819056/pid: no such file or directory
I0923 10:59:22.991753   10524 retry.go:31] will retry after 272.283µs: open /home/jenkins/minikube-integration/19689-3716/.minikube/profiles/scheduled-stop-819056/pid: no such file or directory
I0923 10:59:22.992885   10524 retry.go:31] will retry after 551.321µs: open /home/jenkins/minikube-integration/19689-3716/.minikube/profiles/scheduled-stop-819056/pid: no such file or directory
I0923 10:59:22.994021   10524 retry.go:31] will retry after 1.060596ms: open /home/jenkins/minikube-integration/19689-3716/.minikube/profiles/scheduled-stop-819056/pid: no such file or directory
I0923 10:59:22.995160   10524 retry.go:31] will retry after 597.708µs: open /home/jenkins/minikube-integration/19689-3716/.minikube/profiles/scheduled-stop-819056/pid: no such file or directory
I0923 10:59:22.996304   10524 retry.go:31] will retry after 2.190579ms: open /home/jenkins/minikube-integration/19689-3716/.minikube/profiles/scheduled-stop-819056/pid: no such file or directory
I0923 10:59:22.999523   10524 retry.go:31] will retry after 2.282921ms: open /home/jenkins/minikube-integration/19689-3716/.minikube/profiles/scheduled-stop-819056/pid: no such file or directory
I0923 10:59:23.002736   10524 retry.go:31] will retry after 4.87263ms: open /home/jenkins/minikube-integration/19689-3716/.minikube/profiles/scheduled-stop-819056/pid: no such file or directory
I0923 10:59:23.007959   10524 retry.go:31] will retry after 6.735309ms: open /home/jenkins/minikube-integration/19689-3716/.minikube/profiles/scheduled-stop-819056/pid: no such file or directory
I0923 10:59:23.015124   10524 retry.go:31] will retry after 10.047699ms: open /home/jenkins/minikube-integration/19689-3716/.minikube/profiles/scheduled-stop-819056/pid: no such file or directory
I0923 10:59:23.025253   10524 retry.go:31] will retry after 8.0318ms: open /home/jenkins/minikube-integration/19689-3716/.minikube/profiles/scheduled-stop-819056/pid: no such file or directory
I0923 10:59:23.033408   10524 retry.go:31] will retry after 18.070023ms: open /home/jenkins/minikube-integration/19689-3716/.minikube/profiles/scheduled-stop-819056/pid: no such file or directory
I0923 10:59:23.051557   10524 retry.go:31] will retry after 19.285555ms: open /home/jenkins/minikube-integration/19689-3716/.minikube/profiles/scheduled-stop-819056/pid: no such file or directory
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-amd64 stop -p scheduled-stop-819056 --cancel-scheduled
scheduled_stop_test.go:176: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p scheduled-stop-819056 -n scheduled-stop-819056
scheduled_stop_test.go:205: (dbg) Run:  out/minikube-linux-amd64 status -p scheduled-stop-819056
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-amd64 stop -p scheduled-stop-819056 --schedule 15s
scheduled_stop_test.go:169: signal error was:  os: process already finished
scheduled_stop_test.go:205: (dbg) Run:  out/minikube-linux-amd64 status -p scheduled-stop-819056
scheduled_stop_test.go:205: (dbg) Non-zero exit: out/minikube-linux-amd64 status -p scheduled-stop-819056: exit status 7 (60.402396ms)

                                                
                                                
-- stdout --
	scheduled-stop-819056
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

                                                
                                                
-- /stdout --
scheduled_stop_test.go:176: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p scheduled-stop-819056 -n scheduled-stop-819056
scheduled_stop_test.go:176: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p scheduled-stop-819056 -n scheduled-stop-819056: exit status 7 (59.840089ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
scheduled_stop_test.go:176: status error: exit status 7 (may be ok)
helpers_test.go:175: Cleaning up "scheduled-stop-819056" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p scheduled-stop-819056
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p scheduled-stop-819056: (1.587807718s)
--- PASS: TestScheduledStopUnix (96.61s)
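
Note: scheduled stop runs as a detached background process whose pid lands under the profile directory; the retry lines above are the test polling for that pid file. The flags exercised here, in their basic usage (sketch; profile name from this run):

	minikube stop -p scheduled-stop-819056 --schedule 5m                   # arm a stop 5 minutes out
	minikube status -p scheduled-stop-819056 --format '{{.TimeToStop}}'    # time remaining
	minikube stop -p scheduled-stop-819056 --cancel-scheduled              # disarm the pending stop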

                                                
                                    
TestSkaffold (103.38s)

                                                
                                                
=== RUN   TestSkaffold
skaffold_test.go:59: (dbg) Run:  /tmp/skaffold.exe1343853085 version
skaffold_test.go:63: skaffold version: v2.13.2
skaffold_test.go:66: (dbg) Run:  out/minikube-linux-amd64 start -p skaffold-087030 --memory=2600 --driver=docker  --container-runtime=docker
skaffold_test.go:66: (dbg) Done: out/minikube-linux-amd64 start -p skaffold-087030 --memory=2600 --driver=docker  --container-runtime=docker: (25.330543421s)
skaffold_test.go:86: copying out/minikube-linux-amd64 to /home/jenkins/workspace/Docker_Linux_integration/out/minikube
skaffold_test.go:105: (dbg) Run:  /tmp/skaffold.exe1343853085 run --minikube-profile skaffold-087030 --kube-context skaffold-087030 --status-check=true --port-forward=false --interactive=false
E0923 11:01:54.600544   10524 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19689-3716/.minikube/profiles/functional-001676/client.crt: no such file or directory" logger="UnhandledError"
skaffold_test.go:105: (dbg) Done: /tmp/skaffold.exe1343853085 run --minikube-profile skaffold-087030 --kube-context skaffold-087030 --status-check=true --port-forward=false --interactive=false: (1m3.619477352s)
skaffold_test.go:111: (dbg) TestSkaffold: waiting 1m0s for pods matching "app=leeroy-app" in namespace "default" ...
helpers_test.go:344: "leeroy-app-7f5f6b57d9-lnhv4" [269d432e-940b-4b58-810a-7547749f49de] Running
skaffold_test.go:111: (dbg) TestSkaffold: app=leeroy-app healthy within 6.003891088s
skaffold_test.go:114: (dbg) TestSkaffold: waiting 1m0s for pods matching "app=leeroy-web" in namespace "default" ...
helpers_test.go:344: "leeroy-web-646c6f5fdd-z2qcr" [87e34b2a-932b-414b-8bae-ea46ebc42363] Running
skaffold_test.go:114: (dbg) TestSkaffold: app=leeroy-web healthy within 5.003827717s
helpers_test.go:175: Cleaning up "skaffold-087030" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p skaffold-087030
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p skaffold-087030: (2.728898776s)
--- PASS: TestSkaffold (103.38s)
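
Note: skaffold is pointed at the minikube cluster entirely through flags, so no kubeconfig editing is required. The invocation pattern from this run, with the pinned temp binary replaced by a plain `skaffold` on PATH (sketch):

	skaffold run \
	    --minikube-profile skaffold-087030 \
	    --kube-context skaffold-087030 \
	    --status-check=true --port-forward=false --interactive=false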

                                                
                                    
TestInsufficientStorage (12.39s)

                                                
                                                
=== RUN   TestInsufficientStorage
status_test.go:50: (dbg) Run:  out/minikube-linux-amd64 start -p insufficient-storage-972368 --memory=2048 --output=json --wait=true --driver=docker  --container-runtime=docker
status_test.go:50: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p insufficient-storage-972368 --memory=2048 --output=json --wait=true --driver=docker  --container-runtime=docker: exit status 26 (10.286131834s)

                                                
                                                
-- stdout --
	{"specversion":"1.0","id":"b614045d-b775-4bad-86fb-6b5f898ca53c","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"0","message":"[insufficient-storage-972368] minikube v1.34.0 on Ubuntu 20.04 (kvm/amd64)","name":"Initial Minikube Setup","totalsteps":"19"}}
	{"specversion":"1.0","id":"c5c2d91d-93b3-4da0-b635-e873412b3076","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_LOCATION=19689"}}
	{"specversion":"1.0","id":"9c4fc424-5a77-40bf-be1b-cb42f0a1ac3e","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true"}}
	{"specversion":"1.0","id":"4bc6f3df-82cd-43b1-a4c3-3a3746963265","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"KUBECONFIG=/home/jenkins/minikube-integration/19689-3716/kubeconfig"}}
	{"specversion":"1.0","id":"4c4b084c-317e-4ded-b8b5-98b64bfde4de","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_HOME=/home/jenkins/minikube-integration/19689-3716/.minikube"}}
	{"specversion":"1.0","id":"892c73ea-f310-46dc-b564-d7f72601c1a0","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_BIN=out/minikube-linux-amd64"}}
	{"specversion":"1.0","id":"f93d413e-783d-42b8-9da9-38c3316e91b7","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_FORCE_SYSTEMD="}}
	{"specversion":"1.0","id":"49886e46-6b40-4c0b-a7c3-359b619b471d","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_TEST_STORAGE_CAPACITY=100"}}
	{"specversion":"1.0","id":"0a60d8d0-b482-4afe-b135-6adf9cc15c70","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_TEST_AVAILABLE_STORAGE=19"}}
	{"specversion":"1.0","id":"e90535bc-5dcd-4e5a-a0cd-61d2519078f8","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"1","message":"Using the docker driver based on user configuration","name":"Selecting Driver","totalsteps":"19"}}
	{"specversion":"1.0","id":"ced43355-691c-4336-a664-053c91f39404","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"Using Docker driver with root privileges"}}
	{"specversion":"1.0","id":"bc2e1457-b92d-462b-a8f8-0618e958b323","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"3","message":"Starting \"insufficient-storage-972368\" primary control-plane node in \"insufficient-storage-972368\" cluster","name":"Starting Node","totalsteps":"19"}}
	{"specversion":"1.0","id":"06b71ebd-19df-4a44-9c21-66a248519e4e","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"5","message":"Pulling base image v0.0.45-1726784731-19672 ...","name":"Pulling Base Image","totalsteps":"19"}}
	{"specversion":"1.0","id":"fee8c537-54f6-43c5-a69b-32392445a466","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"8","message":"Creating docker container (CPUs=2, Memory=2048MB) ...","name":"Creating Container","totalsteps":"19"}}
	{"specversion":"1.0","id":"8fba1f2d-423a-49b7-9423-dbb306efa7ef","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"advice":"Try one or more of the following to free up space on the device:\n\t\n\t\t\t1. Run \"docker system prune\" to remove unused Docker data (optionally with \"-a\")\n\t\t\t2. Increase the storage allocated to Docker for Desktop by clicking on:\n\t\t\t\tDocker icon \u003e Preferences \u003e Resources \u003e Disk Image Size\n\t\t\t3. Run \"minikube ssh -- docker system prune\" if using the Docker container runtime","exitcode":"26","issues":"https://github.com/kubernetes/minikube/issues/9024","message":"Docker is out of disk space! (/var is at 100% of capacity). You can pass '--force' to skip this check.","name":"RSRC_DOCKER_STORAGE","url":""}}

                                                
                                                
-- /stdout --
status_test.go:76: (dbg) Run:  out/minikube-linux-amd64 status -p insufficient-storage-972368 --output=json --layout=cluster
status_test.go:76: (dbg) Non-zero exit: out/minikube-linux-amd64 status -p insufficient-storage-972368 --output=json --layout=cluster: exit status 7 (248.16863ms)

                                                
                                                
-- stdout --
	{"Name":"insufficient-storage-972368","StatusCode":507,"StatusName":"InsufficientStorage","StatusDetail":"/var is almost out of disk space","Step":"Creating Container","StepDetail":"Creating docker container (CPUs=2, Memory=2048MB) ...","BinaryVersion":"v1.34.0","Components":{"kubeconfig":{"Name":"kubeconfig","StatusCode":500,"StatusName":"Error"}},"Nodes":[{"Name":"insufficient-storage-972368","StatusCode":507,"StatusName":"InsufficientStorage","Components":{"apiserver":{"Name":"apiserver","StatusCode":405,"StatusName":"Stopped"},"kubelet":{"Name":"kubelet","StatusCode":405,"StatusName":"Stopped"}}}]}

                                                
                                                
-- /stdout --
** stderr ** 
	E0923 11:02:29.328550  266705 status.go:451] kubeconfig endpoint: get endpoint: "insufficient-storage-972368" does not appear in /home/jenkins/minikube-integration/19689-3716/kubeconfig

                                                
                                                
** /stderr **
status_test.go:76: (dbg) Run:  out/minikube-linux-amd64 status -p insufficient-storage-972368 --output=json --layout=cluster
status_test.go:76: (dbg) Non-zero exit: out/minikube-linux-amd64 status -p insufficient-storage-972368 --output=json --layout=cluster: exit status 7 (254.465129ms)

                                                
                                                
-- stdout --
	{"Name":"insufficient-storage-972368","StatusCode":507,"StatusName":"InsufficientStorage","StatusDetail":"/var is almost out of disk space","BinaryVersion":"v1.34.0","Components":{"kubeconfig":{"Name":"kubeconfig","StatusCode":500,"StatusName":"Error"}},"Nodes":[{"Name":"insufficient-storage-972368","StatusCode":507,"StatusName":"InsufficientStorage","Components":{"apiserver":{"Name":"apiserver","StatusCode":405,"StatusName":"Stopped"},"kubelet":{"Name":"kubelet","StatusCode":405,"StatusName":"Stopped"}}}]}

                                                
                                                
-- /stdout --
** stderr ** 
	E0923 11:02:29.583336  266804 status.go:451] kubeconfig endpoint: get endpoint: "insufficient-storage-972368" does not appear in /home/jenkins/minikube-integration/19689-3716/kubeconfig
	E0923 11:02:29.592501  266804 status.go:258] unable to read event log: stat: stat /home/jenkins/minikube-integration/19689-3716/.minikube/profiles/insufficient-storage-972368/events.json: no such file or directory

                                                
                                                
** /stderr **
helpers_test.go:175: Cleaning up "insufficient-storage-972368" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p insufficient-storage-972368
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p insufficient-storage-972368: (1.59592233s)
--- PASS: TestInsufficientStorage (12.39s)
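
Note: `--output=json --layout=cluster` makes the degraded state machine-readable; the StatusCode 507 / StatusName "InsufficientStorage" fields above come from that payload. Extracting them with jq (assumes jq is installed; field names taken from this run's output):

	minikube status -p insufficient-storage-972368 --output=json --layout=cluster \
	    | jq -r '.StatusName, .StatusDetail'
	# InsufficientStorage
	# /var is almost out of disk space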

                                                
                                    
TestRunningBinaryUpgrade (72.02s)

                                                
                                                
=== RUN   TestRunningBinaryUpgrade
=== PAUSE TestRunningBinaryUpgrade

                                                
                                                

                                                
                                                
=== CONT  TestRunningBinaryUpgrade
version_upgrade_test.go:120: (dbg) Run:  /tmp/minikube-v1.26.0.610713523 start -p running-upgrade-746783 --memory=2200 --vm-driver=docker  --container-runtime=docker
version_upgrade_test.go:120: (dbg) Done: /tmp/minikube-v1.26.0.610713523 start -p running-upgrade-746783 --memory=2200 --vm-driver=docker  --container-runtime=docker: (27.507021396s)
version_upgrade_test.go:130: (dbg) Run:  out/minikube-linux-amd64 start -p running-upgrade-746783 --memory=2200 --alsologtostderr -v=1 --driver=docker  --container-runtime=docker
version_upgrade_test.go:130: (dbg) Done: out/minikube-linux-amd64 start -p running-upgrade-746783 --memory=2200 --alsologtostderr -v=1 --driver=docker  --container-runtime=docker: (42.045317684s)
helpers_test.go:175: Cleaning up "running-upgrade-746783" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p running-upgrade-746783
E0923 11:07:05.058121   10524 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19689-3716/.minikube/profiles/skaffold-087030/client.crt: no such file or directory" logger="UnhandledError"
E0923 11:07:05.064486   10524 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19689-3716/.minikube/profiles/skaffold-087030/client.crt: no such file or directory" logger="UnhandledError"
E0923 11:07:05.075847   10524 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19689-3716/.minikube/profiles/skaffold-087030/client.crt: no such file or directory" logger="UnhandledError"
E0923 11:07:05.097215   10524 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19689-3716/.minikube/profiles/skaffold-087030/client.crt: no such file or directory" logger="UnhandledError"
E0923 11:07:05.138717   10524 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19689-3716/.minikube/profiles/skaffold-087030/client.crt: no such file or directory" logger="UnhandledError"
E0923 11:07:05.220121   10524 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19689-3716/.minikube/profiles/skaffold-087030/client.crt: no such file or directory" logger="UnhandledError"
E0923 11:07:05.381611   10524 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19689-3716/.minikube/profiles/skaffold-087030/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p running-upgrade-746783: (2.069575987s)
--- PASS: TestRunningBinaryUpgrade (72.02s)

                                                
                                    
TestKubernetesUpgrade (337.08s)

                                                
                                                
=== RUN   TestKubernetesUpgrade
=== PAUSE TestKubernetesUpgrade

                                                
                                                

                                                
                                                
=== CONT  TestKubernetesUpgrade
version_upgrade_test.go:222: (dbg) Run:  out/minikube-linux-amd64 start -p kubernetes-upgrade-222500 --memory=2200 --kubernetes-version=v1.20.0 --alsologtostderr -v=1 --driver=docker  --container-runtime=docker
E0923 11:04:12.814856   10524 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19689-3716/.minikube/profiles/addons-071702/client.crt: no such file or directory" logger="UnhandledError"
version_upgrade_test.go:222: (dbg) Done: out/minikube-linux-amd64 start -p kubernetes-upgrade-222500 --memory=2200 --kubernetes-version=v1.20.0 --alsologtostderr -v=1 --driver=docker  --container-runtime=docker: (43.89221304s)
version_upgrade_test.go:227: (dbg) Run:  out/minikube-linux-amd64 stop -p kubernetes-upgrade-222500
version_upgrade_test.go:227: (dbg) Done: out/minikube-linux-amd64 stop -p kubernetes-upgrade-222500: (1.206631147s)
version_upgrade_test.go:232: (dbg) Run:  out/minikube-linux-amd64 -p kubernetes-upgrade-222500 status --format={{.Host}}
version_upgrade_test.go:232: (dbg) Non-zero exit: out/minikube-linux-amd64 -p kubernetes-upgrade-222500 status --format={{.Host}}: exit status 7 (87.526311ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
version_upgrade_test.go:234: status error: exit status 7 (may be ok)
version_upgrade_test.go:243: (dbg) Run:  out/minikube-linux-amd64 start -p kubernetes-upgrade-222500 --memory=2200 --kubernetes-version=v1.31.1 --alsologtostderr -v=1 --driver=docker  --container-runtime=docker
version_upgrade_test.go:243: (dbg) Done: out/minikube-linux-amd64 start -p kubernetes-upgrade-222500 --memory=2200 --kubernetes-version=v1.31.1 --alsologtostderr -v=1 --driver=docker  --container-runtime=docker: (4m29.485101749s)
version_upgrade_test.go:248: (dbg) Run:  kubectl --context kubernetes-upgrade-222500 version --output=json
version_upgrade_test.go:267: Attempting to downgrade Kubernetes (should fail)
version_upgrade_test.go:269: (dbg) Run:  out/minikube-linux-amd64 start -p kubernetes-upgrade-222500 --memory=2200 --kubernetes-version=v1.20.0 --driver=docker  --container-runtime=docker
version_upgrade_test.go:269: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p kubernetes-upgrade-222500 --memory=2200 --kubernetes-version=v1.20.0 --driver=docker  --container-runtime=docker: exit status 106 (101.557044ms)

                                                
                                                
-- stdout --
	* [kubernetes-upgrade-222500] minikube v1.34.0 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=19689
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/19689-3716/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/19689-3716/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to K8S_DOWNGRADE_UNSUPPORTED: Unable to safely downgrade existing Kubernetes v1.31.1 cluster to v1.20.0
	* Suggestion: 
	
	    1) Recreate the cluster with Kubernetes 1.20.0, by running:
	    
	    minikube delete -p kubernetes-upgrade-222500
	    minikube start -p kubernetes-upgrade-222500 --kubernetes-version=v1.20.0
	    
	    2) Create a second cluster with Kubernetes 1.20.0, by running:
	    
	    minikube start -p kubernetes-upgrade-2225002 --kubernetes-version=v1.20.0
	    
	    3) Use the existing cluster at version Kubernetes 1.31.1, by running:
	    
	    minikube start -p kubernetes-upgrade-222500 --kubernetes-version=v1.31.1
	    

                                                
                                                
** /stderr **
version_upgrade_test.go:273: Attempting restart after unsuccessful downgrade
version_upgrade_test.go:275: (dbg) Run:  out/minikube-linux-amd64 start -p kubernetes-upgrade-222500 --memory=2200 --kubernetes-version=v1.31.1 --alsologtostderr -v=1 --driver=docker  --container-runtime=docker
version_upgrade_test.go:275: (dbg) Done: out/minikube-linux-amd64 start -p kubernetes-upgrade-222500 --memory=2200 --kubernetes-version=v1.31.1 --alsologtostderr -v=1 --driver=docker  --container-runtime=docker: (19.860906645s)
helpers_test.go:175: Cleaning up "kubernetes-upgrade-222500" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p kubernetes-upgrade-222500
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p kubernetes-upgrade-222500: (2.350328044s)
--- PASS: TestKubernetesUpgrade (337.08s)
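
Note: the downgrade attempt exits 106 by design; minikube only moves an existing cluster's Kubernetes version forward. The recovery options, lifted verbatim from the error output above:

	# 1) recreate the cluster at the older version
	minikube delete -p kubernetes-upgrade-222500
	minikube start -p kubernetes-upgrade-222500 --kubernetes-version=v1.20.0
	# 2) or keep this cluster and create a second one at v1.20.0
	minikube start -p kubernetes-upgrade-2225002 --kubernetes-version=v1.20.0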

                                                
                                    
TestMissingContainerUpgrade (145.63s)

                                                
                                                
=== RUN   TestMissingContainerUpgrade
=== PAUSE TestMissingContainerUpgrade

                                                
                                                

                                                
                                                
=== CONT  TestMissingContainerUpgrade
version_upgrade_test.go:309: (dbg) Run:  /tmp/minikube-v1.26.0.3454455516 start -p missing-upgrade-269238 --memory=2200 --driver=docker  --container-runtime=docker
version_upgrade_test.go:309: (dbg) Done: /tmp/minikube-v1.26.0.3454455516 start -p missing-upgrade-269238 --memory=2200 --driver=docker  --container-runtime=docker: (1m18.702206739s)
version_upgrade_test.go:318: (dbg) Run:  docker stop missing-upgrade-269238
version_upgrade_test.go:318: (dbg) Done: docker stop missing-upgrade-269238: (10.448290252s)
version_upgrade_test.go:323: (dbg) Run:  docker rm missing-upgrade-269238
version_upgrade_test.go:329: (dbg) Run:  out/minikube-linux-amd64 start -p missing-upgrade-269238 --memory=2200 --alsologtostderr -v=1 --driver=docker  --container-runtime=docker
version_upgrade_test.go:329: (dbg) Done: out/minikube-linux-amd64 start -p missing-upgrade-269238 --memory=2200 --alsologtostderr -v=1 --driver=docker  --container-runtime=docker: (53.83974961s)
helpers_test.go:175: Cleaning up "missing-upgrade-269238" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p missing-upgrade-269238
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p missing-upgrade-269238: (2.131505044s)
--- PASS: TestMissingContainerUpgrade (145.63s)
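
Note: this test removes the node's Docker container out from under minikube (`docker stop` plus `docker rm`) and verifies that a subsequent `start` recreates it. Reproducing the recovery step by hand (sketch; profile name from this run):

	docker stop missing-upgrade-269238 && docker rm missing-upgrade-269238
	minikube start -p missing-upgrade-269238 --driver=docker --container-runtime=docker   # rebuilds the missing container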

                                                
                                    
TestNoKubernetes/serial/StartNoK8sWithVersion (0.08s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/StartNoK8sWithVersion
no_kubernetes_test.go:83: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-884234 --no-kubernetes --kubernetes-version=1.20 --driver=docker  --container-runtime=docker
no_kubernetes_test.go:83: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p NoKubernetes-884234 --no-kubernetes --kubernetes-version=1.20 --driver=docker  --container-runtime=docker: exit status 14 (79.152344ms)

                                                
                                                
-- stdout --
	* [NoKubernetes-884234] minikube v1.34.0 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=19689
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/19689-3716/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/19689-3716/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to MK_USAGE: cannot specify --kubernetes-version with --no-kubernetes,
	to unset a global config run:
	
	$ minikube config unset kubernetes-version

                                                
                                                
** /stderr **
--- PASS: TestNoKubernetes/serial/StartNoK8sWithVersion (0.08s)
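
Note: `--no-kubernetes` conflicts with an explicit `--kubernetes-version`, including one pinned in the global config. Clearing the pin as the error message suggests (sketch):

	minikube config get kubernetes-version     # check whether a version is pinned globally
	minikube config unset kubernetes-version   # remove the pin, then retry with --no-kubernetes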

                                                
                                    
TestNoKubernetes/serial/StartWithK8s (30.07s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/StartWithK8s
no_kubernetes_test.go:95: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-884234 --driver=docker  --container-runtime=docker
no_kubernetes_test.go:95: (dbg) Done: out/minikube-linux-amd64 start -p NoKubernetes-884234 --driver=docker  --container-runtime=docker: (29.699164979s)
no_kubernetes_test.go:200: (dbg) Run:  out/minikube-linux-amd64 -p NoKubernetes-884234 status -o json
--- PASS: TestNoKubernetes/serial/StartWithK8s (30.07s)

                                                
                                    
TestNoKubernetes/serial/StartWithStopK8s (16.41s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/StartWithStopK8s
no_kubernetes_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-884234 --no-kubernetes --driver=docker  --container-runtime=docker
no_kubernetes_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p NoKubernetes-884234 --no-kubernetes --driver=docker  --container-runtime=docker: (14.4519479s)
no_kubernetes_test.go:200: (dbg) Run:  out/minikube-linux-amd64 -p NoKubernetes-884234 status -o json
no_kubernetes_test.go:200: (dbg) Non-zero exit: out/minikube-linux-amd64 -p NoKubernetes-884234 status -o json: exit status 2 (283.790327ms)

                                                
                                                
-- stdout --
	{"Name":"NoKubernetes-884234","Host":"Running","Kubelet":"Stopped","APIServer":"Stopped","Kubeconfig":"Configured","Worker":false}

                                                
                                                
-- /stdout --
no_kubernetes_test.go:124: (dbg) Run:  out/minikube-linux-amd64 delete -p NoKubernetes-884234
no_kubernetes_test.go:124: (dbg) Done: out/minikube-linux-amd64 delete -p NoKubernetes-884234: (1.66908814s)
--- PASS: TestNoKubernetes/serial/StartWithStopK8s (16.41s)

                                                
                                    
TestNoKubernetes/serial/Start (10.68s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/Start
no_kubernetes_test.go:136: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-884234 --no-kubernetes --driver=docker  --container-runtime=docker
no_kubernetes_test.go:136: (dbg) Done: out/minikube-linux-amd64 start -p NoKubernetes-884234 --no-kubernetes --driver=docker  --container-runtime=docker: (10.681482591s)
--- PASS: TestNoKubernetes/serial/Start (10.68s)

                                                
                                    
TestStoppedBinaryUpgrade/Setup (0.45s)

                                                
                                                
=== RUN   TestStoppedBinaryUpgrade/Setup
--- PASS: TestStoppedBinaryUpgrade/Setup (0.45s)

                                                
                                    
TestStoppedBinaryUpgrade/Upgrade (118.68s)

                                                
                                                
=== RUN   TestStoppedBinaryUpgrade/Upgrade
version_upgrade_test.go:183: (dbg) Run:  /tmp/minikube-v1.26.0.937965454 start -p stopped-upgrade-321814 --memory=2200 --vm-driver=docker  --container-runtime=docker
version_upgrade_test.go:183: (dbg) Done: /tmp/minikube-v1.26.0.937965454 start -p stopped-upgrade-321814 --memory=2200 --vm-driver=docker  --container-runtime=docker: (1m25.574816406s)
version_upgrade_test.go:192: (dbg) Run:  /tmp/minikube-v1.26.0.937965454 -p stopped-upgrade-321814 stop
version_upgrade_test.go:192: (dbg) Done: /tmp/minikube-v1.26.0.937965454 -p stopped-upgrade-321814 stop: (10.841147922s)
version_upgrade_test.go:198: (dbg) Run:  out/minikube-linux-amd64 start -p stopped-upgrade-321814 --memory=2200 --alsologtostderr -v=1 --driver=docker  --container-runtime=docker
version_upgrade_test.go:198: (dbg) Done: out/minikube-linux-amd64 start -p stopped-upgrade-321814 --memory=2200 --alsologtostderr -v=1 --driver=docker  --container-runtime=docker: (22.259624214s)
--- PASS: TestStoppedBinaryUpgrade/Upgrade (118.68s)
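
Note: the stopped-binary upgrade path is: provision with the old release, stop with that same old binary, then start with the binary under test. A sketch with a placeholder path standing in for this run's temp download of v1.26.0:

	OLD=/path/to/minikube-v1.26.0    # placeholder for the old release binary
	"$OLD" start -p stopped-upgrade-321814 --memory=2200 --vm-driver=docker --container-runtime=docker
	"$OLD" -p stopped-upgrade-321814 stop
	out/minikube-linux-amd64 start -p stopped-upgrade-321814 --memory=2200 --driver=docker --container-runtime=docker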

                                                
                                    
TestNoKubernetes/serial/VerifyK8sNotRunning (0.29s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/VerifyK8sNotRunning
no_kubernetes_test.go:147: (dbg) Run:  out/minikube-linux-amd64 ssh -p NoKubernetes-884234 "sudo systemctl is-active --quiet service kubelet"
no_kubernetes_test.go:147: (dbg) Non-zero exit: out/minikube-linux-amd64 ssh -p NoKubernetes-884234 "sudo systemctl is-active --quiet service kubelet": exit status 1 (291.020174ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 3

                                                
                                                
** /stderr **
--- PASS: TestNoKubernetes/serial/VerifyK8sNotRunning (0.29s)
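
Note: "Process exited with status 3" is the expected result here: `systemctl is-active` exits 0 for an active unit and non-zero otherwise (3 is the conventional code for an inactive one), so the failure confirms kubelet is not running. Checking it directly (sketch):

	minikube ssh -p NoKubernetes-884234 "sudo systemctl is-active kubelet"
	# expected output: "inactive"; minikube surfaces the remote exit as
	# "ssh: Process exited with status 3" and itself exits non-zero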

                                                
                                    
TestNoKubernetes/serial/ProfileList (0.95s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/ProfileList
no_kubernetes_test.go:169: (dbg) Run:  out/minikube-linux-amd64 profile list
no_kubernetes_test.go:179: (dbg) Run:  out/minikube-linux-amd64 profile list --output=json
--- PASS: TestNoKubernetes/serial/ProfileList (0.95s)

                                                
                                    
TestNoKubernetes/serial/Stop (1.17s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/Stop
no_kubernetes_test.go:158: (dbg) Run:  out/minikube-linux-amd64 stop -p NoKubernetes-884234
no_kubernetes_test.go:158: (dbg) Done: out/minikube-linux-amd64 stop -p NoKubernetes-884234: (1.167642371s)
--- PASS: TestNoKubernetes/serial/Stop (1.17s)

                                                
                                    
TestNoKubernetes/serial/StartNoArgs (6.81s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/StartNoArgs
no_kubernetes_test.go:191: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-884234 --driver=docker  --container-runtime=docker
no_kubernetes_test.go:191: (dbg) Done: out/minikube-linux-amd64 start -p NoKubernetes-884234 --driver=docker  --container-runtime=docker: (6.814002708s)
--- PASS: TestNoKubernetes/serial/StartNoArgs (6.81s)

                                                
                                    
TestNoKubernetes/serial/VerifyK8sNotRunningSecond (0.23s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/VerifyK8sNotRunningSecond
no_kubernetes_test.go:147: (dbg) Run:  out/minikube-linux-amd64 ssh -p NoKubernetes-884234 "sudo systemctl is-active --quiet service kubelet"
no_kubernetes_test.go:147: (dbg) Non-zero exit: out/minikube-linux-amd64 ssh -p NoKubernetes-884234 "sudo systemctl is-active --quiet service kubelet": exit status 1 (232.067448ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 3

                                                
                                                
** /stderr **
--- PASS: TestNoKubernetes/serial/VerifyK8sNotRunningSecond (0.23s)

                                                
                                    
TestStoppedBinaryUpgrade/MinikubeLogs (1.18s)

                                                
                                                
=== RUN   TestStoppedBinaryUpgrade/MinikubeLogs
version_upgrade_test.go:206: (dbg) Run:  out/minikube-linux-amd64 logs -p stopped-upgrade-321814
version_upgrade_test.go:206: (dbg) Done: out/minikube-linux-amd64 logs -p stopped-upgrade-321814: (1.181676304s)
--- PASS: TestStoppedBinaryUpgrade/MinikubeLogs (1.18s)

                                                
                                    
TestPause/serial/Start (61.22s)

                                                
                                                
=== RUN   TestPause/serial/Start
pause_test.go:80: (dbg) Run:  out/minikube-linux-amd64 start -p pause-195938 --memory=2048 --install-addons=false --wait=all --driver=docker  --container-runtime=docker
pause_test.go:80: (dbg) Done: out/minikube-linux-amd64 start -p pause-195938 --memory=2048 --install-addons=false --wait=all --driver=docker  --container-runtime=docker: (1m1.22457328s)
--- PASS: TestPause/serial/Start (61.22s)

TestNetworkPlugins/group/auto/Start (76.69s)

=== RUN   TestNetworkPlugins/group/auto/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p auto-100833 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --driver=docker  --container-runtime=docker
E0923 11:06:54.600202   10524 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19689-3716/.minikube/profiles/functional-001676/client.crt: no such file or directory" logger="UnhandledError"
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p auto-100833 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --driver=docker  --container-runtime=docker: (1m16.687754064s)
--- PASS: TestNetworkPlugins/group/auto/Start (76.69s)

TestNetworkPlugins/group/kindnet/Start (43.66s)

=== RUN   TestNetworkPlugins/group/kindnet/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p kindnet-100833 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=kindnet --driver=docker  --container-runtime=docker
E0923 11:07:05.703397   10524 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19689-3716/.minikube/profiles/skaffold-087030/client.crt: no such file or directory" logger="UnhandledError"
E0923 11:07:06.345070   10524 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19689-3716/.minikube/profiles/skaffold-087030/client.crt: no such file or directory" logger="UnhandledError"
E0923 11:07:07.626924   10524 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19689-3716/.minikube/profiles/skaffold-087030/client.crt: no such file or directory" logger="UnhandledError"
E0923 11:07:10.188641   10524 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19689-3716/.minikube/profiles/skaffold-087030/client.crt: no such file or directory" logger="UnhandledError"
E0923 11:07:15.310919   10524 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19689-3716/.minikube/profiles/skaffold-087030/client.crt: no such file or directory" logger="UnhandledError"
E0923 11:07:25.552439   10524 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19689-3716/.minikube/profiles/skaffold-087030/client.crt: no such file or directory" logger="UnhandledError"
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p kindnet-100833 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=kindnet --driver=docker  --container-runtime=docker: (43.661173103s)
--- PASS: TestNetworkPlugins/group/kindnet/Start (43.66s)

TestPause/serial/SecondStartNoReconfiguration (33.41s)

=== RUN   TestPause/serial/SecondStartNoReconfiguration
pause_test.go:92: (dbg) Run:  out/minikube-linux-amd64 start -p pause-195938 --alsologtostderr -v=1 --driver=docker  --container-runtime=docker
pause_test.go:92: (dbg) Done: out/minikube-linux-amd64 start -p pause-195938 --alsologtostderr -v=1 --driver=docker  --container-runtime=docker: (33.396365429s)
--- PASS: TestPause/serial/SecondStartNoReconfiguration (33.41s)

TestNetworkPlugins/group/auto/KubeletFlags (0.42s)

=== RUN   TestNetworkPlugins/group/auto/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p auto-100833 "pgrep -a kubelet"
I0923 11:07:43.608276   10524 config.go:182] Loaded profile config "auto-100833": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.31.1
--- PASS: TestNetworkPlugins/group/auto/KubeletFlags (0.42s)

TestNetworkPlugins/group/auto/NetCatPod (10.32s)

=== RUN   TestNetworkPlugins/group/auto/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context auto-100833 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/auto/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-6fc964789b-p864c" [5c2a0c63-fbf4-403b-a926-bd573cc1dcee] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
E0923 11:07:46.034263   10524 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19689-3716/.minikube/profiles/skaffold-087030/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:344: "netcat-6fc964789b-p864c" [5c2a0c63-fbf4-403b-a926-bd573cc1dcee] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/auto/NetCatPod: app=netcat healthy within 10.004928346s
--- PASS: TestNetworkPlugins/group/auto/NetCatPod (10.32s)

TestNetworkPlugins/group/kindnet/ControllerPod (6s)

=== RUN   TestNetworkPlugins/group/kindnet/ControllerPod
net_test.go:120: (dbg) TestNetworkPlugins/group/kindnet/ControllerPod: waiting 10m0s for pods matching "app=kindnet" in namespace "kube-system" ...
helpers_test.go:344: "kindnet-dtzzd" [3f2d3c4f-ad29-4504-adee-6472a8c6a2a5] Running
net_test.go:120: (dbg) TestNetworkPlugins/group/kindnet/ControllerPod: app=kindnet healthy within 6.003475472s
--- PASS: TestNetworkPlugins/group/kindnet/ControllerPod (6.00s)

TestNetworkPlugins/group/auto/DNS (0.13s)

=== RUN   TestNetworkPlugins/group/auto/DNS
net_test.go:175: (dbg) Run:  kubectl --context auto-100833 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/auto/DNS (0.13s)

TestNetworkPlugins/group/auto/Localhost (0.1s)

=== RUN   TestNetworkPlugins/group/auto/Localhost
net_test.go:194: (dbg) Run:  kubectl --context auto-100833 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/auto/Localhost (0.10s)

TestNetworkPlugins/group/auto/HairPin (0.1s)

=== RUN   TestNetworkPlugins/group/auto/HairPin
net_test.go:264: (dbg) Run:  kubectl --context auto-100833 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/auto/HairPin (0.10s)
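
The DNS/Localhost/HairPin trio above repeats for every network plugin in this run. nslookup kubernetes.default exercises in-cluster DNS; "nc -w 5 -i 5 -z localhost 8080" checks that the pod reaches itself on localhost (-z connects without sending any data, -w 5 caps the wait at five seconds); and dialing the service name netcat from inside the pod that backs it verifies hairpin NAT. A rough Go equivalent of the nc probe (an illustration of what -z asserts, not what the dnsutils container actually runs):

	package main

	import (
		"fmt"
		"net"
		"time"
	)

	// probe mimics "nc -w 5 -z host port": succeed iff a TCP connect
	// completes within the timeout; no payload is ever sent.
	func probe(hostport string) error {
		conn, err := net.DialTimeout("tcp", hostport, 5*time.Second)
		if err != nil {
			return err
		}
		return conn.Close()
	}

	func main() {
		for _, target := range []string{"localhost:8080", "netcat:8080"} {
			if err := probe(target); err != nil {
				fmt.Println("unreachable:", target, err)
				continue
			}
			fmt.Println("reachable:", target)
		}
	}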

TestNetworkPlugins/group/kindnet/KubeletFlags (0.25s)

=== RUN   TestNetworkPlugins/group/kindnet/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p kindnet-100833 "pgrep -a kubelet"
I0923 11:07:55.500020   10524 config.go:182] Loaded profile config "kindnet-100833": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.31.1
--- PASS: TestNetworkPlugins/group/kindnet/KubeletFlags (0.25s)

TestNetworkPlugins/group/kindnet/NetCatPod (9.2s)

=== RUN   TestNetworkPlugins/group/kindnet/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context kindnet-100833 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/kindnet/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-6fc964789b-h4rbk" [7e62f32a-4704-49a7-a147-da9010906e87] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-6fc964789b-h4rbk" [7e62f32a-4704-49a7-a147-da9010906e87] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/kindnet/NetCatPod: app=netcat healthy within 9.004145033s
--- PASS: TestNetworkPlugins/group/kindnet/NetCatPod (9.20s)

TestPause/serial/Pause (0.55s)

=== RUN   TestPause/serial/Pause
pause_test.go:110: (dbg) Run:  out/minikube-linux-amd64 pause -p pause-195938 --alsologtostderr -v=5
--- PASS: TestPause/serial/Pause (0.55s)

TestPause/serial/VerifyStatus (0.3s)

=== RUN   TestPause/serial/VerifyStatus
status_test.go:76: (dbg) Run:  out/minikube-linux-amd64 status -p pause-195938 --output=json --layout=cluster
status_test.go:76: (dbg) Non-zero exit: out/minikube-linux-amd64 status -p pause-195938 --output=json --layout=cluster: exit status 2 (299.635717ms)

-- stdout --
	{"Name":"pause-195938","StatusCode":418,"StatusName":"Paused","Step":"Done","StepDetail":"* Paused 12 containers in: kube-system, kubernetes-dashboard, storage-gluster, istio-operator","BinaryVersion":"v1.34.0","Components":{"kubeconfig":{"Name":"kubeconfig","StatusCode":200,"StatusName":"OK"}},"Nodes":[{"Name":"pause-195938","StatusCode":200,"StatusName":"OK","Components":{"apiserver":{"Name":"apiserver","StatusCode":418,"StatusName":"Paused"},"kubelet":{"Name":"kubelet","StatusCode":405,"StatusName":"Stopped"}}}]}

-- /stdout --
--- PASS: TestPause/serial/VerifyStatus (0.30s)
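
The status payload above (from --output=json --layout=cluster) is straightforward to consume programmatically. A small decoding sketch, with struct shapes inferred from that one sample rather than taken from minikube's source:

	package main

	import (
		"encoding/json"
		"fmt"
	)

	// Field names copied from the sample output above; anything minikube
	// emits beyond these is silently ignored by encoding/json.
	type component struct {
		Name       string `json:"Name"`
		StatusCode int    `json:"StatusCode"`
		StatusName string `json:"StatusName"`
	}

	type node struct {
		Name       string               `json:"Name"`
		Components map[string]component `json:"Components"`
	}

	type clusterStatus struct {
		Name          string `json:"Name"`
		StatusCode    int    `json:"StatusCode"`
		StatusName    string `json:"StatusName"`
		BinaryVersion string `json:"BinaryVersion"`
		Nodes         []node `json:"Nodes"`
	}

	func main() {
		raw := []byte(`{"Name":"pause-195938","StatusCode":418,"StatusName":"Paused","BinaryVersion":"v1.34.0","Nodes":[{"Name":"pause-195938","Components":{"kubelet":{"Name":"kubelet","StatusCode":405,"StatusName":"Stopped"}}}]}`)
		var st clusterStatus
		if err := json.Unmarshal(raw, &st); err != nil {
			panic(err)
		}
		// "Paused" (code 418) on the cluster is what accompanies the
		// nonzero exit (status 2) from the status command in this run.
		fmt.Printf("%s: %s (%d)\n", st.Name, st.StatusName, st.StatusCode)
	}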

TestPause/serial/Unpause (0.48s)

=== RUN   TestPause/serial/Unpause
pause_test.go:121: (dbg) Run:  out/minikube-linux-amd64 unpause -p pause-195938 --alsologtostderr -v=5
--- PASS: TestPause/serial/Unpause (0.48s)

TestPause/serial/PauseAgain (0.62s)

=== RUN   TestPause/serial/PauseAgain
pause_test.go:110: (dbg) Run:  out/minikube-linux-amd64 pause -p pause-195938 --alsologtostderr -v=5
--- PASS: TestPause/serial/PauseAgain (0.62s)

TestPause/serial/DeletePaused (2.07s)

=== RUN   TestPause/serial/DeletePaused
pause_test.go:132: (dbg) Run:  out/minikube-linux-amd64 delete -p pause-195938 --alsologtostderr -v=5
pause_test.go:132: (dbg) Done: out/minikube-linux-amd64 delete -p pause-195938 --alsologtostderr -v=5: (2.073800443s)
--- PASS: TestPause/serial/DeletePaused (2.07s)

TestPause/serial/VerifyDeletedResources (0.8s)

=== RUN   TestPause/serial/VerifyDeletedResources
pause_test.go:142: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
pause_test.go:168: (dbg) Run:  docker ps -a
pause_test.go:173: (dbg) Run:  docker volume inspect pause-195938
pause_test.go:173: (dbg) Non-zero exit: docker volume inspect pause-195938: exit status 1 (22.621187ms)

-- stdout --
	[]

-- /stdout --
** stderr ** 
	Error response from daemon: get pause-195938: no such volume

** /stderr **
pause_test.go:178: (dbg) Run:  docker network ls
--- PASS: TestPause/serial/VerifyDeletedResources (0.80s)
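
VerifyDeletedResources leans on the Docker CLI failing: once the profile is deleted, "docker volume inspect pause-195938" must exit nonzero with a "no such volume" error, and the container and network must no longer show up in docker ps -a / docker network ls. A compact sketch of that negative check (illustrative only; the real test also scans the profile list):

	package main

	import (
		"fmt"
		"os/exec"
	)

	func main() {
		profile := "pause-195938" // the profile deleted in the run above
		out, err := exec.Command("docker", "volume", "inspect", profile).CombinedOutput()
		if err != nil {
			// Expected path: the daemon reports the volume does not exist.
			fmt.Printf("volume gone, as expected: %s", out)
			return
		}
		fmt.Printf("volume still present, cleanup failed: %s", out)
	}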

TestNetworkPlugins/group/kindnet/DNS (0.15s)

=== RUN   TestNetworkPlugins/group/kindnet/DNS
net_test.go:175: (dbg) Run:  kubectl --context kindnet-100833 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/kindnet/DNS (0.15s)

TestNetworkPlugins/group/kindnet/Localhost (0.13s)

=== RUN   TestNetworkPlugins/group/kindnet/Localhost
net_test.go:194: (dbg) Run:  kubectl --context kindnet-100833 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/kindnet/Localhost (0.13s)

TestNetworkPlugins/group/kindnet/HairPin (0.13s)

=== RUN   TestNetworkPlugins/group/kindnet/HairPin
net_test.go:264: (dbg) Run:  kubectl --context kindnet-100833 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/kindnet/HairPin (0.13s)

TestNetworkPlugins/group/calico/Start (73.45s)

=== RUN   TestNetworkPlugins/group/calico/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p calico-100833 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=calico --driver=docker  --container-runtime=docker
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p calico-100833 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=calico --driver=docker  --container-runtime=docker: (1m13.448444904s)
--- PASS: TestNetworkPlugins/group/calico/Start (73.45s)

TestNetworkPlugins/group/custom-flannel/Start (44.1s)

=== RUN   TestNetworkPlugins/group/custom-flannel/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p custom-flannel-100833 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=testdata/kube-flannel.yaml --driver=docker  --container-runtime=docker
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p custom-flannel-100833 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=testdata/kube-flannel.yaml --driver=docker  --container-runtime=docker: (44.10317482s)
--- PASS: TestNetworkPlugins/group/custom-flannel/Start (44.10s)

TestNetworkPlugins/group/false/Start (74.27s)

=== RUN   TestNetworkPlugins/group/false/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p false-100833 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=false --driver=docker  --container-runtime=docker
E0923 11:08:26.995912   10524 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19689-3716/.minikube/profiles/skaffold-087030/client.crt: no such file or directory" logger="UnhandledError"
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p false-100833 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=false --driver=docker  --container-runtime=docker: (1m14.266007176s)
--- PASS: TestNetworkPlugins/group/false/Start (74.27s)

TestNetworkPlugins/group/custom-flannel/KubeletFlags (0.31s)

=== RUN   TestNetworkPlugins/group/custom-flannel/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p custom-flannel-100833 "pgrep -a kubelet"
I0923 11:08:57.644854   10524 config.go:182] Loaded profile config "custom-flannel-100833": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.31.1
--- PASS: TestNetworkPlugins/group/custom-flannel/KubeletFlags (0.31s)

TestNetworkPlugins/group/custom-flannel/NetCatPod (10.2s)

=== RUN   TestNetworkPlugins/group/custom-flannel/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context custom-flannel-100833 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/custom-flannel/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-6fc964789b-nfpbt" [bb1d794c-3c93-40ba-ae70-acdc0c0ff43b] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-6fc964789b-nfpbt" [bb1d794c-3c93-40ba-ae70-acdc0c0ff43b] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/custom-flannel/NetCatPod: app=netcat healthy within 10.00405692s
--- PASS: TestNetworkPlugins/group/custom-flannel/NetCatPod (10.20s)

TestNetworkPlugins/group/custom-flannel/DNS (0.14s)

=== RUN   TestNetworkPlugins/group/custom-flannel/DNS
net_test.go:175: (dbg) Run:  kubectl --context custom-flannel-100833 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/custom-flannel/DNS (0.14s)

TestNetworkPlugins/group/custom-flannel/Localhost (0.12s)

=== RUN   TestNetworkPlugins/group/custom-flannel/Localhost
net_test.go:194: (dbg) Run:  kubectl --context custom-flannel-100833 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/custom-flannel/Localhost (0.12s)

TestNetworkPlugins/group/custom-flannel/HairPin (0.13s)

=== RUN   TestNetworkPlugins/group/custom-flannel/HairPin
net_test.go:264: (dbg) Run:  kubectl --context custom-flannel-100833 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/custom-flannel/HairPin (0.13s)

TestNetworkPlugins/group/enable-default-cni/Start (66.88s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p enable-default-cni-100833 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --enable-default-cni=true --driver=docker  --container-runtime=docker
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p enable-default-cni-100833 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --enable-default-cni=true --driver=docker  --container-runtime=docker: (1m6.876229746s)
--- PASS: TestNetworkPlugins/group/enable-default-cni/Start (66.88s)

TestNetworkPlugins/group/calico/ControllerPod (6.01s)

=== RUN   TestNetworkPlugins/group/calico/ControllerPod
net_test.go:120: (dbg) TestNetworkPlugins/group/calico/ControllerPod: waiting 10m0s for pods matching "k8s-app=calico-node" in namespace "kube-system" ...
helpers_test.go:344: "calico-node-4n4c4" [de2ac127-89a7-43f3-b216-5671d454cdfa] Running
net_test.go:120: (dbg) TestNetworkPlugins/group/calico/ControllerPod: k8s-app=calico-node healthy within 6.004790083s
--- PASS: TestNetworkPlugins/group/calico/ControllerPod (6.01s)

TestNetworkPlugins/group/calico/KubeletFlags (0.26s)

=== RUN   TestNetworkPlugins/group/calico/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p calico-100833 "pgrep -a kubelet"
I0923 11:09:25.242482   10524 config.go:182] Loaded profile config "calico-100833": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.31.1
--- PASS: TestNetworkPlugins/group/calico/KubeletFlags (0.26s)

TestNetworkPlugins/group/calico/NetCatPod (9.2s)

=== RUN   TestNetworkPlugins/group/calico/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context calico-100833 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/calico/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-6fc964789b-fbrfn" [119d2c8f-0be6-4919-8568-c611694de43b] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-6fc964789b-fbrfn" [119d2c8f-0be6-4919-8568-c611694de43b] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/calico/NetCatPod: app=netcat healthy within 9.003716406s
--- PASS: TestNetworkPlugins/group/calico/NetCatPod (9.20s)

TestNetworkPlugins/group/bridge/Start (38.72s)

=== RUN   TestNetworkPlugins/group/bridge/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p bridge-100833 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=bridge --driver=docker  --container-runtime=docker
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p bridge-100833 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=bridge --driver=docker  --container-runtime=docker: (38.682192543s)
--- PASS: TestNetworkPlugins/group/bridge/Start (38.72s)

TestNetworkPlugins/group/calico/DNS (0.13s)

=== RUN   TestNetworkPlugins/group/calico/DNS
net_test.go:175: (dbg) Run:  kubectl --context calico-100833 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/calico/DNS (0.13s)

TestNetworkPlugins/group/calico/Localhost (0.11s)

=== RUN   TestNetworkPlugins/group/calico/Localhost
net_test.go:194: (dbg) Run:  kubectl --context calico-100833 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/calico/Localhost (0.11s)

TestNetworkPlugins/group/calico/HairPin (0.11s)

=== RUN   TestNetworkPlugins/group/calico/HairPin
net_test.go:264: (dbg) Run:  kubectl --context calico-100833 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/calico/HairPin (0.11s)

TestNetworkPlugins/group/false/KubeletFlags (0.3s)

=== RUN   TestNetworkPlugins/group/false/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p false-100833 "pgrep -a kubelet"
I0923 11:09:39.275789   10524 config.go:182] Loaded profile config "false-100833": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.31.1
--- PASS: TestNetworkPlugins/group/false/KubeletFlags (0.30s)

TestNetworkPlugins/group/false/NetCatPod (10.23s)

=== RUN   TestNetworkPlugins/group/false/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context false-100833 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/false/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-6fc964789b-nbmzl" [29642229-3d14-43b4-9095-eb6cb9a5af11] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-6fc964789b-nbmzl" [29642229-3d14-43b4-9095-eb6cb9a5af11] Running
E0923 11:09:48.917735   10524 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19689-3716/.minikube/profiles/skaffold-087030/client.crt: no such file or directory" logger="UnhandledError"
net_test.go:163: (dbg) TestNetworkPlugins/group/false/NetCatPod: app=netcat healthy within 10.006768455s
--- PASS: TestNetworkPlugins/group/false/NetCatPod (10.23s)

TestNetworkPlugins/group/false/DNS (0.24s)

=== RUN   TestNetworkPlugins/group/false/DNS
net_test.go:175: (dbg) Run:  kubectl --context false-100833 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/false/DNS (0.24s)

TestNetworkPlugins/group/false/Localhost (0.15s)

=== RUN   TestNetworkPlugins/group/false/Localhost
net_test.go:194: (dbg) Run:  kubectl --context false-100833 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/false/Localhost (0.15s)

TestNetworkPlugins/group/false/HairPin (0.22s)

=== RUN   TestNetworkPlugins/group/false/HairPin
net_test.go:264: (dbg) Run:  kubectl --context false-100833 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/false/HairPin (0.22s)

TestNetworkPlugins/group/flannel/Start (44.2s)

=== RUN   TestNetworkPlugins/group/flannel/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p flannel-100833 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=flannel --driver=docker  --container-runtime=docker
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p flannel-100833 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=flannel --driver=docker  --container-runtime=docker: (44.197732895s)
--- PASS: TestNetworkPlugins/group/flannel/Start (44.20s)

TestNetworkPlugins/group/bridge/KubeletFlags (0.31s)

=== RUN   TestNetworkPlugins/group/bridge/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p bridge-100833 "pgrep -a kubelet"
I0923 11:10:07.097222   10524 config.go:182] Loaded profile config "bridge-100833": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.31.1
--- PASS: TestNetworkPlugins/group/bridge/KubeletFlags (0.31s)

TestNetworkPlugins/group/bridge/NetCatPod (10.24s)

=== RUN   TestNetworkPlugins/group/bridge/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context bridge-100833 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/bridge/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-6fc964789b-glmn2" [ba3738e3-ed78-4f4d-b159-685c6e43bf62] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-6fc964789b-glmn2" [ba3738e3-ed78-4f4d-b159-685c6e43bf62] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/bridge/NetCatPod: app=netcat healthy within 10.003308416s
--- PASS: TestNetworkPlugins/group/bridge/NetCatPod (10.24s)

TestNetworkPlugins/group/kubenet/Start (67.8s)

=== RUN   TestNetworkPlugins/group/kubenet/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p kubenet-100833 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --network-plugin=kubenet --driver=docker  --container-runtime=docker
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p kubenet-100833 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --network-plugin=kubenet --driver=docker  --container-runtime=docker: (1m7.797952701s)
--- PASS: TestNetworkPlugins/group/kubenet/Start (67.80s)

TestNetworkPlugins/group/bridge/DNS (0.15s)

=== RUN   TestNetworkPlugins/group/bridge/DNS
net_test.go:175: (dbg) Run:  kubectl --context bridge-100833 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/bridge/DNS (0.15s)

TestNetworkPlugins/group/bridge/Localhost (0.11s)

=== RUN   TestNetworkPlugins/group/bridge/Localhost
net_test.go:194: (dbg) Run:  kubectl --context bridge-100833 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/bridge/Localhost (0.11s)

TestNetworkPlugins/group/bridge/HairPin (0.1s)

=== RUN   TestNetworkPlugins/group/bridge/HairPin
net_test.go:264: (dbg) Run:  kubectl --context bridge-100833 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/bridge/HairPin (0.10s)

TestNetworkPlugins/group/enable-default-cni/KubeletFlags (0.34s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p enable-default-cni-100833 "pgrep -a kubelet"
I0923 11:10:23.638864   10524 config.go:182] Loaded profile config "enable-default-cni-100833": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.31.1
--- PASS: TestNetworkPlugins/group/enable-default-cni/KubeletFlags (0.34s)

TestNetworkPlugins/group/enable-default-cni/NetCatPod (10.22s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context enable-default-cni-100833 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/enable-default-cni/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-6fc964789b-mvjlw" [68a4fd15-4247-4b83-aea6-61bd71ab4dbc] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-6fc964789b-mvjlw" [68a4fd15-4247-4b83-aea6-61bd71ab4dbc] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/enable-default-cni/NetCatPod: app=netcat healthy within 10.003732558s
--- PASS: TestNetworkPlugins/group/enable-default-cni/NetCatPod (10.22s)

TestNetworkPlugins/group/enable-default-cni/DNS (0.19s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/DNS
net_test.go:175: (dbg) Run:  kubectl --context enable-default-cni-100833 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/enable-default-cni/DNS (0.19s)

TestNetworkPlugins/group/enable-default-cni/Localhost (0.2s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/Localhost
net_test.go:194: (dbg) Run:  kubectl --context enable-default-cni-100833 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/enable-default-cni/Localhost (0.20s)

TestNetworkPlugins/group/enable-default-cni/HairPin (0.15s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/HairPin
net_test.go:264: (dbg) Run:  kubectl --context enable-default-cni-100833 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/enable-default-cni/HairPin (0.15s)

TestStartStop/group/old-k8s-version/serial/FirstStart (104.4s)

=== RUN   TestStartStop/group/old-k8s-version/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-linux-amd64 start -p old-k8s-version-939425 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=docker --kubernetes-version=v1.20.0
start_stop_delete_test.go:186: (dbg) Done: out/minikube-linux-amd64 start -p old-k8s-version-939425 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=docker --kubernetes-version=v1.20.0: (1m44.394909034s)
--- PASS: TestStartStop/group/old-k8s-version/serial/FirstStart (104.40s)

TestNetworkPlugins/group/flannel/ControllerPod (6.01s)

=== RUN   TestNetworkPlugins/group/flannel/ControllerPod
net_test.go:120: (dbg) TestNetworkPlugins/group/flannel/ControllerPod: waiting 10m0s for pods matching "app=flannel" in namespace "kube-flannel" ...
helpers_test.go:344: "kube-flannel-ds-8jnwg" [238631af-741e-4b45-b2d4-ac33bd9cf8bb] Running
net_test.go:120: (dbg) TestNetworkPlugins/group/flannel/ControllerPod: app=flannel healthy within 6.004009778s
--- PASS: TestNetworkPlugins/group/flannel/ControllerPod (6.01s)

TestNetworkPlugins/group/flannel/KubeletFlags (0.28s)

=== RUN   TestNetworkPlugins/group/flannel/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p flannel-100833 "pgrep -a kubelet"
I0923 11:10:45.387065   10524 config.go:182] Loaded profile config "flannel-100833": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.31.1
--- PASS: TestNetworkPlugins/group/flannel/KubeletFlags (0.28s)

TestNetworkPlugins/group/flannel/NetCatPod (8.19s)

=== RUN   TestNetworkPlugins/group/flannel/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context flannel-100833 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/flannel/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-6fc964789b-z5fvd" [69d9b8fc-9353-4304-88d3-b60a0911c8f0] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-6fc964789b-z5fvd" [69d9b8fc-9353-4304-88d3-b60a0911c8f0] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/flannel/NetCatPod: app=netcat healthy within 8.003959336s
--- PASS: TestNetworkPlugins/group/flannel/NetCatPod (8.19s)

TestNetworkPlugins/group/flannel/DNS (0.15s)

=== RUN   TestNetworkPlugins/group/flannel/DNS
net_test.go:175: (dbg) Run:  kubectl --context flannel-100833 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/flannel/DNS (0.15s)

TestNetworkPlugins/group/flannel/Localhost (0.13s)

=== RUN   TestNetworkPlugins/group/flannel/Localhost
net_test.go:194: (dbg) Run:  kubectl --context flannel-100833 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/flannel/Localhost (0.13s)

TestNetworkPlugins/group/flannel/HairPin (0.12s)

=== RUN   TestNetworkPlugins/group/flannel/HairPin
net_test.go:264: (dbg) Run:  kubectl --context flannel-100833 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/flannel/HairPin (0.12s)

TestStartStop/group/no-preload/serial/FirstStart (41.96s)

=== RUN   TestStartStop/group/no-preload/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-linux-amd64 start -p no-preload-219536 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=docker --kubernetes-version=v1.31.1
start_stop_delete_test.go:186: (dbg) Done: out/minikube-linux-amd64 start -p no-preload-219536 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=docker --kubernetes-version=v1.31.1: (41.964482019s)
--- PASS: TestStartStop/group/no-preload/serial/FirstStart (41.96s)

TestStartStop/group/embed-certs/serial/FirstStart (37.45s)

=== RUN   TestStartStop/group/embed-certs/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-linux-amd64 start -p embed-certs-312165 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=docker --kubernetes-version=v1.31.1
start_stop_delete_test.go:186: (dbg) Done: out/minikube-linux-amd64 start -p embed-certs-312165 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=docker --kubernetes-version=v1.31.1: (37.446315086s)
--- PASS: TestStartStop/group/embed-certs/serial/FirstStart (37.45s)

TestNetworkPlugins/group/kubenet/KubeletFlags (0.34s)

=== RUN   TestNetworkPlugins/group/kubenet/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p kubenet-100833 "pgrep -a kubelet"
I0923 11:11:17.185623   10524 config.go:182] Loaded profile config "kubenet-100833": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.31.1
--- PASS: TestNetworkPlugins/group/kubenet/KubeletFlags (0.34s)

TestNetworkPlugins/group/kubenet/NetCatPod (11.23s)

=== RUN   TestNetworkPlugins/group/kubenet/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context kubenet-100833 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/kubenet/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-6fc964789b-6r5fz" [4094e3b1-0813-4e32-b685-5c844a5517ae] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-6fc964789b-6r5fz" [4094e3b1-0813-4e32-b685-5c844a5517ae] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/kubenet/NetCatPod: app=netcat healthy within 11.004859302s
--- PASS: TestNetworkPlugins/group/kubenet/NetCatPod (11.23s)

TestNetworkPlugins/group/kubenet/DNS (0.23s)

=== RUN   TestNetworkPlugins/group/kubenet/DNS
net_test.go:175: (dbg) Run:  kubectl --context kubenet-100833 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/kubenet/DNS (0.23s)

TestNetworkPlugins/group/kubenet/Localhost (0.23s)

=== RUN   TestNetworkPlugins/group/kubenet/Localhost
net_test.go:194: (dbg) Run:  kubectl --context kubenet-100833 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/kubenet/Localhost (0.23s)

TestNetworkPlugins/group/kubenet/HairPin (0.17s)

=== RUN   TestNetworkPlugins/group/kubenet/HairPin
net_test.go:264: (dbg) Run:  kubectl --context kubenet-100833 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/kubenet/HairPin (0.17s)

TestStartStop/group/no-preload/serial/DeployApp (9.24s)

=== RUN   TestStartStop/group/no-preload/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context no-preload-219536 create -f testdata/busybox.yaml
start_stop_delete_test.go:196: (dbg) TestStartStop/group/no-preload/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:344: "busybox" [0fabfa0a-7069-4a1f-93c1-e5b8b73b4c59] Pending
helpers_test.go:344: "busybox" [0fabfa0a-7069-4a1f-93c1-e5b8b73b4c59] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:344: "busybox" [0fabfa0a-7069-4a1f-93c1-e5b8b73b4c59] Running
start_stop_delete_test.go:196: (dbg) TestStartStop/group/no-preload/serial/DeployApp: integration-test=busybox healthy within 9.00457337s
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context no-preload-219536 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/no-preload/serial/DeployApp (9.24s)
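
DeployApp ends with "ulimit -n" inside the busybox pod, a quick sanity check on the container's open-file-descriptor limit. The same figure the shell builtin prints can be read via RLIMIT_NOFILE; a tiny Linux-only sketch (not the test's code, which simply execs the shell builtin):

	package main

	import (
		"fmt"
		"syscall"
	)

	func main() {
		var lim syscall.Rlimit
		// "ulimit -n" reports the soft limit; Getrlimit returns both bounds.
		if err := syscall.Getrlimit(syscall.RLIMIT_NOFILE, &lim); err != nil {
			panic(err)
		}
		fmt.Printf("open files: soft=%d hard=%d\n", lim.Cur, lim.Max)
	}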

TestStartStop/group/no-preload/serial/EnableAddonWhileActive (1.03s)

=== RUN   TestStartStop/group/no-preload/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p no-preload-219536 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context no-preload-219536 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/no-preload/serial/EnableAddonWhileActive (1.03s)

TestStartStop/group/default-k8s-diff-port/serial/FirstStart (66.91s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-linux-amd64 start -p default-k8s-diff-port-558288 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=docker --kubernetes-version=v1.31.1
start_stop_delete_test.go:186: (dbg) Done: out/minikube-linux-amd64 start -p default-k8s-diff-port-558288 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=docker --kubernetes-version=v1.31.1: (1m6.906780892s)
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/FirstStart (66.91s)

TestStartStop/group/no-preload/serial/Stop (10.78s)

=== RUN   TestStartStop/group/no-preload/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-linux-amd64 stop -p no-preload-219536 --alsologtostderr -v=3
start_stop_delete_test.go:228: (dbg) Done: out/minikube-linux-amd64 stop -p no-preload-219536 --alsologtostderr -v=3: (10.778299206s)
--- PASS: TestStartStop/group/no-preload/serial/Stop (10.78s)

TestStartStop/group/embed-certs/serial/DeployApp (9.23s)

=== RUN   TestStartStop/group/embed-certs/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context embed-certs-312165 create -f testdata/busybox.yaml
start_stop_delete_test.go:196: (dbg) TestStartStop/group/embed-certs/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:344: "busybox" [c545693a-74b9-4cf3-ba80-2e894b14cd74] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
E0923 11:11:54.599919   10524 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19689-3716/.minikube/profiles/functional-001676/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:344: "busybox" [c545693a-74b9-4cf3-ba80-2e894b14cd74] Running
start_stop_delete_test.go:196: (dbg) TestStartStop/group/embed-certs/serial/DeployApp: integration-test=busybox healthy within 9.003791541s
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context embed-certs-312165 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/embed-certs/serial/DeployApp (9.23s)

TestStartStop/group/no-preload/serial/EnableAddonAfterStop (0.27s)

=== RUN   TestStartStop/group/no-preload/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-219536 -n no-preload-219536
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-219536 -n no-preload-219536: exit status 7 (154.339279ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p no-preload-219536 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/no-preload/serial/EnableAddonAfterStop (0.27s)
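
The "(may be ok)" annotation above reflects that a nonzero exit from minikube status is tolerated at this step: with the cluster stopped, the command prints "Stopped" and exits with status 7, and the test only needs the host state before re-enabling the dashboard addon. A sketch of capturing both the output and the exit code (an assumed wrapper, not the harness's helper):

	package main

	import (
		"errors"
		"fmt"
		"os/exec"
	)

	func main() {
		cmd := exec.Command("out/minikube-linux-amd64", "status", "--format={{.Host}}", "-p", "no-preload-219536")
		out, err := cmd.Output()
		var ee *exec.ExitError
		if errors.As(err, &ee) {
			// Matches the run above: "Stopped" on stdout with exit status 7,
			// which the caller treats as acceptable for a stopped profile.
			fmt.Printf("host %q, exit code %d (may be ok)\n", string(out), ee.ExitCode())
			return
		}
		if err != nil {
			panic(err)
		}
		fmt.Printf("host %q\n", string(out))
	}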

TestStartStop/group/no-preload/serial/SecondStart (263.37s)

=== RUN   TestStartStop/group/no-preload/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-linux-amd64 start -p no-preload-219536 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=docker --kubernetes-version=v1.31.1
start_stop_delete_test.go:256: (dbg) Done: out/minikube-linux-amd64 start -p no-preload-219536 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=docker --kubernetes-version=v1.31.1: (4m23.070795004s)
start_stop_delete_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-219536 -n no-preload-219536
--- PASS: TestStartStop/group/no-preload/serial/SecondStart (263.37s)

TestStartStop/group/embed-certs/serial/EnableAddonWhileActive (0.88s)

=== RUN   TestStartStop/group/embed-certs/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p embed-certs-312165 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context embed-certs-312165 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/embed-certs/serial/EnableAddonWhileActive (0.88s)

TestStartStop/group/embed-certs/serial/Stop (10.9s)

=== RUN   TestStartStop/group/embed-certs/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-linux-amd64 stop -p embed-certs-312165 --alsologtostderr -v=3
E0923 11:12:05.058011   10524 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19689-3716/.minikube/profiles/skaffold-087030/client.crt: no such file or directory" logger="UnhandledError"
start_stop_delete_test.go:228: (dbg) Done: out/minikube-linux-amd64 stop -p embed-certs-312165 --alsologtostderr -v=3: (10.899293572s)
--- PASS: TestStartStop/group/embed-certs/serial/Stop (10.90s)

TestStartStop/group/embed-certs/serial/EnableAddonAfterStop (0.24s)

=== RUN   TestStartStop/group/embed-certs/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-312165 -n embed-certs-312165
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-312165 -n embed-certs-312165: exit status 7 (128.478364ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p embed-certs-312165 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/embed-certs/serial/EnableAddonAfterStop (0.24s)

TestStartStop/group/embed-certs/serial/SecondStart (263.02s)

=== RUN   TestStartStop/group/embed-certs/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-linux-amd64 start -p embed-certs-312165 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=docker --kubernetes-version=v1.31.1
E0923 11:12:15.879030   10524 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19689-3716/.minikube/profiles/addons-071702/client.crt: no such file or directory" logger="UnhandledError"
start_stop_delete_test.go:256: (dbg) Done: out/minikube-linux-amd64 start -p embed-certs-312165 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=docker --kubernetes-version=v1.31.1: (4m22.697988367s)
start_stop_delete_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-312165 -n embed-certs-312165
--- PASS: TestStartStop/group/embed-certs/serial/SecondStart (263.02s)

TestStartStop/group/old-k8s-version/serial/DeployApp (9.38s)

=== RUN   TestStartStop/group/old-k8s-version/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context old-k8s-version-939425 create -f testdata/busybox.yaml
start_stop_delete_test.go:196: (dbg) TestStartStop/group/old-k8s-version/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:344: "busybox" [85d0da64-de25-40fc-b5d9-299c01e8741d] Pending
helpers_test.go:344: "busybox" [85d0da64-de25-40fc-b5d9-299c01e8741d] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:344: "busybox" [85d0da64-de25-40fc-b5d9-299c01e8741d] Running
start_stop_delete_test.go:196: (dbg) TestStartStop/group/old-k8s-version/serial/DeployApp: integration-test=busybox healthy within 9.004069501s
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context old-k8s-version-939425 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/old-k8s-version/serial/DeployApp (9.38s)
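
The DeployApp step creates a pod from the repo's testdata/busybox.yaml, waits for it to match integration-test=busybox, then execs a ulimit read to prove exec works end to end. A rough standalone equivalent (the heredoc manifest is a hypothetical stand-in for testdata/busybox.yaml, not its actual contents; the image name is the one this report lists later under VerifyKubernetesImages):

	kubectl --context old-k8s-version-939425 apply -f - <<'EOF'
	apiVersion: v1
	kind: Pod
	metadata:
	  name: busybox
	  labels:
	    integration-test: busybox
	spec:
	  containers:
	  - name: busybox
	    image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
	    command: ["sleep", "3600"]
	EOF
	kubectl --context old-k8s-version-939425 wait pod/busybox --for=condition=Ready --timeout=8m
	kubectl --context old-k8s-version-939425 exec busybox -- /bin/sh -c "ulimit -n"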

TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive (0.79s)

=== RUN   TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p old-k8s-version-939425 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context old-k8s-version-939425 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive (0.79s)

TestStartStop/group/old-k8s-version/serial/Stop (10.94s)

=== RUN   TestStartStop/group/old-k8s-version/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-linux-amd64 stop -p old-k8s-version-939425 --alsologtostderr -v=3
E0923 11:12:32.759788   10524 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19689-3716/.minikube/profiles/skaffold-087030/client.crt: no such file or directory" logger="UnhandledError"
start_stop_delete_test.go:228: (dbg) Done: out/minikube-linux-amd64 stop -p old-k8s-version-939425 --alsologtostderr -v=3: (10.940291527s)
--- PASS: TestStartStop/group/old-k8s-version/serial/Stop (10.94s)

TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop (0.18s)

=== RUN   TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-939425 -n old-k8s-version-939425
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-939425 -n old-k8s-version-939425: exit status 7 (75.245908ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p old-k8s-version-939425 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop (0.18s)

TestStartStop/group/old-k8s-version/serial/SecondStart (143.88s)

=== RUN   TestStartStop/group/old-k8s-version/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-linux-amd64 start -p old-k8s-version-939425 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=docker --kubernetes-version=v1.20.0
E0923 11:12:43.915662   10524 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19689-3716/.minikube/profiles/auto-100833/client.crt: no such file or directory" logger="UnhandledError"
E0923 11:12:43.923049   10524 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19689-3716/.minikube/profiles/auto-100833/client.crt: no such file or directory" logger="UnhandledError"
E0923 11:12:43.934446   10524 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19689-3716/.minikube/profiles/auto-100833/client.crt: no such file or directory" logger="UnhandledError"
E0923 11:12:43.956010   10524 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19689-3716/.minikube/profiles/auto-100833/client.crt: no such file or directory" logger="UnhandledError"
E0923 11:12:43.997780   10524 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19689-3716/.minikube/profiles/auto-100833/client.crt: no such file or directory" logger="UnhandledError"
E0923 11:12:44.079284   10524 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19689-3716/.minikube/profiles/auto-100833/client.crt: no such file or directory" logger="UnhandledError"
E0923 11:12:44.241102   10524 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19689-3716/.minikube/profiles/auto-100833/client.crt: no such file or directory" logger="UnhandledError"
E0923 11:12:44.563397   10524 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19689-3716/.minikube/profiles/auto-100833/client.crt: no such file or directory" logger="UnhandledError"
E0923 11:12:45.204683   10524 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19689-3716/.minikube/profiles/auto-100833/client.crt: no such file or directory" logger="UnhandledError"
E0923 11:12:46.486283   10524 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19689-3716/.minikube/profiles/auto-100833/client.crt: no such file or directory" logger="UnhandledError"
E0923 11:12:49.048539   10524 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19689-3716/.minikube/profiles/auto-100833/client.crt: no such file or directory" logger="UnhandledError"
E0923 11:12:49.245158   10524 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19689-3716/.minikube/profiles/kindnet-100833/client.crt: no such file or directory" logger="UnhandledError"
E0923 11:12:49.251500   10524 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19689-3716/.minikube/profiles/kindnet-100833/client.crt: no such file or directory" logger="UnhandledError"
E0923 11:12:49.262894   10524 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19689-3716/.minikube/profiles/kindnet-100833/client.crt: no such file or directory" logger="UnhandledError"
E0923 11:12:49.284258   10524 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19689-3716/.minikube/profiles/kindnet-100833/client.crt: no such file or directory" logger="UnhandledError"
E0923 11:12:49.326446   10524 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19689-3716/.minikube/profiles/kindnet-100833/client.crt: no such file or directory" logger="UnhandledError"
E0923 11:12:49.408544   10524 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19689-3716/.minikube/profiles/kindnet-100833/client.crt: no such file or directory" logger="UnhandledError"
E0923 11:12:49.570801   10524 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19689-3716/.minikube/profiles/kindnet-100833/client.crt: no such file or directory" logger="UnhandledError"
E0923 11:12:49.892409   10524 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19689-3716/.minikube/profiles/kindnet-100833/client.crt: no such file or directory" logger="UnhandledError"
E0923 11:12:50.534608   10524 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19689-3716/.minikube/profiles/kindnet-100833/client.crt: no such file or directory" logger="UnhandledError"
E0923 11:12:51.816076   10524 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19689-3716/.minikube/profiles/kindnet-100833/client.crt: no such file or directory" logger="UnhandledError"
E0923 11:12:54.170606   10524 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19689-3716/.minikube/profiles/auto-100833/client.crt: no such file or directory" logger="UnhandledError"
E0923 11:12:54.377538   10524 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19689-3716/.minikube/profiles/kindnet-100833/client.crt: no such file or directory" logger="UnhandledError"
start_stop_delete_test.go:256: (dbg) Done: out/minikube-linux-amd64 start -p old-k8s-version-939425 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=docker --kubernetes-version=v1.20.0: (2m23.584941116s)
start_stop_delete_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-939425 -n old-k8s-version-939425
E0923 11:15:07.323609   10524 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19689-3716/.minikube/profiles/bridge-100833/client.crt: no such file or directory" logger="UnhandledError"
E0923 11:15:07.330012   10524 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19689-3716/.minikube/profiles/bridge-100833/client.crt: no such file or directory" logger="UnhandledError"
E0923 11:15:07.341358   10524 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19689-3716/.minikube/profiles/bridge-100833/client.crt: no such file or directory" logger="UnhandledError"
E0923 11:15:07.363635   10524 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19689-3716/.minikube/profiles/bridge-100833/client.crt: no such file or directory" logger="UnhandledError"
E0923 11:15:07.405061   10524 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19689-3716/.minikube/profiles/bridge-100833/client.crt: no such file or directory" logger="UnhandledError"
E0923 11:15:07.488653   10524 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19689-3716/.minikube/profiles/bridge-100833/client.crt: no such file or directory" logger="UnhandledError"
--- PASS: TestStartStop/group/old-k8s-version/serial/SecondStart (143.88s)

TestStartStop/group/default-k8s-diff-port/serial/DeployApp (9.46s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context default-k8s-diff-port-558288 create -f testdata/busybox.yaml
start_stop_delete_test.go:196: (dbg) TestStartStop/group/default-k8s-diff-port/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:344: "busybox" [505717ee-a019-473e-a9d7-8513f7844c6e] Pending
helpers_test.go:344: "busybox" [505717ee-a019-473e-a9d7-8513f7844c6e] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:344: "busybox" [505717ee-a019-473e-a9d7-8513f7844c6e] Running
E0923 11:12:59.499717   10524 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19689-3716/.minikube/profiles/kindnet-100833/client.crt: no such file or directory" logger="UnhandledError"
start_stop_delete_test.go:196: (dbg) TestStartStop/group/default-k8s-diff-port/serial/DeployApp: integration-test=busybox healthy within 9.00385321s
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context default-k8s-diff-port-558288 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/DeployApp (9.46s)

TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive (0.91s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p default-k8s-diff-port-558288 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
E0923 11:13:04.412552   10524 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19689-3716/.minikube/profiles/auto-100833/client.crt: no such file or directory" logger="UnhandledError"
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context default-k8s-diff-port-558288 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive (0.91s)

TestStartStop/group/default-k8s-diff-port/serial/Stop (10.74s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-linux-amd64 stop -p default-k8s-diff-port-558288 --alsologtostderr -v=3
E0923 11:13:09.741296   10524 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19689-3716/.minikube/profiles/kindnet-100833/client.crt: no such file or directory" logger="UnhandledError"
start_stop_delete_test.go:228: (dbg) Done: out/minikube-linux-amd64 stop -p default-k8s-diff-port-558288 --alsologtostderr -v=3: (10.744091059s)
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/Stop (10.74s)

TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop (0.16s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-558288 -n default-k8s-diff-port-558288
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-558288 -n default-k8s-diff-port-558288: exit status 7 (61.279139ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p default-k8s-diff-port-558288 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop (0.16s)

TestStartStop/group/default-k8s-diff-port/serial/SecondStart (262.62s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-linux-amd64 start -p default-k8s-diff-port-558288 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=docker --kubernetes-version=v1.31.1
E0923 11:13:24.894640   10524 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19689-3716/.minikube/profiles/auto-100833/client.crt: no such file or directory" logger="UnhandledError"
E0923 11:13:30.223265   10524 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19689-3716/.minikube/profiles/kindnet-100833/client.crt: no such file or directory" logger="UnhandledError"
E0923 11:13:57.836037   10524 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19689-3716/.minikube/profiles/custom-flannel-100833/client.crt: no such file or directory" logger="UnhandledError"
E0923 11:13:57.842407   10524 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19689-3716/.minikube/profiles/custom-flannel-100833/client.crt: no such file or directory" logger="UnhandledError"
E0923 11:13:57.853756   10524 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19689-3716/.minikube/profiles/custom-flannel-100833/client.crt: no such file or directory" logger="UnhandledError"
E0923 11:13:57.875782   10524 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19689-3716/.minikube/profiles/custom-flannel-100833/client.crt: no such file or directory" logger="UnhandledError"
E0923 11:13:57.917430   10524 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19689-3716/.minikube/profiles/custom-flannel-100833/client.crt: no such file or directory" logger="UnhandledError"
E0923 11:13:57.998863   10524 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19689-3716/.minikube/profiles/custom-flannel-100833/client.crt: no such file or directory" logger="UnhandledError"
E0923 11:13:58.160319   10524 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19689-3716/.minikube/profiles/custom-flannel-100833/client.crt: no such file or directory" logger="UnhandledError"
E0923 11:13:58.482451   10524 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19689-3716/.minikube/profiles/custom-flannel-100833/client.crt: no such file or directory" logger="UnhandledError"
E0923 11:13:59.124352   10524 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19689-3716/.minikube/profiles/custom-flannel-100833/client.crt: no such file or directory" logger="UnhandledError"
E0923 11:14:00.406639   10524 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19689-3716/.minikube/profiles/custom-flannel-100833/client.crt: no such file or directory" logger="UnhandledError"
E0923 11:14:02.968487   10524 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19689-3716/.minikube/profiles/custom-flannel-100833/client.crt: no such file or directory" logger="UnhandledError"
E0923 11:14:05.856777   10524 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19689-3716/.minikube/profiles/auto-100833/client.crt: no such file or directory" logger="UnhandledError"
E0923 11:14:08.090443   10524 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19689-3716/.minikube/profiles/custom-flannel-100833/client.crt: no such file or directory" logger="UnhandledError"
E0923 11:14:11.184703   10524 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19689-3716/.minikube/profiles/kindnet-100833/client.crt: no such file or directory" logger="UnhandledError"
E0923 11:14:12.815276   10524 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19689-3716/.minikube/profiles/addons-071702/client.crt: no such file or directory" logger="UnhandledError"
E0923 11:14:18.332108   10524 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19689-3716/.minikube/profiles/custom-flannel-100833/client.crt: no such file or directory" logger="UnhandledError"
E0923 11:14:18.976759   10524 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19689-3716/.minikube/profiles/calico-100833/client.crt: no such file or directory" logger="UnhandledError"
E0923 11:14:18.983128   10524 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19689-3716/.minikube/profiles/calico-100833/client.crt: no such file or directory" logger="UnhandledError"
E0923 11:14:18.994507   10524 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19689-3716/.minikube/profiles/calico-100833/client.crt: no such file or directory" logger="UnhandledError"
E0923 11:14:19.015958   10524 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19689-3716/.minikube/profiles/calico-100833/client.crt: no such file or directory" logger="UnhandledError"
E0923 11:14:19.057372   10524 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19689-3716/.minikube/profiles/calico-100833/client.crt: no such file or directory" logger="UnhandledError"
E0923 11:14:19.138988   10524 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19689-3716/.minikube/profiles/calico-100833/client.crt: no such file or directory" logger="UnhandledError"
E0923 11:14:19.300586   10524 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19689-3716/.minikube/profiles/calico-100833/client.crt: no such file or directory" logger="UnhandledError"
E0923 11:14:19.622393   10524 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19689-3716/.minikube/profiles/calico-100833/client.crt: no such file or directory" logger="UnhandledError"
E0923 11:14:20.264177   10524 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19689-3716/.minikube/profiles/calico-100833/client.crt: no such file or directory" logger="UnhandledError"
E0923 11:14:21.545605   10524 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19689-3716/.minikube/profiles/calico-100833/client.crt: no such file or directory" logger="UnhandledError"
E0923 11:14:24.107926   10524 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19689-3716/.minikube/profiles/calico-100833/client.crt: no such file or directory" logger="UnhandledError"
E0923 11:14:29.229373   10524 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19689-3716/.minikube/profiles/calico-100833/client.crt: no such file or directory" logger="UnhandledError"
E0923 11:14:38.813505   10524 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19689-3716/.minikube/profiles/custom-flannel-100833/client.crt: no such file or directory" logger="UnhandledError"
E0923 11:14:39.471062   10524 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19689-3716/.minikube/profiles/calico-100833/client.crt: no such file or directory" logger="UnhandledError"
E0923 11:14:39.491498   10524 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19689-3716/.minikube/profiles/false-100833/client.crt: no such file or directory" logger="UnhandledError"
E0923 11:14:39.497859   10524 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19689-3716/.minikube/profiles/false-100833/client.crt: no such file or directory" logger="UnhandledError"
E0923 11:14:39.509205   10524 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19689-3716/.minikube/profiles/false-100833/client.crt: no such file or directory" logger="UnhandledError"
E0923 11:14:39.530506   10524 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19689-3716/.minikube/profiles/false-100833/client.crt: no such file or directory" logger="UnhandledError"
E0923 11:14:39.571908   10524 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19689-3716/.minikube/profiles/false-100833/client.crt: no such file or directory" logger="UnhandledError"
E0923 11:14:39.653316   10524 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19689-3716/.minikube/profiles/false-100833/client.crt: no such file or directory" logger="UnhandledError"
E0923 11:14:39.814778   10524 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19689-3716/.minikube/profiles/false-100833/client.crt: no such file or directory" logger="UnhandledError"
E0923 11:14:40.136353   10524 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19689-3716/.minikube/profiles/false-100833/client.crt: no such file or directory" logger="UnhandledError"
E0923 11:14:40.778363   10524 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19689-3716/.minikube/profiles/false-100833/client.crt: no such file or directory" logger="UnhandledError"
E0923 11:14:42.059936   10524 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19689-3716/.minikube/profiles/false-100833/client.crt: no such file or directory" logger="UnhandledError"
E0923 11:14:44.621426   10524 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19689-3716/.minikube/profiles/false-100833/client.crt: no such file or directory" logger="UnhandledError"
E0923 11:14:49.743430   10524 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19689-3716/.minikube/profiles/false-100833/client.crt: no such file or directory" logger="UnhandledError"
E0923 11:14:57.665011   10524 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19689-3716/.minikube/profiles/functional-001676/client.crt: no such file or directory" logger="UnhandledError"
E0923 11:14:59.953111   10524 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19689-3716/.minikube/profiles/calico-100833/client.crt: no such file or directory" logger="UnhandledError"
E0923 11:14:59.985693   10524 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19689-3716/.minikube/profiles/false-100833/client.crt: no such file or directory" logger="UnhandledError"
start_stop_delete_test.go:256: (dbg) Done: out/minikube-linux-amd64 start -p default-k8s-diff-port-558288 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=docker --kubernetes-version=v1.31.1: (4m22.338265249s)
start_stop_delete_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-558288 -n default-k8s-diff-port-558288
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/SecondStart (262.62s)

TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop (6.01s)

=== RUN   TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop
start_stop_delete_test.go:274: (dbg) TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-cd95d586-wbcxm" [056639b2-c7dd-411a-a092-dabd33a41422] Running
E0923 11:15:07.650261   10524 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19689-3716/.minikube/profiles/bridge-100833/client.crt: no such file or directory" logger="UnhandledError"
E0923 11:15:07.971818   10524 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19689-3716/.minikube/profiles/bridge-100833/client.crt: no such file or directory" logger="UnhandledError"
E0923 11:15:08.613470   10524 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19689-3716/.minikube/profiles/bridge-100833/client.crt: no such file or directory" logger="UnhandledError"
E0923 11:15:09.895430   10524 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19689-3716/.minikube/profiles/bridge-100833/client.crt: no such file or directory" logger="UnhandledError"
E0923 11:15:12.457619   10524 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19689-3716/.minikube/profiles/bridge-100833/client.crt: no such file or directory" logger="UnhandledError"
start_stop_delete_test.go:274: (dbg) TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 6.002966041s
--- PASS: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop (6.01s)

TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop (5.07s)

=== RUN   TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop
start_stop_delete_test.go:287: (dbg) TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-cd95d586-wbcxm" [056639b2-c7dd-411a-a092-dabd33a41422] Running
E0923 11:15:17.579297   10524 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19689-3716/.minikube/profiles/bridge-100833/client.crt: no such file or directory" logger="UnhandledError"
start_stop_delete_test.go:287: (dbg) TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.004157178s
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context old-k8s-version-939425 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop (5.07s)

TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages (0.21s)

=== RUN   TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-linux-amd64 -p old-k8s-version-939425 image list --format=json
start_stop_delete_test.go:304: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
--- PASS: TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages (0.21s)
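
VerifyKubernetesImages compares the images loaded in the node against the expected set for the requested Kubernetes version and reports anything extra, such as the busybox test image flagged above. The underlying command can be run directly against any profile:

	out/minikube-linux-amd64 -p old-k8s-version-939425 image list --format=json
	# other output formats exist; json is what the test parses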

TestStartStop/group/old-k8s-version/serial/Pause (2.37s)

=== RUN   TestStartStop/group/old-k8s-version/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 pause -p old-k8s-version-939425 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-939425 -n old-k8s-version-939425
E0923 11:15:19.774812   10524 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19689-3716/.minikube/profiles/custom-flannel-100833/client.crt: no such file or directory" logger="UnhandledError"
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-939425 -n old-k8s-version-939425: exit status 2 (286.552092ms)

-- stdout --
	Paused

-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p old-k8s-version-939425 -n old-k8s-version-939425
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Kubelet}} -p old-k8s-version-939425 -n old-k8s-version-939425: exit status 2 (294.408042ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 unpause -p old-k8s-version-939425 --alsologtostderr -v=1
E0923 11:15:20.467237   10524 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19689-3716/.minikube/profiles/false-100833/client.crt: no such file or directory" logger="UnhandledError"
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-939425 -n old-k8s-version-939425
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p old-k8s-version-939425 -n old-k8s-version-939425
--- PASS: TestStartStop/group/old-k8s-version/serial/Pause (2.37s)
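
This Pause step documents the expected split status of a paused profile: the API server reports Paused while the kubelet reports Stopped, and each status query exits 2, which the test explicitly tolerates ("may be ok"). By hand:

	out/minikube-linux-amd64 pause -p old-k8s-version-939425
	out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-939425  # Paused, exit 2
	out/minikube-linux-amd64 status --format={{.Kubelet}} -p old-k8s-version-939425   # Stopped, exit 2
	out/minikube-linux-amd64 unpause -p old-k8s-version-939425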

TestStartStop/group/newest-cni/serial/FirstStart (26.27s)

=== RUN   TestStartStop/group/newest-cni/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-linux-amd64 start -p newest-cni-814988 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=docker --kubernetes-version=v1.31.1
E0923 11:15:23.843149   10524 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19689-3716/.minikube/profiles/enable-default-cni-100833/client.crt: no such file or directory" logger="UnhandledError"
E0923 11:15:23.849543   10524 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19689-3716/.minikube/profiles/enable-default-cni-100833/client.crt: no such file or directory" logger="UnhandledError"
E0923 11:15:23.861841   10524 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19689-3716/.minikube/profiles/enable-default-cni-100833/client.crt: no such file or directory" logger="UnhandledError"
E0923 11:15:23.883160   10524 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19689-3716/.minikube/profiles/enable-default-cni-100833/client.crt: no such file or directory" logger="UnhandledError"
E0923 11:15:23.924565   10524 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19689-3716/.minikube/profiles/enable-default-cni-100833/client.crt: no such file or directory" logger="UnhandledError"
E0923 11:15:24.006739   10524 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19689-3716/.minikube/profiles/enable-default-cni-100833/client.crt: no such file or directory" logger="UnhandledError"
E0923 11:15:24.168115   10524 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19689-3716/.minikube/profiles/enable-default-cni-100833/client.crt: no such file or directory" logger="UnhandledError"
E0923 11:15:24.490158   10524 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19689-3716/.minikube/profiles/enable-default-cni-100833/client.crt: no such file or directory" logger="UnhandledError"
E0923 11:15:25.132282   10524 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19689-3716/.minikube/profiles/enable-default-cni-100833/client.crt: no such file or directory" logger="UnhandledError"
E0923 11:15:26.413678   10524 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19689-3716/.minikube/profiles/enable-default-cni-100833/client.crt: no such file or directory" logger="UnhandledError"
E0923 11:15:27.778810   10524 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19689-3716/.minikube/profiles/auto-100833/client.crt: no such file or directory" logger="UnhandledError"
E0923 11:15:27.821199   10524 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19689-3716/.minikube/profiles/bridge-100833/client.crt: no such file or directory" logger="UnhandledError"
E0923 11:15:28.975265   10524 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19689-3716/.minikube/profiles/enable-default-cni-100833/client.crt: no such file or directory" logger="UnhandledError"
E0923 11:15:33.106854   10524 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19689-3716/.minikube/profiles/kindnet-100833/client.crt: no such file or directory" logger="UnhandledError"
E0923 11:15:34.097345   10524 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19689-3716/.minikube/profiles/enable-default-cni-100833/client.crt: no such file or directory" logger="UnhandledError"
E0923 11:15:39.099128   10524 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19689-3716/.minikube/profiles/flannel-100833/client.crt: no such file or directory" logger="UnhandledError"
E0923 11:15:39.105574   10524 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19689-3716/.minikube/profiles/flannel-100833/client.crt: no such file or directory" logger="UnhandledError"
E0923 11:15:39.116988   10524 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19689-3716/.minikube/profiles/flannel-100833/client.crt: no such file or directory" logger="UnhandledError"
E0923 11:15:39.138413   10524 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19689-3716/.minikube/profiles/flannel-100833/client.crt: no such file or directory" logger="UnhandledError"
E0923 11:15:39.179821   10524 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19689-3716/.minikube/profiles/flannel-100833/client.crt: no such file or directory" logger="UnhandledError"
E0923 11:15:39.261393   10524 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19689-3716/.minikube/profiles/flannel-100833/client.crt: no such file or directory" logger="UnhandledError"
E0923 11:15:39.423186   10524 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19689-3716/.minikube/profiles/flannel-100833/client.crt: no such file or directory" logger="UnhandledError"
E0923 11:15:39.744883   10524 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19689-3716/.minikube/profiles/flannel-100833/client.crt: no such file or directory" logger="UnhandledError"
E0923 11:15:40.387057   10524 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19689-3716/.minikube/profiles/flannel-100833/client.crt: no such file or directory" logger="UnhandledError"
E0923 11:15:40.915010   10524 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19689-3716/.minikube/profiles/calico-100833/client.crt: no such file or directory" logger="UnhandledError"
E0923 11:15:41.669075   10524 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19689-3716/.minikube/profiles/flannel-100833/client.crt: no such file or directory" logger="UnhandledError"
E0923 11:15:44.231279   10524 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19689-3716/.minikube/profiles/flannel-100833/client.crt: no such file or directory" logger="UnhandledError"
E0923 11:15:44.339004   10524 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19689-3716/.minikube/profiles/enable-default-cni-100833/client.crt: no such file or directory" logger="UnhandledError"
E0923 11:15:48.302835   10524 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19689-3716/.minikube/profiles/bridge-100833/client.crt: no such file or directory" logger="UnhandledError"
E0923 11:15:49.353165   10524 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19689-3716/.minikube/profiles/flannel-100833/client.crt: no such file or directory" logger="UnhandledError"
start_stop_delete_test.go:186: (dbg) Done: out/minikube-linux-amd64 start -p newest-cni-814988 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=docker --kubernetes-version=v1.31.1: (26.27295541s)
--- PASS: TestStartStop/group/newest-cni/serial/FirstStart (26.27s)

TestStartStop/group/newest-cni/serial/DeployApp (0s)

=== RUN   TestStartStop/group/newest-cni/serial/DeployApp
--- PASS: TestStartStop/group/newest-cni/serial/DeployApp (0.00s)

TestStartStop/group/newest-cni/serial/EnableAddonWhileActive (1.11s)

=== RUN   TestStartStop/group/newest-cni/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p newest-cni-814988 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:205: (dbg) Done: out/minikube-linux-amd64 addons enable metrics-server -p newest-cni-814988 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: (1.109787557s)
start_stop_delete_test.go:211: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/EnableAddonWhileActive (1.11s)
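
The "cni mode requires additional setup" warning explains the empty DeployApp above and the skipped UserAppExistsAfterStop/AddonExistsAfterStop below: newest-cni starts kubeadm with a pod network CIDR but installs no CNI plugin, so no pod can schedule until one is added by hand. The start invocation, reformatted for readability:

	out/minikube-linux-amd64 start -p newest-cni-814988 --memory=2200 \
	  --wait=apiserver,system_pods,default_sa \
	  --feature-gates ServerSideApply=true --network-plugin=cni \
	  --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 \
	  --driver=docker --container-runtime=docker --kubernetes-version=v1.31.1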

TestStartStop/group/newest-cni/serial/Stop (10.74s)

=== RUN   TestStartStop/group/newest-cni/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-linux-amd64 stop -p newest-cni-814988 --alsologtostderr -v=3
E0923 11:15:59.595331   10524 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19689-3716/.minikube/profiles/flannel-100833/client.crt: no such file or directory" logger="UnhandledError"
E0923 11:16:01.428985   10524 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19689-3716/.minikube/profiles/false-100833/client.crt: no such file or directory" logger="UnhandledError"
start_stop_delete_test.go:228: (dbg) Done: out/minikube-linux-amd64 stop -p newest-cni-814988 --alsologtostderr -v=3: (10.738783558s)
--- PASS: TestStartStop/group/newest-cni/serial/Stop (10.74s)

TestStartStop/group/newest-cni/serial/EnableAddonAfterStop (0.25s)

=== RUN   TestStartStop/group/newest-cni/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p newest-cni-814988 -n newest-cni-814988
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p newest-cni-814988 -n newest-cni-814988: exit status 7 (146.421534ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p newest-cni-814988 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/newest-cni/serial/EnableAddonAfterStop (0.25s)

TestStartStop/group/newest-cni/serial/SecondStart (14.99s)

=== RUN   TestStartStop/group/newest-cni/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-linux-amd64 start -p newest-cni-814988 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=docker --kubernetes-version=v1.31.1
E0923 11:16:04.820488   10524 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19689-3716/.minikube/profiles/enable-default-cni-100833/client.crt: no such file or directory" logger="UnhandledError"
start_stop_delete_test.go:256: (dbg) Done: out/minikube-linux-amd64 start -p newest-cni-814988 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=docker --kubernetes-version=v1.31.1: (14.675957503s)
start_stop_delete_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p newest-cni-814988 -n newest-cni-814988
--- PASS: TestStartStop/group/newest-cni/serial/SecondStart (14.99s)

TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop (0s)

=== RUN   TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop
start_stop_delete_test.go:273: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop (0.00s)

TestStartStop/group/newest-cni/serial/AddonExistsAfterStop (0s)

=== RUN   TestStartStop/group/newest-cni/serial/AddonExistsAfterStop
start_stop_delete_test.go:284: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/AddonExistsAfterStop (0.00s)

TestStartStop/group/newest-cni/serial/VerifyKubernetesImages (0.23s)

=== RUN   TestStartStop/group/newest-cni/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-linux-amd64 -p newest-cni-814988 image list --format=json
--- PASS: TestStartStop/group/newest-cni/serial/VerifyKubernetesImages (0.23s)

TestStartStop/group/newest-cni/serial/Pause (2.49s)

=== RUN   TestStartStop/group/newest-cni/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 pause -p newest-cni-814988 --alsologtostderr -v=1
E0923 11:16:17.405245   10524 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19689-3716/.minikube/profiles/kubenet-100833/client.crt: no such file or directory" logger="UnhandledError"
E0923 11:16:17.411795   10524 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19689-3716/.minikube/profiles/kubenet-100833/client.crt: no such file or directory" logger="UnhandledError"
E0923 11:16:17.423772   10524 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19689-3716/.minikube/profiles/kubenet-100833/client.crt: no such file or directory" logger="UnhandledError"
E0923 11:16:17.446440   10524 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19689-3716/.minikube/profiles/kubenet-100833/client.crt: no such file or directory" logger="UnhandledError"
E0923 11:16:17.488073   10524 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19689-3716/.minikube/profiles/kubenet-100833/client.crt: no such file or directory" logger="UnhandledError"
E0923 11:16:17.570079   10524 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19689-3716/.minikube/profiles/kubenet-100833/client.crt: no such file or directory" logger="UnhandledError"
E0923 11:16:17.731583   10524 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19689-3716/.minikube/profiles/kubenet-100833/client.crt: no such file or directory" logger="UnhandledError"
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p newest-cni-814988 -n newest-cni-814988
E0923 11:16:18.053821   10524 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19689-3716/.minikube/profiles/kubenet-100833/client.crt: no such file or directory" logger="UnhandledError"
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p newest-cni-814988 -n newest-cni-814988: exit status 2 (287.241435ms)

-- stdout --
	Paused

-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p newest-cni-814988 -n newest-cni-814988
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Kubelet}} -p newest-cni-814988 -n newest-cni-814988: exit status 2 (299.563611ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 unpause -p newest-cni-814988 --alsologtostderr -v=1
E0923 11:16:18.695469   10524 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19689-3716/.minikube/profiles/kubenet-100833/client.crt: no such file or directory" logger="UnhandledError"
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p newest-cni-814988 -n newest-cni-814988
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p newest-cni-814988 -n newest-cni-814988
--- PASS: TestStartStop/group/newest-cni/serial/Pause (2.49s)
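
The Pause subtests in this group all drive the same round trip: pause the profile, read the component states, then unpause. A minimal shell sketch of that check, assuming the newest-cni-814988 profile from this run is still up (the commands mirror the ones logged above):

    # Pause all components in the profile.
    out/minikube-linux-amd64 pause -p newest-cni-814988 --alsologtostderr -v=1

    # While paused, the apiserver reports "Paused" and the kubelet "Stopped";
    # status exits with code 2 here, which the test records as "may be ok".
    out/minikube-linux-amd64 status --format={{.APIServer}} -p newest-cni-814988 -n newest-cni-814988
    out/minikube-linux-amd64 status --format={{.Kubelet}} -p newest-cni-814988 -n newest-cni-814988

    # Resume the components.
    out/minikube-linux-amd64 unpause -p newest-cni-814988 --alsologtostderr -v=1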

TestStartStop/group/no-preload/serial/UserAppExistsAfterStop (6.01s)

=== RUN   TestStartStop/group/no-preload/serial/UserAppExistsAfterStop
start_stop_delete_test.go:274: (dbg) TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-695b96c756-sphn6" [f2b48615-5d10-4d71-a680-92a6af39fd35] Running
E0923 11:16:22.539374   10524 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19689-3716/.minikube/profiles/kubenet-100833/client.crt: no such file or directory" logger="UnhandledError"
E0923 11:16:27.660676   10524 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19689-3716/.minikube/profiles/kubenet-100833/client.crt: no such file or directory" logger="UnhandledError"
start_stop_delete_test.go:274: (dbg) TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 6.003850722s
--- PASS: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop (6.01s)
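
UserAppExistsAfterStop passes once a pod carrying the k8s-app=kubernetes-dashboard label is healthy again after the restart. A rough standalone equivalent of that wait, using kubectl wait in place of the test's internal poller, with the context name taken from the log:

    # Block until the dashboard pod is Ready; the test allows up to 9 minutes.
    kubectl --context no-preload-219536 -n kubernetes-dashboard \
      wait --for=condition=Ready pod -l k8s-app=kubernetes-dashboard --timeout=9m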

TestStartStop/group/no-preload/serial/AddonExistsAfterStop (5.07s)

=== RUN   TestStartStop/group/no-preload/serial/AddonExistsAfterStop
start_stop_delete_test.go:287: (dbg) TestStartStop/group/no-preload/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-695b96c756-sphn6" [f2b48615-5d10-4d71-a680-92a6af39fd35] Running
E0923 11:16:29.265044   10524 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19689-3716/.minikube/profiles/bridge-100833/client.crt: no such file or directory" logger="UnhandledError"
start_stop_delete_test.go:287: (dbg) TestStartStop/group/no-preload/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.003468271s
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context no-preload-219536 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/no-preload/serial/AddonExistsAfterStop (5.07s)

TestStartStop/group/no-preload/serial/VerifyKubernetesImages (0.22s)

=== RUN   TestStartStop/group/no-preload/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-linux-amd64 -p no-preload-219536 image list --format=json
start_stop_delete_test.go:304: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
--- PASS: TestStartStop/group/no-preload/serial/VerifyKubernetesImages (0.22s)
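
VerifyKubernetesImages lists the images loaded into the profile and flags anything outside the expected Kubernetes set, which is why the busybox image is called out above. A quick way to inspect the same list by hand, assuming jq is installed and that the JSON entries expose a repoTags field:

    # Print the tags of every image loaded in the profile.
    out/minikube-linux-amd64 -p no-preload-219536 image list --format=json \
      | jq -r '.[].repoTags[]'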

TestStartStop/group/no-preload/serial/Pause (2.38s)

=== RUN   TestStartStop/group/no-preload/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 pause -p no-preload-219536 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p no-preload-219536 -n no-preload-219536
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p no-preload-219536 -n no-preload-219536: exit status 2 (301.576225ms)
-- stdout --
	Paused
-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p no-preload-219536 -n no-preload-219536
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Kubelet}} -p no-preload-219536 -n no-preload-219536: exit status 2 (278.934602ms)
-- stdout --
	Stopped
-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 unpause -p no-preload-219536 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p no-preload-219536 -n no-preload-219536
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p no-preload-219536 -n no-preload-219536
--- PASS: TestStartStop/group/no-preload/serial/Pause (2.38s)

TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop (6.01s)

=== RUN   TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop
start_stop_delete_test.go:274: (dbg) TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-695b96c756-gb56v" [8463122d-6899-47a4-970a-0978522dbd78] Running
E0923 11:16:37.902786   10524 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19689-3716/.minikube/profiles/kubenet-100833/client.crt: no such file or directory" logger="UnhandledError"
start_stop_delete_test.go:274: (dbg) TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 6.004760911s
--- PASS: TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop (6.01s)

TestStartStop/group/embed-certs/serial/AddonExistsAfterStop (5.07s)

=== RUN   TestStartStop/group/embed-certs/serial/AddonExistsAfterStop
start_stop_delete_test.go:287: (dbg) TestStartStop/group/embed-certs/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-695b96c756-gb56v" [8463122d-6899-47a4-970a-0978522dbd78] Running
E0923 11:16:45.782748   10524 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19689-3716/.minikube/profiles/enable-default-cni-100833/client.crt: no such file or directory" logger="UnhandledError"
start_stop_delete_test.go:287: (dbg) TestStartStop/group/embed-certs/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.003651837s
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context embed-certs-312165 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/embed-certs/serial/AddonExistsAfterStop (5.07s)

TestStartStop/group/embed-certs/serial/VerifyKubernetesImages (0.21s)

=== RUN   TestStartStop/group/embed-certs/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-linux-amd64 -p embed-certs-312165 image list --format=json
start_stop_delete_test.go:304: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
--- PASS: TestStartStop/group/embed-certs/serial/VerifyKubernetesImages (0.21s)

TestStartStop/group/embed-certs/serial/Pause (2.31s)

=== RUN   TestStartStop/group/embed-certs/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 pause -p embed-certs-312165 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p embed-certs-312165 -n embed-certs-312165
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p embed-certs-312165 -n embed-certs-312165: exit status 2 (278.585932ms)
-- stdout --
	Paused
-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p embed-certs-312165 -n embed-certs-312165
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Kubelet}} -p embed-certs-312165 -n embed-certs-312165: exit status 2 (284.177764ms)
-- stdout --
	Stopped
-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 unpause -p embed-certs-312165 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p embed-certs-312165 -n embed-certs-312165
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p embed-certs-312165 -n embed-certs-312165
--- PASS: TestStartStop/group/embed-certs/serial/Pause (2.31s)

TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop (6s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop
start_stop_delete_test.go:274: (dbg) TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-695b96c756-zhnqc" [200b0490-db13-40f8-bf44-e3deec74bd36] Running
E0923 11:17:39.346140   10524 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19689-3716/.minikube/profiles/kubenet-100833/client.crt: no such file or directory" logger="UnhandledError"
E0923 11:17:43.174799   10524 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19689-3716/.minikube/profiles/old-k8s-version-939425/client.crt: no such file or directory" logger="UnhandledError"
E0923 11:17:43.916061   10524 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19689-3716/.minikube/profiles/auto-100833/client.crt: no such file or directory" logger="UnhandledError"
start_stop_delete_test.go:274: (dbg) TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 6.00374709s
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop (6.00s)

TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop (5.07s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop
start_stop_delete_test.go:287: (dbg) TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-695b96c756-zhnqc" [200b0490-db13-40f8-bf44-e3deec74bd36] Running
E0923 11:17:49.244550   10524 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19689-3716/.minikube/profiles/kindnet-100833/client.crt: no such file or directory" logger="UnhandledError"
start_stop_delete_test.go:287: (dbg) TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.00361286s
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context default-k8s-diff-port-558288 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop (5.07s)

TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages (0.2s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-linux-amd64 -p default-k8s-diff-port-558288 image list --format=json
start_stop_delete_test.go:304: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages (0.20s)

TestStartStop/group/default-k8s-diff-port/serial/Pause (2.35s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 pause -p default-k8s-diff-port-558288 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p default-k8s-diff-port-558288 -n default-k8s-diff-port-558288
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p default-k8s-diff-port-558288 -n default-k8s-diff-port-558288: exit status 2 (272.251618ms)
-- stdout --
	Paused
-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p default-k8s-diff-port-558288 -n default-k8s-diff-port-558288
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Kubelet}} -p default-k8s-diff-port-558288 -n default-k8s-diff-port-558288: exit status 2 (273.840112ms)
-- stdout --
	Stopped
-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 unpause -p default-k8s-diff-port-558288 --alsologtostderr -v=1
E0923 11:17:51.186491   10524 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19689-3716/.minikube/profiles/bridge-100833/client.crt: no such file or directory" logger="UnhandledError"
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p default-k8s-diff-port-558288 -n default-k8s-diff-port-558288
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p default-k8s-diff-port-558288 -n default-k8s-diff-port-558288
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/Pause (2.35s)

Test skip (20/342)

TestDownloadOnly/v1.20.0/cached-images (0s)

=== RUN   TestDownloadOnly/v1.20.0/cached-images
aaa_download_only_test.go:129: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.20.0/cached-images (0.00s)

TestDownloadOnly/v1.20.0/binaries (0s)

=== RUN   TestDownloadOnly/v1.20.0/binaries
aaa_download_only_test.go:151: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.20.0/binaries (0.00s)

TestDownloadOnly/v1.20.0/kubectl (0s)

=== RUN   TestDownloadOnly/v1.20.0/kubectl
aaa_download_only_test.go:167: Test for darwin and windows
--- SKIP: TestDownloadOnly/v1.20.0/kubectl (0.00s)

TestDownloadOnly/v1.31.1/cached-images (0s)

=== RUN   TestDownloadOnly/v1.31.1/cached-images
aaa_download_only_test.go:129: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.31.1/cached-images (0.00s)

TestDownloadOnly/v1.31.1/binaries (0s)

=== RUN   TestDownloadOnly/v1.31.1/binaries
aaa_download_only_test.go:151: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.31.1/binaries (0.00s)

TestDownloadOnly/v1.31.1/kubectl (0s)

=== RUN   TestDownloadOnly/v1.31.1/kubectl
aaa_download_only_test.go:167: Test for darwin and windows
--- SKIP: TestDownloadOnly/v1.31.1/kubectl (0.00s)

TestAddons/parallel/Olm (0s)

=== RUN   TestAddons/parallel/Olm
=== PAUSE TestAddons/parallel/Olm
=== CONT  TestAddons/parallel/Olm
addons_test.go:438: Skipping OLM addon test until https://github.com/operator-framework/operator-lifecycle-manager/issues/2534 is resolved
--- SKIP: TestAddons/parallel/Olm (0.00s)

TestDockerEnvContainerd (0s)

=== RUN   TestDockerEnvContainerd
docker_test.go:170: running with docker true linux amd64
docker_test.go:172: skipping: TestDockerEnvContainerd can only be run with the containerd runtime on Docker driver
--- SKIP: TestDockerEnvContainerd (0.00s)
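
The skip above is gated on the container runtime: this run uses the Docker runtime inside the Docker driver, while TestDockerEnvContainerd requires containerd there. A sketch of a profile that would satisfy the gate, using a hypothetical profile name:

    # Start a Docker-driver cluster whose inner runtime is containerd.
    out/minikube-linux-amd64 start -p containerd-env --driver=docker --container-runtime=containerd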

TestHyperKitDriverInstallOrUpdate (0s)

=== RUN   TestHyperKitDriverInstallOrUpdate
driver_install_or_update_test.go:105: Skip if not darwin.
--- SKIP: TestHyperKitDriverInstallOrUpdate (0.00s)

TestHyperkitDriverSkipUpgrade (0s)

=== RUN   TestHyperkitDriverSkipUpgrade
driver_install_or_update_test.go:169: Skip if not darwin.
--- SKIP: TestHyperkitDriverSkipUpgrade (0.00s)

TestFunctional/parallel/PodmanEnv (0s)

=== RUN   TestFunctional/parallel/PodmanEnv
=== PAUSE TestFunctional/parallel/PodmanEnv
=== CONT  TestFunctional/parallel/PodmanEnv
functional_test.go:550: only validate podman env with docker container runtime, currently testing docker
--- SKIP: TestFunctional/parallel/PodmanEnv (0.00s)

TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig (0s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig
functional_test_tunnel_test.go:99: DNS forwarding is only supported for Hyperkit on Darwin, skipping test DNS forwarding
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig (0.00s)

TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil
functional_test_tunnel_test.go:99: DNS forwarding is only supported for Hyperkit on Darwin, skipping test DNS forwarding
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0.00s)

TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS (0s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS
functional_test_tunnel_test.go:99: DNS forwarding is only supported for Hyperkit on Darwin, skipping test DNS forwarding
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS (0.00s)

TestGvisorAddon (0s)

=== RUN   TestGvisorAddon
gvisor_addon_test.go:34: skipping test because --gvisor=false
--- SKIP: TestGvisorAddon (0.00s)

TestImageBuild/serial/validateImageBuildWithBuildEnv (0s)

=== RUN   TestImageBuild/serial/validateImageBuildWithBuildEnv
image_test.go:114: skipping due to https://github.com/kubernetes/minikube/issues/12431
--- SKIP: TestImageBuild/serial/validateImageBuildWithBuildEnv (0.00s)

TestChangeNoneUser (0s)

=== RUN   TestChangeNoneUser
none_test.go:38: Test requires none driver and SUDO_USER env to not be empty
--- SKIP: TestChangeNoneUser (0.00s)

TestScheduledStopWindows (0s)

=== RUN   TestScheduledStopWindows
scheduled_stop_test.go:42: test only runs on windows
--- SKIP: TestScheduledStopWindows (0.00s)

TestNetworkPlugins/group/cilium (4.08s)

=== RUN   TestNetworkPlugins/group/cilium
net_test.go:102: Skipping the test as it's interfering with other tests and is outdated
panic.go:629: 
----------------------- debugLogs start: cilium-100833 [pass: true] --------------------------------
>>> netcat: nslookup kubernetes.default:
Error in configuration: context was not found for specified context: cilium-100833

>>> netcat: nslookup debug kubernetes.default a-records:
Error in configuration: context was not found for specified context: cilium-100833

>>> netcat: dig search kubernetes.default:
Error in configuration: context was not found for specified context: cilium-100833

>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local udp/53:
Error in configuration: context was not found for specified context: cilium-100833

>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local tcp/53:
Error in configuration: context was not found for specified context: cilium-100833

>>> netcat: nc 10.96.0.10 udp/53:
Error in configuration: context was not found for specified context: cilium-100833

>>> netcat: nc 10.96.0.10 tcp/53:
Error in configuration: context was not found for specified context: cilium-100833

>>> netcat: /etc/nsswitch.conf:
Error in configuration: context was not found for specified context: cilium-100833

>>> netcat: /etc/hosts:
Error in configuration: context was not found for specified context: cilium-100833

>>> netcat: /etc/resolv.conf:
Error in configuration: context was not found for specified context: cilium-100833

>>> host: /etc/nsswitch.conf:
* Profile "cilium-100833" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-100833"

>>> host: /etc/hosts:
* Profile "cilium-100833" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-100833"

>>> host: /etc/resolv.conf:
* Profile "cilium-100833" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-100833"

>>> k8s: nodes, services, endpoints, daemon sets, deployments and pods, :
Error in configuration: context was not found for specified context: cilium-100833

>>> host: crictl pods:
* Profile "cilium-100833" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-100833"

>>> host: crictl containers:
* Profile "cilium-100833" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-100833"

>>> k8s: describe netcat deployment:
error: context "cilium-100833" does not exist

>>> k8s: describe netcat pod(s):
error: context "cilium-100833" does not exist

>>> k8s: netcat logs:
error: context "cilium-100833" does not exist

>>> k8s: describe coredns deployment:
error: context "cilium-100833" does not exist

>>> k8s: describe coredns pods:
error: context "cilium-100833" does not exist

>>> k8s: coredns logs:
error: context "cilium-100833" does not exist

>>> k8s: describe api server pod(s):
error: context "cilium-100833" does not exist

>>> k8s: api server logs:
error: context "cilium-100833" does not exist

>>> host: /etc/cni:
* Profile "cilium-100833" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-100833"

>>> host: ip a s:
* Profile "cilium-100833" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-100833"

>>> host: ip r s:
* Profile "cilium-100833" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-100833"

>>> host: iptables-save:
* Profile "cilium-100833" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-100833"

>>> host: iptables table nat:
* Profile "cilium-100833" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-100833"

>>> k8s: describe cilium daemon set:
Error in configuration: context was not found for specified context: cilium-100833

>>> k8s: describe cilium daemon set pod(s):
Error in configuration: context was not found for specified context: cilium-100833

>>> k8s: cilium daemon set container(s) logs (current):
error: context "cilium-100833" does not exist

>>> k8s: cilium daemon set container(s) logs (previous):
error: context "cilium-100833" does not exist

>>> k8s: describe cilium deployment:
Error in configuration: context was not found for specified context: cilium-100833

>>> k8s: describe cilium deployment pod(s):
Error in configuration: context was not found for specified context: cilium-100833

>>> k8s: cilium deployment container(s) logs (current):
error: context "cilium-100833" does not exist

>>> k8s: cilium deployment container(s) logs (previous):
error: context "cilium-100833" does not exist

>>> k8s: describe kube-proxy daemon set:
error: context "cilium-100833" does not exist

>>> k8s: describe kube-proxy pod(s):
error: context "cilium-100833" does not exist

>>> k8s: kube-proxy logs:
error: context "cilium-100833" does not exist

>>> host: kubelet daemon status:
* Profile "cilium-100833" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-100833"

>>> host: kubelet daemon config:
* Profile "cilium-100833" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-100833"

>>> k8s: kubelet logs:
* Profile "cilium-100833" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-100833"

>>> host: /etc/kubernetes/kubelet.conf:
* Profile "cilium-100833" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-100833"

>>> host: /var/lib/kubelet/config.yaml:
* Profile "cilium-100833" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-100833"

>>> k8s: kubectl config:
apiVersion: v1
clusters: null
contexts: null
current-context: ""
kind: Config
preferences: {}
users: null

>>> k8s: cms:
Error in configuration: context was not found for specified context: cilium-100833

>>> host: docker daemon status:
* Profile "cilium-100833" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-100833"

>>> host: docker daemon config:
* Profile "cilium-100833" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-100833"

>>> host: /etc/docker/daemon.json:
* Profile "cilium-100833" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-100833"

>>> host: docker system info:
* Profile "cilium-100833" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-100833"

>>> host: cri-docker daemon status:
* Profile "cilium-100833" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-100833"

>>> host: cri-docker daemon config:
* Profile "cilium-100833" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-100833"

>>> host: /etc/systemd/system/cri-docker.service.d/10-cni.conf:
* Profile "cilium-100833" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-100833"

>>> host: /usr/lib/systemd/system/cri-docker.service:
* Profile "cilium-100833" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-100833"

>>> host: cri-dockerd version:
* Profile "cilium-100833" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-100833"

>>> host: containerd daemon status:
* Profile "cilium-100833" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-100833"

>>> host: containerd daemon config:
* Profile "cilium-100833" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-100833"

>>> host: /lib/systemd/system/containerd.service:
* Profile "cilium-100833" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-100833"

>>> host: /etc/containerd/config.toml:
* Profile "cilium-100833" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-100833"

>>> host: containerd config dump:
* Profile "cilium-100833" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-100833"

>>> host: crio daemon status:
* Profile "cilium-100833" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-100833"

>>> host: crio daemon config:
* Profile "cilium-100833" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-100833"

>>> host: /etc/crio:
* Profile "cilium-100833" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-100833"

>>> host: crio config:
* Profile "cilium-100833" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-100833"
----------------------- debugLogs end: cilium-100833 [took: 3.892616248s] --------------------------------
helpers_test.go:175: Cleaning up "cilium-100833" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p cilium-100833
--- SKIP: TestNetworkPlugins/group/cilium (4.08s)
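
Every probe in the debugLogs dump above fails the same way because the cilium-100833 profile was never started, so neither the minikube profile nor the kubectl context exists. A short sketch of the up-front check the error messages themselves suggest:

    # Verify the profile and kubectl context exist before collecting diagnostics.
    out/minikube-linux-amd64 profile list
    kubectl config get-contexts cilium-100833 \
      || echo "context cilium-100833 does not exist"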

TestStartStop/group/disable-driver-mounts (0.17s)

=== RUN   TestStartStop/group/disable-driver-mounts
=== PAUSE TestStartStop/group/disable-driver-mounts
=== CONT  TestStartStop/group/disable-driver-mounts
start_stop_delete_test.go:103: skipping TestStartStop/group/disable-driver-mounts - only runs on virtualbox
helpers_test.go:175: Cleaning up "disable-driver-mounts-008536" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p disable-driver-mounts-008536
--- SKIP: TestStartStop/group/disable-driver-mounts (0.17s)