Test Report: Docker_Linux 19667

39f19baf3a7e1c810682dda0eb22abd909c6f2ab:2024-09-18:36273

Test fail (1/343)

| Order | Failed test                  | Duration (s) |
|-------|------------------------------|--------------|
| 33    | TestAddons/parallel/Registry | 73.51        |

TestAddons/parallel/Registry (73.51s)

=== RUN   TestAddons/parallel/Registry
=== PAUSE TestAddons/parallel/Registry

=== CONT  TestAddons/parallel/Registry
addons_test.go:332: registry stabilized in 2.929058ms
addons_test.go:334: (dbg) TestAddons/parallel/Registry: waiting 6m0s for pods matching "actual-registry=true" in namespace "kube-system" ...
helpers_test.go:344: "registry-66c9cd494c-kh4r9" [b2944da3-d9b7-4de7-8a57-f934ec8b2970] Running
addons_test.go:334: (dbg) TestAddons/parallel/Registry: actual-registry=true healthy within 6.002665734s
addons_test.go:337: (dbg) TestAddons/parallel/Registry: waiting 10m0s for pods matching "registry-proxy=true" in namespace "kube-system" ...
helpers_test.go:344: "registry-proxy-cnkhj" [e15cccd8-7fcb-48c9-9dc2-e79744e87759] Running
addons_test.go:337: (dbg) TestAddons/parallel/Registry: registry-proxy=true healthy within 5.003371541s
addons_test.go:342: (dbg) Run:  kubectl --context addons-457129 delete po -l run=registry-test --now
addons_test.go:347: (dbg) Run:  kubectl --context addons-457129 run --rm registry-test --restart=Never --image=gcr.io/k8s-minikube/busybox -it -- sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local"
addons_test.go:347: (dbg) Non-zero exit: kubectl --context addons-457129 run --rm registry-test --restart=Never --image=gcr.io/k8s-minikube/busybox -it -- sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local": exit status 1 (1m0.075422024s)

-- stdout --
	pod "registry-test" deleted

-- /stdout --
** stderr **
	error: timed out waiting for the condition

** /stderr **
addons_test.go:349: failed to hit registry.kube-system.svc.cluster.local. args "kubectl --context addons-457129 run --rm registry-test --restart=Never --image=gcr.io/k8s-minikube/busybox -it -- sh -c \"wget --spider -S http://registry.kube-system.svc.cluster.local\"" failed: exit status 1
addons_test.go:353: expected curl response be "HTTP/1.1 200", but got *pod "registry-test" deleted
*
addons_test.go:361: (dbg) Run:  out/minikube-linux-amd64 -p addons-457129 ip
2024/09/18 19:51:46 [DEBUG] GET http://192.168.49.2:5000
addons_test.go:390: (dbg) Run:  out/minikube-linux-amd64 -p addons-457129 addons disable registry --alsologtostderr -v=1
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestAddons/parallel/Registry]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect addons-457129
helpers_test.go:235: (dbg) docker inspect addons-457129:
-- stdout --
	[
	    {
	        "Id": "86ae18bdbb176a6a332ae7bb58a59c1a5a378dc69477093db6097b18df58d0ec",
	        "Created": "2024-09-18T19:38:45.341630422Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 16419,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2024-09-18T19:38:45.475652246Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:bb3bcbaabeeeadbf6b43ae7d1d07e504b3c8a94ec024df89bcb237eba4f5e9b3",
	        "ResolvConfPath": "/var/lib/docker/containers/86ae18bdbb176a6a332ae7bb58a59c1a5a378dc69477093db6097b18df58d0ec/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/86ae18bdbb176a6a332ae7bb58a59c1a5a378dc69477093db6097b18df58d0ec/hostname",
	        "HostsPath": "/var/lib/docker/containers/86ae18bdbb176a6a332ae7bb58a59c1a5a378dc69477093db6097b18df58d0ec/hosts",
	        "LogPath": "/var/lib/docker/containers/86ae18bdbb176a6a332ae7bb58a59c1a5a378dc69477093db6097b18df58d0ec/86ae18bdbb176a6a332ae7bb58a59c1a5a378dc69477093db6097b18df58d0ec-json.log",
	        "Name": "/addons-457129",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "addons-457129:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {
	                    "max-size": "100m"
	                }
	            },
	            "NetworkMode": "addons-457129",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 4194304000,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 8388608000,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "LowerDir": "/var/lib/docker/overlay2/698e70b085bb54b196b1675e61d9cbaa57c45b260cc495c07302a96e38181855-init/diff:/var/lib/docker/overlay2/ea15ded7869e90879b7418dd3aef0d624c58276fe0ab3abf241b4159795e4858/diff",
	                "MergedDir": "/var/lib/docker/overlay2/698e70b085bb54b196b1675e61d9cbaa57c45b260cc495c07302a96e38181855/merged",
	                "UpperDir": "/var/lib/docker/overlay2/698e70b085bb54b196b1675e61d9cbaa57c45b260cc495c07302a96e38181855/diff",
	                "WorkDir": "/var/lib/docker/overlay2/698e70b085bb54b196b1675e61d9cbaa57c45b260cc495c07302a96e38181855/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "addons-457129",
	                "Source": "/var/lib/docker/volumes/addons-457129/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "addons-457129",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726589491-19662@sha256:6370b9fec173944088c2d87d44b01819c0ec611a83d9e2f38d36352dff8121a4",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "addons-457129",
	                "name.minikube.sigs.k8s.io": "addons-457129",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "2edb383f4d1008747778d15f5662b7df94ab0f97eaff60f0cb90cc7a7830b0dd",
	            "SandboxKey": "/var/run/docker/netns/2edb383f4d10",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32768"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32769"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32772"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32770"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32771"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "addons-457129": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.49.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "02:42:c0:a8:31:02",
	                    "DriverOpts": null,
	                    "NetworkID": "9b2a7ee7c6b1ab882558f3e266a434c02ddbcdde6cce62dbbfb8ff76dd9746cd",
	                    "EndpointID": "6485c6ab5197b94994de4a3c86171f209a04dc44211bbf1ade89f99b444d64e6",
	                    "Gateway": "192.168.49.1",
	                    "IPAddress": "192.168.49.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "addons-457129",
	                        "86ae18bdbb17"
	                    ]
	                }
	            }
	        }
	    }
	]
-- /stdout --
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p addons-457129 -n addons-457129
helpers_test.go:244: <<< TestAddons/parallel/Registry FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestAddons/parallel/Registry]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p addons-457129 logs -n 25
helpers_test.go:252: TestAddons/parallel/Registry logs: 
-- stdout --
	
	==> Audit <==
	|---------|---------------------------------------------------------------------------------------------|------------------------|---------|---------|---------------------|---------------------|
	| Command |                                            Args                                             |        Profile         |  User   | Version |     Start Time      |      End Time       |
	|---------|---------------------------------------------------------------------------------------------|------------------------|---------|---------|---------------------|---------------------|
	| delete  | -p download-docker-630296                                                                   | download-docker-630296 | jenkins | v1.34.0 | 18 Sep 24 19:38 UTC | 18 Sep 24 19:38 UTC |
	| start   | --download-only -p                                                                          | binary-mirror-336155   | jenkins | v1.34.0 | 18 Sep 24 19:38 UTC |                     |
	|         | binary-mirror-336155                                                                        |                        |         |         |                     |                     |
	|         | --alsologtostderr                                                                           |                        |         |         |                     |                     |
	|         | --binary-mirror                                                                             |                        |         |         |                     |                     |
	|         | http://127.0.0.1:40427                                                                      |                        |         |         |                     |                     |
	|         | --driver=docker                                                                             |                        |         |         |                     |                     |
	|         | --container-runtime=docker                                                                  |                        |         |         |                     |                     |
	| delete  | -p binary-mirror-336155                                                                     | binary-mirror-336155   | jenkins | v1.34.0 | 18 Sep 24 19:38 UTC | 18 Sep 24 19:38 UTC |
	| addons  | enable dashboard -p                                                                         | addons-457129          | jenkins | v1.34.0 | 18 Sep 24 19:38 UTC |                     |
	|         | addons-457129                                                                               |                        |         |         |                     |                     |
	| addons  | disable dashboard -p                                                                        | addons-457129          | jenkins | v1.34.0 | 18 Sep 24 19:38 UTC |                     |
	|         | addons-457129                                                                               |                        |         |         |                     |                     |
	| start   | -p addons-457129 --wait=true                                                                | addons-457129          | jenkins | v1.34.0 | 18 Sep 24 19:38 UTC | 18 Sep 24 19:41 UTC |
	|         | --memory=4000 --alsologtostderr                                                             |                        |         |         |                     |                     |
	|         | --addons=registry                                                                           |                        |         |         |                     |                     |
	|         | --addons=metrics-server                                                                     |                        |         |         |                     |                     |
	|         | --addons=volumesnapshots                                                                    |                        |         |         |                     |                     |
	|         | --addons=csi-hostpath-driver                                                                |                        |         |         |                     |                     |
	|         | --addons=gcp-auth                                                                           |                        |         |         |                     |                     |
	|         | --addons=cloud-spanner                                                                      |                        |         |         |                     |                     |
	|         | --addons=inspektor-gadget                                                                   |                        |         |         |                     |                     |
	|         | --addons=storage-provisioner-rancher                                                        |                        |         |         |                     |                     |
	|         | --addons=nvidia-device-plugin                                                               |                        |         |         |                     |                     |
	|         | --addons=yakd --addons=volcano                                                              |                        |         |         |                     |                     |
	|         | --driver=docker                                                                             |                        |         |         |                     |                     |
	|         | --container-runtime=docker                                                                  |                        |         |         |                     |                     |
	|         | --addons=ingress                                                                            |                        |         |         |                     |                     |
	|         | --addons=ingress-dns                                                                        |                        |         |         |                     |                     |
	|         | --addons=helm-tiller                                                                        |                        |         |         |                     |                     |
	| addons  | addons-457129 addons disable                                                                | addons-457129          | jenkins | v1.34.0 | 18 Sep 24 19:42 UTC | 18 Sep 24 19:42 UTC |
	|         | volcano --alsologtostderr -v=1                                                              |                        |         |         |                     |                     |
	| addons  | addons-457129 addons disable                                                                | addons-457129          | jenkins | v1.34.0 | 18 Sep 24 19:50 UTC | 18 Sep 24 19:50 UTC |
	|         | yakd --alsologtostderr -v=1                                                                 |                        |         |         |                     |                     |
	| addons  | addons-457129 addons disable                                                                | addons-457129          | jenkins | v1.34.0 | 18 Sep 24 19:50 UTC | 18 Sep 24 19:50 UTC |
	|         | helm-tiller --alsologtostderr                                                               |                        |         |         |                     |                     |
	|         | -v=1                                                                                        |                        |         |         |                     |                     |
	| addons  | disable nvidia-device-plugin                                                                | addons-457129          | jenkins | v1.34.0 | 18 Sep 24 19:50 UTC | 18 Sep 24 19:50 UTC |
	|         | -p addons-457129                                                                            |                        |         |         |                     |                     |
	| ssh     | addons-457129 ssh cat                                                                       | addons-457129          | jenkins | v1.34.0 | 18 Sep 24 19:50 UTC | 18 Sep 24 19:50 UTC |
	|         | /opt/local-path-provisioner/pvc-6c5c3a13-dc76-4ea5-ae23-b00403f48891_default_test-pvc/file1 |                        |         |         |                     |                     |
	| addons  | disable cloud-spanner -p                                                                    | addons-457129          | jenkins | v1.34.0 | 18 Sep 24 19:50 UTC | 18 Sep 24 19:50 UTC |
	|         | addons-457129                                                                               |                        |         |         |                     |                     |
	| addons  | addons-457129 addons disable                                                                | addons-457129          | jenkins | v1.34.0 | 18 Sep 24 19:50 UTC | 18 Sep 24 19:51 UTC |
	|         | storage-provisioner-rancher                                                                 |                        |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                        |         |         |                     |                     |
	| addons  | enable headlamp                                                                             | addons-457129          | jenkins | v1.34.0 | 18 Sep 24 19:50 UTC | 18 Sep 24 19:50 UTC |
	|         | -p addons-457129                                                                            |                        |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                        |         |         |                     |                     |
	| addons  | addons-457129 addons disable                                                                | addons-457129          | jenkins | v1.34.0 | 18 Sep 24 19:51 UTC | 18 Sep 24 19:51 UTC |
	|         | headlamp --alsologtostderr                                                                  |                        |         |         |                     |                     |
	|         | -v=1                                                                                        |                        |         |         |                     |                     |
	| addons  | addons-457129 addons                                                                        | addons-457129          | jenkins | v1.34.0 | 18 Sep 24 19:51 UTC | 18 Sep 24 19:51 UTC |
	|         | disable csi-hostpath-driver                                                                 |                        |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                        |         |         |                     |                     |
	| addons  | addons-457129 addons                                                                        | addons-457129          | jenkins | v1.34.0 | 18 Sep 24 19:51 UTC | 18 Sep 24 19:51 UTC |
	|         | disable metrics-server                                                                      |                        |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                        |         |         |                     |                     |
	| addons  | addons-457129 addons                                                                        | addons-457129          | jenkins | v1.34.0 | 18 Sep 24 19:51 UTC | 18 Sep 24 19:51 UTC |
	|         | disable volumesnapshots                                                                     |                        |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                        |         |         |                     |                     |
	| addons  | disable inspektor-gadget -p                                                                 | addons-457129          | jenkins | v1.34.0 | 18 Sep 24 19:51 UTC | 18 Sep 24 19:51 UTC |
	|         | addons-457129                                                                               |                        |         |         |                     |                     |
	| ssh     | addons-457129 ssh curl -s                                                                   | addons-457129          | jenkins | v1.34.0 | 18 Sep 24 19:51 UTC | 18 Sep 24 19:51 UTC |
	|         | http://127.0.0.1/ -H 'Host:                                                                 |                        |         |         |                     |                     |
	|         | nginx.example.com'                                                                          |                        |         |         |                     |                     |
	| ip      | addons-457129 ip                                                                            | addons-457129          | jenkins | v1.34.0 | 18 Sep 24 19:51 UTC | 18 Sep 24 19:51 UTC |
	| addons  | addons-457129 addons disable                                                                | addons-457129          | jenkins | v1.34.0 | 18 Sep 24 19:51 UTC | 18 Sep 24 19:51 UTC |
	|         | ingress-dns --alsologtostderr                                                               |                        |         |         |                     |                     |
	|         | -v=1                                                                                        |                        |         |         |                     |                     |
	| addons  | addons-457129 addons disable                                                                | addons-457129          | jenkins | v1.34.0 | 18 Sep 24 19:51 UTC | 18 Sep 24 19:51 UTC |
	|         | ingress --alsologtostderr -v=1                                                              |                        |         |         |                     |                     |
	| ip      | addons-457129 ip                                                                            | addons-457129          | jenkins | v1.34.0 | 18 Sep 24 19:51 UTC | 18 Sep 24 19:51 UTC |
	| addons  | addons-457129 addons disable                                                                | addons-457129          | jenkins | v1.34.0 | 18 Sep 24 19:51 UTC | 18 Sep 24 19:51 UTC |
	|         | registry --alsologtostderr                                                                  |                        |         |         |                     |                     |
	|         | -v=1                                                                                        |                        |         |         |                     |                     |
	|---------|---------------------------------------------------------------------------------------------|------------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/09/18 19:38:23
	Running on machine: ubuntu-20-agent
	Binary: Built with gc go1.23.0 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0918 19:38:23.569466   15685 out.go:345] Setting OutFile to fd 1 ...
	I0918 19:38:23.569735   15685 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0918 19:38:23.569745   15685 out.go:358] Setting ErrFile to fd 2...
	I0918 19:38:23.569749   15685 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0918 19:38:23.569914   15685 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19667-7499/.minikube/bin
	I0918 19:38:23.570482   15685 out.go:352] Setting JSON to false
	I0918 19:38:23.571265   15685 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent","uptime":1250,"bootTime":1726687054,"procs":172,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1069-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0918 19:38:23.571352   15685 start.go:139] virtualization: kvm guest
	I0918 19:38:23.573522   15685 out.go:177] * [addons-457129] minikube v1.34.0 on Ubuntu 20.04 (kvm/amd64)
	I0918 19:38:23.574820   15685 out.go:177]   - MINIKUBE_LOCATION=19667
	I0918 19:38:23.574831   15685 notify.go:220] Checking for updates...
	I0918 19:38:23.576372   15685 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0918 19:38:23.577712   15685 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19667-7499/kubeconfig
	I0918 19:38:23.578842   15685 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19667-7499/.minikube
	I0918 19:38:23.580001   15685 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0918 19:38:23.581091   15685 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0918 19:38:23.582399   15685 driver.go:394] Setting default libvirt URI to qemu:///system
	I0918 19:38:23.604347   15685 docker.go:123] docker version: linux-27.2.1:Docker Engine - Community
	I0918 19:38:23.604450   15685 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0918 19:38:23.649500   15685 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:30 OomKillDisable:true NGoroutines:45 SystemTime:2024-09-18 19:38:23.64080336 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1069-gcp OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33647943680 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent Labels:[] ExperimentalBuild:false ServerVersion:27.2.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:7f7fdf5fed64eb6a7caf99b3e12efcf9d60e311c Expected:7f7fdf5fed64eb6a7caf99b3e12efcf9d60e311c} RuncCommit:{ID:v1.1.14-0-g2c9f560 Expected:v1.1.14-0-g2c9f560} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.16.2] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.29.2] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I0918 19:38:23.649606   15685 docker.go:318] overlay module found
	I0918 19:38:23.651500   15685 out.go:177] * Using the docker driver based on user configuration
	I0918 19:38:23.652684   15685 start.go:297] selected driver: docker
	I0918 19:38:23.652698   15685 start.go:901] validating driver "docker" against <nil>
	I0918 19:38:23.652707   15685 start.go:912] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0918 19:38:23.653470   15685 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0918 19:38:23.699134   15685 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:30 OomKillDisable:true NGoroutines:45 SystemTime:2024-09-18 19:38:23.690640827 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1069-gcp OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33647943680 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent Labels:[] ExperimentalBuild:false ServerVersion:27.2.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:7f7fdf5fed64eb6a7caf99b3e12efcf9d60e311c Expected:7f7fdf5fed64eb6a7caf99b3e12efcf9d60e311c} RuncCommit:{ID:v1.1.14-0-g2c9f560 Expected:v1.1.14-0-g2c9f560} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.16.2] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.29.2] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I0918 19:38:23.699340   15685 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0918 19:38:23.699582   15685 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0918 19:38:23.701373   15685 out.go:177] * Using Docker driver with root privileges
	I0918 19:38:23.702907   15685 cni.go:84] Creating CNI manager for ""
	I0918 19:38:23.702968   15685 cni.go:158] "docker" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0918 19:38:23.702979   15685 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0918 19:38:23.703043   15685 start.go:340] cluster config:
	{Name:addons-457129 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726589491-19662@sha256:6370b9fec173944088c2d87d44b01819c0ec611a83d9e2f38d36352dff8121a4 Memory:4000 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:addons-457129 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0918 19:38:23.704506   15685 out.go:177] * Starting "addons-457129" primary control-plane node in "addons-457129" cluster
	I0918 19:38:23.705945   15685 cache.go:121] Beginning downloading kic base image for docker with docker
	I0918 19:38:23.707470   15685 out.go:177] * Pulling base image v0.0.45-1726589491-19662 ...
	I0918 19:38:23.708774   15685 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime docker
	I0918 19:38:23.708815   15685 preload.go:146] Found local preload: /home/jenkins/minikube-integration/19667-7499/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-docker-overlay2-amd64.tar.lz4
	I0918 19:38:23.708823   15685 cache.go:56] Caching tarball of preloaded images
	I0918 19:38:23.708864   15685 image.go:79] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726589491-19662@sha256:6370b9fec173944088c2d87d44b01819c0ec611a83d9e2f38d36352dff8121a4 in local docker daemon
	I0918 19:38:23.708914   15685 preload.go:172] Found /home/jenkins/minikube-integration/19667-7499/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I0918 19:38:23.708925   15685 cache.go:59] Finished verifying existence of preloaded tar for v1.31.1 on docker
	I0918 19:38:23.709249   15685 profile.go:143] Saving config to /home/jenkins/minikube-integration/19667-7499/.minikube/profiles/addons-457129/config.json ...
	I0918 19:38:23.709276   15685 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19667-7499/.minikube/profiles/addons-457129/config.json: {Name:mka729d6965602732b51ee9a521ac58b736578e8 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0918 19:38:23.725542   15685 cache.go:149] Downloading gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726589491-19662@sha256:6370b9fec173944088c2d87d44b01819c0ec611a83d9e2f38d36352dff8121a4 to local cache
	I0918 19:38:23.725680   15685 image.go:63] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726589491-19662@sha256:6370b9fec173944088c2d87d44b01819c0ec611a83d9e2f38d36352dff8121a4 in local cache directory
	I0918 19:38:23.725702   15685 image.go:66] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726589491-19662@sha256:6370b9fec173944088c2d87d44b01819c0ec611a83d9e2f38d36352dff8121a4 in local cache directory, skipping pull
	I0918 19:38:23.725708   15685 image.go:135] gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726589491-19662@sha256:6370b9fec173944088c2d87d44b01819c0ec611a83d9e2f38d36352dff8121a4 exists in cache, skipping pull
	I0918 19:38:23.725718   15685 cache.go:152] successfully saved gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726589491-19662@sha256:6370b9fec173944088c2d87d44b01819c0ec611a83d9e2f38d36352dff8121a4 as a tarball
	I0918 19:38:23.725729   15685 cache.go:162] Loading gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726589491-19662@sha256:6370b9fec173944088c2d87d44b01819c0ec611a83d9e2f38d36352dff8121a4 from local cache
	I0918 19:38:35.829046   15685 cache.go:164] successfully loaded and using gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726589491-19662@sha256:6370b9fec173944088c2d87d44b01819c0ec611a83d9e2f38d36352dff8121a4 from cached tarball
	I0918 19:38:35.829080   15685 cache.go:194] Successfully downloaded all kic artifacts
	I0918 19:38:35.829135   15685 start.go:360] acquireMachinesLock for addons-457129: {Name:mke4c12172cee324e5328d55e67a0eafaa50413d Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0918 19:38:35.829234   15685 start.go:364] duration metric: took 74.658µs to acquireMachinesLock for "addons-457129"
	I0918 19:38:35.829261   15685 start.go:93] Provisioning new machine with config: &{Name:addons-457129 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726589491-19662@sha256:6370b9fec173944088c2d87d44b01819c0ec611a83d9e2f38d36352dff8121a4 Memory:4000 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:addons-457129 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0918 19:38:35.829369   15685 start.go:125] createHost starting for "" (driver="docker")
	I0918 19:38:35.831280   15685 out.go:235] * Creating docker container (CPUs=2, Memory=4000MB) ...
	I0918 19:38:35.831492   15685 start.go:159] libmachine.API.Create for "addons-457129" (driver="docker")
	I0918 19:38:35.831520   15685 client.go:168] LocalClient.Create starting
	I0918 19:38:35.831598   15685 main.go:141] libmachine: Creating CA: /home/jenkins/minikube-integration/19667-7499/.minikube/certs/ca.pem
	I0918 19:38:36.101727   15685 main.go:141] libmachine: Creating client certificate: /home/jenkins/minikube-integration/19667-7499/.minikube/certs/cert.pem
	I0918 19:38:36.382495   15685 cli_runner.go:164] Run: docker network inspect addons-457129 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W0918 19:38:36.397710   15685 cli_runner.go:211] docker network inspect addons-457129 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I0918 19:38:36.397773   15685 network_create.go:284] running [docker network inspect addons-457129] to gather additional debugging logs...
	I0918 19:38:36.397790   15685 cli_runner.go:164] Run: docker network inspect addons-457129
	W0918 19:38:36.412413   15685 cli_runner.go:211] docker network inspect addons-457129 returned with exit code 1
	I0918 19:38:36.412444   15685 network_create.go:287] error running [docker network inspect addons-457129]: docker network inspect addons-457129: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network addons-457129 not found
	I0918 19:38:36.412457   15685 network_create.go:289] output of [docker network inspect addons-457129]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network addons-457129 not found
	
	** /stderr **
	I0918 19:38:36.412542   15685 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0918 19:38:36.427766   15685 network.go:206] using free private subnet 192.168.49.0/24: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc001a74720}
	I0918 19:38:36.427805   15685 network_create.go:124] attempt to create docker network addons-457129 192.168.49.0/24 with gateway 192.168.49.1 and MTU of 1500 ...
	I0918 19:38:36.427841   15685 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.49.0/24 --gateway=192.168.49.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=addons-457129 addons-457129
	I0918 19:38:36.484531   15685 network_create.go:108] docker network addons-457129 192.168.49.0/24 created
	I0918 19:38:36.484557   15685 kic.go:121] calculated static IP "192.168.49.2" for the "addons-457129" container
	I0918 19:38:36.484609   15685 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I0918 19:38:36.498902   15685 cli_runner.go:164] Run: docker volume create addons-457129 --label name.minikube.sigs.k8s.io=addons-457129 --label created_by.minikube.sigs.k8s.io=true
	I0918 19:38:36.515877   15685 oci.go:103] Successfully created a docker volume addons-457129
	I0918 19:38:36.515949   15685 cli_runner.go:164] Run: docker run --rm --name addons-457129-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=addons-457129 --entrypoint /usr/bin/test -v addons-457129:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726589491-19662@sha256:6370b9fec173944088c2d87d44b01819c0ec611a83d9e2f38d36352dff8121a4 -d /var/lib
	I0918 19:38:41.437774   15685 cli_runner.go:217] Completed: docker run --rm --name addons-457129-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=addons-457129 --entrypoint /usr/bin/test -v addons-457129:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726589491-19662@sha256:6370b9fec173944088c2d87d44b01819c0ec611a83d9e2f38d36352dff8121a4 -d /var/lib: (4.921784865s)
	I0918 19:38:41.437798   15685 oci.go:107] Successfully prepared a docker volume addons-457129
	I0918 19:38:41.437824   15685 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime docker
	I0918 19:38:41.437851   15685 kic.go:194] Starting extracting preloaded images to volume ...
	I0918 19:38:41.437921   15685 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/19667-7499/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-docker-overlay2-amd64.tar.lz4:/preloaded.tar:ro -v addons-457129:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726589491-19662@sha256:6370b9fec173944088c2d87d44b01819c0ec611a83d9e2f38d36352dff8121a4 -I lz4 -xf /preloaded.tar -C /extractDir
	I0918 19:38:45.282408   15685 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/19667-7499/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-docker-overlay2-amd64.tar.lz4:/preloaded.tar:ro -v addons-457129:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726589491-19662@sha256:6370b9fec173944088c2d87d44b01819c0ec611a83d9e2f38d36352dff8121a4 -I lz4 -xf /preloaded.tar -C /extractDir: (3.844436932s)
	I0918 19:38:45.282442   15685 kic.go:203] duration metric: took 3.844589302s to extract preloaded images to volume ...
	W0918 19:38:45.282565   15685 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	I0918 19:38:45.282682   15685 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I0918 19:38:45.327087   15685 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname addons-457129 --name addons-457129 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=addons-457129 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=addons-457129 --network addons-457129 --ip 192.168.49.2 --volume addons-457129:/var --security-opt apparmor=unconfined --memory=4000mb --cpus=2 -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726589491-19662@sha256:6370b9fec173944088c2d87d44b01819c0ec611a83d9e2f38d36352dff8121a4
	I0918 19:38:45.645581   15685 cli_runner.go:164] Run: docker container inspect addons-457129 --format={{.State.Running}}
	I0918 19:38:45.662786   15685 cli_runner.go:164] Run: docker container inspect addons-457129 --format={{.State.Status}}
	I0918 19:38:45.681025   15685 cli_runner.go:164] Run: docker exec addons-457129 stat /var/lib/dpkg/alternatives/iptables
	I0918 19:38:45.726959   15685 oci.go:144] the created container "addons-457129" has a running status.
	I0918 19:38:45.726987   15685 kic.go:225] Creating ssh key for kic: /home/jenkins/minikube-integration/19667-7499/.minikube/machines/addons-457129/id_rsa...
	I0918 19:38:46.002939   15685 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/19667-7499/.minikube/machines/addons-457129/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I0918 19:38:46.034517   15685 cli_runner.go:164] Run: docker container inspect addons-457129 --format={{.State.Status}}
	I0918 19:38:46.053857   15685 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I0918 19:38:46.053877   15685 kic_runner.go:114] Args: [docker exec --privileged addons-457129 chown docker:docker /home/docker/.ssh/authorized_keys]
	I0918 19:38:46.120401   15685 cli_runner.go:164] Run: docker container inspect addons-457129 --format={{.State.Status}}
	I0918 19:38:46.139859   15685 machine.go:93] provisionDockerMachine start ...
	I0918 19:38:46.139967   15685 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-457129
	I0918 19:38:46.155870   15685 main.go:141] libmachine: Using SSH client type: native
	I0918 19:38:46.156127   15685 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x86c560] 0x86f240 <nil>  [] 0s} 127.0.0.1 32768 <nil> <nil>}
	I0918 19:38:46.156149   15685 main.go:141] libmachine: About to run SSH command:
	hostname
	I0918 19:38:46.296061   15685 main.go:141] libmachine: SSH cmd err, output: <nil>: addons-457129
	
	I0918 19:38:46.296091   15685 ubuntu.go:169] provisioning hostname "addons-457129"
	I0918 19:38:46.296160   15685 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-457129
	I0918 19:38:46.313542   15685 main.go:141] libmachine: Using SSH client type: native
	I0918 19:38:46.313711   15685 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x86c560] 0x86f240 <nil>  [] 0s} 127.0.0.1 32768 <nil> <nil>}
	I0918 19:38:46.313725   15685 main.go:141] libmachine: About to run SSH command:
	sudo hostname addons-457129 && echo "addons-457129" | sudo tee /etc/hostname
	I0918 19:38:46.458628   15685 main.go:141] libmachine: SSH cmd err, output: <nil>: addons-457129
	
	I0918 19:38:46.458725   15685 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-457129
	I0918 19:38:46.475198   15685 main.go:141] libmachine: Using SSH client type: native
	I0918 19:38:46.475363   15685 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x86c560] 0x86f240 <nil>  [] 0s} 127.0.0.1 32768 <nil> <nil>}
	I0918 19:38:46.475398   15685 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\saddons-457129' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 addons-457129/g' /etc/hosts;
				else 
					echo '127.0.1.1 addons-457129' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0918 19:38:46.608734   15685 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0918 19:38:46.608766   15685 ubuntu.go:175] set auth options {CertDir:/home/jenkins/minikube-integration/19667-7499/.minikube CaCertPath:/home/jenkins/minikube-integration/19667-7499/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19667-7499/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19667-7499/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19667-7499/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19667-7499/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19667-7499/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19667-7499/.minikube}
	I0918 19:38:46.608793   15685 ubuntu.go:177] setting up certificates
	I0918 19:38:46.608802   15685 provision.go:84] configureAuth start
	I0918 19:38:46.608852   15685 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" addons-457129
	I0918 19:38:46.624741   15685 provision.go:143] copyHostCerts
	I0918 19:38:46.624820   15685 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19667-7499/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19667-7499/.minikube/cert.pem (1123 bytes)
	I0918 19:38:46.624973   15685 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19667-7499/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19667-7499/.minikube/key.pem (1679 bytes)
	I0918 19:38:46.625059   15685 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19667-7499/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19667-7499/.minikube/ca.pem (1082 bytes)
	I0918 19:38:46.625131   15685 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19667-7499/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19667-7499/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19667-7499/.minikube/certs/ca-key.pem org=jenkins.addons-457129 san=[127.0.0.1 192.168.49.2 addons-457129 localhost minikube]
	I0918 19:38:46.680800   15685 provision.go:177] copyRemoteCerts
	I0918 19:38:46.680860   15685 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0918 19:38:46.680920   15685 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-457129
	I0918 19:38:46.697775   15685 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/19667-7499/.minikube/machines/addons-457129/id_rsa Username:docker}
	I0918 19:38:46.793131   15685 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19667-7499/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0918 19:38:46.814284   15685 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19667-7499/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I0918 19:38:46.834547   15685 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19667-7499/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0918 19:38:46.854534   15685 provision.go:87] duration metric: took 245.719267ms to configureAuth
	I0918 19:38:46.854566   15685 ubuntu.go:193] setting minikube options for container-runtime
	I0918 19:38:46.854778   15685 config.go:182] Loaded profile config "addons-457129": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I0918 19:38:46.854835   15685 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-457129
	I0918 19:38:46.870949   15685 main.go:141] libmachine: Using SSH client type: native
	I0918 19:38:46.871138   15685 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x86c560] 0x86f240 <nil>  [] 0s} 127.0.0.1 32768 <nil> <nil>}
	I0918 19:38:46.871152   15685 main.go:141] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I0918 19:38:47.000938   15685 main.go:141] libmachine: SSH cmd err, output: <nil>: overlay
	
	I0918 19:38:47.000958   15685 ubuntu.go:71] root file system type: overlay
	I0918 19:38:47.001070   15685 provision.go:314] Updating docker unit: /lib/systemd/system/docker.service ...
	I0918 19:38:47.001132   15685 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-457129
	I0918 19:38:47.017517   15685 main.go:141] libmachine: Using SSH client type: native
	I0918 19:38:47.017725   15685 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x86c560] 0x86f240 <nil>  [] 0s} 127.0.0.1 32768 <nil> <nil>}
	I0918 19:38:47.017820   15685 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %s "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	BindsTo=containerd.service
	After=network-online.target firewalld.service containerd.service
	Wants=network-online.target
	Requires=docker.socket
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I0918 19:38:47.158731   15685 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	BindsTo=containerd.service
	After=network-online.target firewalld.service containerd.service
	Wants=network-online.target
	Requires=docker.socket
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I0918 19:38:47.158806   15685 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-457129
	I0918 19:38:47.175635   15685 main.go:141] libmachine: Using SSH client type: native
	I0918 19:38:47.175853   15685 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x86c560] 0x86f240 <nil>  [] 0s} 127.0.0.1 32768 <nil> <nil>}
	I0918 19:38:47.175877   15685 main.go:141] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
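The command above is minikube's idempotent unit install: it writes the rendered unit to `docker.service.new`, diffs it against the live file, and only on a difference swaps the file in and reloads/restarts the daemon. A minimal Python sketch of that compare-then-swap pattern (file names and the boolean return are illustrative, not minikube's API):

```python
import filecmp
import os
import shutil

def install_if_changed(new_content: str, target: str) -> bool:
    """Stage new_content next to target and swap it in only when it differs.

    Returns True when target was replaced (i.e. a daemon reload is needed),
    mirroring the `diff -u old new || { mv; systemctl restart; }` shell idiom.
    """
    staged = target + ".new"
    with open(staged, "w") as f:
        f.write(new_content)
    if os.path.exists(target) and filecmp.cmp(staged, target, shallow=False):
        os.remove(staged)          # identical: keep the old file, no restart
        return False
    shutil.move(staged, target)    # changed: swap in the new unit
    return True
```

In the log, the diff is non-empty on first provision, so the replace-and-restart branch runs (hence the `systemd-sysv-install enable docker` output that follows).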
	I0918 19:38:47.838436   15685 main.go:141] libmachine: SSH cmd err, output: <nil>: --- /lib/systemd/system/docker.service	2024-09-06 12:06:41.000000000 +0000
	+++ /lib/systemd/system/docker.service.new	2024-09-18 19:38:47.154509418 +0000
	@@ -1,46 +1,49 @@
	 [Unit]
	 Description=Docker Application Container Engine
	 Documentation=https://docs.docker.com
	-After=network-online.target docker.socket firewalld.service containerd.service time-set.target
	-Wants=network-online.target containerd.service
	+BindsTo=containerd.service
	+After=network-online.target firewalld.service containerd.service
	+Wants=network-online.target
	 Requires=docker.socket
	+StartLimitBurst=3
	+StartLimitIntervalSec=60
	 
	 [Service]
	 Type=notify
	-# the default is not to use systemd for cgroups because the delegate issues still
	-# exists and systemd currently does not support the cgroup feature set required
	-# for containers run by docker
	-ExecStart=/usr/bin/dockerd -H fd:// --containerd=/run/containerd/containerd.sock
	-ExecReload=/bin/kill -s HUP $MAINPID
	-TimeoutStartSec=0
	-RestartSec=2
	-Restart=always
	+Restart=on-failure
	 
	-# Note that StartLimit* options were moved from "Service" to "Unit" in systemd 229.
	-# Both the old, and new location are accepted by systemd 229 and up, so using the old location
	-# to make them work for either version of systemd.
	-StartLimitBurst=3
	 
	-# Note that StartLimitInterval was renamed to StartLimitIntervalSec in systemd 230.
	-# Both the old, and new name are accepted by systemd 230 and up, so using the old name to make
	-# this option work for either version of systemd.
	-StartLimitInterval=60s
	+
	+# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	+# The base configuration already specifies an 'ExecStart=...' command. The first directive
	+# here is to clear out that command inherited from the base configuration. Without this,
	+# the command from the base configuration and the command specified here are treated as
	+# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	+# will catch this invalid input and refuse to start the service with an error like:
	+#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	+
	+# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	+# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	+ExecStart=
	+ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	+ExecReload=/bin/kill -s HUP $MAINPID
	 
	 # Having non-zero Limit*s causes performance problems due to accounting overhead
	 # in the kernel. We recommend using cgroups to do container-local accounting.
	+LimitNOFILE=infinity
	 LimitNPROC=infinity
	 LimitCORE=infinity
	 
	-# Comment TasksMax if your systemd version does not support it.
	-# Only systemd 226 and above support this option.
	+# Uncomment TasksMax if your systemd version supports it.
	+# Only systemd 226 and above support this version.
	 TasksMax=infinity
	+TimeoutStartSec=0
	 
	 # set delegate yes so that systemd does not reset the cgroups of docker containers
	 Delegate=yes
	 
	 # kill only the docker process, not all processes in the cgroup
	 KillMode=process
	-OOMScoreAdjust=-500
	 
	 [Install]
	 WantedBy=multi-user.target
	Synchronizing state of docker.service with SysV service script with /lib/systemd/systemd-sysv-install.
	Executing: /lib/systemd/systemd-sysv-install enable docker
	
	I0918 19:38:47.838462   15685 machine.go:96] duration metric: took 1.698573616s to provisionDockerMachine
	I0918 19:38:47.838473   15685 client.go:171] duration metric: took 12.006947819s to LocalClient.Create
	I0918 19:38:47.838487   15685 start.go:167] duration metric: took 12.006997743s to libmachine.API.Create "addons-457129"
	I0918 19:38:47.838493   15685 start.go:293] postStartSetup for "addons-457129" (driver="docker")
	I0918 19:38:47.838502   15685 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0918 19:38:47.838544   15685 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0918 19:38:47.838577   15685 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-457129
	I0918 19:38:47.854338   15685 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/19667-7499/.minikube/machines/addons-457129/id_rsa Username:docker}
	I0918 19:38:47.949297   15685 ssh_runner.go:195] Run: cat /etc/os-release
	I0918 19:38:47.952151   15685 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I0918 19:38:47.952179   15685 main.go:141] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
	I0918 19:38:47.952187   15685 main.go:141] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
	I0918 19:38:47.952193   15685 info.go:137] Remote host: Ubuntu 22.04.5 LTS
	I0918 19:38:47.952203   15685 filesync.go:126] Scanning /home/jenkins/minikube-integration/19667-7499/.minikube/addons for local assets ...
	I0918 19:38:47.952263   15685 filesync.go:126] Scanning /home/jenkins/minikube-integration/19667-7499/.minikube/files for local assets ...
	I0918 19:38:47.952286   15685 start.go:296] duration metric: took 113.78694ms for postStartSetup
	I0918 19:38:47.952658   15685 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" addons-457129
	I0918 19:38:47.969067   15685 profile.go:143] Saving config to /home/jenkins/minikube-integration/19667-7499/.minikube/profiles/addons-457129/config.json ...
	I0918 19:38:47.969307   15685 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0918 19:38:47.969346   15685 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-457129
	I0918 19:38:47.984314   15685 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/19667-7499/.minikube/machines/addons-457129/id_rsa Username:docker}
	I0918 19:38:48.073339   15685 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0918 19:38:48.077048   15685 start.go:128] duration metric: took 12.247666146s to createHost
	I0918 19:38:48.077069   15685 start.go:83] releasing machines lock for "addons-457129", held for 12.247822412s
	I0918 19:38:48.077132   15685 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" addons-457129
	I0918 19:38:48.092963   15685 ssh_runner.go:195] Run: cat /version.json
	I0918 19:38:48.093005   15685 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-457129
	I0918 19:38:48.093055   15685 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0918 19:38:48.093132   15685 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-457129
	I0918 19:38:48.108746   15685 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/19667-7499/.minikube/machines/addons-457129/id_rsa Username:docker}
	I0918 19:38:48.109990   15685 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/19667-7499/.minikube/machines/addons-457129/id_rsa Username:docker}
	I0918 19:38:48.273433   15685 ssh_runner.go:195] Run: systemctl --version
	I0918 19:38:48.277469   15685 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I0918 19:38:48.281202   15685 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *loopback.conf* -not -name *.mk_disabled -exec sh -c "grep -q loopback {} && ( grep -q name {} || sudo sed -i '/"type": "loopback"/i \ \ \ \ "name": "loopback",' {} ) && sudo sed -i 's|"cniVersion": ".*"|"cniVersion": "1.0.0"|g' {}" ;
	I0918 19:38:48.302726   15685 cni.go:230] loopback cni configuration patched: "/etc/cni/net.d/*loopback.conf*" found
	I0918 19:38:48.302785   15685 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0918 19:38:48.326980   15685 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist, /etc/cni/net.d/100-crio-bridge.conf] bridge cni config(s)
	I0918 19:38:48.327006   15685 start.go:495] detecting cgroup driver to use...
	I0918 19:38:48.327034   15685 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I0918 19:38:48.327122   15685 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0918 19:38:48.340854   15685 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.10"|' /etc/containerd/config.toml"
	I0918 19:38:48.349601   15685 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0918 19:38:48.358047   15685 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I0918 19:38:48.358095   15685 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I0918 19:38:48.366528   15685 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0918 19:38:48.375027   15685 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0918 19:38:48.383213   15685 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0918 19:38:48.391480   15685 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0918 19:38:48.399331   15685 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I0918 19:38:48.407762   15685 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I0918 19:38:48.416079   15685 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
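The run of `sed -i` commands above rewrites `/etc/containerd/config.toml` line by line (cgroup driver, runtime shim, sandbox image, CNI conf dir). A sketch of a few of those substitutions as Python `re.sub` rules, applied to a fragment of config text; the sample input in the test is illustrative, not a complete containerd config:

```python
import re

# (pattern, replacement) pairs echoing some of the sed rewrites in the log.
REWRITES = [
    (r'^(\s*)SystemdCgroup = .*$', r'\g<1>SystemdCgroup = false'),
    (r'"io\.containerd\.runtime\.v1\.linux"', '"io.containerd.runc.v2"'),
    (r'^(\s*)sandbox_image = .*$',
     r'\g<1>sandbox_image = "registry.k8s.io/pause:3.10"'),
]

def patch_config(text: str) -> str:
    """Apply each rewrite across all lines, like `sed -i` on the whole file."""
    for pattern, repl in REWRITES:
        text = re.sub(pattern, repl, text, flags=re.MULTILINE)
    return text
```

`re.MULTILINE` makes `^`/`$` anchor per line, matching sed's line-oriented behavior; `\g<1>` preserves the original indentation the way sed's `\1` backreference does.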
	I0918 19:38:48.424649   15685 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0918 19:38:48.431627   15685 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0918 19:38:48.438598   15685 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0918 19:38:48.510259   15685 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I0918 19:38:48.591524   15685 start.go:495] detecting cgroup driver to use...
	I0918 19:38:48.591573   15685 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I0918 19:38:48.591618   15685 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I0918 19:38:48.603403   15685 cruntime.go:279] skipping containerd shutdown because we are bound to it
	I0918 19:38:48.603467   15685 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0918 19:38:48.615719   15685 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0918 19:38:48.630388   15685 ssh_runner.go:195] Run: which cri-dockerd
	I0918 19:38:48.633830   15685 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I0918 19:38:48.641889   15685 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (190 bytes)
	I0918 19:38:48.660561   15685 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I0918 19:38:48.744514   15685 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I0918 19:38:48.842218   15685 docker.go:574] configuring docker to use "cgroupfs" as cgroup driver...
	I0918 19:38:48.842357   15685 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (130 bytes)
	I0918 19:38:48.858718   15685 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0918 19:38:48.938531   15685 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0918 19:38:49.186983   15685 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.socket
	I0918 19:38:49.197319   15685 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0918 19:38:49.207395   15685 ssh_runner.go:195] Run: sudo systemctl unmask cri-docker.socket
	I0918 19:38:49.286590   15685 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I0918 19:38:49.362493   15685 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0918 19:38:49.438398   15685 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.socket
	I0918 19:38:49.450034   15685 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0918 19:38:49.459397   15685 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0918 19:38:49.531120   15685 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.service
	I0918 19:38:49.590479   15685 start.go:542] Will wait 60s for socket path /var/run/cri-dockerd.sock
	I0918 19:38:49.590567   15685 ssh_runner.go:195] Run: stat /var/run/cri-dockerd.sock
	I0918 19:38:49.594022   15685 start.go:563] Will wait 60s for crictl version
	I0918 19:38:49.594072   15685 ssh_runner.go:195] Run: which crictl
	I0918 19:38:49.597027   15685 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0918 19:38:49.627421   15685 start.go:579] Version:  0.1.0
	RuntimeName:  docker
	RuntimeVersion:  27.2.1
	RuntimeApiVersion:  v1
	I0918 19:38:49.627481   15685 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0918 19:38:49.650685   15685 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0918 19:38:49.675764   15685 out.go:235] * Preparing Kubernetes v1.31.1 on Docker 27.2.1 ...
	I0918 19:38:49.675843   15685 cli_runner.go:164] Run: docker network inspect addons-457129 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0918 19:38:49.691945   15685 ssh_runner.go:195] Run: grep 192.168.49.1	host.minikube.internal$ /etc/hosts
	I0918 19:38:49.695394   15685 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.49.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
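The `/etc/hosts` command above is an upsert: `grep -v` drops any existing line ending in the hostname, `echo` appends a fresh tab-separated entry, and the result is copied back over the file. The same logic as a small Python function (operating on text rather than the live file):

```python
def upsert_hosts_entry(hosts_text: str, ip: str, name: str) -> str:
    """Drop any line already mapping `name`, then append a fresh entry --
    the grep -v / echo pattern minikube runs over /etc/hosts."""
    kept = [line for line in hosts_text.splitlines()
            if not line.endswith("\t" + name)]
    kept.append(f"{ip}\t{name}")
    return "\n".join(kept) + "\n"
```

Running it twice is safe: the second pass removes the entry it added before re-appending it, which is why minikube can re-run provisioning without duplicating hosts lines.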
	I0918 19:38:49.705021   15685 kubeadm.go:883] updating cluster {Name:addons-457129 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726589491-19662@sha256:6370b9fec173944088c2d87d44b01819c0ec611a83d9e2f38d36352dff8121a4 Memory:4000 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:addons-457129 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNa
mes:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuF
irmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0918 19:38:49.705122   15685 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime docker
	I0918 19:38:49.705163   15685 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0918 19:38:49.722133   15685 docker.go:685] Got preloaded images: -- stdout --
	registry.k8s.io/kube-apiserver:v1.31.1
	registry.k8s.io/kube-scheduler:v1.31.1
	registry.k8s.io/kube-controller-manager:v1.31.1
	registry.k8s.io/kube-proxy:v1.31.1
	registry.k8s.io/coredns/coredns:v1.11.3
	registry.k8s.io/etcd:3.5.15-0
	registry.k8s.io/pause:3.10
	gcr.io/k8s-minikube/storage-provisioner:v5
	
	-- /stdout --
	I0918 19:38:49.722152   15685 docker.go:615] Images already preloaded, skipping extraction
	I0918 19:38:49.722214   15685 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0918 19:38:49.740042   15685 docker.go:685] Got preloaded images: -- stdout --
	registry.k8s.io/kube-apiserver:v1.31.1
	registry.k8s.io/kube-scheduler:v1.31.1
	registry.k8s.io/kube-controller-manager:v1.31.1
	registry.k8s.io/kube-proxy:v1.31.1
	registry.k8s.io/coredns/coredns:v1.11.3
	registry.k8s.io/etcd:3.5.15-0
	registry.k8s.io/pause:3.10
	gcr.io/k8s-minikube/storage-provisioner:v5
	
	-- /stdout --
	I0918 19:38:49.740070   15685 cache_images.go:84] Images are preloaded, skipping loading
	I0918 19:38:49.740084   15685 kubeadm.go:934] updating node { 192.168.49.2 8443 v1.31.1 docker true true} ...
	I0918 19:38:49.740171   15685 kubeadm.go:946] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=addons-457129 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.49.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.1 ClusterName:addons-457129 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0918 19:38:49.740219   15685 ssh_runner.go:195] Run: docker info --format {{.CgroupDriver}}
	I0918 19:38:49.782129   15685 cni.go:84] Creating CNI manager for ""
	I0918 19:38:49.782172   15685 cni.go:158] "docker" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0918 19:38:49.782187   15685 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0918 19:38:49.782212   15685 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.49.2 APIServerPort:8443 KubernetesVersion:v1.31.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:addons-457129 NodeName:addons-457129 DNSDomain:cluster.local CRISocket:/var/run/cri-dockerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.49.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.49.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kuber
netes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/cri-dockerd.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0918 19:38:49.782367   15685 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.49.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/cri-dockerd.sock
	  name: "addons-457129"
	  kubeletExtraArgs:
	    node-ip: 192.168.49.2
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.49.2"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.31.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/cri-dockerd.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0918 19:38:49.782433   15685 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.1
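The kubeadm config rendered above is a single multi-document YAML stream: `InitConfiguration`, `ClusterConfiguration`, `KubeletConfiguration`, and `KubeProxyConfiguration` separated by `---` lines. A quick way to sanity-check such a stream without a YAML parser is to split on the separators and pull each document's `kind:`; this helper is a simple string-based sketch, not part of minikube:

```python
def yaml_doc_kinds(text: str) -> list:
    """Split a multi-document YAML stream on '---' separator lines and
    return each document's top-level `kind:` value, in order."""
    kinds = []
    for doc in text.split("\n---\n"):
        for line in doc.splitlines():
            if line.startswith("kind:"):
                kinds.append(line.split(":", 1)[1].strip())
                break
    return kinds
```

For the config in the log this would yield the four kinds above, matching the 2155-byte `kubeadm.yaml.new` that the next `scp memory` step copies onto the node.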
	I0918 19:38:49.790449   15685 binaries.go:44] Found k8s binaries, skipping transfer
	I0918 19:38:49.790510   15685 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0918 19:38:49.798113   15685 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (312 bytes)
	I0918 19:38:49.813765   15685 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0918 19:38:49.828935   15685 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2155 bytes)
	I0918 19:38:49.844051   15685 ssh_runner.go:195] Run: grep 192.168.49.2	control-plane.minikube.internal$ /etc/hosts
	I0918 19:38:49.846953   15685 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.49.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0918 19:38:49.856075   15685 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0918 19:38:49.928293   15685 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0918 19:38:49.940485   15685 certs.go:68] Setting up /home/jenkins/minikube-integration/19667-7499/.minikube/profiles/addons-457129 for IP: 192.168.49.2
	I0918 19:38:49.940512   15685 certs.go:194] generating shared ca certs ...
	I0918 19:38:49.940530   15685 certs.go:226] acquiring lock for ca certs: {Name:mke16e4aeb0a19696e8eeda873787e346a3aedef Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0918 19:38:49.940662   15685 certs.go:240] generating "minikubeCA" ca cert: /home/jenkins/minikube-integration/19667-7499/.minikube/ca.key
	I0918 19:38:50.215011   15685 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19667-7499/.minikube/ca.crt ...
	I0918 19:38:50.215044   15685 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19667-7499/.minikube/ca.crt: {Name:mke7570d54eb2a335e899bf4483c7f0c3ad906b6 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0918 19:38:50.215209   15685 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19667-7499/.minikube/ca.key ...
	I0918 19:38:50.215220   15685 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19667-7499/.minikube/ca.key: {Name:mk4b313b658836d54e02bdf2cf120987af39599a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0918 19:38:50.215290   15685 certs.go:240] generating "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19667-7499/.minikube/proxy-client-ca.key
	I0918 19:38:50.397960   15685 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19667-7499/.minikube/proxy-client-ca.crt ...
	I0918 19:38:50.397993   15685 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19667-7499/.minikube/proxy-client-ca.crt: {Name:mka480476b5617857b0fbf7151893a6910e8e832 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0918 19:38:50.398186   15685 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19667-7499/.minikube/proxy-client-ca.key ...
	I0918 19:38:50.398201   15685 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19667-7499/.minikube/proxy-client-ca.key: {Name:mk17996b59c93b625ae6ff32125b5099ef8f8e16 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0918 19:38:50.398292   15685 certs.go:256] generating profile certs ...
	I0918 19:38:50.398347   15685 certs.go:363] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/19667-7499/.minikube/profiles/addons-457129/client.key
	I0918 19:38:50.398370   15685 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19667-7499/.minikube/profiles/addons-457129/client.crt with IP's: []
	I0918 19:38:50.576553   15685 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19667-7499/.minikube/profiles/addons-457129/client.crt ...
	I0918 19:38:50.576582   15685 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19667-7499/.minikube/profiles/addons-457129/client.crt: {Name:mk85a27d5b4acd7703a7376736958e4cff952462 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0918 19:38:50.576765   15685 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19667-7499/.minikube/profiles/addons-457129/client.key ...
	I0918 19:38:50.576778   15685 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19667-7499/.minikube/profiles/addons-457129/client.key: {Name:mk393b0aa6dd1883bc7a2f069bf6fc8062dbec33 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0918 19:38:50.576876   15685 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/19667-7499/.minikube/profiles/addons-457129/apiserver.key.fa6788af
	I0918 19:38:50.576917   15685 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19667-7499/.minikube/profiles/addons-457129/apiserver.crt.fa6788af with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.49.2]
	I0918 19:38:50.707997   15685 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19667-7499/.minikube/profiles/addons-457129/apiserver.crt.fa6788af ...
	I0918 19:38:50.708027   15685 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19667-7499/.minikube/profiles/addons-457129/apiserver.crt.fa6788af: {Name:mkf0afe480165b2f1d4a54167c97cac2b2c240fd Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0918 19:38:50.708205   15685 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19667-7499/.minikube/profiles/addons-457129/apiserver.key.fa6788af ...
	I0918 19:38:50.708219   15685 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19667-7499/.minikube/profiles/addons-457129/apiserver.key.fa6788af: {Name:mka1ab6cac40a4ef83bdeacc31672241483f72a6 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0918 19:38:50.708312   15685 certs.go:381] copying /home/jenkins/minikube-integration/19667-7499/.minikube/profiles/addons-457129/apiserver.crt.fa6788af -> /home/jenkins/minikube-integration/19667-7499/.minikube/profiles/addons-457129/apiserver.crt
	I0918 19:38:50.708387   15685 certs.go:385] copying /home/jenkins/minikube-integration/19667-7499/.minikube/profiles/addons-457129/apiserver.key.fa6788af -> /home/jenkins/minikube-integration/19667-7499/.minikube/profiles/addons-457129/apiserver.key
	I0918 19:38:50.708431   15685 certs.go:363] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/19667-7499/.minikube/profiles/addons-457129/proxy-client.key
	I0918 19:38:50.708448   15685 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19667-7499/.minikube/profiles/addons-457129/proxy-client.crt with IP's: []
	I0918 19:38:50.949639   15685 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19667-7499/.minikube/profiles/addons-457129/proxy-client.crt ...
	I0918 19:38:50.949667   15685 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19667-7499/.minikube/profiles/addons-457129/proxy-client.crt: {Name:mkb4d4d2438a9a9ee90325c2bd1e2af985f610c8 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0918 19:38:50.949822   15685 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19667-7499/.minikube/profiles/addons-457129/proxy-client.key ...
	I0918 19:38:50.949832   15685 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19667-7499/.minikube/profiles/addons-457129/proxy-client.key: {Name:mk17bca11dccc60e1d889659c97451eeaccd6427 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0918 19:38:50.950001   15685 certs.go:484] found cert: /home/jenkins/minikube-integration/19667-7499/.minikube/certs/ca-key.pem (1675 bytes)
	I0918 19:38:50.950034   15685 certs.go:484] found cert: /home/jenkins/minikube-integration/19667-7499/.minikube/certs/ca.pem (1082 bytes)
	I0918 19:38:50.950057   15685 certs.go:484] found cert: /home/jenkins/minikube-integration/19667-7499/.minikube/certs/cert.pem (1123 bytes)
	I0918 19:38:50.950080   15685 certs.go:484] found cert: /home/jenkins/minikube-integration/19667-7499/.minikube/certs/key.pem (1679 bytes)
	I0918 19:38:50.950672   15685 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19667-7499/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0918 19:38:50.972043   15685 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19667-7499/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0918 19:38:50.993090   15685 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19667-7499/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0918 19:38:51.014045   15685 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19667-7499/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0918 19:38:51.036090   15685 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19667-7499/.minikube/profiles/addons-457129/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1419 bytes)
	I0918 19:38:51.057857   15685 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19667-7499/.minikube/profiles/addons-457129/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0918 19:38:51.079015   15685 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19667-7499/.minikube/profiles/addons-457129/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0918 19:38:51.101088   15685 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19667-7499/.minikube/profiles/addons-457129/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0918 19:38:51.122402   15685 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19667-7499/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0918 19:38:51.143906   15685 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0918 19:38:51.159528   15685 ssh_runner.go:195] Run: openssl version
	I0918 19:38:51.164365   15685 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0918 19:38:51.172458   15685 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0918 19:38:51.175317   15685 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Sep 18 19:38 /usr/share/ca-certificates/minikubeCA.pem
	I0918 19:38:51.175361   15685 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0918 19:38:51.181376   15685 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0918 19:38:51.189826   15685 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0918 19:38:51.192755   15685 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0918 19:38:51.192799   15685 kubeadm.go:392] StartCluster: {Name:addons-457129 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726589491-19662@sha256:6370b9fec173944088c2d87d44b01819c0ec611a83d9e2f38d36352dff8121a4 Memory:4000 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:addons-457129 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0918 19:38:51.192909   15685 ssh_runner.go:195] Run: docker ps --filter status=paused --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I0918 19:38:51.209644   15685 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0918 19:38:51.217575   15685 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0918 19:38:51.225734   15685 kubeadm.go:214] ignoring SystemVerification for kubeadm because of docker driver
	I0918 19:38:51.225784   15685 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0918 19:38:51.233303   15685 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0918 19:38:51.233323   15685 kubeadm.go:157] found existing configuration files:
	
	I0918 19:38:51.233358   15685 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0918 19:38:51.240945   15685 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0918 19:38:51.241008   15685 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0918 19:38:51.247943   15685 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0918 19:38:51.255143   15685 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0918 19:38:51.255188   15685 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0918 19:38:51.262452   15685 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0918 19:38:51.270149   15685 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0918 19:38:51.270201   15685 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0918 19:38:51.277906   15685 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0918 19:38:51.285416   15685 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0918 19:38:51.285482   15685 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0918 19:38:51.292664   15685 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I0918 19:38:51.324592   15685 kubeadm.go:310] W0918 19:38:51.323861    1925 common.go:101] your configuration file uses a deprecated API spec: "kubeadm.k8s.io/v1beta3" (kind: "ClusterConfiguration"). Please use 'kubeadm config migrate --old-config old.yaml --new-config new.yaml', which will write the new, similar spec using a newer API version.
	I0918 19:38:51.325066   15685 kubeadm.go:310] W0918 19:38:51.324532    1925 common.go:101] your configuration file uses a deprecated API spec: "kubeadm.k8s.io/v1beta3" (kind: "InitConfiguration"). Please use 'kubeadm config migrate --old-config old.yaml --new-config new.yaml', which will write the new, similar spec using a newer API version.
	I0918 19:38:51.346983   15685 kubeadm.go:310] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1069-gcp\n", err: exit status 1
	I0918 19:38:51.396333   15685 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0918 19:39:00.061702   15685 kubeadm.go:310] [init] Using Kubernetes version: v1.31.1
	I0918 19:39:00.061774   15685 kubeadm.go:310] [preflight] Running pre-flight checks
	I0918 19:39:00.061883   15685 kubeadm.go:310] [preflight] The system verification failed. Printing the output from the verification:
	I0918 19:39:00.061937   15685 kubeadm.go:310] KERNEL_VERSION: 5.15.0-1069-gcp
	I0918 19:39:00.061968   15685 kubeadm.go:310] OS: Linux
	I0918 19:39:00.062026   15685 kubeadm.go:310] CGROUPS_CPU: enabled
	I0918 19:39:00.062111   15685 kubeadm.go:310] CGROUPS_CPUACCT: enabled
	I0918 19:39:00.062168   15685 kubeadm.go:310] CGROUPS_CPUSET: enabled
	I0918 19:39:00.062222   15685 kubeadm.go:310] CGROUPS_DEVICES: enabled
	I0918 19:39:00.062272   15685 kubeadm.go:310] CGROUPS_FREEZER: enabled
	I0918 19:39:00.062326   15685 kubeadm.go:310] CGROUPS_MEMORY: enabled
	I0918 19:39:00.062393   15685 kubeadm.go:310] CGROUPS_PIDS: enabled
	I0918 19:39:00.062475   15685 kubeadm.go:310] CGROUPS_HUGETLB: enabled
	I0918 19:39:00.062548   15685 kubeadm.go:310] CGROUPS_BLKIO: enabled
	I0918 19:39:00.062643   15685 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0918 19:39:00.062793   15685 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0918 19:39:00.062905   15685 kubeadm.go:310] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I0918 19:39:00.062995   15685 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0918 19:39:00.064520   15685 out.go:235]   - Generating certificates and keys ...
	I0918 19:39:00.064606   15685 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0918 19:39:00.064683   15685 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0918 19:39:00.064771   15685 kubeadm.go:310] [certs] Generating "apiserver-kubelet-client" certificate and key
	I0918 19:39:00.064851   15685 kubeadm.go:310] [certs] Generating "front-proxy-ca" certificate and key
	I0918 19:39:00.064965   15685 kubeadm.go:310] [certs] Generating "front-proxy-client" certificate and key
	I0918 19:39:00.065041   15685 kubeadm.go:310] [certs] Generating "etcd/ca" certificate and key
	I0918 19:39:00.065120   15685 kubeadm.go:310] [certs] Generating "etcd/server" certificate and key
	I0918 19:39:00.065250   15685 kubeadm.go:310] [certs] etcd/server serving cert is signed for DNS names [addons-457129 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
	I0918 19:39:00.065331   15685 kubeadm.go:310] [certs] Generating "etcd/peer" certificate and key
	I0918 19:39:00.065497   15685 kubeadm.go:310] [certs] etcd/peer serving cert is signed for DNS names [addons-457129 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
	I0918 19:39:00.065573   15685 kubeadm.go:310] [certs] Generating "etcd/healthcheck-client" certificate and key
	I0918 19:39:00.065657   15685 kubeadm.go:310] [certs] Generating "apiserver-etcd-client" certificate and key
	I0918 19:39:00.065737   15685 kubeadm.go:310] [certs] Generating "sa" key and public key
	I0918 19:39:00.065829   15685 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0918 19:39:00.065915   15685 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0918 19:39:00.066005   15685 kubeadm.go:310] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I0918 19:39:00.066092   15685 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0918 19:39:00.066191   15685 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0918 19:39:00.066263   15685 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0918 19:39:00.066389   15685 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0918 19:39:00.066467   15685 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0918 19:39:00.068128   15685 out.go:235]   - Booting up control plane ...
	I0918 19:39:00.068203   15685 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0918 19:39:00.068295   15685 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0918 19:39:00.068382   15685 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0918 19:39:00.068472   15685 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0918 19:39:00.068555   15685 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0918 19:39:00.068599   15685 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0918 19:39:00.068718   15685 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I0918 19:39:00.068817   15685 kubeadm.go:310] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I0918 19:39:00.068871   15685 kubeadm.go:310] [kubelet-check] The kubelet is healthy after 501.056589ms
	I0918 19:39:00.068964   15685 kubeadm.go:310] [api-check] Waiting for a healthy API server. This can take up to 4m0s
	I0918 19:39:00.069015   15685 kubeadm.go:310] [api-check] The API server is healthy after 4.502057368s
	I0918 19:39:00.069105   15685 kubeadm.go:310] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0918 19:39:00.069219   15685 kubeadm.go:310] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0918 19:39:00.069269   15685 kubeadm.go:310] [upload-certs] Skipping phase. Please see --upload-certs
	I0918 19:39:00.069446   15685 kubeadm.go:310] [mark-control-plane] Marking the node addons-457129 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0918 19:39:00.069532   15685 kubeadm.go:310] [bootstrap-token] Using token: hdrfj8.iw5e001tr9gn1zt0
	I0918 19:39:00.071170   15685 out.go:235]   - Configuring RBAC rules ...
	I0918 19:39:00.071297   15685 kubeadm.go:310] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0918 19:39:00.071369   15685 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0918 19:39:00.071518   15685 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0918 19:39:00.071670   15685 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0918 19:39:00.071805   15685 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0918 19:39:00.071918   15685 kubeadm.go:310] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0918 19:39:00.072067   15685 kubeadm.go:310] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0918 19:39:00.072117   15685 kubeadm.go:310] [addons] Applied essential addon: CoreDNS
	I0918 19:39:00.072157   15685 kubeadm.go:310] [addons] Applied essential addon: kube-proxy
	I0918 19:39:00.072163   15685 kubeadm.go:310] 
	I0918 19:39:00.072214   15685 kubeadm.go:310] Your Kubernetes control-plane has initialized successfully!
	I0918 19:39:00.072221   15685 kubeadm.go:310] 
	I0918 19:39:00.072283   15685 kubeadm.go:310] To start using your cluster, you need to run the following as a regular user:
	I0918 19:39:00.072289   15685 kubeadm.go:310] 
	I0918 19:39:00.072313   15685 kubeadm.go:310]   mkdir -p $HOME/.kube
	I0918 19:39:00.072379   15685 kubeadm.go:310]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0918 19:39:00.072433   15685 kubeadm.go:310]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0918 19:39:00.072439   15685 kubeadm.go:310] 
	I0918 19:39:00.072483   15685 kubeadm.go:310] Alternatively, if you are the root user, you can run:
	I0918 19:39:00.072492   15685 kubeadm.go:310] 
	I0918 19:39:00.072554   15685 kubeadm.go:310]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0918 19:39:00.072567   15685 kubeadm.go:310] 
	I0918 19:39:00.072639   15685 kubeadm.go:310] You should now deploy a pod network to the cluster.
	I0918 19:39:00.072747   15685 kubeadm.go:310] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0918 19:39:00.072850   15685 kubeadm.go:310]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0918 19:39:00.072863   15685 kubeadm.go:310] 
	I0918 19:39:00.072976   15685 kubeadm.go:310] You can now join any number of control-plane nodes by copying certificate authorities
	I0918 19:39:00.073058   15685 kubeadm.go:310] and service account keys on each node and then running the following as root:
	I0918 19:39:00.073067   15685 kubeadm.go:310] 
	I0918 19:39:00.073143   15685 kubeadm.go:310]   kubeadm join control-plane.minikube.internal:8443 --token hdrfj8.iw5e001tr9gn1zt0 \
	I0918 19:39:00.073237   15685 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:895a1a4b821247c3b3bc3d8674fdaf9ae4007075fb12e882893c62d1438babd8 \
	I0918 19:39:00.073273   15685 kubeadm.go:310] 	--control-plane 
	I0918 19:39:00.073282   15685 kubeadm.go:310] 
	I0918 19:39:00.073401   15685 kubeadm.go:310] Then you can join any number of worker nodes by running the following on each as root:
	I0918 19:39:00.073411   15685 kubeadm.go:310] 
	I0918 19:39:00.073523   15685 kubeadm.go:310] kubeadm join control-plane.minikube.internal:8443 --token hdrfj8.iw5e001tr9gn1zt0 \
	I0918 19:39:00.073680   15685 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:895a1a4b821247c3b3bc3d8674fdaf9ae4007075fb12e882893c62d1438babd8 
	I0918 19:39:00.073695   15685 cni.go:84] Creating CNI manager for ""
	I0918 19:39:00.073716   15685 cni.go:158] "docker" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0918 19:39:00.075247   15685 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0918 19:39:00.076352   15685 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0918 19:39:00.084706   15685 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
	I0918 19:39:00.100640   15685 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0918 19:39:00.100700   15685 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0918 19:39:00.100704   15685 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes addons-457129 minikube.k8s.io/updated_at=2024_09_18T19_39_00_0700 minikube.k8s.io/version=v1.34.0 minikube.k8s.io/commit=85073601a832bd4bbda5d11fa91feafff6ec6b91 minikube.k8s.io/name=addons-457129 minikube.k8s.io/primary=true
	I0918 19:39:00.209837   15685 ops.go:34] apiserver oom_adj: -16
	I0918 19:39:00.209978   15685 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0918 19:39:00.710131   15685 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0918 19:39:01.210809   15685 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0918 19:39:01.710745   15685 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0918 19:39:02.210820   15685 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0918 19:39:02.710989   15685 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0918 19:39:03.210887   15685 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0918 19:39:03.710018   15685 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0918 19:39:04.210129   15685 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0918 19:39:04.272501   15685 kubeadm.go:1113] duration metric: took 4.17185943s to wait for elevateKubeSystemPrivileges
	I0918 19:39:04.272541   15685 kubeadm.go:394] duration metric: took 13.079743528s to StartCluster
	I0918 19:39:04.272560   15685 settings.go:142] acquiring lock: {Name:mk761415fdfe0253120f9b1dbb6bb2fd172fca68 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0918 19:39:04.272688   15685 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/19667-7499/kubeconfig
	I0918 19:39:04.273162   15685 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19667-7499/kubeconfig: {Name:mk31083525ef7f1419e6532910512baf7d24e908 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0918 19:39:04.273389   15685 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0918 19:39:04.273418   15685 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0918 19:39:04.273474   15685 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:true csi-hostpath-driver:true dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:true gvisor:false headlamp:false helm-tiller:true inaccel:false ingress:true ingress-dns:true inspektor-gadget:true istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:true nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:true registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:true volcano:true volumesnapshots:true yakd:true]
	I0918 19:39:04.273590   15685 addons.go:69] Setting yakd=true in profile "addons-457129"
	I0918 19:39:04.273595   15685 addons.go:69] Setting gcp-auth=true in profile "addons-457129"
	I0918 19:39:04.273617   15685 addons.go:69] Setting cloud-spanner=true in profile "addons-457129"
	I0918 19:39:04.273628   15685 addons.go:69] Setting storage-provisioner=true in profile "addons-457129"
	I0918 19:39:04.273632   15685 addons.go:234] Setting addon cloud-spanner=true in "addons-457129"
	I0918 19:39:04.273639   15685 mustload.go:65] Loading cluster: addons-457129
	I0918 19:39:04.273647   15685 addons.go:69] Setting storage-provisioner-rancher=true in profile "addons-457129"
	I0918 19:39:04.273635   15685 addons.go:69] Setting nvidia-device-plugin=true in profile "addons-457129"
	I0918 19:39:04.273658   15685 config.go:182] Loaded profile config "addons-457129": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I0918 19:39:04.273666   15685 host.go:66] Checking if "addons-457129" exists ...
	I0918 19:39:04.273671   15685 addons.go:69] Setting volcano=true in profile "addons-457129"
	I0918 19:39:04.273682   15685 addons.go:69] Setting ingress=true in profile "addons-457129"
	I0918 19:39:04.273688   15685 addons.go:234] Setting addon volcano=true in "addons-457129"
	I0918 19:39:04.273662   15685 addons_storage_classes.go:33] enableOrDisableStorageClasses storage-provisioner-rancher=true on "addons-457129"
	I0918 19:39:04.273706   15685 addons.go:69] Setting inspektor-gadget=true in profile "addons-457129"
	I0918 19:39:04.273721   15685 host.go:66] Checking if "addons-457129" exists ...
	I0918 19:39:04.273728   15685 addons.go:69] Setting metrics-server=true in profile "addons-457129"
	I0918 19:39:04.273711   15685 addons.go:69] Setting default-storageclass=true in profile "addons-457129"
	I0918 19:39:04.273738   15685 addons.go:234] Setting addon metrics-server=true in "addons-457129"
	I0918 19:39:04.273762   15685 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "addons-457129"
	I0918 19:39:04.273770   15685 host.go:66] Checking if "addons-457129" exists ...
	I0918 19:39:04.273833   15685 config.go:182] Loaded profile config "addons-457129": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I0918 19:39:04.274030   15685 cli_runner.go:164] Run: docker container inspect addons-457129 --format={{.State.Status}}
	I0918 19:39:04.274034   15685 cli_runner.go:164] Run: docker container inspect addons-457129 --format={{.State.Status}}
	I0918 19:39:04.274074   15685 cli_runner.go:164] Run: docker container inspect addons-457129 --format={{.State.Status}}
	I0918 19:39:04.274184   15685 cli_runner.go:164] Run: docker container inspect addons-457129 --format={{.State.Status}}
	I0918 19:39:04.274202   15685 cli_runner.go:164] Run: docker container inspect addons-457129 --format={{.State.Status}}
	I0918 19:39:04.274232   15685 cli_runner.go:164] Run: docker container inspect addons-457129 --format={{.State.Status}}
	I0918 19:39:04.273672   15685 addons.go:69] Setting helm-tiller=true in profile "addons-457129"
	I0918 19:39:04.274401   15685 addons.go:234] Setting addon helm-tiller=true in "addons-457129"
	I0918 19:39:04.274442   15685 host.go:66] Checking if "addons-457129" exists ...
	I0918 19:39:04.273692   15685 addons.go:234] Setting addon ingress=true in "addons-457129"
	I0918 19:39:04.274519   15685 host.go:66] Checking if "addons-457129" exists ...
	I0918 19:39:04.273640   15685 addons.go:234] Setting addon storage-provisioner=true in "addons-457129"
	I0918 19:39:04.274666   15685 host.go:66] Checking if "addons-457129" exists ...
	I0918 19:39:04.274910   15685 cli_runner.go:164] Run: docker container inspect addons-457129 --format={{.State.Status}}
	I0918 19:39:04.273611   15685 addons.go:234] Setting addon yakd=true in "addons-457129"
	I0918 19:39:04.275014   15685 host.go:66] Checking if "addons-457129" exists ...
	I0918 19:39:04.275017   15685 cli_runner.go:164] Run: docker container inspect addons-457129 --format={{.State.Status}}
	I0918 19:39:04.273607   15685 addons.go:69] Setting csi-hostpath-driver=true in profile "addons-457129"
	I0918 19:39:04.275307   15685 addons.go:234] Setting addon csi-hostpath-driver=true in "addons-457129"
	I0918 19:39:04.275364   15685 host.go:66] Checking if "addons-457129" exists ...
	I0918 19:39:04.273620   15685 addons.go:69] Setting registry=true in profile "addons-457129"
	I0918 19:39:04.275423   15685 addons.go:69] Setting volumesnapshots=true in profile "addons-457129"
	I0918 19:39:04.273700   15685 addons.go:69] Setting ingress-dns=true in profile "addons-457129"
	I0918 19:39:04.275579   15685 addons.go:234] Setting addon ingress-dns=true in "addons-457129"
	I0918 19:39:04.275663   15685 addons.go:234] Setting addon registry=true in "addons-457129"
	I0918 19:39:04.275691   15685 host.go:66] Checking if "addons-457129" exists ...
	I0918 19:39:04.275800   15685 cli_runner.go:164] Run: docker container inspect addons-457129 --format={{.State.Status}}
	I0918 19:39:04.273664   15685 addons.go:234] Setting addon nvidia-device-plugin=true in "addons-457129"
	I0918 19:39:04.275895   15685 host.go:66] Checking if "addons-457129" exists ...
	I0918 19:39:04.276155   15685 cli_runner.go:164] Run: docker container inspect addons-457129 --format={{.State.Status}}
	I0918 19:39:04.276347   15685 host.go:66] Checking if "addons-457129" exists ...
	I0918 19:39:04.276418   15685 cli_runner.go:164] Run: docker container inspect addons-457129 --format={{.State.Status}}
	I0918 19:39:04.305442   15685 out.go:177] * Verifying Kubernetes components...
	I0918 19:39:04.305740   15685 cli_runner.go:164] Run: docker container inspect addons-457129 --format={{.State.Status}}
	I0918 19:39:04.275466   15685 addons.go:234] Setting addon volumesnapshots=true in "addons-457129"
	I0918 19:39:04.306731   15685 host.go:66] Checking if "addons-457129" exists ...
	I0918 19:39:04.307319   15685 cli_runner.go:164] Run: docker container inspect addons-457129 --format={{.State.Status}}
	I0918 19:39:04.273721   15685 addons.go:234] Setting addon inspektor-gadget=true in "addons-457129"
	I0918 19:39:04.307506   15685 host.go:66] Checking if "addons-457129" exists ...
	I0918 19:39:04.307826   15685 cli_runner.go:164] Run: docker container inspect addons-457129 --format={{.State.Status}}
	I0918 19:39:04.310809   15685 cli_runner.go:164] Run: docker container inspect addons-457129 --format={{.State.Status}}
	I0918 19:39:04.311995   15685 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0918 19:39:04.313080   15685 addons.go:234] Setting addon storage-provisioner-rancher=true in "addons-457129"
	I0918 19:39:04.313146   15685 host.go:66] Checking if "addons-457129" exists ...
	I0918 19:39:04.313696   15685 cli_runner.go:164] Run: docker container inspect addons-457129 --format={{.State.Status}}
	I0918 19:39:04.318322   15685 cli_runner.go:164] Run: docker container inspect addons-457129 --format={{.State.Status}}
	I0918 19:39:04.326224   15685 out.go:177]   - Using image ghcr.io/helm/tiller:v2.17.0
	I0918 19:39:04.328163   15685 addons.go:431] installing /etc/kubernetes/addons/helm-tiller-dp.yaml
	I0918 19:39:04.328186   15685 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/helm-tiller-dp.yaml (2422 bytes)
	I0918 19:39:04.328252   15685 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-457129
	I0918 19:39:04.328741   15685 out.go:177]   - Using image gcr.io/k8s-minikube/kube-registry-proxy:0.0.6
	I0918 19:39:04.330198   15685 addons.go:234] Setting addon default-storageclass=true in "addons-457129"
	I0918 19:39:04.330237   15685 host.go:66] Checking if "addons-457129" exists ...
	I0918 19:39:04.330799   15685 cli_runner.go:164] Run: docker container inspect addons-457129 --format={{.State.Status}}
	I0918 19:39:04.335513   15685 out.go:177]   - Using image docker.io/registry:2.8.3
	I0918 19:39:04.335673   15685 out.go:177]   - Using image docker.io/marcnuri/yakd:0.0.5
	I0918 19:39:04.337163   15685 out.go:177]   - Using image gcr.io/cloud-spanner-emulator/emulator:1.5.23
	I0918 19:39:04.337228   15685 addons.go:431] installing /etc/kubernetes/addons/yakd-ns.yaml
	I0918 19:39:04.337240   15685 ssh_runner.go:362] scp yakd/yakd-ns.yaml --> /etc/kubernetes/addons/yakd-ns.yaml (171 bytes)
	I0918 19:39:04.337242   15685 addons.go:431] installing /etc/kubernetes/addons/registry-rc.yaml
	I0918 19:39:04.337256   15685 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-rc.yaml (860 bytes)
	I0918 19:39:04.337303   15685 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-457129
	I0918 19:39:04.337322   15685 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-457129
	I0918 19:39:04.339036   15685 out.go:177]   - Using image docker.io/volcanosh/vc-webhook-manager:v1.9.0
	I0918 19:39:04.339282   15685 addons.go:431] installing /etc/kubernetes/addons/deployment.yaml
	I0918 19:39:04.339298   15685 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/deployment.yaml (1004 bytes)
	I0918 19:39:04.339355   15685 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-457129
	I0918 19:39:04.342053   15685 out.go:177]   - Using image docker.io/volcanosh/vc-controller-manager:v1.9.0
	I0918 19:39:04.344416   15685 out.go:177]   - Using image docker.io/volcanosh/vc-scheduler:v1.9.0
	I0918 19:39:04.349102   15685 host.go:66] Checking if "addons-457129" exists ...
	I0918 19:39:04.350742   15685 out.go:177]   - Using image registry.k8s.io/ingress-nginx/controller:v1.11.2
	I0918 19:39:04.352177   15685 out.go:177]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.4.3
	I0918 19:39:04.353507   15685 out.go:177]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.4.3
	I0918 19:39:04.355285   15685 addons.go:431] installing /etc/kubernetes/addons/ingress-deploy.yaml
	I0918 19:39:04.355317   15685 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-deploy.yaml (16078 bytes)
	I0918 19:39:04.355372   15685 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-457129
	I0918 19:39:04.356757   15685 addons.go:431] installing /etc/kubernetes/addons/volcano-deployment.yaml
	I0918 19:39:04.356786   15685 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/volcano-deployment.yaml (434001 bytes)
	I0918 19:39:04.356849   15685 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-457129
	I0918 19:39:04.364567   15685 out.go:177]   - Using image registry.k8s.io/metrics-server/metrics-server:v0.7.2
	I0918 19:39:04.365885   15685 out.go:177]   - Using image registry.k8s.io/sig-storage/snapshot-controller:v6.1.0
	I0918 19:39:04.365914   15685 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-node-driver-registrar:v2.6.0
	I0918 19:39:04.366023   15685 addons.go:431] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0918 19:39:04.366036   15685 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0918 19:39:04.366112   15685 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-457129
	I0918 19:39:04.367623   15685 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml
	I0918 19:39:04.367644   15685 ssh_runner.go:362] scp volumesnapshots/csi-hostpath-snapshotclass.yaml --> /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml (934 bytes)
	I0918 19:39:04.367724   15685 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-457129
	I0918 19:39:04.367849   15685 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/19667-7499/.minikube/machines/addons-457129/id_rsa Username:docker}
	I0918 19:39:04.369798   15685 out.go:177]   - Using image docker.io/rancher/local-path-provisioner:v0.0.22
	I0918 19:39:04.370513   15685 out.go:177]   - Using image registry.k8s.io/sig-storage/hostpathplugin:v1.9.0
	I0918 19:39:04.372672   15685 out.go:177]   - Using image docker.io/busybox:stable
	I0918 19:39:04.372733   15685 out.go:177]   - Using image registry.k8s.io/sig-storage/livenessprobe:v2.8.0
	I0918 19:39:04.375162   15685 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0918 19:39:04.375343   15685 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner-rancher.yaml
	I0918 19:39:04.375366   15685 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner-rancher.yaml (3113 bytes)
	I0918 19:39:04.375467   15685 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-457129
	I0918 19:39:04.376622   15685 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0918 19:39:04.376652   15685 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0918 19:39:04.376708   15685 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-457129
	I0918 19:39:04.376882   15685 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-resizer:v1.6.0
	I0918 19:39:04.378271   15685 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-snapshotter:v6.1.0
	I0918 19:39:04.398582   15685 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I0918 19:39:04.398605   15685 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0918 19:39:04.398658   15685 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-457129
	I0918 19:39:04.399399   15685 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/19667-7499/.minikube/machines/addons-457129/id_rsa Username:docker}
	I0918 19:39:04.400566   15685 out.go:177]   - Using image ghcr.io/inspektor-gadget/inspektor-gadget:v0.32.0
	I0918 19:39:04.401268   15685 out.go:177]   - Using image nvcr.io/nvidia/k8s-device-plugin:v0.16.2
	I0918 19:39:04.401381   15685 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-provisioner:v3.3.0
	I0918 19:39:04.401718   15685 out.go:177]   - Using image gcr.io/k8s-minikube/minikube-ingress-dns:0.0.3
	I0918 19:39:04.401724   15685 addons.go:431] installing /etc/kubernetes/addons/ig-namespace.yaml
	I0918 19:39:04.401815   15685 ssh_runner.go:362] scp inspektor-gadget/ig-namespace.yaml --> /etc/kubernetes/addons/ig-namespace.yaml (55 bytes)
	I0918 19:39:04.401883   15685 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-457129
	I0918 19:39:04.403383   15685 addons.go:431] installing /etc/kubernetes/addons/nvidia-device-plugin.yaml
	I0918 19:39:04.403397   15685 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/nvidia-device-plugin.yaml (1966 bytes)
	I0918 19:39:04.403439   15685 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-457129
	I0918 19:39:04.403386   15685 addons.go:431] installing /etc/kubernetes/addons/ingress-dns-pod.yaml
	I0918 19:39:04.403479   15685 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-dns-pod.yaml (2442 bytes)
	I0918 19:39:04.403525   15685 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-457129
	I0918 19:39:04.408977   15685 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-attacher:v4.0.0
	I0918 19:39:04.411423   15685 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-external-health-monitor-controller:v0.7.0
	I0918 19:39:04.415139   15685 addons.go:431] installing /etc/kubernetes/addons/rbac-external-attacher.yaml
	I0918 19:39:04.415165   15685 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-attacher.yaml --> /etc/kubernetes/addons/rbac-external-attacher.yaml (3073 bytes)
	I0918 19:39:04.415233   15685 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-457129
	I0918 19:39:04.416092   15685 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/19667-7499/.minikube/machines/addons-457129/id_rsa Username:docker}
	I0918 19:39:04.416241   15685 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/19667-7499/.minikube/machines/addons-457129/id_rsa Username:docker}
	I0918 19:39:04.421652   15685 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/19667-7499/.minikube/machines/addons-457129/id_rsa Username:docker}
	I0918 19:39:04.437036   15685 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/19667-7499/.minikube/machines/addons-457129/id_rsa Username:docker}
	I0918 19:39:04.438471   15685 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/19667-7499/.minikube/machines/addons-457129/id_rsa Username:docker}
	I0918 19:39:04.452306   15685 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/19667-7499/.minikube/machines/addons-457129/id_rsa Username:docker}
	I0918 19:39:04.457051   15685 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/19667-7499/.minikube/machines/addons-457129/id_rsa Username:docker}
	I0918 19:39:04.458221   15685 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/19667-7499/.minikube/machines/addons-457129/id_rsa Username:docker}
	I0918 19:39:04.458505   15685 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/19667-7499/.minikube/machines/addons-457129/id_rsa Username:docker}
	I0918 19:39:04.460264   15685 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/19667-7499/.minikube/machines/addons-457129/id_rsa Username:docker}
	I0918 19:39:04.465918   15685 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/19667-7499/.minikube/machines/addons-457129/id_rsa Username:docker}
	I0918 19:39:04.467617   15685 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/19667-7499/.minikube/machines/addons-457129/id_rsa Username:docker}
	I0918 19:39:04.468330   15685 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/19667-7499/.minikube/machines/addons-457129/id_rsa Username:docker}
	W0918 19:39:04.506023   15685 sshutil.go:64] dial failure (will retry): ssh: handshake failed: EOF
	I0918 19:39:04.506062   15685 retry.go:31] will retry after 353.295676ms: ssh: handshake failed: EOF
	W0918 19:39:04.506180   15685 sshutil.go:64] dial failure (will retry): ssh: handshake failed: EOF
	I0918 19:39:04.506192   15685 retry.go:31] will retry after 209.760713ms: ssh: handshake failed: EOF
	W0918 19:39:04.506271   15685 sshutil.go:64] dial failure (will retry): ssh: handshake failed: EOF
	I0918 19:39:04.506278   15685 retry.go:31] will retry after 236.682396ms: ssh: handshake failed: EOF
	I0918 19:39:04.523778   15685 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.49.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I0918 19:39:04.705792   15685 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0918 19:39:04.710358   15685 addons.go:431] installing /etc/kubernetes/addons/helm-tiller-rbac.yaml
	I0918 19:39:04.710436   15685 ssh_runner.go:362] scp helm-tiller/helm-tiller-rbac.yaml --> /etc/kubernetes/addons/helm-tiller-rbac.yaml (1188 bytes)
	I0918 19:39:04.912763   15685 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/deployment.yaml
	I0918 19:39:04.923074   15685 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0918 19:39:05.006330   15685 addons.go:431] installing /etc/kubernetes/addons/helm-tiller-svc.yaml
	I0918 19:39:05.006360   15685 ssh_runner.go:362] scp helm-tiller/helm-tiller-svc.yaml --> /etc/kubernetes/addons/helm-tiller-svc.yaml (951 bytes)
	I0918 19:39:05.016987   15685 addons.go:431] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml
	I0918 19:39:05.017014   15685 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshotclasses.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml (6471 bytes)
	I0918 19:39:05.022209   15685 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml
	I0918 19:39:05.107830   15685 addons.go:431] installing /etc/kubernetes/addons/yakd-sa.yaml
	I0918 19:39:05.107914   15685 ssh_runner.go:362] scp yakd/yakd-sa.yaml --> /etc/kubernetes/addons/yakd-sa.yaml (247 bytes)
	I0918 19:39:05.110900   15685 addons.go:431] installing /etc/kubernetes/addons/registry-svc.yaml
	I0918 19:39:05.110981   15685 ssh_runner.go:362] scp registry/registry-svc.yaml --> /etc/kubernetes/addons/registry-svc.yaml (398 bytes)
	I0918 19:39:05.113168   15685 addons.go:431] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0918 19:39:05.113234   15685 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1907 bytes)
	I0918 19:39:05.123987   15685 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner-rancher.yaml
	I0918 19:39:05.207086   15685 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml
	I0918 19:39:05.209653   15685 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/nvidia-device-plugin.yaml
	I0918 19:39:05.209905   15685 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/volcano-deployment.yaml
	I0918 19:39:05.212920   15685 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/helm-tiller-dp.yaml -f /etc/kubernetes/addons/helm-tiller-rbac.yaml -f /etc/kubernetes/addons/helm-tiller-svc.yaml
	I0918 19:39:05.306930   15685 addons.go:431] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml
	I0918 19:39:05.306970   15685 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshotcontents.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml (23126 bytes)
	I0918 19:39:05.313728   15685 addons.go:431] installing /etc/kubernetes/addons/registry-proxy.yaml
	I0918 19:39:05.313771   15685 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-proxy.yaml (947 bytes)
	I0918 19:39:05.406833   15685 addons.go:431] installing /etc/kubernetes/addons/yakd-crb.yaml
	I0918 19:39:05.406861   15685 ssh_runner.go:362] scp yakd/yakd-crb.yaml --> /etc/kubernetes/addons/yakd-crb.yaml (422 bytes)
	I0918 19:39:05.507324   15685 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0918 19:39:05.607586   15685 addons.go:431] installing /etc/kubernetes/addons/rbac-hostpath.yaml
	I0918 19:39:05.607680   15685 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-hostpath.yaml --> /etc/kubernetes/addons/rbac-hostpath.yaml (4266 bytes)
	I0918 19:39:05.706835   15685 addons.go:431] installing /etc/kubernetes/addons/ig-serviceaccount.yaml
	I0918 19:39:05.706942   15685 ssh_runner.go:362] scp inspektor-gadget/ig-serviceaccount.yaml --> /etc/kubernetes/addons/ig-serviceaccount.yaml (80 bytes)
	I0918 19:39:05.712418   15685 addons.go:431] installing /etc/kubernetes/addons/yakd-svc.yaml
	I0918 19:39:05.712516   15685 ssh_runner.go:362] scp yakd/yakd-svc.yaml --> /etc/kubernetes/addons/yakd-svc.yaml (412 bytes)
	I0918 19:39:05.728315   15685 addons.go:431] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0918 19:39:05.728408   15685 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0918 19:39:05.809605   15685 addons.go:431] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml
	I0918 19:39:05.809695   15685 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshots.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml (19582 bytes)
	I0918 19:39:06.122239   15685 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml
	I0918 19:39:06.220550   15685 addons.go:431] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0918 19:39:06.220631   15685 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0918 19:39:06.405935   15685 addons.go:431] installing /etc/kubernetes/addons/yakd-dp.yaml
	I0918 19:39:06.406019   15685 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/yakd-dp.yaml (2017 bytes)
	I0918 19:39:06.424161   15685 ssh_runner.go:235] Completed: sudo systemctl start kubelet: (1.718276825s)
	I0918 19:39:06.424291   15685 ssh_runner.go:235] Completed: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.49.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -": (1.900315277s)
	I0918 19:39:06.424453   15685 start.go:971] {"host.minikube.internal": 192.168.49.1} host record injected into CoreDNS's ConfigMap
	I0918 19:39:06.426516   15685 node_ready.go:35] waiting up to 6m0s for node "addons-457129" to be "Ready" ...
	I0918 19:39:06.508222   15685 node_ready.go:49] node "addons-457129" has status "Ready":"True"
	I0918 19:39:06.508315   15685 node_ready.go:38] duration metric: took 81.629275ms for node "addons-457129" to be "Ready" ...
	I0918 19:39:06.508343   15685 pod_ready.go:36] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0918 19:39:06.510213   15685 addons.go:431] installing /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml
	I0918 19:39:06.510294   15685 ssh_runner.go:362] scp volumesnapshots/rbac-volume-snapshot-controller.yaml --> /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml (3545 bytes)
	I0918 19:39:06.518146   15685 pod_ready.go:79] waiting up to 6m0s for pod "coredns-7c65d6cfc9-f5gph" in "kube-system" namespace to be "Ready" ...
	I0918 19:39:06.621412   15685 addons.go:431] installing /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml
	I0918 19:39:06.621453   15685 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-health-monitor-controller.yaml --> /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml (3038 bytes)
	I0918 19:39:06.807006   15685 addons.go:431] installing /etc/kubernetes/addons/ig-role.yaml
	I0918 19:39:06.807034   15685 ssh_runner.go:362] scp inspektor-gadget/ig-role.yaml --> /etc/kubernetes/addons/ig-role.yaml (210 bytes)
	I0918 19:39:06.807069   15685 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0918 19:39:06.817505   15685 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/yakd-ns.yaml -f /etc/kubernetes/addons/yakd-sa.yaml -f /etc/kubernetes/addons/yakd-crb.yaml -f /etc/kubernetes/addons/yakd-svc.yaml -f /etc/kubernetes/addons/yakd-dp.yaml
	I0918 19:39:07.008255   15685 kapi.go:214] "coredns" deployment in "kube-system" namespace and "addons-457129" context rescaled to 1 replicas
	I0918 19:39:07.214237   15685 addons.go:431] installing /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I0918 19:39:07.214264   15685 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml (1475 bytes)
	I0918 19:39:07.316208   15685 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/deployment.yaml: (2.403397567s)
	I0918 19:39:07.316350   15685 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (2.393246592s)
	I0918 19:39:07.416928   15685 addons.go:431] installing /etc/kubernetes/addons/rbac-external-provisioner.yaml
	I0918 19:39:07.417013   15685 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-provisioner.yaml --> /etc/kubernetes/addons/rbac-external-provisioner.yaml (4442 bytes)
	I0918 19:39:07.515403   15685 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I0918 19:39:07.715841   15685 addons.go:431] installing /etc/kubernetes/addons/ig-rolebinding.yaml
	I0918 19:39:07.715935   15685 ssh_runner.go:362] scp inspektor-gadget/ig-rolebinding.yaml --> /etc/kubernetes/addons/ig-rolebinding.yaml (244 bytes)
	I0918 19:39:08.223508   15685 addons.go:431] installing /etc/kubernetes/addons/rbac-external-resizer.yaml
	I0918 19:39:08.223590   15685 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-resizer.yaml --> /etc/kubernetes/addons/rbac-external-resizer.yaml (2943 bytes)
	I0918 19:39:08.525231   15685 pod_ready.go:103] pod "coredns-7c65d6cfc9-f5gph" in "kube-system" namespace has status "Ready":"False"
	I0918 19:39:08.619872   15685 addons.go:431] installing /etc/kubernetes/addons/ig-clusterrole.yaml
	I0918 19:39:08.619945   15685 ssh_runner.go:362] scp inspektor-gadget/ig-clusterrole.yaml --> /etc/kubernetes/addons/ig-clusterrole.yaml (1485 bytes)
	I0918 19:39:08.728794   15685 addons.go:431] installing /etc/kubernetes/addons/rbac-external-snapshotter.yaml
	I0918 19:39:08.728864   15685 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-snapshotter.yaml --> /etc/kubernetes/addons/rbac-external-snapshotter.yaml (3149 bytes)
	I0918 19:39:08.919976   15685 addons.go:431] installing /etc/kubernetes/addons/ig-clusterrolebinding.yaml
	I0918 19:39:08.920005   15685 ssh_runner.go:362] scp inspektor-gadget/ig-clusterrolebinding.yaml --> /etc/kubernetes/addons/ig-clusterrolebinding.yaml (274 bytes)
	I0918 19:39:09.423030   15685 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-attacher.yaml
	I0918 19:39:09.423069   15685 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-attacher.yaml (2143 bytes)
	I0918 19:39:09.605870   15685 addons.go:431] installing /etc/kubernetes/addons/ig-crd.yaml
	I0918 19:39:09.605896   15685 ssh_runner.go:362] scp inspektor-gadget/ig-crd.yaml --> /etc/kubernetes/addons/ig-crd.yaml (5216 bytes)
	I0918 19:39:10.023112   15685 addons.go:431] installing /etc/kubernetes/addons/ig-daemonset.yaml
	I0918 19:39:10.023137   15685 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-daemonset.yaml (7735 bytes)
	I0918 19:39:10.127434   15685 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml
	I0918 19:39:10.127474   15685 ssh_runner.go:362] scp csi-hostpath-driver/deploy/csi-hostpath-driverinfo.yaml --> /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml (1274 bytes)
	I0918 19:39:10.515195   15685 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/ig-namespace.yaml -f /etc/kubernetes/addons/ig-serviceaccount.yaml -f /etc/kubernetes/addons/ig-role.yaml -f /etc/kubernetes/addons/ig-rolebinding.yaml -f /etc/kubernetes/addons/ig-clusterrole.yaml -f /etc/kubernetes/addons/ig-clusterrolebinding.yaml -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-daemonset.yaml
	I0918 19:39:10.526322   15685 pod_ready.go:103] pod "coredns-7c65d6cfc9-f5gph" in "kube-system" namespace has status "Ready":"False"
	I0918 19:39:10.529651   15685 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-plugin.yaml
	I0918 19:39:10.529735   15685 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-plugin.yaml (8201 bytes)
	I0918 19:39:10.812803   15685 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-resizer.yaml
	I0918 19:39:10.812881   15685 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-resizer.yaml (2191 bytes)
	I0918 19:39:11.324698   15685 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
	I0918 19:39:11.324809   15685 ssh_runner.go:362] scp csi-hostpath-driver/deploy/csi-hostpath-storageclass.yaml --> /etc/kubernetes/addons/csi-hostpath-storageclass.yaml (846 bytes)
	I0918 19:39:11.413540   15685 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_application_credentials.json (162 bytes)
	I0918 19:39:11.413649   15685 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-457129
	I0918 19:39:11.439200   15685 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/19667-7499/.minikube/machines/addons-457129/id_rsa Username:docker}
	I0918 19:39:11.711862   15685 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
	I0918 19:39:12.210568   15685 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_cloud_project (12 bytes)
	I0918 19:39:12.512583   15685 addons.go:234] Setting addon gcp-auth=true in "addons-457129"
	I0918 19:39:12.512663   15685 host.go:66] Checking if "addons-457129" exists ...
	I0918 19:39:12.513245   15685 cli_runner.go:164] Run: docker container inspect addons-457129 --format={{.State.Status}}
	I0918 19:39:12.537731   15685 ssh_runner.go:195] Run: cat /var/lib/minikube/google_application_credentials.json
	I0918 19:39:12.537789   15685 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-457129
	I0918 19:39:12.556026   15685 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/19667-7499/.minikube/machines/addons-457129/id_rsa Username:docker}
	I0918 19:39:12.607001   15685 pod_ready.go:103] pod "coredns-7c65d6cfc9-f5gph" in "kube-system" namespace has status "Ready":"False"
	I0918 19:39:13.415984   15685 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml: (8.393673352s)
	I0918 19:39:13.416247   15685 addons.go:475] Verifying addon ingress=true in "addons-457129"
	I0918 19:39:13.416282   15685 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml: (8.209088607s)
	I0918 19:39:13.416199   15685 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner-rancher.yaml: (8.292177688s)
	I0918 19:39:13.418627   15685 out.go:177] * Verifying ingress addon...
	I0918 19:39:13.421193   15685 kapi.go:75] Waiting for pod with label "app.kubernetes.io/name=ingress-nginx" in ns "ingress-nginx" ...
	I0918 19:39:13.428294   15685 kapi.go:86] Found 3 Pods for label selector app.kubernetes.io/name=ingress-nginx
	I0918 19:39:13.428368   15685 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0918 19:39:13.930065   15685 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0918 19:39:14.425664   15685 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0918 19:39:14.931258   15685 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0918 19:39:15.107756   15685 pod_ready.go:103] pod "coredns-7c65d6cfc9-f5gph" in "kube-system" namespace has status "Ready":"False"
	I0918 19:39:15.427306   15685 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0918 19:39:16.023482   15685 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0918 19:39:16.428409   15685 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0918 19:39:16.926010   15685 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/volcano-deployment.yaml: (11.716048111s)
	I0918 19:39:16.926143   15685 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/nvidia-device-plugin.yaml: (11.71640398s)
	I0918 19:39:16.926251   15685 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/helm-tiller-dp.yaml -f /etc/kubernetes/addons/helm-tiller-rbac.yaml -f /etc/kubernetes/addons/helm-tiller-svc.yaml: (11.713245576s)
	I0918 19:39:16.926338   15685 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (11.41892158s)
	I0918 19:39:16.926410   15685 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml: (10.804080734s)
	I0918 19:39:16.926448   15685 addons.go:475] Verifying addon registry=true in "addons-457129"
	I0918 19:39:16.926958   15685 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/yakd-ns.yaml -f /etc/kubernetes/addons/yakd-sa.yaml -f /etc/kubernetes/addons/yakd-crb.yaml -f /etc/kubernetes/addons/yakd-svc.yaml -f /etc/kubernetes/addons/yakd-dp.yaml: (10.109368338s)
	I0918 19:39:16.927150   15685 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (9.411707352s)
	W0918 19:39:16.927176   15685 addons.go:457] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
	stdout:
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
	serviceaccount/snapshot-controller created
	clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
	clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
	role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	deployment.apps/snapshot-controller created
	
	stderr:
	error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
	ensure CRDs are installed first
	I0918 19:39:16.927195   15685 retry.go:31] will retry after 167.016922ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
	stdout:
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
	serviceaccount/snapshot-controller created
	clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
	clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
	role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	deployment.apps/snapshot-controller created
	
	stderr:
	error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
	ensure CRDs are installed first
	I0918 19:39:16.927276   15685 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/ig-namespace.yaml -f /etc/kubernetes/addons/ig-serviceaccount.yaml -f /etc/kubernetes/addons/ig-role.yaml -f /etc/kubernetes/addons/ig-rolebinding.yaml -f /etc/kubernetes/addons/ig-clusterrole.yaml -f /etc/kubernetes/addons/ig-clusterrolebinding.yaml -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-daemonset.yaml: (6.411993921s)
	I0918 19:39:16.927392   15685 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (10.119742326s)
	I0918 19:39:16.927438   15685 addons.go:475] Verifying addon metrics-server=true in "addons-457129"
	I0918 19:39:16.929137   15685 out.go:177] * To access YAKD - Kubernetes Dashboard, wait for Pod to be ready and run the following command:
	
		minikube -p addons-457129 service yakd-dashboard -n yakd-dashboard
	
	I0918 19:39:16.929288   15685 out.go:177] * Verifying registry addon...
	I0918 19:39:16.930423   15685 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0918 19:39:16.933653   15685 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=registry" in ns "kube-system" ...
	I0918 19:39:17.008321   15685 kapi.go:86] Found 2 Pods for label selector kubernetes.io/minikube-addons=registry
	I0918 19:39:17.008346   15685 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0918 19:39:17.094766   15685 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I0918 19:39:17.426295   15685 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0918 19:39:17.508514   15685 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0918 19:39:17.524483   15685 pod_ready.go:103] pod "coredns-7c65d6cfc9-f5gph" in "kube-system" namespace has status "Ready":"False"
	I0918 19:39:18.009561   15685 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0918 19:39:18.009903   15685 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0918 19:39:18.424871   15685 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0918 19:39:18.524321   15685 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0918 19:39:18.807605   15685 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml: (7.095684841s)
	I0918 19:39:18.807701   15685 addons.go:475] Verifying addon csi-hostpath-driver=true in "addons-457129"
	I0918 19:39:18.807750   15685 ssh_runner.go:235] Completed: cat /var/lib/minikube/google_application_credentials.json: (6.269987653s)
	I0918 19:39:18.810201   15685 out.go:177] * Verifying csi-hostpath-driver addon...
	I0918 19:39:18.810232   15685 out.go:177]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.4.3
	I0918 19:39:18.812232   15685 out.go:177]   - Using image gcr.io/k8s-minikube/gcp-auth-webhook:v0.1.2
	I0918 19:39:18.813160   15685 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=csi-hostpath-driver" in ns "kube-system" ...
	I0918 19:39:18.813434   15685 addons.go:431] installing /etc/kubernetes/addons/gcp-auth-ns.yaml
	I0918 19:39:18.813458   15685 ssh_runner.go:362] scp gcp-auth/gcp-auth-ns.yaml --> /etc/kubernetes/addons/gcp-auth-ns.yaml (700 bytes)
	I0918 19:39:18.817263   15685 kapi.go:86] Found 3 Pods for label selector kubernetes.io/minikube-addons=csi-hostpath-driver
	I0918 19:39:18.817286   15685 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0918 19:39:18.907910   15685 addons.go:431] installing /etc/kubernetes/addons/gcp-auth-service.yaml
	I0918 19:39:18.907939   15685 ssh_runner.go:362] scp gcp-auth/gcp-auth-service.yaml --> /etc/kubernetes/addons/gcp-auth-service.yaml (788 bytes)
	I0918 19:39:18.925557   15685 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0918 19:39:18.935812   15685 addons.go:431] installing /etc/kubernetes/addons/gcp-auth-webhook.yaml
	I0918 19:39:18.935834   15685 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/gcp-auth-webhook.yaml (5421 bytes)
	I0918 19:39:18.937387   15685 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0918 19:39:19.018766   15685 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/gcp-auth-ns.yaml -f /etc/kubernetes/addons/gcp-auth-service.yaml -f /etc/kubernetes/addons/gcp-auth-webhook.yaml
	I0918 19:39:19.021044   15685 pod_ready.go:98] error getting pod "coredns-7c65d6cfc9-f5gph" in "kube-system" namespace (skipping!): pods "coredns-7c65d6cfc9-f5gph" not found
	I0918 19:39:19.021075   15685 pod_ready.go:82] duration metric: took 12.502850895s for pod "coredns-7c65d6cfc9-f5gph" in "kube-system" namespace to be "Ready" ...
	E0918 19:39:19.021088   15685 pod_ready.go:67] WaitExtra: waitPodCondition: error getting pod "coredns-7c65d6cfc9-f5gph" in "kube-system" namespace (skipping!): pods "coredns-7c65d6cfc9-f5gph" not found
	I0918 19:39:19.021099   15685 pod_ready.go:79] waiting up to 6m0s for pod "coredns-7c65d6cfc9-qw624" in "kube-system" namespace to be "Ready" ...
	I0918 19:39:19.318203   15685 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0918 19:39:19.409657   15685 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (2.31484694s)
	I0918 19:39:19.426511   15685 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0918 19:39:19.507769   15685 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0918 19:39:19.818610   15685 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0918 19:39:19.926172   15685 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0918 19:39:19.937333   15685 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0918 19:39:20.306564   15685 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/gcp-auth-ns.yaml -f /etc/kubernetes/addons/gcp-auth-service.yaml -f /etc/kubernetes/addons/gcp-auth-webhook.yaml: (1.287749389s)
	I0918 19:39:20.308945   15685 addons.go:475] Verifying addon gcp-auth=true in "addons-457129"
	I0918 19:39:20.310415   15685 out.go:177] * Verifying gcp-auth addon...
	I0918 19:39:20.312390   15685 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=gcp-auth" in ns "gcp-auth" ...
	I0918 19:39:20.315019   15685 kapi.go:86] Found 0 Pods for label selector kubernetes.io/minikube-addons=gcp-auth
	I0918 19:39:20.317554   15685 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0918 19:39:20.426228   15685 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0918 19:39:20.437231   15685 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0918 19:39:20.817495   15685 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0918 19:39:20.924997   15685 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0918 19:39:20.936803   15685 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0918 19:39:21.026313   15685 pod_ready.go:103] pod "coredns-7c65d6cfc9-qw624" in "kube-system" namespace has status "Ready":"False"
	I0918 19:39:21.317531   15685 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0918 19:39:21.424476   15685 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0918 19:39:21.436640   15685 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0918 19:39:21.817203   15685 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0918 19:39:21.925032   15685 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0918 19:39:21.937124   15685 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0918 19:39:22.317034   15685 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0918 19:39:22.425160   15685 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0918 19:39:22.437235   15685 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0918 19:39:22.817040   15685 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0918 19:39:22.924655   15685 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0918 19:39:23.024537   15685 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0918 19:39:23.318093   15685 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0918 19:39:23.427994   15685 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0918 19:39:23.436808   15685 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0918 19:39:23.526647   15685 pod_ready.go:103] pod "coredns-7c65d6cfc9-qw624" in "kube-system" namespace has status "Ready":"False"
	I0918 19:39:23.816802   15685 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0918 19:39:23.925466   15685 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0918 19:39:23.937511   15685 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0918 19:39:24.317324   15685 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0918 19:39:24.425582   15685 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0918 19:39:24.436573   15685 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0918 19:39:24.817205   15685 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0918 19:39:24.925494   15685 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0918 19:39:24.937900   15685 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0918 19:39:25.317144   15685 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0918 19:39:25.425133   15685 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0918 19:39:25.437025   15685 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0918 19:39:25.527293   15685 pod_ready.go:103] pod "coredns-7c65d6cfc9-qw624" in "kube-system" namespace has status "Ready":"False"
	I0918 19:39:25.817382   15685 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0918 19:39:25.925275   15685 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0918 19:39:25.937182   15685 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0918 19:39:26.317094   15685 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0918 19:39:26.425608   15685 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0918 19:39:26.437916   15685 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0918 19:39:26.817617   15685 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0918 19:39:26.925101   15685 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0918 19:39:26.937198   15685 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0918 19:39:27.318655   15685 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0918 19:39:27.425071   15685 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0918 19:39:27.437106   15685 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0918 19:39:27.527526   15685 pod_ready.go:103] pod "coredns-7c65d6cfc9-qw624" in "kube-system" namespace has status "Ready":"False"
	I0918 19:39:27.817205   15685 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0918 19:39:27.926288   15685 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0918 19:39:27.937171   15685 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0918 19:39:28.318652   15685 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0918 19:39:28.425158   15685 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0918 19:39:28.437195   15685 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0918 19:39:28.817329   15685 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0918 19:39:28.926297   15685 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0918 19:39:28.937559   15685 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0918 19:39:29.318154   15685 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0918 19:39:29.425861   15685 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0918 19:39:29.437328   15685 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0918 19:39:29.817673   15685 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0918 19:39:29.925397   15685 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0918 19:39:29.937382   15685 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0918 19:39:30.027854   15685 pod_ready.go:103] pod "coredns-7c65d6cfc9-qw624" in "kube-system" namespace has status "Ready":"False"
	I0918 19:39:30.316832   15685 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0918 19:39:30.425093   15685 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0918 19:39:30.437164   15685 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0918 19:39:30.816465   15685 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0918 19:39:30.925649   15685 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0918 19:39:30.936203   15685 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0918 19:39:31.316732   15685 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0918 19:39:31.425189   15685 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0918 19:39:31.437354   15685 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0918 19:39:31.816819   15685 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0918 19:39:31.925287   15685 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0918 19:39:31.937380   15685 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0918 19:39:32.317206   15685 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0918 19:39:32.425043   15685 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0918 19:39:32.438435   15685 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0918 19:39:32.527772   15685 pod_ready.go:103] pod "coredns-7c65d6cfc9-qw624" in "kube-system" namespace has status "Ready":"False"
	I0918 19:39:32.817691   15685 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0918 19:39:32.925976   15685 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0918 19:39:32.937410   15685 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0918 19:39:33.317740   15685 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0918 19:39:33.425363   15685 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0918 19:39:33.437714   15685 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0918 19:39:33.818083   15685 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0918 19:39:33.929130   15685 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0918 19:39:33.937750   15685 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0918 19:39:34.317459   15685 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0918 19:39:34.428122   15685 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0918 19:39:34.437496   15685 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0918 19:39:34.527872   15685 pod_ready.go:103] pod "coredns-7c65d6cfc9-qw624" in "kube-system" namespace has status "Ready":"False"
	I0918 19:39:34.817071   15685 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0918 19:39:34.926238   15685 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0918 19:39:34.937385   15685 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0918 19:39:35.317139   15685 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0918 19:39:35.425793   15685 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0918 19:39:35.437149   15685 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0918 19:39:35.817535   15685 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0918 19:39:35.925504   15685 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0918 19:39:35.937735   15685 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0918 19:39:36.317471   15685 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0918 19:39:36.425742   15685 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0918 19:39:36.436647   15685 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0918 19:39:36.817773   15685 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0918 19:39:36.925088   15685 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0918 19:39:36.937171   15685 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0918 19:39:37.027317   15685 pod_ready.go:103] pod "coredns-7c65d6cfc9-qw624" in "kube-system" namespace has status "Ready":"False"
	I0918 19:39:37.317426   15685 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0918 19:39:37.425364   15685 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0918 19:39:37.437485   15685 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0918 19:39:37.816966   15685 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0918 19:39:37.925490   15685 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0918 19:39:37.937479   15685 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0918 19:39:38.316930   15685 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0918 19:39:38.425321   15685 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0918 19:39:38.437360   15685 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0918 19:39:38.817352   15685 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0918 19:39:38.925827   15685 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0918 19:39:38.936757   15685 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0918 19:39:39.317011   15685 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0918 19:39:39.425337   15685 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0918 19:39:39.437595   15685 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0918 19:39:39.525937   15685 pod_ready.go:103] pod "coredns-7c65d6cfc9-qw624" in "kube-system" namespace has status "Ready":"False"
	I0918 19:39:39.816941   15685 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0918 19:39:39.925480   15685 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0918 19:39:39.936520   15685 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0918 19:39:40.316852   15685 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0918 19:39:40.425244   15685 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0918 19:39:40.437321   15685 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0918 19:39:40.817078   15685 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0918 19:39:40.926078   15685 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0918 19:39:41.025376   15685 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0918 19:39:41.317473   15685 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0918 19:39:41.425153   15685 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0918 19:39:41.437288   15685 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0918 19:39:41.526501   15685 pod_ready.go:103] pod "coredns-7c65d6cfc9-qw624" in "kube-system" namespace has status "Ready":"False"
	I0918 19:39:41.816671   15685 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0918 19:39:41.925135   15685 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0918 19:39:41.937108   15685 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0918 19:39:42.317386   15685 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0918 19:39:42.425515   15685 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0918 19:39:42.436728   15685 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0918 19:39:42.816928   15685 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0918 19:39:42.925364   15685 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0918 19:39:42.937120   15685 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0918 19:39:43.316761   15685 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0918 19:39:43.425077   15685 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0918 19:39:43.437281   15685 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0918 19:39:43.527191   15685 pod_ready.go:103] pod "coredns-7c65d6cfc9-qw624" in "kube-system" namespace has status "Ready":"False"
	I0918 19:39:43.816672   15685 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0918 19:39:43.925471   15685 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0918 19:39:43.936571   15685 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0918 19:39:44.317585   15685 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0918 19:39:44.425407   15685 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0918 19:39:44.437466   15685 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0918 19:39:44.816775   15685 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0918 19:39:44.925799   15685 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0918 19:39:44.937105   15685 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0918 19:39:45.316935   15685 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0918 19:39:45.425812   15685 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0918 19:39:45.437013   15685 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0918 19:39:45.817190   15685 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0918 19:39:45.926132   15685 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0918 19:39:45.937419   15685 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0918 19:39:46.027638   15685 pod_ready.go:103] pod "coredns-7c65d6cfc9-qw624" in "kube-system" namespace has status "Ready":"False"
	I0918 19:39:46.317367   15685 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0918 19:39:46.425504   15685 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0918 19:39:46.436392   15685 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0918 19:39:46.881799   15685 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0918 19:39:47.019391   15685 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0918 19:39:47.019705   15685 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0918 19:39:47.027674   15685 pod_ready.go:93] pod "coredns-7c65d6cfc9-qw624" in "kube-system" namespace has status "Ready":"True"
	I0918 19:39:47.027698   15685 pod_ready.go:82] duration metric: took 28.006586609s for pod "coredns-7c65d6cfc9-qw624" in "kube-system" namespace to be "Ready" ...
	I0918 19:39:47.027709   15685 pod_ready.go:79] waiting up to 6m0s for pod "etcd-addons-457129" in "kube-system" namespace to be "Ready" ...
	I0918 19:39:47.031876   15685 pod_ready.go:93] pod "etcd-addons-457129" in "kube-system" namespace has status "Ready":"True"
	I0918 19:39:47.031900   15685 pod_ready.go:82] duration metric: took 4.18421ms for pod "etcd-addons-457129" in "kube-system" namespace to be "Ready" ...
	I0918 19:39:47.031912   15685 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-addons-457129" in "kube-system" namespace to be "Ready" ...
	I0918 19:39:47.036257   15685 pod_ready.go:93] pod "kube-apiserver-addons-457129" in "kube-system" namespace has status "Ready":"True"
	I0918 19:39:47.036281   15685 pod_ready.go:82] duration metric: took 4.360162ms for pod "kube-apiserver-addons-457129" in "kube-system" namespace to be "Ready" ...
	I0918 19:39:47.036292   15685 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-addons-457129" in "kube-system" namespace to be "Ready" ...
	I0918 19:39:47.040262   15685 pod_ready.go:93] pod "kube-controller-manager-addons-457129" in "kube-system" namespace has status "Ready":"True"
	I0918 19:39:47.040282   15685 pod_ready.go:82] duration metric: took 3.9838ms for pod "kube-controller-manager-addons-457129" in "kube-system" namespace to be "Ready" ...
	I0918 19:39:47.040291   15685 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-xk9xc" in "kube-system" namespace to be "Ready" ...
	I0918 19:39:47.044437   15685 pod_ready.go:93] pod "kube-proxy-xk9xc" in "kube-system" namespace has status "Ready":"True"
	I0918 19:39:47.044458   15685 pod_ready.go:82] duration metric: took 4.159459ms for pod "kube-proxy-xk9xc" in "kube-system" namespace to be "Ready" ...
	I0918 19:39:47.044467   15685 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-addons-457129" in "kube-system" namespace to be "Ready" ...
	I0918 19:39:47.317677   15685 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0918 19:39:47.425864   15685 pod_ready.go:93] pod "kube-scheduler-addons-457129" in "kube-system" namespace has status "Ready":"True"
	I0918 19:39:47.425885   15685 pod_ready.go:82] duration metric: took 381.412434ms for pod "kube-scheduler-addons-457129" in "kube-system" namespace to be "Ready" ...
	I0918 19:39:47.425894   15685 pod_ready.go:39] duration metric: took 40.917530291s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0918 19:39:47.425913   15685 api_server.go:52] waiting for apiserver process to appear ...
	I0918 19:39:47.426007   15685 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0918 19:39:47.426544   15685 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0918 19:39:47.437483   15685 kapi.go:107] duration metric: took 30.503831942s to wait for kubernetes.io/minikube-addons=registry ...
	I0918 19:39:47.443366   15685 api_server.go:72] duration metric: took 43.169909962s to wait for apiserver process to appear ...
	I0918 19:39:47.443399   15685 api_server.go:88] waiting for apiserver healthz status ...
	I0918 19:39:47.443427   15685 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I0918 19:39:47.447595   15685 api_server.go:279] https://192.168.49.2:8443/healthz returned 200:
	ok
	I0918 19:39:47.448386   15685 api_server.go:141] control plane version: v1.31.1
	I0918 19:39:47.448410   15685 api_server.go:131] duration metric: took 5.003997ms to wait for apiserver health ...
	I0918 19:39:47.448417   15685 system_pods.go:43] waiting for kube-system pods to appear ...
	I0918 19:39:47.632187   15685 system_pods.go:59] 18 kube-system pods found
	I0918 19:39:47.632228   15685 system_pods.go:61] "coredns-7c65d6cfc9-qw624" [e4f7b79e-6ade-4491-924e-34e66190e129] Running
	I0918 19:39:47.632240   15685 system_pods.go:61] "csi-hostpath-attacher-0" [5df964db-0e1a-4ab2-8c1b-08ccefda59c8] Pending / Ready:ContainersNotReady (containers with unready status: [csi-attacher]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-attacher])
	I0918 19:39:47.632253   15685 system_pods.go:61] "csi-hostpath-resizer-0" [8d31632d-9428-40c2-bd75-48370ba9df30] Pending / Ready:ContainersNotReady (containers with unready status: [csi-resizer]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-resizer])
	I0918 19:39:47.632268   15685 system_pods.go:61] "csi-hostpathplugin-gx9ps" [1b4aca2b-c7e0-4fd0-9635-1f3a31317460] Pending / Ready:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter])
	I0918 19:39:47.632279   15685 system_pods.go:61] "etcd-addons-457129" [e8af1cd4-abf2-4101-885a-5b00aef361d5] Running
	I0918 19:39:47.632289   15685 system_pods.go:61] "kube-apiserver-addons-457129" [96968776-c3eb-43ce-9ade-bcb3e2f1eae2] Running
	I0918 19:39:47.632294   15685 system_pods.go:61] "kube-controller-manager-addons-457129" [de5540b1-56e3-4596-b224-772b8a326f2b] Running
	I0918 19:39:47.632303   15685 system_pods.go:61] "kube-ingress-dns-minikube" [2a05d45a-a3e8-4472-b55d-3611ae4fae74] Running
	I0918 19:39:47.632308   15685 system_pods.go:61] "kube-proxy-xk9xc" [684cdba6-dbb9-49b9-aeb9-718120abba98] Running
	I0918 19:39:47.632315   15685 system_pods.go:61] "kube-scheduler-addons-457129" [7358eba4-20af-43f9-abb7-0f241b859124] Running
	I0918 19:39:47.632321   15685 system_pods.go:61] "metrics-server-84c5f94fbc-h4fl2" [a77387ba-6450-4ffe-9aa7-de0bf96f74da] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0918 19:39:47.632327   15685 system_pods.go:61] "nvidia-device-plugin-daemonset-5p5wt" [b9ee4eb2-3471-4a3f-83b8-cd8dabafe83c] Running
	I0918 19:39:47.632331   15685 system_pods.go:61] "registry-66c9cd494c-kh4r9" [b2944da3-d9b7-4de7-8a57-f934ec8b2970] Running
	I0918 19:39:47.632338   15685 system_pods.go:61] "registry-proxy-cnkhj" [e15cccd8-7fcb-48c9-9dc2-e79744e87759] Running
	I0918 19:39:47.632345   15685 system_pods.go:61] "snapshot-controller-56fcc65765-pw2mr" [17a077f4-27f7-4a22-95e4-47ed4cc7f9e3] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I0918 19:39:47.632362   15685 system_pods.go:61] "snapshot-controller-56fcc65765-s87zp" [4de05312-3887-4578-b327-e1d6e6ead173] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I0918 19:39:47.632371   15685 system_pods.go:61] "storage-provisioner" [c887feeb-fb7b-495f-a43c-fca8ac121bca] Running
	I0918 19:39:47.632378   15685 system_pods.go:61] "tiller-deploy-b48cc5f79-k64kf" [b79b5bb4-d211-4aa6-9551-5d2305acc2b2] Running
	I0918 19:39:47.632390   15685 system_pods.go:74] duration metric: took 183.965128ms to wait for pod list to return data ...
	I0918 19:39:47.632403   15685 default_sa.go:34] waiting for default service account to be created ...
	I0918 19:39:47.817809   15685 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0918 19:39:47.824845   15685 default_sa.go:45] found service account: "default"
	I0918 19:39:47.824867   15685 default_sa.go:55] duration metric: took 192.456396ms for default service account to be created ...
	I0918 19:39:47.824877   15685 system_pods.go:116] waiting for k8s-apps to be running ...
	I0918 19:39:47.925564   15685 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0918 19:39:48.030275   15685 system_pods.go:86] 18 kube-system pods found
	I0918 19:39:48.030303   15685 system_pods.go:89] "coredns-7c65d6cfc9-qw624" [e4f7b79e-6ade-4491-924e-34e66190e129] Running
	I0918 19:39:48.030312   15685 system_pods.go:89] "csi-hostpath-attacher-0" [5df964db-0e1a-4ab2-8c1b-08ccefda59c8] Pending / Ready:ContainersNotReady (containers with unready status: [csi-attacher]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-attacher])
	I0918 19:39:48.030318   15685 system_pods.go:89] "csi-hostpath-resizer-0" [8d31632d-9428-40c2-bd75-48370ba9df30] Pending / Ready:ContainersNotReady (containers with unready status: [csi-resizer]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-resizer])
	I0918 19:39:48.030327   15685 system_pods.go:89] "csi-hostpathplugin-gx9ps" [1b4aca2b-c7e0-4fd0-9635-1f3a31317460] Pending / Ready:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter])
	I0918 19:39:48.030332   15685 system_pods.go:89] "etcd-addons-457129" [e8af1cd4-abf2-4101-885a-5b00aef361d5] Running
	I0918 19:39:48.030336   15685 system_pods.go:89] "kube-apiserver-addons-457129" [96968776-c3eb-43ce-9ade-bcb3e2f1eae2] Running
	I0918 19:39:48.030339   15685 system_pods.go:89] "kube-controller-manager-addons-457129" [de5540b1-56e3-4596-b224-772b8a326f2b] Running
	I0918 19:39:48.030343   15685 system_pods.go:89] "kube-ingress-dns-minikube" [2a05d45a-a3e8-4472-b55d-3611ae4fae74] Running
	I0918 19:39:48.030346   15685 system_pods.go:89] "kube-proxy-xk9xc" [684cdba6-dbb9-49b9-aeb9-718120abba98] Running
	I0918 19:39:48.030350   15685 system_pods.go:89] "kube-scheduler-addons-457129" [7358eba4-20af-43f9-abb7-0f241b859124] Running
	I0918 19:39:48.030355   15685 system_pods.go:89] "metrics-server-84c5f94fbc-h4fl2" [a77387ba-6450-4ffe-9aa7-de0bf96f74da] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0918 19:39:48.030359   15685 system_pods.go:89] "nvidia-device-plugin-daemonset-5p5wt" [b9ee4eb2-3471-4a3f-83b8-cd8dabafe83c] Running
	I0918 19:39:48.030362   15685 system_pods.go:89] "registry-66c9cd494c-kh4r9" [b2944da3-d9b7-4de7-8a57-f934ec8b2970] Running
	I0918 19:39:48.030385   15685 system_pods.go:89] "registry-proxy-cnkhj" [e15cccd8-7fcb-48c9-9dc2-e79744e87759] Running
	I0918 19:39:48.030391   15685 system_pods.go:89] "snapshot-controller-56fcc65765-pw2mr" [17a077f4-27f7-4a22-95e4-47ed4cc7f9e3] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I0918 19:39:48.030396   15685 system_pods.go:89] "snapshot-controller-56fcc65765-s87zp" [4de05312-3887-4578-b327-e1d6e6ead173] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I0918 19:39:48.030400   15685 system_pods.go:89] "storage-provisioner" [c887feeb-fb7b-495f-a43c-fca8ac121bca] Running
	I0918 19:39:48.030403   15685 system_pods.go:89] "tiller-deploy-b48cc5f79-k64kf" [b79b5bb4-d211-4aa6-9551-5d2305acc2b2] Running
	I0918 19:39:48.030409   15685 system_pods.go:126] duration metric: took 205.526569ms to wait for k8s-apps to be running ...
	I0918 19:39:48.030416   15685 system_svc.go:44] waiting for kubelet service to be running ....
	I0918 19:39:48.030461   15685 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0918 19:39:48.041327   15685 system_svc.go:56] duration metric: took 10.90412ms WaitForService to wait for kubelet
	I0918 19:39:48.041351   15685 kubeadm.go:582] duration metric: took 43.767903151s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0918 19:39:48.041367   15685 node_conditions.go:102] verifying NodePressure condition ...
	I0918 19:39:48.225997   15685 node_conditions.go:122] node storage ephemeral capacity is 304681132Ki
	I0918 19:39:48.226028   15685 node_conditions.go:123] node cpu capacity is 8
	I0918 19:39:48.226042   15685 node_conditions.go:105] duration metric: took 184.671252ms to run NodePressure ...
	I0918 19:39:48.226056   15685 start.go:241] waiting for startup goroutines ...
	I0918 19:39:48.317203   15685 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0918 19:39:48.425562   15685 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0918 19:39:48.817286   15685 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0918 19:39:48.924980   15685 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0918 19:39:49.316991   15685 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0918 19:39:49.425323   15685 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0918 19:39:49.817665   15685 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0918 19:39:49.924742   15685 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0918 19:39:50.318618   15685 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0918 19:39:50.425298   15685 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0918 19:39:50.817542   15685 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0918 19:39:50.925352   15685 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0918 19:39:51.317018   15685 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0918 19:39:51.425675   15685 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0918 19:39:51.816946   15685 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0918 19:39:51.925199   15685 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0918 19:39:52.317239   15685 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0918 19:39:52.425530   15685 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0918 19:39:52.816484   15685 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0918 19:39:52.925016   15685 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0918 19:39:53.316973   15685 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0918 19:39:53.425628   15685 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0918 19:39:53.817865   15685 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0918 19:39:53.925988   15685 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0918 19:39:54.317636   15685 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0918 19:39:54.425033   15685 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0918 19:39:54.817592   15685 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0918 19:39:54.925317   15685 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0918 19:39:55.380950   15685 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0918 19:39:55.425274   15685 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0918 19:39:55.817122   15685 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0918 19:39:55.924881   15685 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0918 19:39:56.317556   15685 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0918 19:39:56.425864   15685 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0918 19:39:56.817160   15685 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0918 19:39:56.924904   15685 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0918 19:39:57.317782   15685 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0918 19:39:57.432705   15685 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0918 19:39:57.817594   15685 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0918 19:39:57.925185   15685 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0918 19:39:58.318573   15685 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0918 19:39:58.425384   15685 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0918 19:39:58.816692   15685 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0918 19:39:58.925131   15685 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0918 19:39:59.317649   15685 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0918 19:39:59.424864   15685 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0918 19:39:59.817330   15685 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0918 19:39:59.925175   15685 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0918 19:40:00.318533   15685 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0918 19:40:00.426047   15685 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0918 19:40:00.817610   15685 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0918 19:40:00.925393   15685 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0918 19:40:01.317644   15685 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0918 19:40:01.533477   15685 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0918 19:40:01.817259   15685 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0918 19:40:01.925179   15685 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0918 19:40:02.317218   15685 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0918 19:40:02.424606   15685 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0918 19:40:02.817535   15685 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0918 19:40:02.924515   15685 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0918 19:40:03.317297   15685 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0918 19:40:03.425708   15685 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0918 19:40:03.829461   15685 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0918 19:40:03.925603   15685 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0918 19:40:04.317114   15685 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0918 19:40:04.426122   15685 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0918 19:40:04.817257   15685 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0918 19:40:04.925302   15685 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0918 19:40:05.317618   15685 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0918 19:40:05.424810   15685 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0918 19:40:05.817341   15685 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0918 19:40:05.925280   15685 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0918 19:40:06.317224   15685 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0918 19:40:06.425929   15685 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0918 19:40:06.817152   15685 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0918 19:40:06.938628   15685 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0918 19:40:07.317285   15685 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0918 19:40:07.425142   15685 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0918 19:40:07.817602   15685 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0918 19:40:07.925504   15685 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0918 19:40:08.317005   15685 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0918 19:40:08.425844   15685 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0918 19:40:08.817563   15685 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0918 19:40:08.925337   15685 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0918 19:40:09.317234   15685 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0918 19:40:09.425239   15685 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0918 19:40:09.817376   15685 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0918 19:40:09.925306   15685 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0918 19:40:10.317584   15685 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0918 19:40:10.425565   15685 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0918 19:40:10.819798   15685 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0918 19:40:10.925798   15685 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0918 19:40:11.317949   15685 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0918 19:40:11.426105   15685 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0918 19:40:11.817355   15685 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0918 19:40:11.925595   15685 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0918 19:40:12.317885   15685 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0918 19:40:12.425890   15685 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0918 19:40:12.817513   15685 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0918 19:40:12.924454   15685 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0918 19:40:13.317910   15685 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0918 19:40:13.425731   15685 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0918 19:40:13.817213   15685 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0918 19:40:13.925753   15685 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0918 19:40:14.317811   15685 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0918 19:40:14.425976   15685 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0918 19:40:14.817753   15685 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0918 19:40:14.924849   15685 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0918 19:40:15.317302   15685 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0918 19:40:15.479102   15685 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0918 19:40:15.817018   15685 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0918 19:40:15.925731   15685 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0918 19:40:16.317281   15685 kapi.go:107] duration metric: took 57.504122489s to wait for kubernetes.io/minikube-addons=csi-hostpath-driver ...
	I0918 19:40:16.424712   15685 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0918 19:40:16.924965   15685 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0918 19:40:17.425548   15685 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0918 19:40:17.925212   15685 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0918 19:40:18.426717   15685 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0918 19:40:18.925591   15685 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0918 19:40:19.503186   15685 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0918 19:40:19.924727   15685 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0918 19:40:20.425927   15685 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0918 19:40:20.925302   15685 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0918 19:40:21.425991   15685 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0918 19:40:21.931214   15685 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0918 19:40:22.426406   15685 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0918 19:40:22.924766   15685 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0918 19:40:23.425220   15685 kapi.go:107] duration metric: took 1m10.00402563s to wait for app.kubernetes.io/name=ingress-nginx ...
	I0918 19:40:42.815384   15685 kapi.go:86] Found 1 Pods for label selector kubernetes.io/minikube-addons=gcp-auth
	I0918 19:40:42.815407   15685 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0918 19:40:43.315513   15685 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0918 19:40:43.815813   15685 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0918 19:40:44.316000   15685 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0918 19:40:44.815324   15685 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0918 19:40:45.315381   15685 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0918 19:40:45.815191   15685 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0918 19:40:46.316743   15685 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0918 19:40:46.815902   15685 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0918 19:40:47.315953   15685 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0918 19:40:47.816244   15685 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0918 19:40:48.316671   15685 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0918 19:40:48.815666   15685 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0918 19:40:49.315391   15685 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0918 19:40:49.815536   15685 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0918 19:40:50.316433   15685 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0918 19:40:50.815545   15685 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0918 19:40:51.315413   15685 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0918 19:40:51.815737   15685 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0918 19:40:52.315862   15685 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0918 19:40:52.815862   15685 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0918 19:40:53.316021   15685 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0918 19:40:53.815588   15685 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0918 19:40:54.315555   15685 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0918 19:40:54.815614   15685 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0918 19:40:55.315364   15685 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0918 19:40:55.815307   15685 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0918 19:40:56.315608   15685 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0918 19:40:56.815100   15685 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0918 19:40:57.316029   15685 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0918 19:40:57.815688   15685 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0918 19:40:58.315609   15685 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0918 19:40:58.815742   15685 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0918 19:40:59.315603   15685 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0918 19:40:59.815310   15685 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0918 19:41:00.315614   15685 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0918 19:41:00.815718   15685 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0918 19:41:01.315639   15685 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0918 19:41:01.815989   15685 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0918 19:41:02.316119   15685 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0918 19:41:02.816382   15685 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0918 19:41:03.315233   15685 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0918 19:41:03.815663   15685 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0918 19:41:04.315942   15685 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0918 19:41:04.816235   15685 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0918 19:41:05.316136   15685 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0918 19:41:05.816100   15685 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0918 19:41:06.316659   15685 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0918 19:41:06.815284   15685 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0918 19:41:07.316220   15685 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0918 19:41:07.815164   15685 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0918 19:41:08.316272   15685 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0918 19:41:08.815474   15685 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0918 19:41:09.315726   15685 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0918 19:41:09.815341   15685 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0918 19:41:10.315573   15685 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0918 19:41:10.815585   15685 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0918 19:41:11.315252   15685 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0918 19:41:11.816158   15685 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0918 19:41:12.316521   15685 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0918 19:41:12.816065   15685 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0918 19:41:13.315955   15685 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0918 19:41:13.815509   15685 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0918 19:41:14.315915   15685 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0918 19:41:14.815993   15685 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0918 19:41:15.315954   15685 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0918 19:41:15.815742   15685 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0918 19:41:16.316098   15685 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0918 19:41:16.816275   15685 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0918 19:41:17.315893   15685 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0918 19:41:17.815674   15685 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0918 19:41:18.315932   15685 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0918 19:41:18.816375   15685 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0918 19:41:19.315194   15685 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0918 19:41:19.816150   15685 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0918 19:41:20.316170   15685 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0918 19:41:20.816715   15685 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0918 19:41:21.315420   15685 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0918 19:41:21.815266   15685 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0918 19:41:22.316663   15685 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0918 19:41:22.817524   15685 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0918 19:41:23.315257   15685 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0918 19:41:23.816378   15685 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0918 19:41:24.315147   15685 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0918 19:41:24.815934   15685 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0918 19:41:25.315975   15685 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0918 19:41:25.815907   15685 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0918 19:41:26.315517   15685 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0918 19:41:26.815814   15685 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0918 19:41:27.315429   15685 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0918 19:41:27.815261   15685 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0918 19:41:28.316209   15685 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0918 19:41:28.815449   15685 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0918 19:41:29.315573   15685 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0918 19:41:29.815204   15685 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0918 19:41:30.316451   15685 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0918 19:41:30.815758   15685 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0918 19:41:31.315478   15685 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0918 19:41:31.816325   15685 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0918 19:41:32.315808   15685 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0918 19:41:32.815784   15685 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0918 19:41:33.316156   15685 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0918 19:41:33.815596   15685 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0918 19:41:34.316116   15685 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0918 19:41:34.815730   15685 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0918 19:41:35.315689   15685 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0918 19:41:35.815064   15685 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0918 19:41:36.316136   15685 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0918 19:41:36.816087   15685 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0918 19:41:37.315626   15685 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0918 19:41:37.815431   15685 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0918 19:41:38.315843   15685 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0918 19:41:38.816029   15685 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0918 19:41:39.316158   15685 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0918 19:41:39.816171   15685 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0918 19:41:40.316278   15685 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0918 19:41:40.816766   15685 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0918 19:41:41.315707   15685 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0918 19:41:41.815639   15685 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0918 19:41:42.315965   15685 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0918 19:41:42.815864   15685 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0918 19:41:43.316108   15685 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0918 19:41:43.815576   15685 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0918 19:41:44.315817   15685 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0918 19:41:44.815754   15685 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0918 19:41:45.315576   15685 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0918 19:41:45.815324   15685 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0918 19:41:46.315654   15685 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0918 19:41:46.816001   15685 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0918 19:41:47.315417   15685 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0918 19:41:47.815141   15685 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0918 19:41:48.316283   15685 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0918 19:41:48.815591   15685 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0918 19:41:49.315257   15685 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0918 19:41:49.815653   15685 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0918 19:41:50.315567   15685 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0918 19:41:50.815629   15685 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0918 19:41:51.347189   15685 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0918 19:41:51.816224   15685 kapi.go:107] duration metric: took 2m31.503832805s to wait for kubernetes.io/minikube-addons=gcp-auth ...
	I0918 19:41:51.818182   15685 out.go:177] * Your GCP credentials will now be mounted into every pod created in the addons-457129 cluster.
	I0918 19:41:51.819682   15685 out.go:177] * If you don't want your credentials mounted into a specific pod, add a label with the `gcp-auth-skip-secret` key to your pod configuration.
	I0918 19:41:51.821239   15685 out.go:177] * If you want existing pods to be mounted with credentials, either recreate them or rerun addons enable with --refresh.
	I0918 19:41:51.822700   15685 out.go:177] * Enabled addons: cloud-spanner, default-storageclass, ingress-dns, storage-provisioner-rancher, volcano, nvidia-device-plugin, helm-tiller, storage-provisioner, inspektor-gadget, metrics-server, yakd, volumesnapshots, registry, csi-hostpath-driver, ingress, gcp-auth
	I0918 19:41:51.824006   15685 addons.go:510] duration metric: took 2m47.550538049s for enable addons: enabled=[cloud-spanner default-storageclass ingress-dns storage-provisioner-rancher volcano nvidia-device-plugin helm-tiller storage-provisioner inspektor-gadget metrics-server yakd volumesnapshots registry csi-hostpath-driver ingress gcp-auth]
	I0918 19:41:51.824052   15685 start.go:246] waiting for cluster config update ...
	I0918 19:41:51.824085   15685 start.go:255] writing updated cluster config ...
	I0918 19:41:51.824346   15685 ssh_runner.go:195] Run: rm -f paused
	I0918 19:41:51.873628   15685 start.go:600] kubectl: 1.31.1, cluster: 1.31.1 (minor skew: 0)
	I0918 19:41:51.875457   15685 out.go:177] * Done! kubectl is now configured to use "addons-457129" cluster and "default" namespace by default
	
	
	==> Docker <==
	Sep 18 19:51:24 addons-457129 dockerd[1341]: time="2024-09-18T19:51:24.707589739Z" level=info msg="ignoring event" container=5e8bc0e951e4a4091e6df05ae23b8858bb442b1149f1d65f656be6e8f55161d1 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Sep 18 19:51:24 addons-457129 dockerd[1341]: time="2024-09-18T19:51:24.740829174Z" level=info msg="ignoring event" container=401004fdf6d8cea80dd8d820bf26f23c4343202d6b894026dc05f0d2d571035d module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Sep 18 19:51:25 addons-457129 cri-dockerd[1605]: time="2024-09-18T19:51:25Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/2d946c75d6e81ec3b1e42ff327e03b8814897aa2196fa0b88cf77ebba88a9924/resolv.conf as [nameserver 10.96.0.10 search default.svc.cluster.local svc.cluster.local cluster.local europe-west1-b.c.k8s-minikube.internal c.k8s-minikube.internal google.internal options ndots:5]"
	Sep 18 19:51:27 addons-457129 dockerd[1341]: time="2024-09-18T19:51:27.406559505Z" level=info msg="Container failed to exit within 30s of signal 15 - using the force" container=259f7e08b743ef82b7bcc42c5e95352ff0569be51f44ef09e620ef17be4d95eb
	Sep 18 19:51:27 addons-457129 dockerd[1341]: time="2024-09-18T19:51:27.434397754Z" level=info msg="ignoring event" container=259f7e08b743ef82b7bcc42c5e95352ff0569be51f44ef09e620ef17be4d95eb module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Sep 18 19:51:27 addons-457129 cri-dockerd[1605]: time="2024-09-18T19:51:27Z" level=info msg="Failed to read pod IP from plugin/docker: networkPlugin cni failed on the status hook for pod \"local-path-provisioner-86d989889c-4bk8b_local-path-storage\": unexpected command output nsenter: cannot open /proc/4059/ns/net: No such file or directory\n with error: exit status 1"
	Sep 18 19:51:27 addons-457129 dockerd[1341]: time="2024-09-18T19:51:27.645652950Z" level=info msg="ignoring event" container=70e3de0610cb123de7a8049fd83a80f3bac5e5f3f0463ab93b65e72ac59dafe0 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Sep 18 19:51:27 addons-457129 dockerd[1341]: time="2024-09-18T19:51:27.930917542Z" level=info msg="ignoring event" container=41ac19e47011d188367b8bfdae02623a4ecde0e107c309891e9b46be24c37425 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Sep 18 19:51:28 addons-457129 cri-dockerd[1605]: time="2024-09-18T19:51:28Z" level=info msg="Stop pulling image docker.io/nginx:alpine: Status: Downloaded newer image for nginx:alpine"
	Sep 18 19:51:28 addons-457129 dockerd[1341]: time="2024-09-18T19:51:28.983186430Z" level=info msg="Attempting next endpoint for pull after error: Head \"https://gcr.io/v2/k8s-minikube/busybox/manifests/latest\": unauthorized: authentication failed"
	Sep 18 19:51:28 addons-457129 dockerd[1341]: time="2024-09-18T19:51:28.985554417Z" level=error msg="Handler for POST /v1.43/images/create returned error: Head \"https://gcr.io/v2/k8s-minikube/busybox/manifests/latest\": unauthorized: authentication failed"
	Sep 18 19:51:36 addons-457129 cri-dockerd[1605]: time="2024-09-18T19:51:36Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/ff558cf262610bc15ebe8a8a6232c203b18cbbcff2334120a5bbe8ffa6bd51d5/resolv.conf as [nameserver 10.96.0.10 search default.svc.cluster.local svc.cluster.local cluster.local europe-west1-b.c.k8s-minikube.internal c.k8s-minikube.internal google.internal options ndots:5]"
	Sep 18 19:51:37 addons-457129 dockerd[1341]: time="2024-09-18T19:51:37.293622066Z" level=info msg="ignoring event" container=b322925f6b0825948d970e7f820a5df93e1fc6490b13db0a3dcfe0d1702c304d module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Sep 18 19:51:37 addons-457129 dockerd[1341]: time="2024-09-18T19:51:37.342124932Z" level=info msg="ignoring event" container=ab4cf8bc40d313df7fb7cca524a351535409277f536ab063ef4d2596935460c2 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Sep 18 19:51:38 addons-457129 cri-dockerd[1605]: time="2024-09-18T19:51:38Z" level=info msg="Stop pulling image docker.io/kicbase/echo-server:1.0: Status: Downloaded newer image for kicbase/echo-server:1.0"
	Sep 18 19:51:41 addons-457129 dockerd[1341]: time="2024-09-18T19:51:41.564963998Z" level=info msg="Container failed to exit within 2s of signal 15 - using the force" container=df1d7bfc470b870526be4429c625921ed400074cea89715731ea7d841e39c960
	Sep 18 19:51:41 addons-457129 dockerd[1341]: time="2024-09-18T19:51:41.630372382Z" level=info msg="ignoring event" container=df1d7bfc470b870526be4429c625921ed400074cea89715731ea7d841e39c960 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Sep 18 19:51:41 addons-457129 dockerd[1341]: time="2024-09-18T19:51:41.766086673Z" level=info msg="ignoring event" container=6b593e84835c84e72fa379fb1ccddd2d5bfdf994aaf91012f54137a21da458fe module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Sep 18 19:51:46 addons-457129 dockerd[1341]: time="2024-09-18T19:51:46.472872015Z" level=info msg="ignoring event" container=0f8d0d72f3c85439a030c543a9e8ce244636ee3d0aed98955322c358c79714d0 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Sep 18 19:51:46 addons-457129 dockerd[1341]: time="2024-09-18T19:51:46.954959018Z" level=info msg="ignoring event" container=4c486a2ba62d5383cdd72143b513fdb49214075aa93a3bd189fb34a1beaa9db0 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Sep 18 19:51:47 addons-457129 dockerd[1341]: time="2024-09-18T19:51:47.019137879Z" level=info msg="ignoring event" container=5470624e82176db770c949f73a99c574ae9a1480198b8a9af84163161198b0e0 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Sep 18 19:51:47 addons-457129 dockerd[1341]: time="2024-09-18T19:51:47.140728783Z" level=info msg="ignoring event" container=771592a28b29e4c807fdd12471c05d9498f9c2f847f1ad3a6f3869d3be91b168 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Sep 18 19:51:47 addons-457129 cri-dockerd[1605]: time="2024-09-18T19:51:47Z" level=info msg="Failed to read pod IP from plugin/docker: networkPlugin cni failed on the status hook for pod \"registry-66c9cd494c-kh4r9_kube-system\": unexpected command output Device \"eth0\" does not exist.\n with error: exit status 1"
	Sep 18 19:51:47 addons-457129 cri-dockerd[1605]: time="2024-09-18T19:51:47Z" level=info msg="Failed to read pod IP from plugin/docker: networkPlugin cni failed on the status hook for pod \"registry-proxy-cnkhj_kube-system\": unexpected command output Device \"eth0\" does not exist.\n with error: exit status 1"
	Sep 18 19:51:47 addons-457129 dockerd[1341]: time="2024-09-18T19:51:47.210491602Z" level=info msg="ignoring event" container=29ac5773e878d1951d033f111e09a98ec346a63ef47f1ca9b3a5c5c3366c708b module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                                        CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	b196be8aab02a       kicbase/echo-server@sha256:127ac38a2bb9537b7f252addff209ea6801edcac8a92c8b1104dacd66a583ed6                                  9 seconds ago       Running             hello-world-app           0                   ff558cf262610       hello-world-app-55bf9c44b4-vdlrw
	308a947d4f51a       nginx@sha256:a5127daff3d6f4606be3100a252419bfa84fd6ee5cd74d0feaca1a5068f97dcf                                                19 seconds ago      Running             nginx                     0                   2d946c75d6e81       nginx
	647589918e7d3       a416a98b71e22                                                                                                                50 seconds ago      Exited              helper-pod                0                   cc6036ab2cf89       helper-pod-delete-pvc-6c5c3a13-dc76-4ea5-ae23-b00403f48891
	281e449b44a46       gcr.io/k8s-minikube/gcp-auth-webhook@sha256:e6c5b3bc32072ea370d34c27836efd11b3519d25bd444c2a8efc339cff0e20fb                 9 minutes ago       Running             gcp-auth                  0                   867a8bdf3438b       gcp-auth-89d5ffd79-svg79
	2b15be079b6e6       ce263a8653f9c                                                                                                                11 minutes ago      Exited              patch                     1                   72fc6c00fc7b1       ingress-nginx-admission-patch-j8ftn
	0b315ce69998d       registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:a320a50cc91bd15fd2d6fa6de58bd98c1bd64b9a6f926ce23a600d87043455a3   11 minutes ago      Exited              create                    0                   c6c2cc5bc26e9       ingress-nginx-admission-create-xtm5n
	5470624e82176       gcr.io/k8s-minikube/kube-registry-proxy@sha256:b3fa0b2df8737fdb85ad5918a7e2652527463e357afff83a5e5bb966bcedc367              12 minutes ago      Exited              registry-proxy            0                   29ac5773e878d       registry-proxy-cnkhj
	4c486a2ba62d5       registry@sha256:ac0192b549007e22998eb74e8d8488dcfe70f1489520c3b144a6047ac5efbe90                                             12 minutes ago      Exited              registry                  0                   771592a28b29e       registry-66c9cd494c-kh4r9
	2158d332c2787       6e38f40d628db                                                                                                                12 minutes ago      Running             storage-provisioner       0                   862e35b78e5db       storage-provisioner
	520375bbab8c8       c69fa2e9cbf5f                                                                                                                12 minutes ago      Running             coredns                   0                   f7ce44fe4c3d7       coredns-7c65d6cfc9-qw624
	48554c8d2b4c3       60c005f310ff3                                                                                                                12 minutes ago      Running             kube-proxy                0                   8ec769d31087d       kube-proxy-xk9xc
	b53b2c10175a3       175ffd71cce3d                                                                                                                12 minutes ago      Running             kube-controller-manager   0                   8839cc09d838e       kube-controller-manager-addons-457129
	90ef02d186701       9aa1fad941575                                                                                                                12 minutes ago      Running             kube-scheduler            0                   826160e2b63a0       kube-scheduler-addons-457129
	706c4839867d6       2e96e5913fc06                                                                                                                12 minutes ago      Running             etcd                      0                   ad2ee4620ea4a       etcd-addons-457129
	9a08629dca4d5       6bab7719df100                                                                                                                12 minutes ago      Running             kube-apiserver            0                   e1ac926cf3599       kube-apiserver-addons-457129
	
	
	==> coredns [520375bbab8c] <==
	[INFO] 10.244.0.22:53549 - 39122 "AAAA IN hello-world-app.default.svc.cluster.local.c.k8s-minikube.internal. udp 83 false 512" NXDOMAIN qr,rd,ra 83 0.066222524s
	[INFO] 10.244.0.22:55262 - 43737 "AAAA IN hello-world-app.default.svc.cluster.local.c.k8s-minikube.internal. udp 83 false 512" NXDOMAIN qr,rd,ra 83 0.066317318s
	[INFO] 10.244.0.22:41193 - 26256 "A IN hello-world-app.default.svc.cluster.local. udp 59 false 512" NOERROR qr,aa,rd 116 0.00007456s
	[INFO] 10.244.0.22:48521 - 5664 "AAAA IN hello-world-app.default.svc.cluster.local.c.k8s-minikube.internal. udp 83 false 512" NXDOMAIN qr,rd,ra 83 0.005061844s
	[INFO] 10.244.0.22:59634 - 54805 "A IN hello-world-app.default.svc.cluster.local.google.internal. udp 75 false 512" NXDOMAIN qr,rd,ra 75 0.005947442s
	[INFO] 10.244.0.22:38386 - 64945 "A IN hello-world-app.default.svc.cluster.local.google.internal. udp 75 false 512" NXDOMAIN qr,rd,ra 75 0.006511268s
	[INFO] 10.244.0.22:41454 - 59725 "A IN hello-world-app.default.svc.cluster.local.google.internal. udp 75 false 512" NXDOMAIN qr,rd,ra 75 0.006191497s
	[INFO] 10.244.0.22:35656 - 63415 "A IN hello-world-app.default.svc.cluster.local.google.internal. udp 75 false 512" NXDOMAIN qr,rd,ra 75 0.006348565s
	[INFO] 10.244.0.22:53549 - 11700 "A IN hello-world-app.default.svc.cluster.local.google.internal. udp 75 false 512" NXDOMAIN qr,rd,ra 75 0.006280856s
	[INFO] 10.244.0.22:55262 - 47043 "A IN hello-world-app.default.svc.cluster.local.google.internal. udp 75 false 512" NXDOMAIN qr,rd,ra 75 0.006324048s
	[INFO] 10.244.0.22:48521 - 14502 "A IN hello-world-app.default.svc.cluster.local.google.internal. udp 75 false 512" NXDOMAIN qr,rd,ra 75 0.004489403s
	[INFO] 10.244.0.22:53549 - 46889 "AAAA IN hello-world-app.default.svc.cluster.local.google.internal. udp 75 false 512" NXDOMAIN qr,rd,ra 75 0.005352785s
	[INFO] 10.244.0.22:35656 - 3931 "AAAA IN hello-world-app.default.svc.cluster.local.google.internal. udp 75 false 512" NXDOMAIN qr,rd,ra 75 0.004623446s
	[INFO] 10.244.0.22:48521 - 14716 "AAAA IN hello-world-app.default.svc.cluster.local.google.internal. udp 75 false 512" NXDOMAIN qr,rd,ra 75 0.001996261s
	[INFO] 10.244.0.22:41454 - 40135 "AAAA IN hello-world-app.default.svc.cluster.local.google.internal. udp 75 false 512" NXDOMAIN qr,rd,ra 75 0.004213724s
	[INFO] 10.244.0.22:59634 - 23446 "AAAA IN hello-world-app.default.svc.cluster.local.google.internal. udp 75 false 512" NXDOMAIN qr,rd,ra 75 0.005434644s
	[INFO] 10.244.0.22:53549 - 30411 "A IN hello-world-app.default.svc.cluster.local. udp 59 false 512" NOERROR qr,aa,rd 116 0.000078176s
	[INFO] 10.244.0.22:38386 - 49884 "AAAA IN hello-world-app.default.svc.cluster.local.google.internal. udp 75 false 512" NXDOMAIN qr,rd,ra 75 0.005692955s
	[INFO] 10.244.0.22:41454 - 57892 "A IN hello-world-app.default.svc.cluster.local. udp 59 false 512" NOERROR qr,aa,rd 116 0.00010086s
	[INFO] 10.244.0.22:35656 - 48392 "A IN hello-world-app.default.svc.cluster.local. udp 59 false 512" NOERROR qr,aa,rd 116 0.000059121s
	[INFO] 10.244.0.22:48521 - 38680 "A IN hello-world-app.default.svc.cluster.local. udp 59 false 512" NOERROR qr,aa,rd 116 0.000064209s
	[INFO] 10.244.0.22:38386 - 31313 "A IN hello-world-app.default.svc.cluster.local. udp 59 false 512" NOERROR qr,aa,rd 116 0.000040068s
	[INFO] 10.244.0.22:59634 - 33788 "A IN hello-world-app.default.svc.cluster.local. udp 59 false 512" NOERROR qr,aa,rd 116 0.000069708s
	[INFO] 10.244.0.22:55262 - 43125 "AAAA IN hello-world-app.default.svc.cluster.local.google.internal. udp 75 false 512" NXDOMAIN qr,rd,ra 75 0.004802538s
	[INFO] 10.244.0.22:55262 - 12964 "A IN hello-world-app.default.svc.cluster.local. udp 59 false 512" NOERROR qr,aa,rd 116 0.000076087s
	
	
	==> describe nodes <==
	Name:               addons-457129
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=addons-457129
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=85073601a832bd4bbda5d11fa91feafff6ec6b91
	                    minikube.k8s.io/name=addons-457129
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_09_18T19_39_00_0700
	                    minikube.k8s.io/version=v1.34.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	                    topology.hostpath.csi/node=addons-457129
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/cri-dockerd.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Wed, 18 Sep 2024 19:38:56 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  addons-457129
	  AcquireTime:     <unset>
	  RenewTime:       Wed, 18 Sep 2024 19:51:44 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Wed, 18 Sep 2024 19:51:33 +0000   Wed, 18 Sep 2024 19:38:55 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Wed, 18 Sep 2024 19:51:33 +0000   Wed, 18 Sep 2024 19:38:55 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Wed, 18 Sep 2024 19:51:33 +0000   Wed, 18 Sep 2024 19:38:55 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Wed, 18 Sep 2024 19:51:33 +0000   Wed, 18 Sep 2024 19:38:57 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.49.2
	  Hostname:    addons-457129
	Capacity:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32859320Ki
	  pods:               110
	Allocatable:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32859320Ki
	  pods:               110
	System Info:
	  Machine ID:                 672a2153f81848b0b6aa92e77d87fdcb
	  System UUID:                a546e2b2-4d4f-424e-84c6-7e85755f65c3
	  Boot ID:                    d3463f46-6a21-414a-b4ed-44cb759d1998
	  Kernel Version:             5.15.0-1069-gcp
	  OS Image:                   Ubuntu 22.04.5 LTS
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  docker://27.2.1
	  Kubelet Version:            v1.31.1
	  Kube-Proxy Version:         v1.31.1
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (11 in total)
	  Namespace                   Name                                     CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                     ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         9m15s
	  default                     hello-world-app-55bf9c44b4-vdlrw         0 (0%)        0 (0%)      0 (0%)           0 (0%)         11s
	  default                     nginx                                    0 (0%)        0 (0%)      0 (0%)           0 (0%)         22s
	  gcp-auth                    gcp-auth-89d5ffd79-svg79                 0 (0%)        0 (0%)      0 (0%)           0 (0%)         11m
	  kube-system                 coredns-7c65d6cfc9-qw624                 100m (1%)     0 (0%)      70Mi (0%)        170Mi (0%)     12m
	  kube-system                 etcd-addons-457129                       100m (1%)     0 (0%)      100Mi (0%)       0 (0%)         12m
	  kube-system                 kube-apiserver-addons-457129             250m (3%)     0 (0%)      0 (0%)           0 (0%)         12m
	  kube-system                 kube-controller-manager-addons-457129    200m (2%)     0 (0%)      0 (0%)           0 (0%)         12m
	  kube-system                 kube-proxy-xk9xc                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         12m
	  kube-system                 kube-scheduler-addons-457129             100m (1%)     0 (0%)      0 (0%)           0 (0%)         12m
	  kube-system                 storage-provisioner                      0 (0%)        0 (0%)      0 (0%)           0 (0%)         12m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (9%)   0 (0%)
	  memory             170Mi (0%)  170Mi (0%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type     Reason                   Age                From             Message
	  ----     ------                   ----               ----             -------
	  Normal   Starting                 12m                kube-proxy       
	  Normal   NodeAllocatableEnforced  12m                kubelet          Updated Node Allocatable limit across pods
	  Warning  CgroupV1                 12m                kubelet          Cgroup v1 support is in maintenance mode, please migrate to Cgroup v2.
	  Normal   NodeHasSufficientMemory  12m (x8 over 12m)  kubelet          Node addons-457129 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    12m (x7 over 12m)  kubelet          Node addons-457129 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     12m (x7 over 12m)  kubelet          Node addons-457129 status is now: NodeHasSufficientPID
	  Normal   Starting                 12m                kubelet          Starting kubelet.
	  Normal   Starting                 12m                kubelet          Starting kubelet.
	  Warning  CgroupV1                 12m                kubelet          Cgroup v1 support is in maintenance mode, please migrate to Cgroup v2.
	  Normal   NodeAllocatableEnforced  12m                kubelet          Updated Node Allocatable limit across pods
	  Normal   NodeHasSufficientMemory  12m                kubelet          Node addons-457129 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    12m                kubelet          Node addons-457129 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     12m                kubelet          Node addons-457129 status is now: NodeHasSufficientPID
	  Normal   RegisteredNode           12m                node-controller  Node addons-457129 event: Registered Node addons-457129 in Controller
	
	
	==> dmesg <==
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff ba 78 8b 1e 9b 05 08 06
	[  +2.987014] IPv4: martian source 10.244.0.1 from 10.244.0.18, on dev eth0
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff 96 40 e6 e6 d8 8a 08 06
	[  +6.085908] IPv4: martian source 10.244.0.1 from 10.244.0.21, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff 0a 2e de 0a 23 fd 08 06
	[  +0.162722] IPv4: martian source 10.244.0.1 from 10.244.0.20, on dev eth0
	[  +0.000008] ll header: 00000000: ff ff ff ff ff ff 0e 64 95 17 7d b3 08 06
	[  +0.187511] IPv4: martian source 10.244.0.1 from 10.244.0.19, on dev eth0
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff 6a 88 c3 ed 54 5f 08 06
	[  +8.959933] IPv4: martian source 10.244.0.1 from 10.244.0.22, on dev eth0
	[  +0.000005] ll header: 00000000: ff ff ff ff ff ff 3a c9 dc 3e c3 0e 08 06
	[Sep18 19:41] IPv4: martian source 10.244.0.1 from 10.244.0.24, on dev eth0
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff e6 bd b0 c3 ad b7 08 06
	[  +0.082010] IPv4: martian source 10.244.0.1 from 10.244.0.25, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff 22 de 04 87 a8 70 08 06
	[ +27.408104] IPv4: martian source 10.244.0.1 from 10.244.0.26, on dev eth0
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff 82 a0 9f 7a 90 e3 08 06
	[  +0.000449] IPv4: martian source 10.244.0.26 from 10.244.0.3, on dev eth0
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff 0a 3c 9f 25 06 d5 08 06
	[Sep18 19:50] IPv4: martian source 10.244.0.1 from 10.244.0.29, on dev eth0
	[  +0.000008] ll header: 00000000: ff ff ff ff ff ff f2 b6 62 f6 bb 32 08 06
	[Sep18 19:51] IPv4: martian source 10.244.0.37 from 10.244.0.22, on dev eth0
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff 3a c9 dc 3e c3 0e 08 06
	[  +1.743433] IPv4: martian source 10.244.0.22 from 10.244.0.3, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff 0a 3c 9f 25 06 d5 08 06
	
	
	==> etcd [706c4839867d] <==
	{"level":"info","ts":"2024-09-18T19:38:55.810610Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aec36adc501070cc became pre-candidate at term 1"}
	{"level":"info","ts":"2024-09-18T19:38:55.810649Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aec36adc501070cc received MsgPreVoteResp from aec36adc501070cc at term 1"}
	{"level":"info","ts":"2024-09-18T19:38:55.810668Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aec36adc501070cc became candidate at term 2"}
	{"level":"info","ts":"2024-09-18T19:38:55.810674Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aec36adc501070cc received MsgVoteResp from aec36adc501070cc at term 2"}
	{"level":"info","ts":"2024-09-18T19:38:55.810682Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aec36adc501070cc became leader at term 2"}
	{"level":"info","ts":"2024-09-18T19:38:55.810689Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: aec36adc501070cc elected leader aec36adc501070cc at term 2"}
	{"level":"info","ts":"2024-09-18T19:38:55.811748Z","caller":"etcdserver/server.go:2118","msg":"published local member to cluster through raft","local-member-id":"aec36adc501070cc","local-member-attributes":"{Name:addons-457129 ClientURLs:[https://192.168.49.2:2379]}","request-path":"/0/members/aec36adc501070cc/attributes","cluster-id":"fa54960ea34d58be","publish-timeout":"7s"}
	{"level":"info","ts":"2024-09-18T19:38:55.811807Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-09-18T19:38:55.811830Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-09-18T19:38:55.811906Z","caller":"etcdserver/server.go:2629","msg":"setting up initial cluster version using v2 API","cluster-version":"3.5"}
	{"level":"info","ts":"2024-09-18T19:38:55.811949Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2024-09-18T19:38:55.811966Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2024-09-18T19:38:55.812673Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"fa54960ea34d58be","local-member-id":"aec36adc501070cc","cluster-version":"3.5"}
	{"level":"info","ts":"2024-09-18T19:38:55.812713Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2024-09-18T19:38:55.812763Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2024-09-18T19:38:55.812832Z","caller":"etcdserver/server.go:2653","msg":"cluster version is updated","cluster-version":"3.5"}
	{"level":"info","ts":"2024-09-18T19:38:55.812763Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2024-09-18T19:38:55.814187Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2024-09-18T19:38:55.814264Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.49.2:2379"}
	{"level":"info","ts":"2024-09-18T19:39:09.910437Z","caller":"traceutil/trace.go:171","msg":"trace[1408545621] transaction","detail":"{read_only:false; response_revision:542; number_of_response:1; }","duration":"103.285543ms","start":"2024-09-18T19:39:09.807118Z","end":"2024-09-18T19:39:09.910403Z","steps":["trace[1408545621] 'process raft request'  (duration: 99.869111ms)"],"step_count":1}
	{"level":"warn","ts":"2024-09-18T19:40:01.530764Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"108.008815ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods\" limit:1 ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2024-09-18T19:40:01.530840Z","caller":"traceutil/trace.go:171","msg":"trace[893106699] range","detail":"{range_begin:/registry/pods; range_end:; response_count:0; response_revision:1177; }","duration":"108.107721ms","start":"2024-09-18T19:40:01.422721Z","end":"2024-09-18T19:40:01.530829Z","steps":["trace[893106699] 'range keys from in-memory index tree'  (duration: 107.946532ms)"],"step_count":1}
	{"level":"info","ts":"2024-09-18T19:48:55.833077Z","caller":"mvcc/index.go:214","msg":"compact tree index","revision":1913}
	{"level":"info","ts":"2024-09-18T19:48:55.859495Z","caller":"mvcc/kvstore_compaction.go:69","msg":"finished scheduled compaction","compact-revision":1913,"took":"25.892845ms","hash":684421458,"current-db-size-bytes":9154560,"current-db-size":"9.2 MB","current-db-size-in-use-bytes":5021696,"current-db-size-in-use":"5.0 MB"}
	{"level":"info","ts":"2024-09-18T19:48:55.859543Z","caller":"mvcc/hash.go:137","msg":"storing new hash","hash":684421458,"revision":1913,"compact-revision":-1}
	
	
	==> gcp-auth [281e449b44a4] <==
	2024/09/18 19:42:32 Ready to write response ...
	2024/09/18 19:50:40 Ready to marshal response ...
	2024/09/18 19:50:40 Ready to write response ...
	2024/09/18 19:50:46 Ready to marshal response ...
	2024/09/18 19:50:46 Ready to write response ...
	2024/09/18 19:50:46 Ready to marshal response ...
	2024/09/18 19:50:46 Ready to write response ...
	2024/09/18 19:50:46 Ready to marshal response ...
	2024/09/18 19:50:46 Ready to write response ...
	2024/09/18 19:50:54 Ready to marshal response ...
	2024/09/18 19:50:54 Ready to write response ...
	2024/09/18 19:50:56 Ready to marshal response ...
	2024/09/18 19:50:56 Ready to write response ...
	2024/09/18 19:50:58 Ready to marshal response ...
	2024/09/18 19:50:58 Ready to write response ...
	2024/09/18 19:50:58 Ready to marshal response ...
	2024/09/18 19:50:58 Ready to write response ...
	2024/09/18 19:50:58 Ready to marshal response ...
	2024/09/18 19:50:58 Ready to write response ...
	2024/09/18 19:51:07 Ready to marshal response ...
	2024/09/18 19:51:07 Ready to write response ...
	2024/09/18 19:51:25 Ready to marshal response ...
	2024/09/18 19:51:25 Ready to write response ...
	2024/09/18 19:51:36 Ready to marshal response ...
	2024/09/18 19:51:36 Ready to write response ...
	
	
	==> kernel <==
	 19:51:48 up 34 min,  0 users,  load average: 1.00, 0.64, 0.51
	Linux addons-457129 5.15.0-1069-gcp #77~20.04.1-Ubuntu SMP Sun Sep 1 19:39:16 UTC 2024 x86_64 x86_64 x86_64 GNU/Linux
	PRETTY_NAME="Ubuntu 22.04.5 LTS"
	
	
	==> kube-apiserver [9a08629dca4d] <==
	W0918 19:42:24.524204       1 cacher.go:171] Terminating all watchers from cacher queues.scheduling.volcano.sh
	W0918 19:42:24.629252       1 cacher.go:171] Terminating all watchers from cacher jobs.batch.volcano.sh
	W0918 19:42:24.933501       1 cacher.go:171] Terminating all watchers from cacher jobflows.flow.volcano.sh
	W0918 19:42:25.282794       1 cacher.go:171] Terminating all watchers from cacher jobtemplates.flow.volcano.sh
	I0918 19:50:58.026545       1 alloc.go:330] "allocated clusterIPs" service="headlamp/headlamp" clusterIPs={"IPv4":"10.103.109.71"}
	I0918 19:51:03.574284       1 controller.go:615] quota admission added evaluator for: volumesnapshots.snapshot.storage.k8s.io
	E0918 19:51:12.552213       1 authentication.go:73] "Unable to authenticate the request" err="[invalid bearer token, serviceaccounts \"local-path-provisioner-service-account\" not found]"
	I0918 19:51:24.387811       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0918 19:51:24.387861       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I0918 19:51:24.401226       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0918 19:51:24.401280       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I0918 19:51:24.402784       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0918 19:51:24.402827       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I0918 19:51:24.412390       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0918 19:51:24.412455       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I0918 19:51:24.424279       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0918 19:51:24.424321       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I0918 19:51:24.888866       1 controller.go:615] quota admission added evaluator for: ingresses.networking.k8s.io
	I0918 19:51:25.055030       1 alloc.go:330] "allocated clusterIPs" service="default/nginx" clusterIPs={"IPv4":"10.105.100.245"}
	W0918 19:51:25.403908       1 cacher.go:171] Terminating all watchers from cacher volumesnapshotclasses.snapshot.storage.k8s.io
	W0918 19:51:25.425291       1 cacher.go:171] Terminating all watchers from cacher volumesnapshotcontents.snapshot.storage.k8s.io
	W0918 19:51:25.435129       1 cacher.go:171] Terminating all watchers from cacher volumesnapshots.snapshot.storage.k8s.io
	I0918 19:51:27.878485       1 handler.go:286] Adding GroupVersion gadget.kinvolk.io v1alpha1 to ResourceManager
	W0918 19:51:28.893800       1 cacher.go:171] Terminating all watchers from cacher traces.gadget.kinvolk.io
	I0918 19:51:36.613342       1 alloc.go:330] "allocated clusterIPs" service="default/hello-world-app" clusterIPs={"IPv4":"10.100.3.253"}
	
	
	==> kube-controller-manager [b53b2c10175a] <==
	I0918 19:51:34.336690       1 shared_informer.go:320] Caches are synced for garbage collector
	I0918 19:51:36.440441       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/hello-world-app-55bf9c44b4" duration="14.444326ms"
	I0918 19:51:36.445572       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/hello-world-app-55bf9c44b4" duration="5.081273ms"
	I0918 19:51:36.445656       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/hello-world-app-55bf9c44b4" duration="45.09µs"
	I0918 19:51:36.451833       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/hello-world-app-55bf9c44b4" duration="39.869µs"
	W0918 19:51:37.215415       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0918 19:51:37.215455       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	I0918 19:51:37.984504       1 namespace_controller.go:187] "Namespace has been deleted" logger="namespace-controller" namespace="gadget"
	I0918 19:51:38.540849       1 job_controller.go:568] "enqueueing job" logger="job-controller" key="ingress-nginx/ingress-nginx-admission-create" delay="0s"
	I0918 19:51:38.542428       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="ingress-nginx/ingress-nginx-controller-bc57996ff" duration="9.087µs"
	I0918 19:51:38.545043       1 job_controller.go:568] "enqueueing job" logger="job-controller" key="ingress-nginx/ingress-nginx-admission-patch" delay="0s"
	I0918 19:51:39.995096       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/hello-world-app-55bf9c44b4" duration="5.057838ms"
	I0918 19:51:39.995190       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/hello-world-app-55bf9c44b4" duration="52.278µs"
	W0918 19:51:41.090120       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0918 19:51:41.090159       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W0918 19:51:42.474152       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0918 19:51:42.474197       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W0918 19:51:42.525760       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0918 19:51:42.525803       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W0918 19:51:43.753618       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0918 19:51:43.753665       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W0918 19:51:44.114022       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0918 19:51:44.114078       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	I0918 19:51:44.902343       1 namespace_controller.go:187] "Namespace has been deleted" logger="namespace-controller" namespace="local-path-storage"
	I0918 19:51:46.919526       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/registry-66c9cd494c" duration="9.094µs"
	
	
	==> kube-proxy [48554c8d2b4c] <==
	I0918 19:39:08.029487       1 server_linux.go:66] "Using iptables proxy"
	I0918 19:39:08.517511       1 server.go:677] "Successfully retrieved node IP(s)" IPs=["192.168.49.2"]
	E0918 19:39:08.517580       1 server.go:234] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I0918 19:39:09.016073       1 server.go:243] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I0918 19:39:09.016153       1 server_linux.go:169] "Using iptables Proxier"
	I0918 19:39:09.116193       1 proxier.go:255] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I0918 19:39:09.116939       1 server.go:483] "Version info" version="v1.31.1"
	I0918 19:39:09.116967       1 server.go:485] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0918 19:39:09.207346       1 config.go:199] "Starting service config controller"
	I0918 19:39:09.207416       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0918 19:39:09.207464       1 config.go:105] "Starting endpoint slice config controller"
	I0918 19:39:09.207469       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0918 19:39:09.311125       1 config.go:328] "Starting node config controller"
	I0918 19:39:09.311166       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0918 19:39:09.412936       1 shared_informer.go:320] Caches are synced for node config
	I0918 19:39:09.417216       1 shared_informer.go:320] Caches are synced for endpoint slice config
	I0918 19:39:09.507916       1 shared_informer.go:320] Caches are synced for service config
	
	
	==> kube-scheduler [90ef02d18670] <==
	W0918 19:38:56.824797       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	W0918 19:38:56.824802       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E0918 19:38:56.824814       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError"
	E0918 19:38:56.824820       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0918 19:38:56.824908       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E0918 19:38:56.824934       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError"
	W0918 19:38:56.824916       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E0918 19:38:56.824974       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError"
	W0918 19:38:56.825033       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E0918 19:38:56.825057       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0918 19:38:56.825111       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E0918 19:38:56.825131       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0918 19:38:57.686731       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E0918 19:38:57.686774       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0918 19:38:57.741328       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E0918 19:38:57.741373       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0918 19:38:57.766648       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E0918 19:38:57.766682       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0918 19:38:57.810182       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E0918 19:38:57.810217       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W0918 19:38:57.941184       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E0918 19:38:57.941231       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0918 19:38:57.974520       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E0918 19:38:57.974556       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError"
	I0918 19:38:58.322705       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kubelet <==
	Sep 18 19:51:41 addons-457129 kubelet[2447]: I0918 19:51:41.928453    2447 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/63c9d7fe-35bd-4195-aaf1-14b4cd27848f-kube-api-access-72rls" (OuterVolumeSpecName: "kube-api-access-72rls") pod "63c9d7fe-35bd-4195-aaf1-14b4cd27848f" (UID: "63c9d7fe-35bd-4195-aaf1-14b4cd27848f"). InnerVolumeSpecName "kube-api-access-72rls". PluginName "kubernetes.io/projected", VolumeGidValue ""
	Sep 18 19:51:41 addons-457129 kubelet[2447]: I0918 19:51:41.928446    2447 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/63c9d7fe-35bd-4195-aaf1-14b4cd27848f-webhook-cert" (OuterVolumeSpecName: "webhook-cert") pod "63c9d7fe-35bd-4195-aaf1-14b4cd27848f" (UID: "63c9d7fe-35bd-4195-aaf1-14b4cd27848f"). InnerVolumeSpecName "webhook-cert". PluginName "kubernetes.io/secret", VolumeGidValue ""
	Sep 18 19:51:42 addons-457129 kubelet[2447]: I0918 19:51:42.005089    2447 scope.go:117] "RemoveContainer" containerID="df1d7bfc470b870526be4429c625921ed400074cea89715731ea7d841e39c960"
	Sep 18 19:51:42 addons-457129 kubelet[2447]: I0918 19:51:42.019370    2447 scope.go:117] "RemoveContainer" containerID="df1d7bfc470b870526be4429c625921ed400074cea89715731ea7d841e39c960"
	Sep 18 19:51:42 addons-457129 kubelet[2447]: E0918 19:51:42.020153    2447 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = Unknown desc = Error response from daemon: No such container: df1d7bfc470b870526be4429c625921ed400074cea89715731ea7d841e39c960" containerID="df1d7bfc470b870526be4429c625921ed400074cea89715731ea7d841e39c960"
	Sep 18 19:51:42 addons-457129 kubelet[2447]: I0918 19:51:42.020202    2447 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"docker","ID":"df1d7bfc470b870526be4429c625921ed400074cea89715731ea7d841e39c960"} err="failed to get container status \"df1d7bfc470b870526be4429c625921ed400074cea89715731ea7d841e39c960\": rpc error: code = Unknown desc = Error response from daemon: No such container: df1d7bfc470b870526be4429c625921ed400074cea89715731ea7d841e39c960"
	Sep 18 19:51:42 addons-457129 kubelet[2447]: I0918 19:51:42.027461    2447 reconciler_common.go:288] "Volume detached for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/63c9d7fe-35bd-4195-aaf1-14b4cd27848f-webhook-cert\") on node \"addons-457129\" DevicePath \"\""
	Sep 18 19:51:42 addons-457129 kubelet[2447]: I0918 19:51:42.027492    2447 reconciler_common.go:288] "Volume detached for volume \"kube-api-access-72rls\" (UniqueName: \"kubernetes.io/projected/63c9d7fe-35bd-4195-aaf1-14b4cd27848f-kube-api-access-72rls\") on node \"addons-457129\" DevicePath \"\""
	Sep 18 19:51:43 addons-457129 kubelet[2447]: I0918 19:51:43.317249    2447 kubelet_pods.go:1007] "Unable to retrieve pull secret, the image pull may not succeed." pod="kube-system/registry-proxy-cnkhj" secret="" err="secret \"gcp-auth\" not found"
	Sep 18 19:51:43 addons-457129 kubelet[2447]: E0918 19:51:43.319077    2447 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"registry-test\" with ImagePullBackOff: \"Back-off pulling image \\\"gcr.io/k8s-minikube/busybox\\\"\"" pod="default/registry-test" podUID="1dfdec76-47fb-4fa9-8ca4-15f9c0ffbbbe"
	Sep 18 19:51:43 addons-457129 kubelet[2447]: I0918 19:51:43.330644    2447 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="63c9d7fe-35bd-4195-aaf1-14b4cd27848f" path="/var/lib/kubelet/pods/63c9d7fe-35bd-4195-aaf1-14b4cd27848f/volumes"
	Sep 18 19:51:46 addons-457129 kubelet[2447]: E0918 19:51:46.318550    2447 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"busybox\" with ImagePullBackOff: \"Back-off pulling image \\\"gcr.io/k8s-minikube/busybox:1.28.4-glibc\\\"\"" pod="default/busybox" podUID="bea8b2a1-c35f-4acc-9aea-62afe7b40099"
	Sep 18 19:51:46 addons-457129 kubelet[2447]: I0918 19:51:46.654133    2447 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/1dfdec76-47fb-4fa9-8ca4-15f9c0ffbbbe-gcp-creds" (OuterVolumeSpecName: "gcp-creds") pod "1dfdec76-47fb-4fa9-8ca4-15f9c0ffbbbe" (UID: "1dfdec76-47fb-4fa9-8ca4-15f9c0ffbbbe"). InnerVolumeSpecName "gcp-creds". PluginName "kubernetes.io/host-path", VolumeGidValue ""
	Sep 18 19:51:46 addons-457129 kubelet[2447]: I0918 19:51:46.654135    2447 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"gcp-creds\" (UniqueName: \"kubernetes.io/host-path/1dfdec76-47fb-4fa9-8ca4-15f9c0ffbbbe-gcp-creds\") pod \"1dfdec76-47fb-4fa9-8ca4-15f9c0ffbbbe\" (UID: \"1dfdec76-47fb-4fa9-8ca4-15f9c0ffbbbe\") "
	Sep 18 19:51:46 addons-457129 kubelet[2447]: I0918 19:51:46.654211    2447 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-v2dfm\" (UniqueName: \"kubernetes.io/projected/1dfdec76-47fb-4fa9-8ca4-15f9c0ffbbbe-kube-api-access-v2dfm\") pod \"1dfdec76-47fb-4fa9-8ca4-15f9c0ffbbbe\" (UID: \"1dfdec76-47fb-4fa9-8ca4-15f9c0ffbbbe\") "
	Sep 18 19:51:46 addons-457129 kubelet[2447]: I0918 19:51:46.654315    2447 reconciler_common.go:288] "Volume detached for volume \"gcp-creds\" (UniqueName: \"kubernetes.io/host-path/1dfdec76-47fb-4fa9-8ca4-15f9c0ffbbbe-gcp-creds\") on node \"addons-457129\" DevicePath \"\""
	Sep 18 19:51:46 addons-457129 kubelet[2447]: I0918 19:51:46.656108    2447 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1dfdec76-47fb-4fa9-8ca4-15f9c0ffbbbe-kube-api-access-v2dfm" (OuterVolumeSpecName: "kube-api-access-v2dfm") pod "1dfdec76-47fb-4fa9-8ca4-15f9c0ffbbbe" (UID: "1dfdec76-47fb-4fa9-8ca4-15f9c0ffbbbe"). InnerVolumeSpecName "kube-api-access-v2dfm". PluginName "kubernetes.io/projected", VolumeGidValue ""
	Sep 18 19:51:46 addons-457129 kubelet[2447]: I0918 19:51:46.755409    2447 reconciler_common.go:288] "Volume detached for volume \"kube-api-access-v2dfm\" (UniqueName: \"kubernetes.io/projected/1dfdec76-47fb-4fa9-8ca4-15f9c0ffbbbe-kube-api-access-v2dfm\") on node \"addons-457129\" DevicePath \"\""
	Sep 18 19:51:47 addons-457129 kubelet[2447]: I0918 19:51:47.323700    2447 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1dfdec76-47fb-4fa9-8ca4-15f9c0ffbbbe" path="/var/lib/kubelet/pods/1dfdec76-47fb-4fa9-8ca4-15f9c0ffbbbe/volumes"
	Sep 18 19:51:47 addons-457129 kubelet[2447]: I0918 19:51:47.358689    2447 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8qc55\" (UniqueName: \"kubernetes.io/projected/e15cccd8-7fcb-48c9-9dc2-e79744e87759-kube-api-access-8qc55\") pod \"e15cccd8-7fcb-48c9-9dc2-e79744e87759\" (UID: \"e15cccd8-7fcb-48c9-9dc2-e79744e87759\") "
	Sep 18 19:51:47 addons-457129 kubelet[2447]: I0918 19:51:47.358741    2447 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-5w4bf\" (UniqueName: \"kubernetes.io/projected/b2944da3-d9b7-4de7-8a57-f934ec8b2970-kube-api-access-5w4bf\") pod \"b2944da3-d9b7-4de7-8a57-f934ec8b2970\" (UID: \"b2944da3-d9b7-4de7-8a57-f934ec8b2970\") "
	Sep 18 19:51:47 addons-457129 kubelet[2447]: I0918 19:51:47.360529    2447 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e15cccd8-7fcb-48c9-9dc2-e79744e87759-kube-api-access-8qc55" (OuterVolumeSpecName: "kube-api-access-8qc55") pod "e15cccd8-7fcb-48c9-9dc2-e79744e87759" (UID: "e15cccd8-7fcb-48c9-9dc2-e79744e87759"). InnerVolumeSpecName "kube-api-access-8qc55". PluginName "kubernetes.io/projected", VolumeGidValue ""
	Sep 18 19:51:47 addons-457129 kubelet[2447]: I0918 19:51:47.360596    2447 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b2944da3-d9b7-4de7-8a57-f934ec8b2970-kube-api-access-5w4bf" (OuterVolumeSpecName: "kube-api-access-5w4bf") pod "b2944da3-d9b7-4de7-8a57-f934ec8b2970" (UID: "b2944da3-d9b7-4de7-8a57-f934ec8b2970"). InnerVolumeSpecName "kube-api-access-5w4bf". PluginName "kubernetes.io/projected", VolumeGidValue ""
	Sep 18 19:51:47 addons-457129 kubelet[2447]: I0918 19:51:47.459877    2447 reconciler_common.go:288] "Volume detached for volume \"kube-api-access-8qc55\" (UniqueName: \"kubernetes.io/projected/e15cccd8-7fcb-48c9-9dc2-e79744e87759-kube-api-access-8qc55\") on node \"addons-457129\" DevicePath \"\""
	Sep 18 19:51:47 addons-457129 kubelet[2447]: I0918 19:51:47.459947    2447 reconciler_common.go:288] "Volume detached for volume \"kube-api-access-5w4bf\" (UniqueName: \"kubernetes.io/projected/b2944da3-d9b7-4de7-8a57-f934ec8b2970-kube-api-access-5w4bf\") on node \"addons-457129\" DevicePath \"\""
	
	
	==> storage-provisioner [2158d332c278] <==
	I0918 19:39:13.009574       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I0918 19:39:13.028274       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I0918 19:39:13.028324       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I0918 19:39:13.118382       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I0918 19:39:13.118576       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_addons-457129_177a6186-8cf5-479c-aa3e-e3c420f4cd6b!
	I0918 19:39:13.120070       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"923447e7-add4-4524-920c-813a29a5a966", APIVersion:"v1", ResourceVersion:"663", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' addons-457129_177a6186-8cf5-479c-aa3e-e3c420f4cd6b became leader
	I0918 19:39:13.219592       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_addons-457129_177a6186-8cf5-479c-aa3e-e3c420f4cd6b!
	

-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p addons-457129 -n addons-457129
helpers_test.go:261: (dbg) Run:  kubectl --context addons-457129 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:272: non-running pods: busybox
helpers_test.go:274: ======> post-mortem[TestAddons/parallel/Registry]: describe non-running pods <======
helpers_test.go:277: (dbg) Run:  kubectl --context addons-457129 describe pod busybox
helpers_test.go:282: (dbg) kubectl --context addons-457129 describe pod busybox:

-- stdout --
	Name:             busybox
	Namespace:        default
	Priority:         0
	Service Account:  default
	Node:             addons-457129/192.168.49.2
	Start Time:       Wed, 18 Sep 2024 19:42:32 +0000
	Labels:           integration-test=busybox
	Annotations:      <none>
	Status:           Pending
	IP:               10.244.0.28
	IPs:
	  IP:  10.244.0.28
	Containers:
	  busybox:
	    Container ID:  
	    Image:         gcr.io/k8s-minikube/busybox:1.28.4-glibc
	    Image ID:      
	    Port:          <none>
	    Host Port:     <none>
	    Command:
	      sleep
	      3600
	    State:          Waiting
	      Reason:       ImagePullBackOff
	    Ready:          False
	    Restart Count:  0
	    Environment:
	      GOOGLE_APPLICATION_CREDENTIALS:  /google-app-creds.json
	      PROJECT_ID:                      this_is_fake
	      GCP_PROJECT:                     this_is_fake
	      GCLOUD_PROJECT:                  this_is_fake
	      GOOGLE_CLOUD_PROJECT:            this_is_fake
	      CLOUDSDK_CORE_PROJECT:           this_is_fake
	    Mounts:
	      /google-app-creds.json from gcp-creds (ro)
	      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-p8tk4 (ro)
	Conditions:
	  Type                        Status
	  PodReadyToStartContainers   True 
	  Initialized                 True 
	  Ready                       False 
	  ContainersReady             False 
	  PodScheduled                True 
	Volumes:
	  kube-api-access-p8tk4:
	    Type:                    Projected (a volume that contains injected data from multiple sources)
	    TokenExpirationSeconds:  3607
	    ConfigMapName:           kube-root-ca.crt
	    ConfigMapOptional:       <nil>
	    DownwardAPI:             true
	  gcp-creds:
	    Type:          HostPath (bare host directory volume)
	    Path:          /var/lib/minikube/google_application_credentials.json
	    HostPathType:  File
	QoS Class:         BestEffort
	Node-Selectors:    <none>
	Tolerations:       node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
	                   node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
	Events:
	  Type     Reason     Age                    From               Message
	  ----     ------     ----                   ----               -------
	  Normal   Scheduled  9m16s                  default-scheduler  Successfully assigned default/busybox to addons-457129
	  Normal   Pulling    7m43s (x4 over 9m15s)  kubelet            Pulling image "gcr.io/k8s-minikube/busybox:1.28.4-glibc"
	  Warning  Failed     7m43s (x4 over 9m15s)  kubelet            Failed to pull image "gcr.io/k8s-minikube/busybox:1.28.4-glibc": Error response from daemon: Head "https://gcr.io/v2/k8s-minikube/busybox/manifests/1.28.4-glibc": unauthorized: authentication failed
	  Warning  Failed     7m43s (x4 over 9m15s)  kubelet            Error: ErrImagePull
	  Warning  Failed     7m28s (x6 over 9m15s)  kubelet            Error: ImagePullBackOff
	  Normal   BackOff    4m4s (x21 over 9m15s)  kubelet            Back-off pulling image "gcr.io/k8s-minikube/busybox:1.28.4-glibc"

-- /stdout --
helpers_test.go:285: <<< TestAddons/parallel/Registry FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
--- FAIL: TestAddons/parallel/Registry (73.51s)


Test pass (322/343)

Order passed test Duration
3 TestDownloadOnly/v1.20.0/json-events 13.08
4 TestDownloadOnly/v1.20.0/preload-exists 0
8 TestDownloadOnly/v1.20.0/LogsDuration 0.06
9 TestDownloadOnly/v1.20.0/DeleteAll 0.2
10 TestDownloadOnly/v1.20.0/DeleteAlwaysSucceeds 0.12
12 TestDownloadOnly/v1.31.1/json-events 11.96
13 TestDownloadOnly/v1.31.1/preload-exists 0
17 TestDownloadOnly/v1.31.1/LogsDuration 0.06
18 TestDownloadOnly/v1.31.1/DeleteAll 0.2
19 TestDownloadOnly/v1.31.1/DeleteAlwaysSucceeds 0.12
20 TestDownloadOnlyKic 0.98
21 TestBinaryMirror 0.76
22 TestOffline 71.42
25 TestAddons/PreSetup/EnablingAddonOnNonExistingCluster 0.05
26 TestAddons/PreSetup/DisablingAddonOnNonExistingCluster 0.05
27 TestAddons/Setup 208.35
29 TestAddons/serial/Volcano 40.79
31 TestAddons/serial/GCPAuth/Namespaces 0.11
34 TestAddons/parallel/Ingress 21.05
35 TestAddons/parallel/InspektorGadget 11.71
36 TestAddons/parallel/MetricsServer 6.66
37 TestAddons/parallel/HelmTiller 9.73
39 TestAddons/parallel/CSI 49.41
40 TestAddons/parallel/Headlamp 17.87
41 TestAddons/parallel/CloudSpanner 5.45
42 TestAddons/parallel/LocalPath 54.13
43 TestAddons/parallel/NvidiaDevicePlugin 6.42
44 TestAddons/parallel/Yakd 10.59
45 TestAddons/StoppedEnableDisable 11.08
46 TestCertOptions 33.97
47 TestCertExpiration 232.55
48 TestDockerFlags 31.12
49 TestForceSystemdFlag 34.04
50 TestForceSystemdEnv 30.45
52 TestKVMDriverInstallOrUpdate 3.59
56 TestErrorSpam/setup 24.28
57 TestErrorSpam/start 0.56
58 TestErrorSpam/status 0.85
59 TestErrorSpam/pause 1.15
60 TestErrorSpam/unpause 1.37
61 TestErrorSpam/stop 1.86
64 TestFunctional/serial/CopySyncFile 0
65 TestFunctional/serial/StartWithProxy 66.97
66 TestFunctional/serial/AuditLog 0
67 TestFunctional/serial/SoftStart 33.68
68 TestFunctional/serial/KubeContext 0.05
69 TestFunctional/serial/KubectlGetPods 0.07
72 TestFunctional/serial/CacheCmd/cache/add_remote 2.43
73 TestFunctional/serial/CacheCmd/cache/add_local 1.44
74 TestFunctional/serial/CacheCmd/cache/CacheDelete 0.05
75 TestFunctional/serial/CacheCmd/cache/list 0.05
76 TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node 0.27
77 TestFunctional/serial/CacheCmd/cache/cache_reload 1.29
78 TestFunctional/serial/CacheCmd/cache/delete 0.09
79 TestFunctional/serial/MinikubeKubectlCmd 0.1
80 TestFunctional/serial/MinikubeKubectlCmdDirectly 0.1
81 TestFunctional/serial/ExtraConfig 41.4
82 TestFunctional/serial/ComponentHealth 0.06
83 TestFunctional/serial/LogsCmd 0.94
84 TestFunctional/serial/LogsFileCmd 0.99
85 TestFunctional/serial/InvalidService 3.88
87 TestFunctional/parallel/ConfigCmd 0.36
88 TestFunctional/parallel/DashboardCmd 10.48
89 TestFunctional/parallel/DryRun 0.34
90 TestFunctional/parallel/InternationalLanguage 0.17
91 TestFunctional/parallel/StatusCmd 0.9
95 TestFunctional/parallel/ServiceCmdConnect 11.51
96 TestFunctional/parallel/AddonsCmd 0.14
97 TestFunctional/parallel/PersistentVolumeClaim 34.03
99 TestFunctional/parallel/SSHCmd 0.64
100 TestFunctional/parallel/CpCmd 1.95
101 TestFunctional/parallel/MySQL 27.17
102 TestFunctional/parallel/FileSync 0.27
103 TestFunctional/parallel/CertSync 1.61
107 TestFunctional/parallel/NodeLabels 0.07
109 TestFunctional/parallel/NonActiveRuntimeDisabled 0.25
111 TestFunctional/parallel/License 0.68
112 TestFunctional/parallel/ServiceCmd/DeployApp 9.2
114 TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel 0.54
115 TestFunctional/parallel/TunnelCmd/serial/StartTunnel 0
117 TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup 12.25
118 TestFunctional/parallel/ServiceCmd/List 0.51
119 TestFunctional/parallel/ServiceCmd/JSONOutput 0.51
120 TestFunctional/parallel/ServiceCmd/HTTPS 0.47
121 TestFunctional/parallel/ServiceCmd/Format 0.46
122 TestFunctional/parallel/ServiceCmd/URL 0.44
123 TestFunctional/parallel/Version/short 0.05
124 TestFunctional/parallel/Version/components 0.52
125 TestFunctional/parallel/ImageCommands/ImageListShort 0.21
126 TestFunctional/parallel/ImageCommands/ImageListTable 0.22
127 TestFunctional/parallel/ImageCommands/ImageListJson 0.28
128 TestFunctional/parallel/ImageCommands/ImageListYaml 0.24
129 TestFunctional/parallel/ImageCommands/ImageBuild 3.46
130 TestFunctional/parallel/ImageCommands/Setup 1.94
131 TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP 0.05
132 TestFunctional/parallel/TunnelCmd/serial/AccessDirect 0
136 TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel 0.11
137 TestFunctional/parallel/ProfileCmd/profile_not_create 0.41
138 TestFunctional/parallel/ProfileCmd/profile_list 0.39
139 TestFunctional/parallel/MountCmd/any-port 7.96
140 TestFunctional/parallel/ProfileCmd/profile_json_output 0.41
141 TestFunctional/parallel/ImageCommands/ImageLoadDaemon 0.99
142 TestFunctional/parallel/ImageCommands/ImageReloadDaemon 0.82
143 TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon 1.66
144 TestFunctional/parallel/ImageCommands/ImageSaveToFile 0.29
145 TestFunctional/parallel/ImageCommands/ImageRemove 0.39
146 TestFunctional/parallel/ImageCommands/ImageLoadFromFile 0.61
147 TestFunctional/parallel/ImageCommands/ImageSaveDaemon 0.33
148 TestFunctional/parallel/DockerEnv/bash 1.13
149 TestFunctional/parallel/MountCmd/specific-port 1.8
150 TestFunctional/parallel/UpdateContextCmd/no_changes 0.14
151 TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster 0.12
152 TestFunctional/parallel/UpdateContextCmd/no_clusters 0.12
153 TestFunctional/parallel/MountCmd/VerifyCleanup 1.71
154 TestFunctional/delete_echo-server_images 0.04
155 TestFunctional/delete_my-image_image 0.02
156 TestFunctional/delete_minikube_cached_images 0.02
160 TestMultiControlPlane/serial/StartCluster 102.52
161 TestMultiControlPlane/serial/DeployApp 6.41
162 TestMultiControlPlane/serial/PingHostFromPods 1.06
163 TestMultiControlPlane/serial/AddWorkerNode 19.99
164 TestMultiControlPlane/serial/NodeLabels 0.06
165 TestMultiControlPlane/serial/HAppyAfterClusterStart 0.85
166 TestMultiControlPlane/serial/CopyFile 15.61
167 TestMultiControlPlane/serial/StopSecondaryNode 11.34
168 TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop 0.67
169 TestMultiControlPlane/serial/RestartSecondaryNode 35.61
170 TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart 0.85
171 TestMultiControlPlane/serial/RestartClusterKeepsNodes 176.52
172 TestMultiControlPlane/serial/DeleteSecondaryNode 9.44
173 TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete 0.67
174 TestMultiControlPlane/serial/StopCluster 32.57
175 TestMultiControlPlane/serial/RestartCluster 81.68
176 TestMultiControlPlane/serial/DegradedAfterClusterRestart 0.65
177 TestMultiControlPlane/serial/AddSecondaryNode 34.05
178 TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd 0.84
181 TestImageBuild/serial/Setup 25.05
182 TestImageBuild/serial/NormalBuild 2.56
183 TestImageBuild/serial/BuildWithBuildArg 1.03
184 TestImageBuild/serial/BuildWithDockerIgnore 0.81
185 TestImageBuild/serial/BuildWithSpecifiedDockerfile 0.76
189 TestJSONOutput/start/Command 69.17
190 TestJSONOutput/start/Audit 0
192 TestJSONOutput/start/parallel/DistinctCurrentSteps 0
193 TestJSONOutput/start/parallel/IncreasingCurrentSteps 0
195 TestJSONOutput/pause/Command 0.54
196 TestJSONOutput/pause/Audit 0
198 TestJSONOutput/pause/parallel/DistinctCurrentSteps 0
199 TestJSONOutput/pause/parallel/IncreasingCurrentSteps 0
201 TestJSONOutput/unpause/Command 0.41
202 TestJSONOutput/unpause/Audit 0
204 TestJSONOutput/unpause/parallel/DistinctCurrentSteps 0
205 TestJSONOutput/unpause/parallel/IncreasingCurrentSteps 0
207 TestJSONOutput/stop/Command 5.71
208 TestJSONOutput/stop/Audit 0
210 TestJSONOutput/stop/parallel/DistinctCurrentSteps 0
211 TestJSONOutput/stop/parallel/IncreasingCurrentSteps 0
212 TestErrorJSONOutput 0.2
214 TestKicCustomNetwork/create_custom_network 26.83
215 TestKicCustomNetwork/use_default_bridge_network 24.02
216 TestKicExistingNetwork 25.8
217 TestKicCustomSubnet 26.38
218 TestKicStaticIP 26.1
219 TestMainNoArgs 0.04
220 TestMinikubeProfile 49.81
223 TestMountStart/serial/StartWithMountFirst 7.69
224 TestMountStart/serial/VerifyMountFirst 0.24
225 TestMountStart/serial/StartWithMountSecond 7.57
226 TestMountStart/serial/VerifyMountSecond 0.25
227 TestMountStart/serial/DeleteFirst 1.46
228 TestMountStart/serial/VerifyMountPostDelete 0.25
229 TestMountStart/serial/Stop 1.17
230 TestMountStart/serial/RestartStopped 8.88
231 TestMountStart/serial/VerifyMountPostStop 0.24
234 TestMultiNode/serial/FreshStart2Nodes 72.96
235 TestMultiNode/serial/DeployApp2Nodes 39.31
236 TestMultiNode/serial/PingHostFrom2Pods 0.72
237 TestMultiNode/serial/AddNode 18.96
238 TestMultiNode/serial/MultiNodeLabels 0.07
239 TestMultiNode/serial/ProfileList 0.68
240 TestMultiNode/serial/CopyFile 9.11
241 TestMultiNode/serial/StopNode 2.1
242 TestMultiNode/serial/StartAfterStop 9.84
243 TestMultiNode/serial/RestartKeepsNodes 96.51
244 TestMultiNode/serial/DeleteNode 5.17
245 TestMultiNode/serial/StopMultiNode 21.4
246 TestMultiNode/serial/RestartMultiNode 55.71
247 TestMultiNode/serial/ValidateNameConflict 24.11
252 TestPreload 114.64
254 TestScheduledStopUnix 93.57
255 TestSkaffold 106.59
257 TestInsufficientStorage 12.92
258 TestRunningBinaryUpgrade 101.65
260 TestKubernetesUpgrade 340.58
261 TestMissingContainerUpgrade 98.8
263 TestStoppedBinaryUpgrade/Setup 2.56
264 TestNoKubernetes/serial/StartNoK8sWithVersion 0.07
276 TestNoKubernetes/serial/StartWithK8s 30.86
277 TestStoppedBinaryUpgrade/Upgrade 152.17
285 TestNoKubernetes/serial/StartWithStopK8s 20.62
286 TestNoKubernetes/serial/Start 9.11
287 TestNoKubernetes/serial/VerifyK8sNotRunning 0.24
288 TestNoKubernetes/serial/ProfileList 0.95
289 TestNoKubernetes/serial/Stop 1.18
290 TestNoKubernetes/serial/StartNoArgs 8.47
291 TestNoKubernetes/serial/VerifyK8sNotRunningSecond 0.27
293 TestPause/serial/Start 38.98
294 TestStoppedBinaryUpgrade/MinikubeLogs 1.17
295 TestPause/serial/SecondStartNoReconfiguration 30.42
296 TestPause/serial/Pause 0.49
297 TestPause/serial/VerifyStatus 0.29
298 TestPause/serial/Unpause 0.44
299 TestPause/serial/PauseAgain 0.59
300 TestPause/serial/DeletePaused 2.04
301 TestPause/serial/VerifyDeletedResources 2.3
302 TestNetworkPlugins/group/auto/Start 43
303 TestNetworkPlugins/group/kindnet/Start 58.52
304 TestNetworkPlugins/group/auto/KubeletFlags 0.28
305 TestNetworkPlugins/group/auto/NetCatPod 10.22
306 TestNetworkPlugins/group/auto/DNS 0.13
307 TestNetworkPlugins/group/auto/Localhost 0.11
308 TestNetworkPlugins/group/auto/HairPin 0.12
309 TestNetworkPlugins/group/calico/Start 67.45
310 TestNetworkPlugins/group/false/Start 73.49
311 TestNetworkPlugins/group/kindnet/ControllerPod 6.01
312 TestNetworkPlugins/group/kindnet/KubeletFlags 0.29
313 TestNetworkPlugins/group/kindnet/NetCatPod 11.25
314 TestNetworkPlugins/group/kindnet/DNS 0.15
315 TestNetworkPlugins/group/kindnet/Localhost 0.15
316 TestNetworkPlugins/group/kindnet/HairPin 0.14
317 TestNetworkPlugins/group/enable-default-cni/Start 38.75
318 TestNetworkPlugins/group/calico/ControllerPod 6.01
319 TestNetworkPlugins/group/calico/KubeletFlags 0.26
320 TestNetworkPlugins/group/calico/NetCatPod 8.19
321 TestNetworkPlugins/group/calico/DNS 0.13
322 TestNetworkPlugins/group/calico/Localhost 0.11
323 TestNetworkPlugins/group/calico/HairPin 0.12
324 TestNetworkPlugins/group/false/KubeletFlags 0.28
325 TestNetworkPlugins/group/false/NetCatPod 10.25
326 TestNetworkPlugins/group/flannel/Start 42.97
327 TestNetworkPlugins/group/false/DNS 0.15
328 TestNetworkPlugins/group/false/Localhost 0.12
329 TestNetworkPlugins/group/false/HairPin 0.12
330 TestNetworkPlugins/group/enable-default-cni/KubeletFlags 0.29
331 TestNetworkPlugins/group/enable-default-cni/NetCatPod 12.21
332 TestNetworkPlugins/group/enable-default-cni/DNS 0.15
333 TestNetworkPlugins/group/enable-default-cni/Localhost 0.13
334 TestNetworkPlugins/group/enable-default-cni/HairPin 0.13
335 TestNetworkPlugins/group/bridge/Start 75.78
336 TestNetworkPlugins/group/kubenet/Start 37.91
337 TestNetworkPlugins/group/flannel/ControllerPod 6.01
338 TestNetworkPlugins/group/flannel/KubeletFlags 0.28
339 TestNetworkPlugins/group/flannel/NetCatPod 10.2
340 TestNetworkPlugins/group/flannel/DNS 0.15
341 TestNetworkPlugins/group/flannel/Localhost 0.12
342 TestNetworkPlugins/group/flannel/HairPin 0.12
343 TestNetworkPlugins/group/kubenet/KubeletFlags 0.28
344 TestNetworkPlugins/group/kubenet/NetCatPod 10.21
345 TestNetworkPlugins/group/custom-flannel/Start 47.69
346 TestNetworkPlugins/group/kubenet/DNS 0.14
347 TestNetworkPlugins/group/kubenet/Localhost 0.12
348 TestNetworkPlugins/group/kubenet/HairPin 0.12
349 TestNetworkPlugins/group/bridge/KubeletFlags 0.34
350 TestNetworkPlugins/group/bridge/NetCatPod 9.23
352 TestStartStop/group/old-k8s-version/serial/FirstStart 135.86
353 TestNetworkPlugins/group/bridge/DNS 0.16
354 TestNetworkPlugins/group/bridge/Localhost 0.13
355 TestNetworkPlugins/group/bridge/HairPin 0.14
357 TestStartStop/group/no-preload/serial/FirstStart 72.91
358 TestNetworkPlugins/group/custom-flannel/KubeletFlags 0.32
359 TestNetworkPlugins/group/custom-flannel/NetCatPod 13.21
361 TestStartStop/group/embed-certs/serial/FirstStart 40.71
362 TestNetworkPlugins/group/custom-flannel/DNS 0.14
363 TestNetworkPlugins/group/custom-flannel/Localhost 0.16
364 TestNetworkPlugins/group/custom-flannel/HairPin 0.13
366 TestStartStop/group/default-k8s-diff-port/serial/FirstStart 61.86
367 TestStartStop/group/embed-certs/serial/DeployApp 10.26
368 TestStartStop/group/embed-certs/serial/EnableAddonWhileActive 0.99
369 TestStartStop/group/embed-certs/serial/Stop 10.75
370 TestStartStop/group/embed-certs/serial/EnableAddonAfterStop 0.17
371 TestStartStop/group/embed-certs/serial/SecondStart 263.23
372 TestStartStop/group/no-preload/serial/DeployApp 9.25
373 TestStartStop/group/no-preload/serial/EnableAddonWhileActive 1.1
374 TestStartStop/group/no-preload/serial/Stop 10.69
375 TestStartStop/group/no-preload/serial/EnableAddonAfterStop 0.25
376 TestStartStop/group/no-preload/serial/SecondStart 262.78
377 TestStartStop/group/default-k8s-diff-port/serial/DeployApp 8.28
378 TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive 0.95
379 TestStartStop/group/default-k8s-diff-port/serial/Stop 10.73
380 TestStartStop/group/old-k8s-version/serial/DeployApp 9.38
381 TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop 0.25
382 TestStartStop/group/default-k8s-diff-port/serial/SecondStart 262.85
383 TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive 0.74
384 TestStartStop/group/old-k8s-version/serial/Stop 10.87
385 TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop 0.25
386 TestStartStop/group/old-k8s-version/serial/SecondStart 23.49
387 TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop 27.01
388 TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop 5.07
389 TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages 0.21
390 TestStartStop/group/old-k8s-version/serial/Pause 2.39
392 TestStartStop/group/newest-cni/serial/FirstStart 27.93
393 TestStartStop/group/newest-cni/serial/DeployApp 0
394 TestStartStop/group/newest-cni/serial/EnableAddonWhileActive 0.95
395 TestStartStop/group/newest-cni/serial/Stop 5.71
396 TestStartStop/group/newest-cni/serial/EnableAddonAfterStop 0.23
397 TestStartStop/group/newest-cni/serial/SecondStart 15.02
398 TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop 0
399 TestStartStop/group/newest-cni/serial/AddonExistsAfterStop 0
400 TestStartStop/group/newest-cni/serial/VerifyKubernetesImages 0.22
401 TestStartStop/group/newest-cni/serial/Pause 2.7
402 TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop 6
403 TestStartStop/group/embed-certs/serial/AddonExistsAfterStop 5.07
404 TestStartStop/group/embed-certs/serial/VerifyKubernetesImages 0.21
405 TestStartStop/group/embed-certs/serial/Pause 2.34
406 TestStartStop/group/no-preload/serial/UserAppExistsAfterStop 6
407 TestStartStop/group/no-preload/serial/AddonExistsAfterStop 5.07
408 TestStartStop/group/no-preload/serial/VerifyKubernetesImages 0.21
409 TestStartStop/group/no-preload/serial/Pause 2.34
410 TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop 6.01
411 TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop 5.07
412 TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages 0.21
413 TestStartStop/group/default-k8s-diff-port/serial/Pause 2.33
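The pass table above pairs each test with its wall-clock duration in seconds. A small illustrative Python sketch (not part of the minikube test harness) that parses rows of that shape and lists the slowest entries; the sample rows are copied from the table:

```python
# Illustrative only: parse "index name duration" rows like the table
# above and report the slowest tests. Sample rows copied from the report.
rows = """\
371 TestStartStop/group/embed-certs/serial/SecondStart 263.23
57 TestErrorSpam/start 0.56
252 TestPreload 114.64
"""

def slowest(text, n=2):
    """Return the n (name, seconds) pairs with the longest durations."""
    parsed = []
    for line in text.strip().splitlines():
        _, name, secs = line.split()
        parsed.append((name, float(secs)))
    return sorted(parsed, key=lambda p: p[1], reverse=True)[:n]

for name, secs in slowest(rows):
    print(f"{secs:8.2f}s  {name}")
```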
TestDownloadOnly/v1.20.0/json-events (13.08s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.20.0/json-events
aaa_download_only_test.go:81: (dbg) Run:  out/minikube-linux-amd64 start -o=json --download-only -p download-only-333094 --force --alsologtostderr --kubernetes-version=v1.20.0 --container-runtime=docker --driver=docker  --container-runtime=docker
aaa_download_only_test.go:81: (dbg) Done: out/minikube-linux-amd64 start -o=json --download-only -p download-only-333094 --force --alsologtostderr --kubernetes-version=v1.20.0 --container-runtime=docker --driver=docker  --container-runtime=docker: (13.082156954s)
--- PASS: TestDownloadOnly/v1.20.0/json-events (13.08s)

                                                
                                    
TestDownloadOnly/v1.20.0/preload-exists (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.20.0/preload-exists
I0918 19:38:08.781954   14329 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime docker
I0918 19:38:08.782074   14329 preload.go:146] Found local preload: /home/jenkins/minikube-integration/19667-7499/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-docker-overlay2-amd64.tar.lz4
--- PASS: TestDownloadOnly/v1.20.0/preload-exists (0.00s)
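The preload-exists check passes when a cached tarball for the requested Kubernetes version is already on disk, as the `preload.go:146` log line above shows. A minimal Python sketch of that kind of lookup; the cache path layout mirrors the one in the log, and the `minikube_home` default is an assumption, not something read from the report:

```python
# Sketch of a preload-exists check: report whether a cached preload
# tarball for the given Kubernetes version is present locally.
# Path layout follows the log above; MINIKUBE_HOME default is assumed.
import os

def preload_exists(k8s_version, runtime="docker", minikube_home=None):
    home = minikube_home or os.path.join(os.path.expanduser("~"), ".minikube")
    tarball = ("preloaded-images-k8s-v18-%s-%s-overlay2-amd64.tar.lz4"
               % (k8s_version, runtime))
    return os.path.isfile(
        os.path.join(home, "cache", "preloaded-tarball", tarball))

print(preload_exists("v1.20.0"))  # False unless a minikube cache is present
```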

                                                
                                    
TestDownloadOnly/v1.20.0/LogsDuration (0.06s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.20.0/LogsDuration
aaa_download_only_test.go:184: (dbg) Run:  out/minikube-linux-amd64 logs -p download-only-333094
aaa_download_only_test.go:184: (dbg) Non-zero exit: out/minikube-linux-amd64 logs -p download-only-333094: exit status 85 (58.623105ms)

                                                
                                                
-- stdout --
	
	==> Audit <==
	|---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	| Command |              Args              |       Profile        |  User   | Version |     Start Time      | End Time |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	| start   | -o=json --download-only        | download-only-333094 | jenkins | v1.34.0 | 18 Sep 24 19:37 UTC |          |
	|         | -p download-only-333094        |                      |         |         |                     |          |
	|         | --force --alsologtostderr      |                      |         |         |                     |          |
	|         | --kubernetes-version=v1.20.0   |                      |         |         |                     |          |
	|         | --container-runtime=docker     |                      |         |         |                     |          |
	|         | --driver=docker                |                      |         |         |                     |          |
	|         | --container-runtime=docker     |                      |         |         |                     |          |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	
	
	==> Last Start <==
	Log file created at: 2024/09/18 19:37:55
	Running on machine: ubuntu-20-agent
	Binary: Built with gc go1.23.0 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0918 19:37:55.737988   14341 out.go:345] Setting OutFile to fd 1 ...
	I0918 19:37:55.738249   14341 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0918 19:37:55.738260   14341 out.go:358] Setting ErrFile to fd 2...
	I0918 19:37:55.738264   14341 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0918 19:37:55.738441   14341 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19667-7499/.minikube/bin
	W0918 19:37:55.738570   14341 root.go:314] Error reading config file at /home/jenkins/minikube-integration/19667-7499/.minikube/config/config.json: open /home/jenkins/minikube-integration/19667-7499/.minikube/config/config.json: no such file or directory
	I0918 19:37:55.739144   14341 out.go:352] Setting JSON to true
	I0918 19:37:55.739984   14341 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent","uptime":1222,"bootTime":1726687054,"procs":174,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1069-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0918 19:37:55.740080   14341 start.go:139] virtualization: kvm guest
	I0918 19:37:55.742511   14341 out.go:97] [download-only-333094] minikube v1.34.0 on Ubuntu 20.04 (kvm/amd64)
	W0918 19:37:55.742648   14341 preload.go:293] Failed to list preload files: open /home/jenkins/minikube-integration/19667-7499/.minikube/cache/preloaded-tarball: no such file or directory
	I0918 19:37:55.742690   14341 notify.go:220] Checking for updates...
	I0918 19:37:55.744522   14341 out.go:169] MINIKUBE_LOCATION=19667
	I0918 19:37:55.746304   14341 out.go:169] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0918 19:37:55.747835   14341 out.go:169] KUBECONFIG=/home/jenkins/minikube-integration/19667-7499/kubeconfig
	I0918 19:37:55.749436   14341 out.go:169] MINIKUBE_HOME=/home/jenkins/minikube-integration/19667-7499/.minikube
	I0918 19:37:55.750904   14341 out.go:169] MINIKUBE_BIN=out/minikube-linux-amd64
	W0918 19:37:55.754208   14341 out.go:321] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I0918 19:37:55.754433   14341 driver.go:394] Setting default libvirt URI to qemu:///system
	I0918 19:37:55.776710   14341 docker.go:123] docker version: linux-27.2.1:Docker Engine - Community
	I0918 19:37:55.776803   14341 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0918 19:37:56.161115   14341 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:30 OomKillDisable:true NGoroutines:53 SystemTime:2024-09-18 19:37:56.152070854 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1069-gcp OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:x86
_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33647943680 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent Labels:[] ExperimentalBuild:false ServerVersion:27.2.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:7f7fdf5fed64eb6a7caf99b3e12efcf9d60e311c Expected:7f7fdf5fed64eb6a7caf99b3e12efcf9d60e311c} RuncCommit:{ID:v1.1.14-0-g2c9f560 Expected:v1.1.14-0-g2c9f560} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerError
s:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.16.2] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.29.2] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I0918 19:37:56.161221   14341 docker.go:318] overlay module found
	I0918 19:37:56.162920   14341 out.go:97] Using the docker driver based on user configuration
	I0918 19:37:56.162946   14341 start.go:297] selected driver: docker
	I0918 19:37:56.162954   14341 start.go:901] validating driver "docker" against <nil>
	I0918 19:37:56.163032   14341 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0918 19:37:56.209011   14341 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:30 OomKillDisable:true NGoroutines:53 SystemTime:2024-09-18 19:37:56.200105815 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1069-gcp OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:x86
_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33647943680 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent Labels:[] ExperimentalBuild:false ServerVersion:27.2.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:7f7fdf5fed64eb6a7caf99b3e12efcf9d60e311c Expected:7f7fdf5fed64eb6a7caf99b3e12efcf9d60e311c} RuncCommit:{ID:v1.1.14-0-g2c9f560 Expected:v1.1.14-0-g2c9f560} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerError
s:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.16.2] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.29.2] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I0918 19:37:56.209199   14341 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0918 19:37:56.209672   14341 start_flags.go:393] Using suggested 8000MB memory alloc based on sys=32089MB, container=32089MB
	I0918 19:37:56.209824   14341 start_flags.go:929] Wait components to verify : map[apiserver:true system_pods:true]
	I0918 19:37:56.211912   14341 out.go:169] Using Docker driver with root privileges
	I0918 19:37:56.213315   14341 cni.go:84] Creating CNI manager for ""
	I0918 19:37:56.213380   14341 cni.go:162] CNI unnecessary in this configuration, recommending no CNI
	I0918 19:37:56.213445   14341 start.go:340] cluster config:
	{Name:download-only-333094 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726589491-19662@sha256:6370b9fec173944088c2d87d44b01819c0ec611a83d9e2f38d36352dff8121a4 Memory:8000 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:download-only-333094 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local Co
ntainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0918 19:37:56.214868   14341 out.go:97] Starting "download-only-333094" primary control-plane node in "download-only-333094" cluster
	I0918 19:37:56.214881   14341 cache.go:121] Beginning downloading kic base image for docker with docker
	I0918 19:37:56.216322   14341 out.go:97] Pulling base image v0.0.45-1726589491-19662 ...
	I0918 19:37:56.216346   14341 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime docker
	I0918 19:37:56.216391   14341 image.go:79] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726589491-19662@sha256:6370b9fec173944088c2d87d44b01819c0ec611a83d9e2f38d36352dff8121a4 in local docker daemon
	I0918 19:37:56.232162   14341 cache.go:149] Downloading gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726589491-19662@sha256:6370b9fec173944088c2d87d44b01819c0ec611a83d9e2f38d36352dff8121a4 to local cache
	I0918 19:37:56.232341   14341 image.go:63] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726589491-19662@sha256:6370b9fec173944088c2d87d44b01819c0ec611a83d9e2f38d36352dff8121a4 in local cache directory
	I0918 19:37:56.232438   14341 image.go:148] Writing gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726589491-19662@sha256:6370b9fec173944088c2d87d44b01819c0ec611a83d9e2f38d36352dff8121a4 to local cache
	I0918 19:37:56.366164   14341 preload.go:118] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.20.0/preloaded-images-k8s-v18-v1.20.0-docker-overlay2-amd64.tar.lz4
	I0918 19:37:56.366197   14341 cache.go:56] Caching tarball of preloaded images
	I0918 19:37:56.366327   14341 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime docker
	I0918 19:37:56.368253   14341 out.go:97] Downloading Kubernetes v1.20.0 preload ...
	I0918 19:37:56.368273   14341 preload.go:236] getting checksum for preloaded-images-k8s-v18-v1.20.0-docker-overlay2-amd64.tar.lz4 ...
	I0918 19:37:56.473971   14341 download.go:107] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.20.0/preloaded-images-k8s-v18-v1.20.0-docker-overlay2-amd64.tar.lz4?checksum=md5:9a82241e9b8b4ad2b5cca73108f2c7a3 -> /home/jenkins/minikube-integration/19667-7499/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-docker-overlay2-amd64.tar.lz4
	
	
	* The control-plane node download-only-333094 host does not exist
	  To start a cluster, run: "minikube start -p download-only-333094"

                                                
                                                
-- /stdout --
aaa_download_only_test.go:185: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.20.0/LogsDuration (0.06s)
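The preload download URL in the log above carries its own integrity check as a query parameter (`?checksum=md5:<hex>`), which the downloader validates after fetching. A self-contained Python sketch of that pattern; the URL and file content here are throwaway examples, not the real tarball:

```python
# Minimal sketch: validate downloaded bytes against the md5 checksum
# embedded in a preload URL's ?checksum=md5:<hex> query parameter.
# The URL and data below are illustrative, not the real tarball.
import hashlib
from urllib.parse import urlparse, parse_qs

def expected_md5(url):
    """Extract the hex digest from a ?checksum=md5:<hex> parameter."""
    qs = parse_qs(urlparse(url).query)
    scheme, _, digest = qs["checksum"][0].partition(":")
    assert scheme == "md5"
    return digest

def verify(data, url):
    """Return True when data hashes to the checksum the URL advertises."""
    return hashlib.md5(data).hexdigest() == expected_md5(url)

data = b"preload-data"
url = ("https://example.invalid/preload.tar.lz4?checksum=md5:"
       + hashlib.md5(data).hexdigest())
print(verify(data, url))  # True
```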

                                                
                                    
TestDownloadOnly/v1.20.0/DeleteAll (0.2s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.20.0/DeleteAll
aaa_download_only_test.go:197: (dbg) Run:  out/minikube-linux-amd64 delete --all
--- PASS: TestDownloadOnly/v1.20.0/DeleteAll (0.20s)

                                                
                                    
TestDownloadOnly/v1.20.0/DeleteAlwaysSucceeds (0.12s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.20.0/DeleteAlwaysSucceeds
aaa_download_only_test.go:208: (dbg) Run:  out/minikube-linux-amd64 delete -p download-only-333094
--- PASS: TestDownloadOnly/v1.20.0/DeleteAlwaysSucceeds (0.12s)

                                                
                                    
TestDownloadOnly/v1.31.1/json-events (11.96s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.31.1/json-events
aaa_download_only_test.go:81: (dbg) Run:  out/minikube-linux-amd64 start -o=json --download-only -p download-only-598590 --force --alsologtostderr --kubernetes-version=v1.31.1 --container-runtime=docker --driver=docker  --container-runtime=docker
aaa_download_only_test.go:81: (dbg) Done: out/minikube-linux-amd64 start -o=json --download-only -p download-only-598590 --force --alsologtostderr --kubernetes-version=v1.31.1 --container-runtime=docker --driver=docker  --container-runtime=docker: (11.95707946s)
--- PASS: TestDownloadOnly/v1.31.1/json-events (11.96s)

TestDownloadOnly/v1.31.1/preload-exists (0s)

=== RUN   TestDownloadOnly/v1.31.1/preload-exists
I0918 19:38:21.123095   14329 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime docker
I0918 19:38:21.123139   14329 preload.go:146] Found local preload: /home/jenkins/minikube-integration/19667-7499/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-docker-overlay2-amd64.tar.lz4
--- PASS: TestDownloadOnly/v1.31.1/preload-exists (0.00s)

TestDownloadOnly/v1.31.1/LogsDuration (0.06s)

=== RUN   TestDownloadOnly/v1.31.1/LogsDuration
aaa_download_only_test.go:184: (dbg) Run:  out/minikube-linux-amd64 logs -p download-only-598590
aaa_download_only_test.go:184: (dbg) Non-zero exit: out/minikube-linux-amd64 logs -p download-only-598590: exit status 85 (59.374301ms)

-- stdout --
	
	==> Audit <==
	|---------|--------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| Command |              Args              |       Profile        |  User   | Version |     Start Time      |      End Time       |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| start   | -o=json --download-only        | download-only-333094 | jenkins | v1.34.0 | 18 Sep 24 19:37 UTC |                     |
	|         | -p download-only-333094        |                      |         |         |                     |                     |
	|         | --force --alsologtostderr      |                      |         |         |                     |                     |
	|         | --kubernetes-version=v1.20.0   |                      |         |         |                     |                     |
	|         | --container-runtime=docker     |                      |         |         |                     |                     |
	|         | --driver=docker                |                      |         |         |                     |                     |
	|         | --container-runtime=docker     |                      |         |         |                     |                     |
	| delete  | --all                          | minikube             | jenkins | v1.34.0 | 18 Sep 24 19:38 UTC | 18 Sep 24 19:38 UTC |
	| delete  | -p download-only-333094        | download-only-333094 | jenkins | v1.34.0 | 18 Sep 24 19:38 UTC | 18 Sep 24 19:38 UTC |
	| start   | -o=json --download-only        | download-only-598590 | jenkins | v1.34.0 | 18 Sep 24 19:38 UTC |                     |
	|         | -p download-only-598590        |                      |         |         |                     |                     |
	|         | --force --alsologtostderr      |                      |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.1   |                      |         |         |                     |                     |
	|         | --container-runtime=docker     |                      |         |         |                     |                     |
	|         | --driver=docker                |                      |         |         |                     |                     |
	|         | --container-runtime=docker     |                      |         |         |                     |                     |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/09/18 19:38:09
	Running on machine: ubuntu-20-agent
	Binary: Built with gc go1.23.0 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0918 19:38:09.202967   14721 out.go:345] Setting OutFile to fd 1 ...
	I0918 19:38:09.203077   14721 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0918 19:38:09.203088   14721 out.go:358] Setting ErrFile to fd 2...
	I0918 19:38:09.203092   14721 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0918 19:38:09.203270   14721 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19667-7499/.minikube/bin
	I0918 19:38:09.203809   14721 out.go:352] Setting JSON to true
	I0918 19:38:09.204674   14721 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent","uptime":1235,"bootTime":1726687054,"procs":172,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1069-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0918 19:38:09.204772   14721 start.go:139] virtualization: kvm guest
	I0918 19:38:09.207038   14721 out.go:97] [download-only-598590] minikube v1.34.0 on Ubuntu 20.04 (kvm/amd64)
	I0918 19:38:09.207160   14721 notify.go:220] Checking for updates...
	I0918 19:38:09.208843   14721 out.go:169] MINIKUBE_LOCATION=19667
	I0918 19:38:09.210390   14721 out.go:169] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0918 19:38:09.211741   14721 out.go:169] KUBECONFIG=/home/jenkins/minikube-integration/19667-7499/kubeconfig
	I0918 19:38:09.213150   14721 out.go:169] MINIKUBE_HOME=/home/jenkins/minikube-integration/19667-7499/.minikube
	I0918 19:38:09.214738   14721 out.go:169] MINIKUBE_BIN=out/minikube-linux-amd64
	W0918 19:38:09.217293   14721 out.go:321] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I0918 19:38:09.217531   14721 driver.go:394] Setting default libvirt URI to qemu:///system
	I0918 19:38:09.238194   14721 docker.go:123] docker version: linux-27.2.1:Docker Engine - Community
	I0918 19:38:09.238272   14721 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0918 19:38:09.287905   14721 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:30 OomKillDisable:true NGoroutines:45 SystemTime:2024-09-18 19:38:09.279027349 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1069-gcp OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:x86
_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33647943680 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent Labels:[] ExperimentalBuild:false ServerVersion:27.2.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:7f7fdf5fed64eb6a7caf99b3e12efcf9d60e311c Expected:7f7fdf5fed64eb6a7caf99b3e12efcf9d60e311c} RuncCommit:{ID:v1.1.14-0-g2c9f560 Expected:v1.1.14-0-g2c9f560} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerError
s:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.16.2] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.29.2] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I0918 19:38:09.288015   14721 docker.go:318] overlay module found
	I0918 19:38:09.290035   14721 out.go:97] Using the docker driver based on user configuration
	I0918 19:38:09.290074   14721 start.go:297] selected driver: docker
	I0918 19:38:09.290082   14721 start.go:901] validating driver "docker" against <nil>
	I0918 19:38:09.290193   14721 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0918 19:38:09.333947   14721 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:30 OomKillDisable:true NGoroutines:45 SystemTime:2024-09-18 19:38:09.325779635 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1069-gcp OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:x86
_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33647943680 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent Labels:[] ExperimentalBuild:false ServerVersion:27.2.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:7f7fdf5fed64eb6a7caf99b3e12efcf9d60e311c Expected:7f7fdf5fed64eb6a7caf99b3e12efcf9d60e311c} RuncCommit:{ID:v1.1.14-0-g2c9f560 Expected:v1.1.14-0-g2c9f560} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerError
s:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.16.2] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.29.2] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I0918 19:38:09.334130   14721 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0918 19:38:09.334605   14721 start_flags.go:393] Using suggested 8000MB memory alloc based on sys=32089MB, container=32089MB
	I0918 19:38:09.334743   14721 start_flags.go:929] Wait components to verify : map[apiserver:true system_pods:true]
	I0918 19:38:09.336837   14721 out.go:169] Using Docker driver with root privileges
	I0918 19:38:09.338321   14721 cni.go:84] Creating CNI manager for ""
	I0918 19:38:09.338378   14721 cni.go:158] "docker" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0918 19:38:09.338391   14721 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0918 19:38:09.338458   14721 start.go:340] cluster config:
	{Name:download-only-598590 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726589491-19662@sha256:6370b9fec173944088c2d87d44b01819c0ec611a83d9e2f38d36352dff8121a4 Memory:8000 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:download-only-598590 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local Co
ntainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0918 19:38:09.339885   14721 out.go:97] Starting "download-only-598590" primary control-plane node in "download-only-598590" cluster
	I0918 19:38:09.339903   14721 cache.go:121] Beginning downloading kic base image for docker with docker
	I0918 19:38:09.341133   14721 out.go:97] Pulling base image v0.0.45-1726589491-19662 ...
	I0918 19:38:09.341153   14721 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime docker
	I0918 19:38:09.341250   14721 image.go:79] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726589491-19662@sha256:6370b9fec173944088c2d87d44b01819c0ec611a83d9e2f38d36352dff8121a4 in local docker daemon
	I0918 19:38:09.356596   14721 cache.go:149] Downloading gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726589491-19662@sha256:6370b9fec173944088c2d87d44b01819c0ec611a83d9e2f38d36352dff8121a4 to local cache
	I0918 19:38:09.356711   14721 image.go:63] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726589491-19662@sha256:6370b9fec173944088c2d87d44b01819c0ec611a83d9e2f38d36352dff8121a4 in local cache directory
	I0918 19:38:09.356726   14721 image.go:66] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726589491-19662@sha256:6370b9fec173944088c2d87d44b01819c0ec611a83d9e2f38d36352dff8121a4 in local cache directory, skipping pull
	I0918 19:38:09.356733   14721 image.go:135] gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726589491-19662@sha256:6370b9fec173944088c2d87d44b01819c0ec611a83d9e2f38d36352dff8121a4 exists in cache, skipping pull
	I0918 19:38:09.356742   14721 cache.go:152] successfully saved gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726589491-19662@sha256:6370b9fec173944088c2d87d44b01819c0ec611a83d9e2f38d36352dff8121a4 as a tarball
	I0918 19:38:09.821045   14721 preload.go:118] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.31.1/preloaded-images-k8s-v18-v1.31.1-docker-overlay2-amd64.tar.lz4
	I0918 19:38:09.821093   14721 cache.go:56] Caching tarball of preloaded images
	I0918 19:38:09.821270   14721 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime docker
	I0918 19:38:09.823176   14721 out.go:97] Downloading Kubernetes v1.31.1 preload ...
	I0918 19:38:09.823195   14721 preload.go:236] getting checksum for preloaded-images-k8s-v18-v1.31.1-docker-overlay2-amd64.tar.lz4 ...
	I0918 19:38:09.928548   14721 download.go:107] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.31.1/preloaded-images-k8s-v18-v1.31.1-docker-overlay2-amd64.tar.lz4?checksum=md5:42e9a173dd5f0c45ed1a890dd06aec5a -> /home/jenkins/minikube-integration/19667-7499/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-docker-overlay2-amd64.tar.lz4
	
	
	* The control-plane node download-only-598590 host does not exist
	  To start a cluster, run: "minikube start -p download-only-598590"

-- /stdout --
aaa_download_only_test.go:185: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.31.1/LogsDuration (0.06s)

TestDownloadOnly/v1.31.1/DeleteAll (0.2s)

=== RUN   TestDownloadOnly/v1.31.1/DeleteAll
aaa_download_only_test.go:197: (dbg) Run:  out/minikube-linux-amd64 delete --all
--- PASS: TestDownloadOnly/v1.31.1/DeleteAll (0.20s)

TestDownloadOnly/v1.31.1/DeleteAlwaysSucceeds (0.12s)

=== RUN   TestDownloadOnly/v1.31.1/DeleteAlwaysSucceeds
aaa_download_only_test.go:208: (dbg) Run:  out/minikube-linux-amd64 delete -p download-only-598590
--- PASS: TestDownloadOnly/v1.31.1/DeleteAlwaysSucceeds (0.12s)

TestDownloadOnlyKic (0.98s)

=== RUN   TestDownloadOnlyKic
aaa_download_only_test.go:232: (dbg) Run:  out/minikube-linux-amd64 start --download-only -p download-docker-630296 --alsologtostderr --driver=docker  --container-runtime=docker
helpers_test.go:175: Cleaning up "download-docker-630296" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p download-docker-630296
--- PASS: TestDownloadOnlyKic (0.98s)

TestBinaryMirror (0.76s)

=== RUN   TestBinaryMirror
I0918 19:38:22.727818   14329 binary.go:74] Not caching binary, using https://dl.k8s.io/release/v1.31.1/bin/linux/amd64/kubectl?checksum=file:https://dl.k8s.io/release/v1.31.1/bin/linux/amd64/kubectl.sha256
aaa_download_only_test.go:314: (dbg) Run:  out/minikube-linux-amd64 start --download-only -p binary-mirror-336155 --alsologtostderr --binary-mirror http://127.0.0.1:40427 --driver=docker  --container-runtime=docker
helpers_test.go:175: Cleaning up "binary-mirror-336155" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p binary-mirror-336155
--- PASS: TestBinaryMirror (0.76s)

TestOffline (71.42s)

=== RUN   TestOffline
=== PAUSE TestOffline

=== CONT  TestOffline
aab_offline_test.go:55: (dbg) Run:  out/minikube-linux-amd64 start -p offline-docker-244577 --alsologtostderr -v=1 --memory=2048 --wait=true --driver=docker  --container-runtime=docker
aab_offline_test.go:55: (dbg) Done: out/minikube-linux-amd64 start -p offline-docker-244577 --alsologtostderr -v=1 --memory=2048 --wait=true --driver=docker  --container-runtime=docker: (1m9.313237733s)
helpers_test.go:175: Cleaning up "offline-docker-244577" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p offline-docker-244577
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p offline-docker-244577: (2.108401887s)
--- PASS: TestOffline (71.42s)

TestAddons/PreSetup/EnablingAddonOnNonExistingCluster (0.05s)

=== RUN   TestAddons/PreSetup/EnablingAddonOnNonExistingCluster
=== PAUSE TestAddons/PreSetup/EnablingAddonOnNonExistingCluster

=== CONT  TestAddons/PreSetup/EnablingAddonOnNonExistingCluster
addons_test.go:1037: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p addons-457129
addons_test.go:1037: (dbg) Non-zero exit: out/minikube-linux-amd64 addons enable dashboard -p addons-457129: exit status 85 (45.690435ms)

-- stdout --
	* Profile "addons-457129" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p addons-457129"

-- /stdout --
--- PASS: TestAddons/PreSetup/EnablingAddonOnNonExistingCluster (0.05s)

TestAddons/PreSetup/DisablingAddonOnNonExistingCluster (0.05s)

=== RUN   TestAddons/PreSetup/DisablingAddonOnNonExistingCluster
=== PAUSE TestAddons/PreSetup/DisablingAddonOnNonExistingCluster

=== CONT  TestAddons/PreSetup/DisablingAddonOnNonExistingCluster
addons_test.go:1048: (dbg) Run:  out/minikube-linux-amd64 addons disable dashboard -p addons-457129
addons_test.go:1048: (dbg) Non-zero exit: out/minikube-linux-amd64 addons disable dashboard -p addons-457129: exit status 85 (48.874232ms)

-- stdout --
	* Profile "addons-457129" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p addons-457129"

-- /stdout --
--- PASS: TestAddons/PreSetup/DisablingAddonOnNonExistingCluster (0.05s)

TestAddons/Setup (208.35s)

=== RUN   TestAddons/Setup
addons_test.go:110: (dbg) Run:  out/minikube-linux-amd64 start -p addons-457129 --wait=true --memory=4000 --alsologtostderr --addons=registry --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=storage-provisioner-rancher --addons=nvidia-device-plugin --addons=yakd --addons=volcano --driver=docker  --container-runtime=docker --addons=ingress --addons=ingress-dns --addons=helm-tiller
addons_test.go:110: (dbg) Done: out/minikube-linux-amd64 start -p addons-457129 --wait=true --memory=4000 --alsologtostderr --addons=registry --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=storage-provisioner-rancher --addons=nvidia-device-plugin --addons=yakd --addons=volcano --driver=docker  --container-runtime=docker --addons=ingress --addons=ingress-dns --addons=helm-tiller: (3m28.352564092s)
--- PASS: TestAddons/Setup (208.35s)

TestAddons/serial/Volcano (40.79s)

=== RUN   TestAddons/serial/Volcano
addons_test.go:913: volcano-controller stabilized in 12.762644ms
addons_test.go:905: volcano-admission stabilized in 12.971958ms
addons_test.go:897: volcano-scheduler stabilized in 13.023918ms
addons_test.go:919: (dbg) TestAddons/serial/Volcano: waiting 6m0s for pods matching "app=volcano-scheduler" in namespace "volcano-system" ...
helpers_test.go:344: "volcano-scheduler-576bc46687-cwln4" [b12a4e55-db68-4f52-aa14-ce0e4e0b57b9] Running
addons_test.go:919: (dbg) TestAddons/serial/Volcano: app=volcano-scheduler healthy within 6.00362581s
addons_test.go:923: (dbg) TestAddons/serial/Volcano: waiting 6m0s for pods matching "app=volcano-admission" in namespace "volcano-system" ...
helpers_test.go:344: "volcano-admission-77d7d48b68-m6qc7" [977e074f-9f32-4392-82cb-fdd538a97e1e] Running
addons_test.go:923: (dbg) TestAddons/serial/Volcano: app=volcano-admission healthy within 5.003699348s
addons_test.go:927: (dbg) TestAddons/serial/Volcano: waiting 6m0s for pods matching "app=volcano-controller" in namespace "volcano-system" ...
helpers_test.go:344: "volcano-controllers-56675bb4d5-c8n97" [f741f381-9b31-414e-b184-cb97ba423bc0] Running
addons_test.go:927: (dbg) TestAddons/serial/Volcano: app=volcano-controller healthy within 5.002850775s
addons_test.go:932: (dbg) Run:  kubectl --context addons-457129 delete -n volcano-system job volcano-admission-init
addons_test.go:938: (dbg) Run:  kubectl --context addons-457129 create -f testdata/vcjob.yaml
addons_test.go:946: (dbg) Run:  kubectl --context addons-457129 get vcjob -n my-volcano
addons_test.go:964: (dbg) TestAddons/serial/Volcano: waiting 3m0s for pods matching "volcano.sh/job-name=test-job" in namespace "my-volcano" ...
helpers_test.go:344: "test-job-nginx-0" [3f8e4ec0-4f9b-4192-8ac0-45254b749cc0] Pending
helpers_test.go:344: "test-job-nginx-0" [3f8e4ec0-4f9b-4192-8ac0-45254b749cc0] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
helpers_test.go:344: "test-job-nginx-0" [3f8e4ec0-4f9b-4192-8ac0-45254b749cc0] Running
addons_test.go:964: (dbg) TestAddons/serial/Volcano: volcano.sh/job-name=test-job healthy within 14.003194277s
addons_test.go:968: (dbg) Run:  out/minikube-linux-amd64 -p addons-457129 addons disable volcano --alsologtostderr -v=1
addons_test.go:968: (dbg) Done: out/minikube-linux-amd64 -p addons-457129 addons disable volcano --alsologtostderr -v=1: (10.444145071s)
--- PASS: TestAddons/serial/Volcano (40.79s)

TestAddons/serial/GCPAuth/Namespaces (0.11s)

=== RUN   TestAddons/serial/GCPAuth/Namespaces
addons_test.go:656: (dbg) Run:  kubectl --context addons-457129 create ns new-namespace
addons_test.go:670: (dbg) Run:  kubectl --context addons-457129 get secret gcp-auth -n new-namespace
--- PASS: TestAddons/serial/GCPAuth/Namespaces (0.11s)

TestAddons/parallel/Ingress (21.05s)

=== RUN   TestAddons/parallel/Ingress
=== PAUSE TestAddons/parallel/Ingress

=== CONT  TestAddons/parallel/Ingress
addons_test.go:209: (dbg) Run:  kubectl --context addons-457129 wait --for=condition=ready --namespace=ingress-nginx pod --selector=app.kubernetes.io/component=controller --timeout=90s
addons_test.go:234: (dbg) Run:  kubectl --context addons-457129 replace --force -f testdata/nginx-ingress-v1.yaml
addons_test.go:247: (dbg) Run:  kubectl --context addons-457129 replace --force -f testdata/nginx-pod-svc.yaml
addons_test.go:252: (dbg) TestAddons/parallel/Ingress: waiting 8m0s for pods matching "run=nginx" in namespace "default" ...
helpers_test.go:344: "nginx" [ffaeb341-e09a-4daa-904a-fb34c3bed5f5] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
helpers_test.go:344: "nginx" [ffaeb341-e09a-4daa-904a-fb34c3bed5f5] Running
addons_test.go:252: (dbg) TestAddons/parallel/Ingress: run=nginx healthy within 11.003093336s
I0918 19:51:36.066095   14329 kapi.go:150] Service nginx in namespace default found.
addons_test.go:264: (dbg) Run:  out/minikube-linux-amd64 -p addons-457129 ssh "curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'"
addons_test.go:288: (dbg) Run:  kubectl --context addons-457129 replace --force -f testdata/ingress-dns-example-v1.yaml
addons_test.go:293: (dbg) Run:  out/minikube-linux-amd64 -p addons-457129 ip
addons_test.go:299: (dbg) Run:  nslookup hello-john.test 192.168.49.2
addons_test.go:308: (dbg) Run:  out/minikube-linux-amd64 -p addons-457129 addons disable ingress-dns --alsologtostderr -v=1
addons_test.go:308: (dbg) Done: out/minikube-linux-amd64 -p addons-457129 addons disable ingress-dns --alsologtostderr -v=1: (1.086442719s)
addons_test.go:313: (dbg) Run:  out/minikube-linux-amd64 -p addons-457129 addons disable ingress --alsologtostderr -v=1
addons_test.go:313: (dbg) Done: out/minikube-linux-amd64 -p addons-457129 addons disable ingress --alsologtostderr -v=1: (7.73479191s)
--- PASS: TestAddons/parallel/Ingress (21.05s)

TestAddons/parallel/InspektorGadget (11.71s)

=== RUN   TestAddons/parallel/InspektorGadget
=== PAUSE TestAddons/parallel/InspektorGadget

=== CONT  TestAddons/parallel/InspektorGadget
addons_test.go:848: (dbg) TestAddons/parallel/InspektorGadget: waiting 8m0s for pods matching "k8s-app=gadget" in namespace "gadget" ...
helpers_test.go:344: "gadget-6cjrz" [4aa930ae-926b-4427-88c6-85c98bb4a171] Running / Ready:ContainersNotReady (containers with unready status: [gadget]) / ContainersReady:ContainersNotReady (containers with unready status: [gadget])
addons_test.go:848: (dbg) TestAddons/parallel/InspektorGadget: k8s-app=gadget healthy within 6.004325283s
addons_test.go:851: (dbg) Run:  out/minikube-linux-amd64 addons disable inspektor-gadget -p addons-457129
addons_test.go:851: (dbg) Done: out/minikube-linux-amd64 addons disable inspektor-gadget -p addons-457129: (5.704135982s)
--- PASS: TestAddons/parallel/InspektorGadget (11.71s)

                                                
                                    
TestAddons/parallel/MetricsServer (6.66s)

=== RUN   TestAddons/parallel/MetricsServer
=== PAUSE TestAddons/parallel/MetricsServer

=== CONT  TestAddons/parallel/MetricsServer
addons_test.go:409: metrics-server stabilized in 2.019475ms
addons_test.go:411: (dbg) TestAddons/parallel/MetricsServer: waiting 6m0s for pods matching "k8s-app=metrics-server" in namespace "kube-system" ...
helpers_test.go:344: "metrics-server-84c5f94fbc-h4fl2" [a77387ba-6450-4ffe-9aa7-de0bf96f74da] Running
addons_test.go:411: (dbg) TestAddons/parallel/MetricsServer: k8s-app=metrics-server healthy within 6.003322872s
addons_test.go:417: (dbg) Run:  kubectl --context addons-457129 top pods -n kube-system
addons_test.go:434: (dbg) Run:  out/minikube-linux-amd64 -p addons-457129 addons disable metrics-server --alsologtostderr -v=1
--- PASS: TestAddons/parallel/MetricsServer (6.66s)

                                                
                                    
TestAddons/parallel/HelmTiller (9.73s)

=== RUN   TestAddons/parallel/HelmTiller
=== PAUSE TestAddons/parallel/HelmTiller

=== CONT  TestAddons/parallel/HelmTiller
I0918 19:50:35.225348   14329 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=csi-hostpath-driver" in ns "kube-system" ...
addons_test.go:458: tiller-deploy stabilized in 2.666007ms
addons_test.go:460: (dbg) TestAddons/parallel/HelmTiller: waiting 6m0s for pods matching "app=helm" in namespace "kube-system" ...
I0918 19:50:35.229521   14329 kapi.go:86] Found 3 Pods for label selector kubernetes.io/minikube-addons=csi-hostpath-driver
I0918 19:50:35.229547   14329 kapi.go:107] duration metric: took 4.213023ms to wait for kubernetes.io/minikube-addons=csi-hostpath-driver ...
helpers_test.go:344: "tiller-deploy-b48cc5f79-k64kf" [b79b5bb4-d211-4aa6-9551-5d2305acc2b2] Running
addons_test.go:460: (dbg) TestAddons/parallel/HelmTiller: app=helm healthy within 5.002735012s
addons_test.go:475: (dbg) Run:  kubectl --context addons-457129 run --rm helm-test --restart=Never --image=docker.io/alpine/helm:2.16.3 -it --namespace=kube-system -- version
addons_test.go:475: (dbg) Done: kubectl --context addons-457129 run --rm helm-test --restart=Never --image=docker.io/alpine/helm:2.16.3 -it --namespace=kube-system -- version: (4.284632226s)
addons_test.go:492: (dbg) Run:  out/minikube-linux-amd64 -p addons-457129 addons disable helm-tiller --alsologtostderr -v=1
--- PASS: TestAddons/parallel/HelmTiller (9.73s)

                                                
                                    
TestAddons/parallel/CSI (49.41s)

=== RUN   TestAddons/parallel/CSI
=== PAUSE TestAddons/parallel/CSI

=== CONT  TestAddons/parallel/CSI
addons_test.go:567: csi-hostpath-driver pods stabilized in 4.223762ms
addons_test.go:570: (dbg) Run:  kubectl --context addons-457129 create -f testdata/csi-hostpath-driver/pvc.yaml
addons_test.go:575: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pvc "hpvc" in namespace "default" ...
helpers_test.go:394: (dbg) Run:  kubectl --context addons-457129 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-457129 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-457129 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-457129 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-457129 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-457129 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-457129 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-457129 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-457129 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-457129 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-457129 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-457129 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-457129 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-457129 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-457129 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-457129 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-457129 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-457129 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-457129 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-457129 get pvc hpvc -o jsonpath={.status.phase} -n default
addons_test.go:580: (dbg) Run:  kubectl --context addons-457129 create -f testdata/csi-hostpath-driver/pv-pod.yaml
addons_test.go:585: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pods matching "app=task-pv-pod" in namespace "default" ...
helpers_test.go:344: "task-pv-pod" [a0af7c75-839b-45a2-b96d-455e93463115] Pending
helpers_test.go:344: "task-pv-pod" [a0af7c75-839b-45a2-b96d-455e93463115] Pending / Ready:ContainersNotReady (containers with unready status: [task-pv-container]) / ContainersReady:ContainersNotReady (containers with unready status: [task-pv-container])
helpers_test.go:344: "task-pv-pod" [a0af7c75-839b-45a2-b96d-455e93463115] Running
addons_test.go:585: (dbg) TestAddons/parallel/CSI: app=task-pv-pod healthy within 9.003612329s
addons_test.go:590: (dbg) Run:  kubectl --context addons-457129 create -f testdata/csi-hostpath-driver/snapshot.yaml
addons_test.go:595: (dbg) TestAddons/parallel/CSI: waiting 6m0s for volume snapshot "new-snapshot-demo" in namespace "default" ...
helpers_test.go:419: (dbg) Run:  kubectl --context addons-457129 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
helpers_test.go:419: (dbg) Run:  kubectl --context addons-457129 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
addons_test.go:600: (dbg) Run:  kubectl --context addons-457129 delete pod task-pv-pod
addons_test.go:606: (dbg) Run:  kubectl --context addons-457129 delete pvc hpvc
addons_test.go:612: (dbg) Run:  kubectl --context addons-457129 create -f testdata/csi-hostpath-driver/pvc-restore.yaml
addons_test.go:617: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pvc "hpvc-restore" in namespace "default" ...
helpers_test.go:394: (dbg) Run:  kubectl --context addons-457129 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-457129 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-457129 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
addons_test.go:622: (dbg) Run:  kubectl --context addons-457129 create -f testdata/csi-hostpath-driver/pv-pod-restore.yaml
addons_test.go:627: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pods matching "app=task-pv-pod-restore" in namespace "default" ...
helpers_test.go:344: "task-pv-pod-restore" [88d04a8c-6ebc-4b06-a01c-ea39851b10c4] Pending
helpers_test.go:344: "task-pv-pod-restore" [88d04a8c-6ebc-4b06-a01c-ea39851b10c4] Pending / Ready:ContainersNotReady (containers with unready status: [task-pv-container]) / ContainersReady:ContainersNotReady (containers with unready status: [task-pv-container])
helpers_test.go:344: "task-pv-pod-restore" [88d04a8c-6ebc-4b06-a01c-ea39851b10c4] Running
addons_test.go:627: (dbg) TestAddons/parallel/CSI: app=task-pv-pod-restore healthy within 9.004187239s
addons_test.go:632: (dbg) Run:  kubectl --context addons-457129 delete pod task-pv-pod-restore
addons_test.go:636: (dbg) Run:  kubectl --context addons-457129 delete pvc hpvc-restore
addons_test.go:640: (dbg) Run:  kubectl --context addons-457129 delete volumesnapshot new-snapshot-demo
addons_test.go:644: (dbg) Run:  out/minikube-linux-amd64 -p addons-457129 addons disable csi-hostpath-driver --alsologtostderr -v=1
addons_test.go:644: (dbg) Done: out/minikube-linux-amd64 -p addons-457129 addons disable csi-hostpath-driver --alsologtostderr -v=1: (6.49567572s)
addons_test.go:648: (dbg) Run:  out/minikube-linux-amd64 -p addons-457129 addons disable volumesnapshots --alsologtostderr -v=1
--- PASS: TestAddons/parallel/CSI (49.41s)
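The runs of repeated `kubectl ... get pvc ... -o jsonpath={.status.phase}` lines above are a poll loop: the helper re-runs the status command until the PVC reports the expected phase or a timeout elapses. A minimal shell sketch of that pattern follows; `wait_for_phase` and its arguments are illustrative stand-ins, not minikube's actual helper code, and the final call uses a stubbed status command so the sketch runs without a cluster.

```shell
#!/bin/sh
# Poll a status command until it prints the wanted value or a timeout expires.
# This mirrors the repeated helpers_test.go:394 invocations in the log above.
wait_for_phase() {
  want="$1"; timeout_s="$2"; shift 2
  elapsed=0
  while [ "$elapsed" -lt "$timeout_s" ]; do
    # In the real test this would be, e.g.:
    #   kubectl --context addons-457129 get pvc hpvc -o jsonpath={.status.phase} -n default
    phase="$("$@")"
    if [ "$phase" = "$want" ]; then
      echo "phase reached: $want"
      return 0
    fi
    sleep 1
    elapsed=$((elapsed + 1))
  done
  echo "timed out waiting for phase: $want" >&2
  return 1
}

# Stubbed status command standing in for kubectl, so the sketch is self-contained.
wait_for_phase "Bound" 5 echo "Bound"
```

The real helper also caps total wait time (6m0s in this run) and treats a non-zero exit from the status command as "not ready yet" rather than a failure.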

                                                
                                    
TestAddons/parallel/Headlamp (17.87s)

=== RUN   TestAddons/parallel/Headlamp
=== PAUSE TestAddons/parallel/Headlamp

=== CONT  TestAddons/parallel/Headlamp
addons_test.go:830: (dbg) Run:  out/minikube-linux-amd64 addons enable headlamp -p addons-457129 --alsologtostderr -v=1
addons_test.go:830: (dbg) Done: out/minikube-linux-amd64 addons enable headlamp -p addons-457129 --alsologtostderr -v=1: (1.295044306s)
addons_test.go:835: (dbg) TestAddons/parallel/Headlamp: waiting 8m0s for pods matching "app.kubernetes.io/name=headlamp" in namespace "headlamp" ...
helpers_test.go:344: "headlamp-7b5c95b59d-vg68f" [7e8343e0-1e87-4f26-a6bf-214b71ba3ea6] Pending / Ready:ContainersNotReady (containers with unready status: [headlamp]) / ContainersReady:ContainersNotReady (containers with unready status: [headlamp])
helpers_test.go:344: "headlamp-7b5c95b59d-vg68f" [7e8343e0-1e87-4f26-a6bf-214b71ba3ea6] Running
addons_test.go:835: (dbg) TestAddons/parallel/Headlamp: app.kubernetes.io/name=headlamp healthy within 11.003361309s
addons_test.go:839: (dbg) Run:  out/minikube-linux-amd64 -p addons-457129 addons disable headlamp --alsologtostderr -v=1
addons_test.go:839: (dbg) Done: out/minikube-linux-amd64 -p addons-457129 addons disable headlamp --alsologtostderr -v=1: (5.574339237s)
--- PASS: TestAddons/parallel/Headlamp (17.87s)

                                                
                                    
TestAddons/parallel/CloudSpanner (5.45s)

=== RUN   TestAddons/parallel/CloudSpanner
=== PAUSE TestAddons/parallel/CloudSpanner

=== CONT  TestAddons/parallel/CloudSpanner
addons_test.go:867: (dbg) TestAddons/parallel/CloudSpanner: waiting 6m0s for pods matching "app=cloud-spanner-emulator" in namespace "default" ...
helpers_test.go:344: "cloud-spanner-emulator-769b77f747-8f6dc" [5d4bc841-17c7-4a27-b86c-f7798c60a49b] Running
addons_test.go:867: (dbg) TestAddons/parallel/CloudSpanner: app=cloud-spanner-emulator healthy within 5.003693296s
addons_test.go:870: (dbg) Run:  out/minikube-linux-amd64 addons disable cloud-spanner -p addons-457129
--- PASS: TestAddons/parallel/CloudSpanner (5.45s)

                                                
                                    
TestAddons/parallel/LocalPath (54.13s)

=== RUN   TestAddons/parallel/LocalPath
=== PAUSE TestAddons/parallel/LocalPath

=== CONT  TestAddons/parallel/LocalPath
addons_test.go:982: (dbg) Run:  kubectl --context addons-457129 apply -f testdata/storage-provisioner-rancher/pvc.yaml
addons_test.go:988: (dbg) Run:  kubectl --context addons-457129 apply -f testdata/storage-provisioner-rancher/pod.yaml
addons_test.go:992: (dbg) TestAddons/parallel/LocalPath: waiting 5m0s for pvc "test-pvc" in namespace "default" ...
helpers_test.go:394: (dbg) Run:  kubectl --context addons-457129 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-457129 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-457129 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-457129 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-457129 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-457129 get pvc test-pvc -o jsonpath={.status.phase} -n default
addons_test.go:995: (dbg) TestAddons/parallel/LocalPath: waiting 3m0s for pods matching "run=test-local-path" in namespace "default" ...
helpers_test.go:344: "test-local-path" [dd982d37-fe2b-454d-b088-20571de78024] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:344: "test-local-path" [dd982d37-fe2b-454d-b088-20571de78024] Pending / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
helpers_test.go:344: "test-local-path" [dd982d37-fe2b-454d-b088-20571de78024] Succeeded / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
addons_test.go:995: (dbg) TestAddons/parallel/LocalPath: run=test-local-path healthy within 5.003121203s
addons_test.go:1000: (dbg) Run:  kubectl --context addons-457129 get pvc test-pvc -o=json
addons_test.go:1009: (dbg) Run:  out/minikube-linux-amd64 -p addons-457129 ssh "cat /opt/local-path-provisioner/pvc-6c5c3a13-dc76-4ea5-ae23-b00403f48891_default_test-pvc/file1"
addons_test.go:1021: (dbg) Run:  kubectl --context addons-457129 delete pod test-local-path
addons_test.go:1025: (dbg) Run:  kubectl --context addons-457129 delete pvc test-pvc
addons_test.go:1029: (dbg) Run:  out/minikube-linux-amd64 -p addons-457129 addons disable storage-provisioner-rancher --alsologtostderr -v=1
addons_test.go:1029: (dbg) Done: out/minikube-linux-amd64 -p addons-457129 addons disable storage-provisioner-rancher --alsologtostderr -v=1: (43.313959808s)
--- PASS: TestAddons/parallel/LocalPath (54.13s)

                                                
                                    
TestAddons/parallel/NvidiaDevicePlugin (6.42s)

=== RUN   TestAddons/parallel/NvidiaDevicePlugin
=== PAUSE TestAddons/parallel/NvidiaDevicePlugin

=== CONT  TestAddons/parallel/NvidiaDevicePlugin
addons_test.go:1061: (dbg) TestAddons/parallel/NvidiaDevicePlugin: waiting 6m0s for pods matching "name=nvidia-device-plugin-ds" in namespace "kube-system" ...
helpers_test.go:344: "nvidia-device-plugin-daemonset-5p5wt" [b9ee4eb2-3471-4a3f-83b8-cd8dabafe83c] Running
addons_test.go:1061: (dbg) TestAddons/parallel/NvidiaDevicePlugin: name=nvidia-device-plugin-ds healthy within 6.004006037s
addons_test.go:1064: (dbg) Run:  out/minikube-linux-amd64 addons disable nvidia-device-plugin -p addons-457129
--- PASS: TestAddons/parallel/NvidiaDevicePlugin (6.42s)

                                                
                                    
TestAddons/parallel/Yakd (10.59s)

=== RUN   TestAddons/parallel/Yakd
=== PAUSE TestAddons/parallel/Yakd

=== CONT  TestAddons/parallel/Yakd
addons_test.go:1072: (dbg) TestAddons/parallel/Yakd: waiting 2m0s for pods matching "app.kubernetes.io/name=yakd-dashboard" in namespace "yakd-dashboard" ...
helpers_test.go:344: "yakd-dashboard-67d98fc6b-lgxj7" [06e0cc8b-1c33-4525-a1d4-b1f7c5b76315] Running
addons_test.go:1072: (dbg) TestAddons/parallel/Yakd: app.kubernetes.io/name=yakd-dashboard healthy within 5.003768281s
addons_test.go:1076: (dbg) Run:  out/minikube-linux-amd64 -p addons-457129 addons disable yakd --alsologtostderr -v=1
addons_test.go:1076: (dbg) Done: out/minikube-linux-amd64 -p addons-457129 addons disable yakd --alsologtostderr -v=1: (5.580945087s)
--- PASS: TestAddons/parallel/Yakd (10.59s)

                                                
                                    
TestAddons/StoppedEnableDisable (11.08s)

=== RUN   TestAddons/StoppedEnableDisable
addons_test.go:174: (dbg) Run:  out/minikube-linux-amd64 stop -p addons-457129
addons_test.go:174: (dbg) Done: out/minikube-linux-amd64 stop -p addons-457129: (10.836506644s)
addons_test.go:178: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p addons-457129
addons_test.go:182: (dbg) Run:  out/minikube-linux-amd64 addons disable dashboard -p addons-457129
addons_test.go:187: (dbg) Run:  out/minikube-linux-amd64 addons disable gvisor -p addons-457129
--- PASS: TestAddons/StoppedEnableDisable (11.08s)

                                                
                                    
TestCertOptions (33.97s)

=== RUN   TestCertOptions
=== PAUSE TestCertOptions

=== CONT  TestCertOptions
cert_options_test.go:49: (dbg) Run:  out/minikube-linux-amd64 start -p cert-options-925350 --memory=2048 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=docker  --container-runtime=docker
cert_options_test.go:49: (dbg) Done: out/minikube-linux-amd64 start -p cert-options-925350 --memory=2048 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=docker  --container-runtime=docker: (30.216415867s)
cert_options_test.go:60: (dbg) Run:  out/minikube-linux-amd64 -p cert-options-925350 ssh "openssl x509 -text -noout -in /var/lib/minikube/certs/apiserver.crt"
cert_options_test.go:88: (dbg) Run:  kubectl --context cert-options-925350 config view
cert_options_test.go:100: (dbg) Run:  out/minikube-linux-amd64 ssh -p cert-options-925350 -- "sudo cat /etc/kubernetes/admin.conf"
helpers_test.go:175: Cleaning up "cert-options-925350" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p cert-options-925350
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p cert-options-925350: (2.888935359s)
--- PASS: TestCertOptions (33.97s)

                                                
                                    
TestCertExpiration (232.55s)

=== RUN   TestCertExpiration
=== PAUSE TestCertExpiration

=== CONT  TestCertExpiration
cert_options_test.go:123: (dbg) Run:  out/minikube-linux-amd64 start -p cert-expiration-301131 --memory=2048 --cert-expiration=3m --driver=docker  --container-runtime=docker
cert_options_test.go:123: (dbg) Done: out/minikube-linux-amd64 start -p cert-expiration-301131 --memory=2048 --cert-expiration=3m --driver=docker  --container-runtime=docker: (29.473262268s)
cert_options_test.go:131: (dbg) Run:  out/minikube-linux-amd64 start -p cert-expiration-301131 --memory=2048 --cert-expiration=8760h --driver=docker  --container-runtime=docker
E0918 20:26:29.239000   14329 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19667-7499/.minikube/profiles/skaffold-715264/client.crt: no such file or directory" logger="UnhandledError"
cert_options_test.go:131: (dbg) Done: out/minikube-linux-amd64 start -p cert-expiration-301131 --memory=2048 --cert-expiration=8760h --driver=docker  --container-runtime=docker: (20.891029833s)
helpers_test.go:175: Cleaning up "cert-expiration-301131" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p cert-expiration-301131
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p cert-expiration-301131: (2.189906293s)
--- PASS: TestCertExpiration (232.55s)

                                                
                                    
TestDockerFlags (31.12s)

=== RUN   TestDockerFlags
=== PAUSE TestDockerFlags

=== CONT  TestDockerFlags
docker_test.go:51: (dbg) Run:  out/minikube-linux-amd64 start -p docker-flags-286778 --cache-images=false --memory=2048 --install-addons=false --wait=false --docker-env=FOO=BAR --docker-env=BAZ=BAT --docker-opt=debug --docker-opt=icc=true --alsologtostderr -v=5 --driver=docker  --container-runtime=docker
docker_test.go:51: (dbg) Done: out/minikube-linux-amd64 start -p docker-flags-286778 --cache-images=false --memory=2048 --install-addons=false --wait=false --docker-env=FOO=BAR --docker-env=BAZ=BAT --docker-opt=debug --docker-opt=icc=true --alsologtostderr -v=5 --driver=docker  --container-runtime=docker: (28.38497976s)
docker_test.go:56: (dbg) Run:  out/minikube-linux-amd64 -p docker-flags-286778 ssh "sudo systemctl show docker --property=Environment --no-pager"
docker_test.go:67: (dbg) Run:  out/minikube-linux-amd64 -p docker-flags-286778 ssh "sudo systemctl show docker --property=ExecStart --no-pager"
helpers_test.go:175: Cleaning up "docker-flags-286778" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p docker-flags-286778
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p docker-flags-286778: (2.134742614s)
--- PASS: TestDockerFlags (31.12s)

                                                
                                    
TestForceSystemdFlag (34.04s)

=== RUN   TestForceSystemdFlag
=== PAUSE TestForceSystemdFlag

=== CONT  TestForceSystemdFlag
docker_test.go:91: (dbg) Run:  out/minikube-linux-amd64 start -p force-systemd-flag-222107 --memory=2048 --force-systemd --alsologtostderr -v=5 --driver=docker  --container-runtime=docker
docker_test.go:91: (dbg) Done: out/minikube-linux-amd64 start -p force-systemd-flag-222107 --memory=2048 --force-systemd --alsologtostderr -v=5 --driver=docker  --container-runtime=docker: (31.622180584s)
docker_test.go:110: (dbg) Run:  out/minikube-linux-amd64 -p force-systemd-flag-222107 ssh "docker info --format {{.CgroupDriver}}"
helpers_test.go:175: Cleaning up "force-systemd-flag-222107" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p force-systemd-flag-222107
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p force-systemd-flag-222107: (2.040454407s)
--- PASS: TestForceSystemdFlag (34.04s)

                                                
                                    
TestForceSystemdEnv (30.45s)

=== RUN   TestForceSystemdEnv
=== PAUSE TestForceSystemdEnv

=== CONT  TestForceSystemdEnv
docker_test.go:155: (dbg) Run:  out/minikube-linux-amd64 start -p force-systemd-env-944111 --memory=2048 --alsologtostderr -v=5 --driver=docker  --container-runtime=docker
docker_test.go:155: (dbg) Done: out/minikube-linux-amd64 start -p force-systemd-env-944111 --memory=2048 --alsologtostderr -v=5 --driver=docker  --container-runtime=docker: (28.110740586s)
docker_test.go:110: (dbg) Run:  out/minikube-linux-amd64 -p force-systemd-env-944111 ssh "docker info --format {{.CgroupDriver}}"
helpers_test.go:175: Cleaning up "force-systemd-env-944111" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p force-systemd-env-944111
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p force-systemd-env-944111: (2.044642094s)
--- PASS: TestForceSystemdEnv (30.45s)

                                                
                                    
TestKVMDriverInstallOrUpdate (3.59s)

=== RUN   TestKVMDriverInstallOrUpdate
=== PAUSE TestKVMDriverInstallOrUpdate

=== CONT  TestKVMDriverInstallOrUpdate
I0918 20:21:49.591917   14329 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
I0918 20:21:49.592080   14329 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/workspace/Docker_Linux_integration/testdata/kvm2-driver-without-version:/home/jenkins/workspace/Docker_Linux_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
W0918 20:21:49.620816   14329 install.go:62] docker-machine-driver-kvm2: exit status 1
W0918 20:21:49.621235   14329 out.go:174] [unset outFile]: * Downloading driver docker-machine-driver-kvm2:
I0918 20:21:49.621303   14329 download.go:107] Downloading: https://github.com/kubernetes/minikube/releases/download/v1.3.0/docker-machine-driver-kvm2-amd64?checksum=file:https://github.com/kubernetes/minikube/releases/download/v1.3.0/docker-machine-driver-kvm2-amd64.sha256 -> /tmp/TestKVMDriverInstallOrUpdate3090212166/001/docker-machine-driver-kvm2
I0918 20:21:49.907522   14329 driver.go:46] failed to download arch specific driver: getter: &{Ctx:context.Background Src:https://github.com/kubernetes/minikube/releases/download/v1.3.0/docker-machine-driver-kvm2-amd64?checksum=file:https://github.com/kubernetes/minikube/releases/download/v1.3.0/docker-machine-driver-kvm2-amd64.sha256 Dst:/tmp/TestKVMDriverInstallOrUpdate3090212166/001/docker-machine-driver-kvm2.download Pwd: Mode:2 Umask:---------- Detectors:[0x4668640 0x4668640 0x4668640 0x4668640 0x4668640 0x4668640 0x4668640] Decompressors:map[bz2:0xc0001290c0 gz:0xc0001290c8 tar:0xc000129070 tar.bz2:0xc000129080 tar.gz:0xc000129090 tar.xz:0xc0001290a0 tar.zst:0xc0001290b0 tbz2:0xc000129080 tgz:0xc000129090 txz:0xc0001290a0 tzst:0xc0001290b0 xz:0xc0001290d0 zip:0xc0001290e0 zst:0xc0001290d8] Getters:map[file:0xc001af1410 http:0xc000528af0 https:0xc000528b40] Dir:false ProgressListener:<nil> Insecure:false DisableSymlinks:false Options:[]}: invalid checksum: Error downloading checksum file: bad response code: 404. trying to get the common version
I0918 20:21:49.907579   14329 download.go:107] Downloading: https://github.com/kubernetes/minikube/releases/download/v1.3.0/docker-machine-driver-kvm2?checksum=file:https://github.com/kubernetes/minikube/releases/download/v1.3.0/docker-machine-driver-kvm2.sha256 -> /tmp/TestKVMDriverInstallOrUpdate3090212166/001/docker-machine-driver-kvm2
I0918 20:21:51.728114   14329 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
I0918 20:21:51.728211   14329 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/workspace/Docker_Linux_integration/testdata/kvm2-driver-older-version:/home/jenkins/workspace/Docker_Linux_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
I0918 20:21:51.755744   14329 install.go:137] /home/jenkins/workspace/Docker_Linux_integration/testdata/kvm2-driver-older-version/docker-machine-driver-kvm2 version is 1.1.1
W0918 20:21:51.755778   14329 install.go:62] docker-machine-driver-kvm2: docker-machine-driver-kvm2 is version 1.1.1, want 1.3.0
W0918 20:21:51.755834   14329 out.go:174] [unset outFile]: * Downloading driver docker-machine-driver-kvm2:
I0918 20:21:51.755860   14329 download.go:107] Downloading: https://github.com/kubernetes/minikube/releases/download/v1.3.0/docker-machine-driver-kvm2-amd64?checksum=file:https://github.com/kubernetes/minikube/releases/download/v1.3.0/docker-machine-driver-kvm2-amd64.sha256 -> /tmp/TestKVMDriverInstallOrUpdate3090212166/002/docker-machine-driver-kvm2
I0918 20:21:51.815655   14329 driver.go:46] failed to download arch specific driver: getter: &{Ctx:context.Background Src:https://github.com/kubernetes/minikube/releases/download/v1.3.0/docker-machine-driver-kvm2-amd64?checksum=file:https://github.com/kubernetes/minikube/releases/download/v1.3.0/docker-machine-driver-kvm2-amd64.sha256 Dst:/tmp/TestKVMDriverInstallOrUpdate3090212166/002/docker-machine-driver-kvm2.download Pwd: Mode:2 Umask:---------- Detectors:[0x4668640 0x4668640 0x4668640 0x4668640 0x4668640 0x4668640 0x4668640] Decompressors:map[bz2:0xc0001290c0 gz:0xc0001290c8 tar:0xc000129070 tar.bz2:0xc000129080 tar.gz:0xc000129090 tar.xz:0xc0001290a0 tar.zst:0xc0001290b0 tbz2:0xc000129080 tgz:0xc000129090 txz:0xc0001290a0 tzst:0xc0001290b0 xz:0xc0001290d0 zip:0xc0001290e0 zst:0xc0001290d8] Getters:map[file:0xc00066a760 http:0xc001a0f310 https:0xc001a0f360] Dir:false ProgressListener:<nil> Insecure:false DisableSymlinks:false Options:[]}: invalid checksum: Error downloading checksum file: bad response code: 404. trying to get the common version
I0918 20:21:51.815701   14329 download.go:107] Downloading: https://github.com/kubernetes/minikube/releases/download/v1.3.0/docker-machine-driver-kvm2?checksum=file:https://github.com/kubernetes/minikube/releases/download/v1.3.0/docker-machine-driver-kvm2.sha256 -> /tmp/TestKVMDriverInstallOrUpdate3090212166/002/docker-machine-driver-kvm2
E0918 20:21:51.888723   14329 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19667-7499/.minikube/profiles/addons-457129/client.crt: no such file or directory" logger="UnhandledError"
--- PASS: TestKVMDriverInstallOrUpdate (3.59s)

TestErrorSpam/setup (24.28s)

=== RUN   TestErrorSpam/setup
error_spam_test.go:81: (dbg) Run:  out/minikube-linux-amd64 start -p nospam-218621 -n=1 --memory=2250 --wait=false --log_dir=/tmp/nospam-218621 --driver=docker  --container-runtime=docker
error_spam_test.go:81: (dbg) Done: out/minikube-linux-amd64 start -p nospam-218621 -n=1 --memory=2250 --wait=false --log_dir=/tmp/nospam-218621 --driver=docker  --container-runtime=docker: (24.284671305s)
--- PASS: TestErrorSpam/setup (24.28s)

TestErrorSpam/start (0.56s)

=== RUN   TestErrorSpam/start
error_spam_test.go:216: Cleaning up 1 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-218621 --log_dir /tmp/nospam-218621 start --dry-run
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-218621 --log_dir /tmp/nospam-218621 start --dry-run
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-amd64 -p nospam-218621 --log_dir /tmp/nospam-218621 start --dry-run
--- PASS: TestErrorSpam/start (0.56s)

TestErrorSpam/status (0.85s)

=== RUN   TestErrorSpam/status
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-218621 --log_dir /tmp/nospam-218621 status
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-218621 --log_dir /tmp/nospam-218621 status
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-amd64 -p nospam-218621 --log_dir /tmp/nospam-218621 status
--- PASS: TestErrorSpam/status (0.85s)

TestErrorSpam/pause (1.15s)

=== RUN   TestErrorSpam/pause
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-218621 --log_dir /tmp/nospam-218621 pause
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-218621 --log_dir /tmp/nospam-218621 pause
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-amd64 -p nospam-218621 --log_dir /tmp/nospam-218621 pause
--- PASS: TestErrorSpam/pause (1.15s)

TestErrorSpam/unpause (1.37s)

=== RUN   TestErrorSpam/unpause
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-218621 --log_dir /tmp/nospam-218621 unpause
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-218621 --log_dir /tmp/nospam-218621 unpause
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-amd64 -p nospam-218621 --log_dir /tmp/nospam-218621 unpause
--- PASS: TestErrorSpam/unpause (1.37s)

TestErrorSpam/stop (1.86s)

=== RUN   TestErrorSpam/stop
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-218621 --log_dir /tmp/nospam-218621 stop
error_spam_test.go:159: (dbg) Done: out/minikube-linux-amd64 -p nospam-218621 --log_dir /tmp/nospam-218621 stop: (1.688086052s)
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-218621 --log_dir /tmp/nospam-218621 stop
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-amd64 -p nospam-218621 --log_dir /tmp/nospam-218621 stop
--- PASS: TestErrorSpam/stop (1.86s)

TestFunctional/serial/CopySyncFile (0s)

=== RUN   TestFunctional/serial/CopySyncFile
functional_test.go:1855: local sync path: /home/jenkins/minikube-integration/19667-7499/.minikube/files/etc/test/nested/copy/14329/hosts
--- PASS: TestFunctional/serial/CopySyncFile (0.00s)

TestFunctional/serial/StartWithProxy (66.97s)

=== RUN   TestFunctional/serial/StartWithProxy
functional_test.go:2234: (dbg) Run:  out/minikube-linux-amd64 start -p functional-180257 --memory=4000 --apiserver-port=8441 --wait=all --driver=docker  --container-runtime=docker
functional_test.go:2234: (dbg) Done: out/minikube-linux-amd64 start -p functional-180257 --memory=4000 --apiserver-port=8441 --wait=all --driver=docker  --container-runtime=docker: (1m6.971272509s)
--- PASS: TestFunctional/serial/StartWithProxy (66.97s)

TestFunctional/serial/AuditLog (0s)

=== RUN   TestFunctional/serial/AuditLog
--- PASS: TestFunctional/serial/AuditLog (0.00s)

TestFunctional/serial/SoftStart (33.68s)

=== RUN   TestFunctional/serial/SoftStart
I0918 19:53:41.105671   14329 config.go:182] Loaded profile config "functional-180257": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.31.1
functional_test.go:659: (dbg) Run:  out/minikube-linux-amd64 start -p functional-180257 --alsologtostderr -v=8
functional_test.go:659: (dbg) Done: out/minikube-linux-amd64 start -p functional-180257 --alsologtostderr -v=8: (33.67905362s)
functional_test.go:663: soft start took 33.679905165s for "functional-180257" cluster.
I0918 19:54:14.785192   14329 config.go:182] Loaded profile config "functional-180257": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.31.1
--- PASS: TestFunctional/serial/SoftStart (33.68s)

TestFunctional/serial/KubeContext (0.05s)

=== RUN   TestFunctional/serial/KubeContext
functional_test.go:681: (dbg) Run:  kubectl config current-context
--- PASS: TestFunctional/serial/KubeContext (0.05s)

TestFunctional/serial/KubectlGetPods (0.07s)

=== RUN   TestFunctional/serial/KubectlGetPods
functional_test.go:696: (dbg) Run:  kubectl --context functional-180257 get po -A
--- PASS: TestFunctional/serial/KubectlGetPods (0.07s)

TestFunctional/serial/CacheCmd/cache/add_remote (2.43s)

=== RUN   TestFunctional/serial/CacheCmd/cache/add_remote
functional_test.go:1049: (dbg) Run:  out/minikube-linux-amd64 -p functional-180257 cache add registry.k8s.io/pause:3.1
functional_test.go:1049: (dbg) Run:  out/minikube-linux-amd64 -p functional-180257 cache add registry.k8s.io/pause:3.3
functional_test.go:1049: (dbg) Run:  out/minikube-linux-amd64 -p functional-180257 cache add registry.k8s.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/add_remote (2.43s)

TestFunctional/serial/CacheCmd/cache/add_local (1.44s)

=== RUN   TestFunctional/serial/CacheCmd/cache/add_local
functional_test.go:1077: (dbg) Run:  docker build -t minikube-local-cache-test:functional-180257 /tmp/TestFunctionalserialCacheCmdcacheadd_local3338649365/001
functional_test.go:1089: (dbg) Run:  out/minikube-linux-amd64 -p functional-180257 cache add minikube-local-cache-test:functional-180257
functional_test.go:1089: (dbg) Done: out/minikube-linux-amd64 -p functional-180257 cache add minikube-local-cache-test:functional-180257: (1.104533848s)
functional_test.go:1094: (dbg) Run:  out/minikube-linux-amd64 -p functional-180257 cache delete minikube-local-cache-test:functional-180257
functional_test.go:1083: (dbg) Run:  docker rmi minikube-local-cache-test:functional-180257
--- PASS: TestFunctional/serial/CacheCmd/cache/add_local (1.44s)

TestFunctional/serial/CacheCmd/cache/CacheDelete (0.05s)

=== RUN   TestFunctional/serial/CacheCmd/cache/CacheDelete
functional_test.go:1102: (dbg) Run:  out/minikube-linux-amd64 cache delete registry.k8s.io/pause:3.3
--- PASS: TestFunctional/serial/CacheCmd/cache/CacheDelete (0.05s)

TestFunctional/serial/CacheCmd/cache/list (0.05s)

=== RUN   TestFunctional/serial/CacheCmd/cache/list
functional_test.go:1110: (dbg) Run:  out/minikube-linux-amd64 cache list
--- PASS: TestFunctional/serial/CacheCmd/cache/list (0.05s)

TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node (0.27s)

=== RUN   TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node
functional_test.go:1124: (dbg) Run:  out/minikube-linux-amd64 -p functional-180257 ssh sudo crictl images
--- PASS: TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node (0.27s)

TestFunctional/serial/CacheCmd/cache/cache_reload (1.29s)

=== RUN   TestFunctional/serial/CacheCmd/cache/cache_reload
functional_test.go:1147: (dbg) Run:  out/minikube-linux-amd64 -p functional-180257 ssh sudo docker rmi registry.k8s.io/pause:latest
functional_test.go:1153: (dbg) Run:  out/minikube-linux-amd64 -p functional-180257 ssh sudo crictl inspecti registry.k8s.io/pause:latest
functional_test.go:1153: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-180257 ssh sudo crictl inspecti registry.k8s.io/pause:latest: exit status 1 (259.947108ms)

-- stdout --
	FATA[0000] no such image "registry.k8s.io/pause:latest" present

-- /stdout --
** stderr ** 
	ssh: Process exited with status 1

** /stderr **
functional_test.go:1158: (dbg) Run:  out/minikube-linux-amd64 -p functional-180257 cache reload
functional_test.go:1163: (dbg) Run:  out/minikube-linux-amd64 -p functional-180257 ssh sudo crictl inspecti registry.k8s.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/cache_reload (1.29s)

TestFunctional/serial/CacheCmd/cache/delete (0.09s)

=== RUN   TestFunctional/serial/CacheCmd/cache/delete
functional_test.go:1172: (dbg) Run:  out/minikube-linux-amd64 cache delete registry.k8s.io/pause:3.1
functional_test.go:1172: (dbg) Run:  out/minikube-linux-amd64 cache delete registry.k8s.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/delete (0.09s)

TestFunctional/serial/MinikubeKubectlCmd (0.1s)

=== RUN   TestFunctional/serial/MinikubeKubectlCmd
functional_test.go:716: (dbg) Run:  out/minikube-linux-amd64 -p functional-180257 kubectl -- --context functional-180257 get pods
--- PASS: TestFunctional/serial/MinikubeKubectlCmd (0.10s)

TestFunctional/serial/MinikubeKubectlCmdDirectly (0.1s)

=== RUN   TestFunctional/serial/MinikubeKubectlCmdDirectly
functional_test.go:741: (dbg) Run:  out/kubectl --context functional-180257 get pods
--- PASS: TestFunctional/serial/MinikubeKubectlCmdDirectly (0.10s)

TestFunctional/serial/ExtraConfig (41.4s)

=== RUN   TestFunctional/serial/ExtraConfig
functional_test.go:757: (dbg) Run:  out/minikube-linux-amd64 start -p functional-180257 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all
functional_test.go:757: (dbg) Done: out/minikube-linux-amd64 start -p functional-180257 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all: (41.398068612s)
functional_test.go:761: restart took 41.398208186s for "functional-180257" cluster.
I0918 19:55:02.106792   14329 config.go:182] Loaded profile config "functional-180257": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.31.1
--- PASS: TestFunctional/serial/ExtraConfig (41.40s)

TestFunctional/serial/ComponentHealth (0.06s)

=== RUN   TestFunctional/serial/ComponentHealth
functional_test.go:810: (dbg) Run:  kubectl --context functional-180257 get po -l tier=control-plane -n kube-system -o=json
functional_test.go:825: etcd phase: Running
functional_test.go:835: etcd status: Ready
functional_test.go:825: kube-apiserver phase: Running
functional_test.go:835: kube-apiserver status: Ready
functional_test.go:825: kube-controller-manager phase: Running
functional_test.go:835: kube-controller-manager status: Ready
functional_test.go:825: kube-scheduler phase: Running
functional_test.go:835: kube-scheduler status: Ready
--- PASS: TestFunctional/serial/ComponentHealth (0.06s)

TestFunctional/serial/LogsCmd (0.94s)

=== RUN   TestFunctional/serial/LogsCmd
functional_test.go:1236: (dbg) Run:  out/minikube-linux-amd64 -p functional-180257 logs
--- PASS: TestFunctional/serial/LogsCmd (0.94s)

TestFunctional/serial/LogsFileCmd (0.99s)

=== RUN   TestFunctional/serial/LogsFileCmd
functional_test.go:1250: (dbg) Run:  out/minikube-linux-amd64 -p functional-180257 logs --file /tmp/TestFunctionalserialLogsFileCmd4178305198/001/logs.txt
--- PASS: TestFunctional/serial/LogsFileCmd (0.99s)

TestFunctional/serial/InvalidService (3.88s)

=== RUN   TestFunctional/serial/InvalidService
functional_test.go:2321: (dbg) Run:  kubectl --context functional-180257 apply -f testdata/invalidsvc.yaml
functional_test.go:2335: (dbg) Run:  out/minikube-linux-amd64 service invalid-svc -p functional-180257
functional_test.go:2335: (dbg) Non-zero exit: out/minikube-linux-amd64 service invalid-svc -p functional-180257: exit status 115 (318.43088ms)

-- stdout --
	|-----------|-------------|-------------|---------------------------|
	| NAMESPACE |    NAME     | TARGET PORT |            URL            |
	|-----------|-------------|-------------|---------------------------|
	| default   | invalid-svc |          80 | http://192.168.49.2:30347 |
	|-----------|-------------|-------------|---------------------------|

-- /stdout --
** stderr ** 
	X Exiting due to SVC_UNREACHABLE: service not available: no running pod for service invalid-svc found
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_service_96b204199e3191fa1740d4430b018a3c8028d52d_0.log                 │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
functional_test.go:2327: (dbg) Run:  kubectl --context functional-180257 delete -f testdata/invalidsvc.yaml
--- PASS: TestFunctional/serial/InvalidService (3.88s)

TestFunctional/parallel/ConfigCmd (0.36s)

=== RUN   TestFunctional/parallel/ConfigCmd
=== PAUSE TestFunctional/parallel/ConfigCmd

=== CONT  TestFunctional/parallel/ConfigCmd
functional_test.go:1199: (dbg) Run:  out/minikube-linux-amd64 -p functional-180257 config unset cpus
functional_test.go:1199: (dbg) Run:  out/minikube-linux-amd64 -p functional-180257 config get cpus
functional_test.go:1199: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-180257 config get cpus: exit status 14 (80.156921ms)

** stderr ** 
	Error: specified key could not be found in config

** /stderr **
functional_test.go:1199: (dbg) Run:  out/minikube-linux-amd64 -p functional-180257 config set cpus 2
functional_test.go:1199: (dbg) Run:  out/minikube-linux-amd64 -p functional-180257 config get cpus
functional_test.go:1199: (dbg) Run:  out/minikube-linux-amd64 -p functional-180257 config unset cpus
functional_test.go:1199: (dbg) Run:  out/minikube-linux-amd64 -p functional-180257 config get cpus
functional_test.go:1199: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-180257 config get cpus: exit status 14 (58.153081ms)

** stderr ** 
	Error: specified key could not be found in config

** /stderr **
--- PASS: TestFunctional/parallel/ConfigCmd (0.36s)

TestFunctional/parallel/DashboardCmd (10.48s)

=== RUN   TestFunctional/parallel/DashboardCmd
=== PAUSE TestFunctional/parallel/DashboardCmd

=== CONT  TestFunctional/parallel/DashboardCmd
functional_test.go:905: (dbg) daemon: [out/minikube-linux-amd64 dashboard --url --port 36195 -p functional-180257 --alsologtostderr -v=1]
functional_test.go:910: (dbg) stopping [out/minikube-linux-amd64 dashboard --url --port 36195 -p functional-180257 --alsologtostderr -v=1] ...
helpers_test.go:508: unable to kill pid 67639: os: process already finished
--- PASS: TestFunctional/parallel/DashboardCmd (10.48s)

TestFunctional/parallel/DryRun (0.34s)

=== RUN   TestFunctional/parallel/DryRun
=== PAUSE TestFunctional/parallel/DryRun

=== CONT  TestFunctional/parallel/DryRun
functional_test.go:974: (dbg) Run:  out/minikube-linux-amd64 start -p functional-180257 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=docker
functional_test.go:974: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p functional-180257 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=docker: exit status 23 (148.184941ms)

-- stdout --
	* [functional-180257] minikube v1.34.0 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=19667
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/19667-7499/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/19667-7499/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the docker driver based on existing profile

-- /stdout --
** stderr ** 
	I0918 19:55:22.809453   66678 out.go:345] Setting OutFile to fd 1 ...
	I0918 19:55:22.809553   66678 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0918 19:55:22.809561   66678 out.go:358] Setting ErrFile to fd 2...
	I0918 19:55:22.809566   66678 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0918 19:55:22.809777   66678 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19667-7499/.minikube/bin
	I0918 19:55:22.810309   66678 out.go:352] Setting JSON to false
	I0918 19:55:22.811416   66678 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent","uptime":2269,"bootTime":1726687054,"procs":266,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1069-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0918 19:55:22.811524   66678 start.go:139] virtualization: kvm guest
	I0918 19:55:22.813926   66678 out.go:177] * [functional-180257] minikube v1.34.0 on Ubuntu 20.04 (kvm/amd64)
	I0918 19:55:22.815423   66678 out.go:177]   - MINIKUBE_LOCATION=19667
	I0918 19:55:22.815451   66678 notify.go:220] Checking for updates...
	I0918 19:55:22.818177   66678 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0918 19:55:22.819542   66678 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19667-7499/kubeconfig
	I0918 19:55:22.820958   66678 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19667-7499/.minikube
	I0918 19:55:22.822225   66678 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0918 19:55:22.823543   66678 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0918 19:55:22.825657   66678 config.go:182] Loaded profile config "functional-180257": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I0918 19:55:22.826184   66678 driver.go:394] Setting default libvirt URI to qemu:///system
	I0918 19:55:22.850230   66678 docker.go:123] docker version: linux-27.2.1:Docker Engine - Community
	I0918 19:55:22.850321   66678 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0918 19:55:22.904464   66678 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:35 OomKillDisable:true NGoroutines:55 SystemTime:2024-09-18 19:55:22.895027047 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1069-gcp OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33647943680 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent Labels:[] ExperimentalBuild:false ServerVersion:27.2.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:7f7fdf5fed64eb6a7caf99b3e12efcf9d60e311c Expected:7f7fdf5fed64eb6a7caf99b3e12efcf9d60e311c} RuncCommit:{ID:v1.1.14-0-g2c9f560 Expected:v1.1.14-0-g2c9f560} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.16.2] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.29.2] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I0918 19:55:22.904565   66678 docker.go:318] overlay module found
	I0918 19:55:22.906693   66678 out.go:177] * Using the docker driver based on existing profile
	I0918 19:55:22.908299   66678 start.go:297] selected driver: docker
	I0918 19:55:22.908315   66678 start.go:901] validating driver "docker" against &{Name:functional-180257 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726589491-19662@sha256:6370b9fec173944088c2d87d44b01819c0ec611a83d9e2f38d36352dff8121a4 Memory:4000 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:functional-180257 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0918 19:55:22.908403   66678 start.go:912] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0918 19:55:22.910742   66678 out.go:201] 
	W0918 19:55:22.912054   66678 out.go:270] X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	I0918 19:55:22.913618   66678 out.go:201] 

** /stderr **
functional_test.go:991: (dbg) Run:  out/minikube-linux-amd64 start -p functional-180257 --dry-run --alsologtostderr -v=1 --driver=docker  --container-runtime=docker
--- PASS: TestFunctional/parallel/DryRun (0.34s)

TestFunctional/parallel/InternationalLanguage (0.17s)

=== RUN   TestFunctional/parallel/InternationalLanguage
=== PAUSE TestFunctional/parallel/InternationalLanguage

=== CONT  TestFunctional/parallel/InternationalLanguage
functional_test.go:1020: (dbg) Run:  out/minikube-linux-amd64 start -p functional-180257 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=docker
functional_test.go:1020: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p functional-180257 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=docker: exit status 23 (172.148318ms)

-- stdout --
	* [functional-180257] minikube v1.34.0 sur Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=19667
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/19667-7499/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/19667-7499/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Utilisation du pilote docker basé sur le profil existant
	
	

-- /stdout --
** stderr ** 
	I0918 19:55:23.164548   66931 out.go:345] Setting OutFile to fd 1 ...
	I0918 19:55:23.164668   66931 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0918 19:55:23.164674   66931 out.go:358] Setting ErrFile to fd 2...
	I0918 19:55:23.164680   66931 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0918 19:55:23.165064   66931 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19667-7499/.minikube/bin
	I0918 19:55:23.165624   66931 out.go:352] Setting JSON to false
	I0918 19:55:23.166620   66931 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent","uptime":2269,"bootTime":1726687054,"procs":272,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1069-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0918 19:55:23.166735   66931 start.go:139] virtualization: kvm guest
	I0918 19:55:23.169329   66931 out.go:177] * [functional-180257] minikube v1.34.0 sur Ubuntu 20.04 (kvm/amd64)
	I0918 19:55:23.171163   66931 notify.go:220] Checking for updates...
	I0918 19:55:23.171313   66931 out.go:177]   - MINIKUBE_LOCATION=19667
	I0918 19:55:23.173000   66931 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0918 19:55:23.174381   66931 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19667-7499/kubeconfig
	I0918 19:55:23.175951   66931 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19667-7499/.minikube
	I0918 19:55:23.177692   66931 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0918 19:55:23.180477   66931 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0918 19:55:23.182809   66931 config.go:182] Loaded profile config "functional-180257": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I0918 19:55:23.183421   66931 driver.go:394] Setting default libvirt URI to qemu:///system
	I0918 19:55:23.210829   66931 docker.go:123] docker version: linux-27.2.1:Docker Engine - Community
	I0918 19:55:23.210919   66931 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0918 19:55:23.270657   66931 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:33 OomKillDisable:true NGoroutines:53 SystemTime:2024-09-18 19:55:23.259462593 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1069-gcp OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:x86
_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33647943680 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent Labels:[] ExperimentalBuild:false ServerVersion:27.2.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:7f7fdf5fed64eb6a7caf99b3e12efcf9d60e311c Expected:7f7fdf5fed64eb6a7caf99b3e12efcf9d60e311c} RuncCommit:{ID:v1.1.14-0-g2c9f560 Expected:v1.1.14-0-g2c9f560} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerError
s:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.16.2] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.29.2] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I0918 19:55:23.270747   66931 docker.go:318] overlay module found
	I0918 19:55:23.272848   66931 out.go:177] * Utilisation du pilote docker basé sur le profil existant
	I0918 19:55:23.274222   66931 start.go:297] selected driver: docker
	I0918 19:55:23.274236   66931 start.go:901] validating driver "docker" against &{Name:functional-180257 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726589491-19662@sha256:6370b9fec173944088c2d87d44b01819c0ec611a83d9e2f38d36352dff8121a4 Memory:4000 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:functional-180257 Namespace:default APIServerHAVIP: APIServerNa
me:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker Mou
ntIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0918 19:55:23.274343   66931 start.go:912] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0918 19:55:23.276829   66931 out.go:201] 
	W0918 19:55:23.278274   66931 out.go:270] X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	I0918 19:55:23.279985   66931 out.go:201] 

** /stderr **
--- PASS: TestFunctional/parallel/InternationalLanguage (0.17s)

TestFunctional/parallel/StatusCmd (0.9s)

=== RUN   TestFunctional/parallel/StatusCmd
=== PAUSE TestFunctional/parallel/StatusCmd

=== CONT  TestFunctional/parallel/StatusCmd
functional_test.go:854: (dbg) Run:  out/minikube-linux-amd64 -p functional-180257 status
functional_test.go:860: (dbg) Run:  out/minikube-linux-amd64 -p functional-180257 status -f host:{{.Host}},kublet:{{.Kubelet}},apiserver:{{.APIServer}},kubeconfig:{{.Kubeconfig}}
functional_test.go:872: (dbg) Run:  out/minikube-linux-amd64 -p functional-180257 status -o json
--- PASS: TestFunctional/parallel/StatusCmd (0.90s)

TestFunctional/parallel/ServiceCmdConnect (11.51s)

=== RUN   TestFunctional/parallel/ServiceCmdConnect
=== PAUSE TestFunctional/parallel/ServiceCmdConnect

=== CONT  TestFunctional/parallel/ServiceCmdConnect
functional_test.go:1629: (dbg) Run:  kubectl --context functional-180257 create deployment hello-node-connect --image=registry.k8s.io/echoserver:1.8
functional_test.go:1635: (dbg) Run:  kubectl --context functional-180257 expose deployment hello-node-connect --type=NodePort --port=8080
functional_test.go:1640: (dbg) TestFunctional/parallel/ServiceCmdConnect: waiting 10m0s for pods matching "app=hello-node-connect" in namespace "default" ...
helpers_test.go:344: "hello-node-connect-67bdd5bbb4-w9g5r" [aae73896-1e65-4f13-b3f9-f4cbc716f011] Pending / Ready:ContainersNotReady (containers with unready status: [echoserver]) / ContainersReady:ContainersNotReady (containers with unready status: [echoserver])
helpers_test.go:344: "hello-node-connect-67bdd5bbb4-w9g5r" [aae73896-1e65-4f13-b3f9-f4cbc716f011] Running
functional_test.go:1640: (dbg) TestFunctional/parallel/ServiceCmdConnect: app=hello-node-connect healthy within 11.003083195s
functional_test.go:1649: (dbg) Run:  out/minikube-linux-amd64 -p functional-180257 service hello-node-connect --url
functional_test.go:1655: found endpoint for hello-node-connect: http://192.168.49.2:32745
functional_test.go:1675: http://192.168.49.2:32745: success! body:

Hostname: hello-node-connect-67bdd5bbb4-w9g5r

Pod Information:
	-no pod information available-

Server values:
	server_version=nginx: 1.13.3 - lua: 10008

Request Information:
	client_address=10.244.0.1
	method=GET
	real path=/
	query=
	request_version=1.1
	request_uri=http://192.168.49.2:8080/

Request Headers:
	accept-encoding=gzip
	host=192.168.49.2:32745
	user-agent=Go-http-client/1.1

Request Body:
	-no body in request-

--- PASS: TestFunctional/parallel/ServiceCmdConnect (11.51s)

TestFunctional/parallel/AddonsCmd (0.14s)

=== RUN   TestFunctional/parallel/AddonsCmd
=== PAUSE TestFunctional/parallel/AddonsCmd

=== CONT  TestFunctional/parallel/AddonsCmd
functional_test.go:1690: (dbg) Run:  out/minikube-linux-amd64 -p functional-180257 addons list
functional_test.go:1702: (dbg) Run:  out/minikube-linux-amd64 -p functional-180257 addons list -o json
--- PASS: TestFunctional/parallel/AddonsCmd (0.14s)

TestFunctional/parallel/PersistentVolumeClaim (34.03s)

=== RUN   TestFunctional/parallel/PersistentVolumeClaim
=== PAUSE TestFunctional/parallel/PersistentVolumeClaim

=== CONT  TestFunctional/parallel/PersistentVolumeClaim
functional_test_pvc_test.go:44: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 4m0s for pods matching "integration-test=storage-provisioner" in namespace "kube-system" ...
helpers_test.go:344: "storage-provisioner" [bf9ce617-2870-40c5-a765-d02a1ef639f7] Running
functional_test_pvc_test.go:44: (dbg) TestFunctional/parallel/PersistentVolumeClaim: integration-test=storage-provisioner healthy within 5.002991886s
functional_test_pvc_test.go:49: (dbg) Run:  kubectl --context functional-180257 get storageclass -o=json
functional_test_pvc_test.go:69: (dbg) Run:  kubectl --context functional-180257 apply -f testdata/storage-provisioner/pvc.yaml
functional_test_pvc_test.go:76: (dbg) Run:  kubectl --context functional-180257 get pvc myclaim -o=json
functional_test_pvc_test.go:125: (dbg) Run:  kubectl --context functional-180257 apply -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 3m0s for pods matching "test=storage-provisioner" in namespace "default" ...
helpers_test.go:344: "sp-pod" [841720f2-bfe5-45ad-afae-a529e0c5ccd6] Pending
helpers_test.go:344: "sp-pod" [841720f2-bfe5-45ad-afae-a529e0c5ccd6] Pending / Ready:ContainersNotReady (containers with unready status: [myfrontend]) / ContainersReady:ContainersNotReady (containers with unready status: [myfrontend])
helpers_test.go:344: "sp-pod" [841720f2-bfe5-45ad-afae-a529e0c5ccd6] Running
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: test=storage-provisioner healthy within 14.003781846s
functional_test_pvc_test.go:100: (dbg) Run:  kubectl --context functional-180257 exec sp-pod -- touch /tmp/mount/foo
functional_test_pvc_test.go:106: (dbg) Run:  kubectl --context functional-180257 delete -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:106: (dbg) Done: kubectl --context functional-180257 delete -f testdata/storage-provisioner/pod.yaml: (1.192444048s)
functional_test_pvc_test.go:125: (dbg) Run:  kubectl --context functional-180257 apply -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 3m0s for pods matching "test=storage-provisioner" in namespace "default" ...
helpers_test.go:344: "sp-pod" [8bd249e9-3828-4c25-bb52-175042cfdc7f] Pending
helpers_test.go:344: "sp-pod" [8bd249e9-3828-4c25-bb52-175042cfdc7f] Pending / Ready:ContainersNotReady (containers with unready status: [myfrontend]) / ContainersReady:ContainersNotReady (containers with unready status: [myfrontend])
helpers_test.go:344: "sp-pod" [8bd249e9-3828-4c25-bb52-175042cfdc7f] Running
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: test=storage-provisioner healthy within 13.003122737s
functional_test_pvc_test.go:114: (dbg) Run:  kubectl --context functional-180257 exec sp-pod -- ls /tmp/mount
--- PASS: TestFunctional/parallel/PersistentVolumeClaim (34.03s)

TestFunctional/parallel/SSHCmd (0.64s)

=== RUN   TestFunctional/parallel/SSHCmd
=== PAUSE TestFunctional/parallel/SSHCmd

=== CONT  TestFunctional/parallel/SSHCmd
functional_test.go:1725: (dbg) Run:  out/minikube-linux-amd64 -p functional-180257 ssh "echo hello"
functional_test.go:1742: (dbg) Run:  out/minikube-linux-amd64 -p functional-180257 ssh "cat /etc/hostname"
--- PASS: TestFunctional/parallel/SSHCmd (0.64s)

TestFunctional/parallel/CpCmd (1.95s)

=== RUN   TestFunctional/parallel/CpCmd
=== PAUSE TestFunctional/parallel/CpCmd

=== CONT  TestFunctional/parallel/CpCmd
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p functional-180257 cp testdata/cp-test.txt /home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p functional-180257 ssh -n functional-180257 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p functional-180257 cp functional-180257:/home/docker/cp-test.txt /tmp/TestFunctionalparallelCpCmd3030138843/001/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p functional-180257 ssh -n functional-180257 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p functional-180257 cp testdata/cp-test.txt /tmp/does/not/exist/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p functional-180257 ssh -n functional-180257 "sudo cat /tmp/does/not/exist/cp-test.txt"
--- PASS: TestFunctional/parallel/CpCmd (1.95s)

TestFunctional/parallel/MySQL (27.17s)

=== RUN   TestFunctional/parallel/MySQL
=== PAUSE TestFunctional/parallel/MySQL

=== CONT  TestFunctional/parallel/MySQL
functional_test.go:1793: (dbg) Run:  kubectl --context functional-180257 replace --force -f testdata/mysql.yaml
functional_test.go:1799: (dbg) TestFunctional/parallel/MySQL: waiting 10m0s for pods matching "app=mysql" in namespace "default" ...
helpers_test.go:344: "mysql-6cdb49bbb-st8zc" [fba73ee5-0631-4c52-a3c0-b9882051e4c9] Pending / Ready:ContainersNotReady (containers with unready status: [mysql]) / ContainersReady:ContainersNotReady (containers with unready status: [mysql])
helpers_test.go:344: "mysql-6cdb49bbb-st8zc" [fba73ee5-0631-4c52-a3c0-b9882051e4c9] Running
functional_test.go:1799: (dbg) TestFunctional/parallel/MySQL: app=mysql healthy within 24.004704731s
functional_test.go:1807: (dbg) Run:  kubectl --context functional-180257 exec mysql-6cdb49bbb-st8zc -- mysql -ppassword -e "show databases;"
functional_test.go:1807: (dbg) Non-zero exit: kubectl --context functional-180257 exec mysql-6cdb49bbb-st8zc -- mysql -ppassword -e "show databases;": exit status 1 (108.9782ms)

** stderr ** 
	mysql: [Warning] Using a password on the command line interface can be insecure.
	ERROR 2002 (HY000): Can't connect to local MySQL server through socket '/var/run/mysqld/mysqld.sock' (2)
	command terminated with exit code 1

** /stderr **
I0918 19:55:54.627955   14329 retry.go:31] will retry after 1.034953258s: exit status 1
functional_test.go:1807: (dbg) Run:  kubectl --context functional-180257 exec mysql-6cdb49bbb-st8zc -- mysql -ppassword -e "show databases;"
functional_test.go:1807: (dbg) Non-zero exit: kubectl --context functional-180257 exec mysql-6cdb49bbb-st8zc -- mysql -ppassword -e "show databases;": exit status 1 (106.821428ms)

** stderr ** 
	mysql: [Warning] Using a password on the command line interface can be insecure.
	ERROR 2002 (HY000): Can't connect to local MySQL server through socket '/var/run/mysqld/mysqld.sock' (2)
	command terminated with exit code 1

** /stderr **
I0918 19:55:55.770118   14329 retry.go:31] will retry after 1.548479112s: exit status 1
functional_test.go:1807: (dbg) Run:  kubectl --context functional-180257 exec mysql-6cdb49bbb-st8zc -- mysql -ppassword -e "show databases;"
--- PASS: TestFunctional/parallel/MySQL (27.17s)

TestFunctional/parallel/FileSync (0.27s)

=== RUN   TestFunctional/parallel/FileSync
=== PAUSE TestFunctional/parallel/FileSync

=== CONT  TestFunctional/parallel/FileSync
functional_test.go:1929: Checking for existence of /etc/test/nested/copy/14329/hosts within VM
functional_test.go:1931: (dbg) Run:  out/minikube-linux-amd64 -p functional-180257 ssh "sudo cat /etc/test/nested/copy/14329/hosts"
functional_test.go:1936: file sync test content: Test file for checking file sync process
--- PASS: TestFunctional/parallel/FileSync (0.27s)

TestFunctional/parallel/CertSync (1.61s)

=== RUN   TestFunctional/parallel/CertSync
=== PAUSE TestFunctional/parallel/CertSync

=== CONT  TestFunctional/parallel/CertSync
functional_test.go:1972: Checking for existence of /etc/ssl/certs/14329.pem within VM
functional_test.go:1973: (dbg) Run:  out/minikube-linux-amd64 -p functional-180257 ssh "sudo cat /etc/ssl/certs/14329.pem"
functional_test.go:1972: Checking for existence of /usr/share/ca-certificates/14329.pem within VM
functional_test.go:1973: (dbg) Run:  out/minikube-linux-amd64 -p functional-180257 ssh "sudo cat /usr/share/ca-certificates/14329.pem"
functional_test.go:1972: Checking for existence of /etc/ssl/certs/51391683.0 within VM
functional_test.go:1973: (dbg) Run:  out/minikube-linux-amd64 -p functional-180257 ssh "sudo cat /etc/ssl/certs/51391683.0"
functional_test.go:1999: Checking for existence of /etc/ssl/certs/143292.pem within VM
functional_test.go:2000: (dbg) Run:  out/minikube-linux-amd64 -p functional-180257 ssh "sudo cat /etc/ssl/certs/143292.pem"
functional_test.go:1999: Checking for existence of /usr/share/ca-certificates/143292.pem within VM
functional_test.go:2000: (dbg) Run:  out/minikube-linux-amd64 -p functional-180257 ssh "sudo cat /usr/share/ca-certificates/143292.pem"
functional_test.go:1999: Checking for existence of /etc/ssl/certs/3ec20f2e.0 within VM
functional_test.go:2000: (dbg) Run:  out/minikube-linux-amd64 -p functional-180257 ssh "sudo cat /etc/ssl/certs/3ec20f2e.0"
--- PASS: TestFunctional/parallel/CertSync (1.61s)

TestFunctional/parallel/NodeLabels (0.07s)

=== RUN   TestFunctional/parallel/NodeLabels
=== PAUSE TestFunctional/parallel/NodeLabels

=== CONT  TestFunctional/parallel/NodeLabels
functional_test.go:219: (dbg) Run:  kubectl --context functional-180257 get nodes --output=go-template "--template='{{range $k, $v := (index .items 0).metadata.labels}}{{$k}} {{end}}'"
--- PASS: TestFunctional/parallel/NodeLabels (0.07s)

TestFunctional/parallel/NonActiveRuntimeDisabled (0.25s)

=== RUN   TestFunctional/parallel/NonActiveRuntimeDisabled
=== PAUSE TestFunctional/parallel/NonActiveRuntimeDisabled

=== CONT  TestFunctional/parallel/NonActiveRuntimeDisabled
functional_test.go:2027: (dbg) Run:  out/minikube-linux-amd64 -p functional-180257 ssh "sudo systemctl is-active crio"
functional_test.go:2027: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-180257 ssh "sudo systemctl is-active crio": exit status 1 (253.591504ms)

-- stdout --
	inactive

-- /stdout --
** stderr ** 
	ssh: Process exited with status 3

** /stderr **
--- PASS: TestFunctional/parallel/NonActiveRuntimeDisabled (0.25s)

TestFunctional/parallel/License (0.68s)

=== RUN   TestFunctional/parallel/License
=== PAUSE TestFunctional/parallel/License

=== CONT  TestFunctional/parallel/License
functional_test.go:2288: (dbg) Run:  out/minikube-linux-amd64 license
--- PASS: TestFunctional/parallel/License (0.68s)

TestFunctional/parallel/ServiceCmd/DeployApp (9.2s)

=== RUN   TestFunctional/parallel/ServiceCmd/DeployApp
functional_test.go:1439: (dbg) Run:  kubectl --context functional-180257 create deployment hello-node --image=registry.k8s.io/echoserver:1.8
functional_test.go:1445: (dbg) Run:  kubectl --context functional-180257 expose deployment hello-node --type=NodePort --port=8080
functional_test.go:1450: (dbg) TestFunctional/parallel/ServiceCmd/DeployApp: waiting 10m0s for pods matching "app=hello-node" in namespace "default" ...
helpers_test.go:344: "hello-node-6b9f76b5c7-jwnnl" [73ab8921-79ae-41f3-ae7f-276622528270] Pending / Ready:ContainersNotReady (containers with unready status: [echoserver]) / ContainersReady:ContainersNotReady (containers with unready status: [echoserver])
helpers_test.go:344: "hello-node-6b9f76b5c7-jwnnl" [73ab8921-79ae-41f3-ae7f-276622528270] Running
functional_test.go:1450: (dbg) TestFunctional/parallel/ServiceCmd/DeployApp: app=hello-node healthy within 9.003826369s
--- PASS: TestFunctional/parallel/ServiceCmd/DeployApp (9.20s)

TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel (0.54s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel
functional_test_tunnel_test.go:154: (dbg) daemon: [out/minikube-linux-amd64 -p functional-180257 tunnel --alsologtostderr]
functional_test_tunnel_test.go:154: (dbg) daemon: [out/minikube-linux-amd64 -p functional-180257 tunnel --alsologtostderr]
functional_test_tunnel_test.go:194: (dbg) stopping [out/minikube-linux-amd64 -p functional-180257 tunnel --alsologtostderr] ...
helpers_test.go:508: unable to kill pid 63692: os: process already finished
helpers_test.go:508: unable to kill pid 63274: os: process already finished
functional_test_tunnel_test.go:194: (dbg) stopping [out/minikube-linux-amd64 -p functional-180257 tunnel --alsologtostderr] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
--- PASS: TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel (0.54s)

                                                
                                    
TestFunctional/parallel/TunnelCmd/serial/StartTunnel (0s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/StartTunnel
functional_test_tunnel_test.go:129: (dbg) daemon: [out/minikube-linux-amd64 -p functional-180257 tunnel --alsologtostderr]
--- PASS: TestFunctional/parallel/TunnelCmd/serial/StartTunnel (0.00s)

                                                
                                    
TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup (12.25s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup
functional_test_tunnel_test.go:212: (dbg) Run:  kubectl --context functional-180257 apply -f testdata/testsvc.yaml
functional_test_tunnel_test.go:216: (dbg) TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup: waiting 4m0s for pods matching "run=nginx-svc" in namespace "default" ...
helpers_test.go:344: "nginx-svc" [1479e738-cb10-42a8-88a0-6ac829f82dc2] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
helpers_test.go:344: "nginx-svc" [1479e738-cb10-42a8-88a0-6ac829f82dc2] Running
functional_test_tunnel_test.go:216: (dbg) TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup: run=nginx-svc healthy within 12.003882359s
I0918 19:55:21.138168   14329 kapi.go:150] Service nginx-svc in namespace default found.
--- PASS: TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup (12.25s)

                                                
                                    
TestFunctional/parallel/ServiceCmd/List (0.51s)

=== RUN   TestFunctional/parallel/ServiceCmd/List
functional_test.go:1459: (dbg) Run:  out/minikube-linux-amd64 -p functional-180257 service list
--- PASS: TestFunctional/parallel/ServiceCmd/List (0.51s)

                                                
                                    
TestFunctional/parallel/ServiceCmd/JSONOutput (0.51s)

=== RUN   TestFunctional/parallel/ServiceCmd/JSONOutput
functional_test.go:1489: (dbg) Run:  out/minikube-linux-amd64 -p functional-180257 service list -o json
functional_test.go:1494: Took "506.50481ms" to run "out/minikube-linux-amd64 -p functional-180257 service list -o json"
--- PASS: TestFunctional/parallel/ServiceCmd/JSONOutput (0.51s)

                                                
                                    
TestFunctional/parallel/ServiceCmd/HTTPS (0.47s)

=== RUN   TestFunctional/parallel/ServiceCmd/HTTPS
functional_test.go:1509: (dbg) Run:  out/minikube-linux-amd64 -p functional-180257 service --namespace=default --https --url hello-node
functional_test.go:1522: found endpoint: https://192.168.49.2:31479
--- PASS: TestFunctional/parallel/ServiceCmd/HTTPS (0.47s)

                                                
                                    
TestFunctional/parallel/ServiceCmd/Format (0.46s)

=== RUN   TestFunctional/parallel/ServiceCmd/Format
functional_test.go:1540: (dbg) Run:  out/minikube-linux-amd64 -p functional-180257 service hello-node --url --format={{.IP}}
--- PASS: TestFunctional/parallel/ServiceCmd/Format (0.46s)

                                                
                                    
TestFunctional/parallel/ServiceCmd/URL (0.44s)

=== RUN   TestFunctional/parallel/ServiceCmd/URL
functional_test.go:1559: (dbg) Run:  out/minikube-linux-amd64 -p functional-180257 service hello-node --url
functional_test.go:1565: found endpoint for hello-node: http://192.168.49.2:31479
--- PASS: TestFunctional/parallel/ServiceCmd/URL (0.44s)

                                                
                                    
TestFunctional/parallel/Version/short (0.05s)

=== RUN   TestFunctional/parallel/Version/short
=== PAUSE TestFunctional/parallel/Version/short

=== CONT  TestFunctional/parallel/Version/short
functional_test.go:2256: (dbg) Run:  out/minikube-linux-amd64 -p functional-180257 version --short
--- PASS: TestFunctional/parallel/Version/short (0.05s)

                                                
                                    
TestFunctional/parallel/Version/components (0.52s)

=== RUN   TestFunctional/parallel/Version/components
=== PAUSE TestFunctional/parallel/Version/components

=== CONT  TestFunctional/parallel/Version/components
functional_test.go:2270: (dbg) Run:  out/minikube-linux-amd64 -p functional-180257 version -o=json --components
--- PASS: TestFunctional/parallel/Version/components (0.52s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageListShort (0.21s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageListShort
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListShort

=== CONT  TestFunctional/parallel/ImageCommands/ImageListShort
functional_test.go:261: (dbg) Run:  out/minikube-linux-amd64 -p functional-180257 image ls --format short --alsologtostderr
functional_test.go:266: (dbg) Stdout: out/minikube-linux-amd64 -p functional-180257 image ls --format short --alsologtostderr:
registry.k8s.io/pause:latest
registry.k8s.io/pause:3.3
registry.k8s.io/pause:3.10
registry.k8s.io/pause:3.1
registry.k8s.io/kube-scheduler:v1.31.1
registry.k8s.io/kube-proxy:v1.31.1
registry.k8s.io/kube-controller-manager:v1.31.1
registry.k8s.io/kube-apiserver:v1.31.1
registry.k8s.io/etcd:3.5.15-0
registry.k8s.io/echoserver:1.8
registry.k8s.io/coredns/coredns:v1.11.3
gcr.io/k8s-minikube/storage-provisioner:v5
gcr.io/k8s-minikube/busybox:1.28.4-glibc
docker.io/library/nginx:latest
docker.io/library/nginx:alpine
docker.io/library/minikube-local-cache-test:functional-180257
docker.io/kubernetesui/dashboard:<none>
docker.io/kicbase/echo-server:functional-180257
functional_test.go:269: (dbg) Stderr: out/minikube-linux-amd64 -p functional-180257 image ls --format short --alsologtostderr:
I0918 19:55:34.061735   71470 out.go:345] Setting OutFile to fd 1 ...
I0918 19:55:34.061980   71470 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0918 19:55:34.061988   71470 out.go:358] Setting ErrFile to fd 2...
I0918 19:55:34.061993   71470 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0918 19:55:34.062193   71470 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19667-7499/.minikube/bin
I0918 19:55:34.062783   71470 config.go:182] Loaded profile config "functional-180257": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.31.1
I0918 19:55:34.062890   71470 config.go:182] Loaded profile config "functional-180257": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.31.1
I0918 19:55:34.063265   71470 cli_runner.go:164] Run: docker container inspect functional-180257 --format={{.State.Status}}
I0918 19:55:34.081360   71470 ssh_runner.go:195] Run: systemctl --version
I0918 19:55:34.081419   71470 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-180257
I0918 19:55:34.099531   71470 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32778 SSHKeyPath:/home/jenkins/minikube-integration/19667-7499/.minikube/machines/functional-180257/id_rsa Username:docker}
I0918 19:55:34.193269   71470 ssh_runner.go:195] Run: docker images --no-trunc --format "{{json .}}"
--- PASS: TestFunctional/parallel/ImageCommands/ImageListShort (0.21s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageListTable (0.22s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageListTable
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListTable

=== CONT  TestFunctional/parallel/ImageCommands/ImageListTable
functional_test.go:261: (dbg) Run:  out/minikube-linux-amd64 -p functional-180257 image ls --format table --alsologtostderr
functional_test.go:266: (dbg) Stdout: out/minikube-linux-amd64 -p functional-180257 image ls --format table --alsologtostderr:
|---------------------------------------------|-------------------|---------------|--------|
|                    Image                    |        Tag        |   Image ID    |  Size  |
|---------------------------------------------|-------------------|---------------|--------|
| docker.io/library/minikube-local-cache-test | functional-180257 | f26cb45b29a90 | 30B    |
| registry.k8s.io/kube-scheduler              | v1.31.1           | 9aa1fad941575 | 67.4MB |
| docker.io/kicbase/echo-server               | functional-180257 | 9056ab77afb8e | 4.94MB |
| registry.k8s.io/pause                       | 3.1               | da86e6ba6ca19 | 742kB  |
| registry.k8s.io/kube-apiserver              | v1.31.1           | 6bab7719df100 | 94.2MB |
| registry.k8s.io/kube-controller-manager     | v1.31.1           | 175ffd71cce3d | 88.4MB |
| registry.k8s.io/coredns/coredns             | v1.11.3           | c69fa2e9cbf5f | 61.8MB |
| docker.io/kubernetesui/metrics-scraper      | <none>            | 115053965e86b | 43.8MB |
| gcr.io/k8s-minikube/busybox                 | 1.28.4-glibc      | 56cc512116c8f | 4.4MB  |
| registry.k8s.io/pause                       | 3.3               | 0184c1613d929 | 683kB  |
| registry.k8s.io/kube-proxy                  | v1.31.1           | 60c005f310ff3 | 91.5MB |
| docker.io/library/nginx                     | alpine            | c7b4f26a7d93f | 43.2MB |
| docker.io/library/nginx                     | latest            | 39286ab8a5e14 | 188MB  |
| registry.k8s.io/etcd                        | 3.5.15-0          | 2e96e5913fc06 | 148MB  |
| registry.k8s.io/pause                       | 3.10              | 873ed75102791 | 736kB  |
| docker.io/kubernetesui/dashboard            | <none>            | 07655ddf2eebe | 246MB  |
| gcr.io/k8s-minikube/storage-provisioner     | v5                | 6e38f40d628db | 31.5MB |
| registry.k8s.io/echoserver                  | 1.8               | 82e4c8a736a4f | 95.4MB |
| registry.k8s.io/pause                       | latest            | 350b164e7ae1d | 240kB  |
|---------------------------------------------|-------------------|---------------|--------|
functional_test.go:269: (dbg) Stderr: out/minikube-linux-amd64 -p functional-180257 image ls --format table --alsologtostderr:
I0918 19:55:34.791159   71844 out.go:345] Setting OutFile to fd 1 ...
I0918 19:55:34.791453   71844 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0918 19:55:34.791465   71844 out.go:358] Setting ErrFile to fd 2...
I0918 19:55:34.791469   71844 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0918 19:55:34.791662   71844 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19667-7499/.minikube/bin
I0918 19:55:34.792281   71844 config.go:182] Loaded profile config "functional-180257": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.31.1
I0918 19:55:34.792388   71844 config.go:182] Loaded profile config "functional-180257": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.31.1
I0918 19:55:34.792774   71844 cli_runner.go:164] Run: docker container inspect functional-180257 --format={{.State.Status}}
I0918 19:55:34.811762   71844 ssh_runner.go:195] Run: systemctl --version
I0918 19:55:34.811809   71844 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-180257
I0918 19:55:34.832308   71844 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32778 SSHKeyPath:/home/jenkins/minikube-integration/19667-7499/.minikube/machines/functional-180257/id_rsa Username:docker}
I0918 19:55:34.929563   71844 ssh_runner.go:195] Run: docker images --no-trunc --format "{{json .}}"
--- PASS: TestFunctional/parallel/ImageCommands/ImageListTable (0.22s)
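The table above reports sizes in human-readable decimal (SI) units. As a quick consistency check against the byte counts in the JSON listing (e.g. kube-apiserver at 94200000 bytes), here is a small converter sketch; it is an illustration, not part of the minikube codebase:

```python
# Convert human-readable sizes from the table ("94.2MB", "742kB", "30B")
# into bytes, assuming the decimal (SI) units Docker reports.
UNITS = {"B": 1, "kB": 10**3, "MB": 10**6, "GB": 10**9}

def size_to_bytes(s: str) -> int:
    # Try the longest suffixes first so "kB" is not misread as "B".
    for unit in sorted(UNITS, key=len, reverse=True):
        if s.endswith(unit):
            return int(round(float(s[: -len(unit)]) * UNITS[unit]))
    raise ValueError(f"unrecognized size: {s!r}")

print(size_to_bytes("94.2MB"))  # kube-apiserver -> 94200000 bytes
print(size_to_bytes("742kB"))   # pause:3.1      -> 742000 bytes
```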

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageListJson (0.28s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageListJson
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListJson

=== CONT  TestFunctional/parallel/ImageCommands/ImageListJson
functional_test.go:261: (dbg) Run:  out/minikube-linux-amd64 -p functional-180257 image ls --format json --alsologtostderr
functional_test.go:266: (dbg) Stdout: out/minikube-linux-amd64 -p functional-180257 image ls --format json --alsologtostderr:
[{"id":"6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee","repoDigests":[],"repoTags":["registry.k8s.io/kube-apiserver:v1.31.1"],"size":"94200000"},{"id":"60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561","repoDigests":[],"repoTags":["registry.k8s.io/kube-proxy:v1.31.1"],"size":"91500000"},{"id":"2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4","repoDigests":[],"repoTags":["registry.k8s.io/etcd:3.5.15-0"],"size":"148000000"},{"id":"82e4c8a736a4fcf22b5ef9f6a4ff6207064c7187d7694bf97bd561605a538410","repoDigests":[],"repoTags":["registry.k8s.io/echoserver:1.8"],"size":"95400000"},{"id":"f26cb45b29a90d704bff895f0982852d41c7710a0541ef517836eaf89485d6cd","repoDigests":[],"repoTags":["docker.io/library/minikube-local-cache-test:functional-180257"],"size":"30"},{"id":"39286ab8a5e14aeaf5fdd6e2fac76e0c8d31a0c07224f0ee5e6be502f12e93f3","repoDigests":[],"repoTags":["docker.io/library/nginx:latest"],"size":"188000000"},{"id":"9056ab77afb8e18e04303f11000a9d31b3f16b74
c59475b899ae1b342d328d30","repoDigests":[],"repoTags":["docker.io/kicbase/echo-server:functional-180257"],"size":"4940000"},{"id":"0184c1613d92931126feb4c548e5da11015513b9e4c104e7305ee8b53b50a9da","repoDigests":[],"repoTags":["registry.k8s.io/pause:3.3"],"size":"683000"},{"id":"56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c","repoDigests":[],"repoTags":["gcr.io/k8s-minikube/busybox:1.28.4-glibc"],"size":"4400000"},{"id":"9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b","repoDigests":[],"repoTags":["registry.k8s.io/kube-scheduler:v1.31.1"],"size":"67400000"},{"id":"c7b4f26a7d93f4f1f276c51adb03ef0df54a82de89f254a9aec5c18bf0e45ee9","repoDigests":[],"repoTags":["docker.io/library/nginx:alpine"],"size":"43200000"},{"id":"c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6","repoDigests":[],"repoTags":["registry.k8s.io/coredns/coredns:v1.11.3"],"size":"61800000"},{"id":"873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136","repoDigests":[],"repoTags":["reg
istry.k8s.io/pause:3.10"],"size":"736000"},{"id":"07655ddf2eebe5d250f7a72c25f638b27126805d61779741b4e62e69ba080558","repoDigests":[],"repoTags":["docker.io/kubernetesui/dashboard:\u003cnone\u003e"],"size":"246000000"},{"id":"6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562","repoDigests":[],"repoTags":["gcr.io/k8s-minikube/storage-provisioner:v5"],"size":"31500000"},{"id":"da86e6ba6ca197bf6bc5e9d900febd906b133eaa4750e6bed647b0fbe50ed43e","repoDigests":[],"repoTags":["registry.k8s.io/pause:3.1"],"size":"742000"},{"id":"350b164e7ae1dcddeffadd65c76226c9b6dc5553f5179153fb0e36b78f2a5e06","repoDigests":[],"repoTags":["registry.k8s.io/pause:latest"],"size":"240000"},{"id":"175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1","repoDigests":[],"repoTags":["registry.k8s.io/kube-controller-manager:v1.31.1"],"size":"88400000"}]
functional_test.go:269: (dbg) Stderr: out/minikube-linux-amd64 -p functional-180257 image ls --format json --alsologtostderr:
I0918 19:55:34.518964   71727 out.go:345] Setting OutFile to fd 1 ...
I0918 19:55:34.519144   71727 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0918 19:55:34.519166   71727 out.go:358] Setting ErrFile to fd 2...
I0918 19:55:34.519182   71727 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0918 19:55:34.519619   71727 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19667-7499/.minikube/bin
I0918 19:55:34.521300   71727 config.go:182] Loaded profile config "functional-180257": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.31.1
I0918 19:55:34.521476   71727 config.go:182] Loaded profile config "functional-180257": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.31.1
I0918 19:55:34.522029   71727 cli_runner.go:164] Run: docker container inspect functional-180257 --format={{.State.Status}}
I0918 19:55:34.544357   71727 ssh_runner.go:195] Run: systemctl --version
I0918 19:55:34.544477   71727 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-180257
I0918 19:55:34.564052   71727 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32778 SSHKeyPath:/home/jenkins/minikube-integration/19667-7499/.minikube/machines/functional-180257/id_rsa Username:docker}
I0918 19:55:34.709987   71727 ssh_runner.go:195] Run: docker images --no-trunc --format "{{json .}}"
--- PASS: TestFunctional/parallel/ImageCommands/ImageListJson (0.28s)
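The JSON format above is a list of objects with "id", "repoDigests", "repoTags", and "size" (bytes, encoded as a string). A short consumer sketch, using a sample trimmed from the output above (field names are taken from that output, not from any documented schema):

```python
import json

# Two entries trimmed from the `image ls --format json` output above.
sample = """[
  {"id": "6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee",
   "repoDigests": [], "repoTags": ["registry.k8s.io/kube-apiserver:v1.31.1"],
   "size": "94200000"},
  {"id": "350b164e7ae1dcddeffadd65c76226c9b6dc5553f5179153fb0e36b78f2a5e06",
   "repoDigests": [], "repoTags": ["registry.k8s.io/pause:latest"],
   "size": "240000"}
]"""

images = json.loads(sample)
# Index tag -> size in bytes; one image may carry several tags.
sizes = {tag: int(img["size"]) for img in images for tag in img["repoTags"]}
print(sizes["registry.k8s.io/pause:latest"])  # 240000
```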

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageListYaml (0.24s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageListYaml
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListYaml

=== CONT  TestFunctional/parallel/ImageCommands/ImageListYaml
functional_test.go:261: (dbg) Run:  out/minikube-linux-amd64 -p functional-180257 image ls --format yaml --alsologtostderr
functional_test.go:266: (dbg) Stdout: out/minikube-linux-amd64 -p functional-180257 image ls --format yaml --alsologtostderr:
- id: 873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136
repoDigests: []
repoTags:
- registry.k8s.io/pause:3.10
size: "736000"
- id: 6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562
repoDigests: []
repoTags:
- gcr.io/k8s-minikube/storage-provisioner:v5
size: "31500000"
- id: 0184c1613d92931126feb4c548e5da11015513b9e4c104e7305ee8b53b50a9da
repoDigests: []
repoTags:
- registry.k8s.io/pause:3.3
size: "683000"
- id: 56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c
repoDigests: []
repoTags:
- gcr.io/k8s-minikube/busybox:1.28.4-glibc
size: "4400000"
- id: da86e6ba6ca197bf6bc5e9d900febd906b133eaa4750e6bed647b0fbe50ed43e
repoDigests: []
repoTags:
- registry.k8s.io/pause:3.1
size: "742000"
- id: 6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee
repoDigests: []
repoTags:
- registry.k8s.io/kube-apiserver:v1.31.1
size: "94200000"
- id: c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6
repoDigests: []
repoTags:
- registry.k8s.io/coredns/coredns:v1.11.3
size: "61800000"
- id: 9056ab77afb8e18e04303f11000a9d31b3f16b74c59475b899ae1b342d328d30
repoDigests: []
repoTags:
- docker.io/kicbase/echo-server:functional-180257
size: "4940000"
- id: 82e4c8a736a4fcf22b5ef9f6a4ff6207064c7187d7694bf97bd561605a538410
repoDigests: []
repoTags:
- registry.k8s.io/echoserver:1.8
size: "95400000"
- id: 39286ab8a5e14aeaf5fdd6e2fac76e0c8d31a0c07224f0ee5e6be502f12e93f3
repoDigests: []
repoTags:
- docker.io/library/nginx:latest
size: "188000000"
- id: 2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4
repoDigests: []
repoTags:
- registry.k8s.io/etcd:3.5.15-0
size: "148000000"
- id: 07655ddf2eebe5d250f7a72c25f638b27126805d61779741b4e62e69ba080558
repoDigests: []
repoTags:
- docker.io/kubernetesui/dashboard:<none>
size: "246000000"
- id: 9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b
repoDigests: []
repoTags:
- registry.k8s.io/kube-scheduler:v1.31.1
size: "67400000"
- id: c7b4f26a7d93f4f1f276c51adb03ef0df54a82de89f254a9aec5c18bf0e45ee9
repoDigests: []
repoTags:
- docker.io/library/nginx:alpine
size: "43200000"
- id: 60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561
repoDigests: []
repoTags:
- registry.k8s.io/kube-proxy:v1.31.1
size: "91500000"
- id: 350b164e7ae1dcddeffadd65c76226c9b6dc5553f5179153fb0e36b78f2a5e06
repoDigests: []
repoTags:
- registry.k8s.io/pause:latest
size: "240000"
- id: f26cb45b29a90d704bff895f0982852d41c7710a0541ef517836eaf89485d6cd
repoDigests: []
repoTags:
- docker.io/library/minikube-local-cache-test:functional-180257
size: "30"
- id: 175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1
repoDigests: []
repoTags:
- registry.k8s.io/kube-controller-manager:v1.31.1
size: "88400000"

functional_test.go:269: (dbg) Stderr: out/minikube-linux-amd64 -p functional-180257 image ls --format yaml --alsologtostderr:
I0918 19:55:34.275192   71588 out.go:345] Setting OutFile to fd 1 ...
I0918 19:55:34.275621   71588 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0918 19:55:34.275646   71588 out.go:358] Setting ErrFile to fd 2...
I0918 19:55:34.275654   71588 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0918 19:55:34.276080   71588 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19667-7499/.minikube/bin
I0918 19:55:34.277549   71588 config.go:182] Loaded profile config "functional-180257": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.31.1
I0918 19:55:34.277744   71588 config.go:182] Loaded profile config "functional-180257": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.31.1
I0918 19:55:34.278305   71588 cli_runner.go:164] Run: docker container inspect functional-180257 --format={{.State.Status}}
I0918 19:55:34.296444   71588 ssh_runner.go:195] Run: systemctl --version
I0918 19:55:34.296503   71588 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-180257
I0918 19:55:34.317876   71588 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32778 SSHKeyPath:/home/jenkins/minikube-integration/19667-7499/.minikube/machines/functional-180257/id_rsa Username:docker}
I0918 19:55:34.418131   71588 ssh_runner.go:195] Run: docker images --no-trunc --format "{{json .}}"
--- PASS: TestFunctional/parallel/ImageCommands/ImageListYaml (0.24s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageBuild (3.46s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageBuild
=== PAUSE TestFunctional/parallel/ImageCommands/ImageBuild

=== CONT  TestFunctional/parallel/ImageCommands/ImageBuild
functional_test.go:308: (dbg) Run:  out/minikube-linux-amd64 -p functional-180257 ssh pgrep buildkitd
functional_test.go:308: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-180257 ssh pgrep buildkitd: exit status 1 (280.580052ms)

** stderr ** 
	ssh: Process exited with status 1

** /stderr **
functional_test.go:315: (dbg) Run:  out/minikube-linux-amd64 -p functional-180257 image build -t localhost/my-image:functional-180257 testdata/build --alsologtostderr
functional_test.go:315: (dbg) Done: out/minikube-linux-amd64 -p functional-180257 image build -t localhost/my-image:functional-180257 testdata/build --alsologtostderr: (2.974184629s)
functional_test.go:323: (dbg) Stderr: out/minikube-linux-amd64 -p functional-180257 image build -t localhost/my-image:functional-180257 testdata/build --alsologtostderr:
I0918 19:55:34.661749   71783 out.go:345] Setting OutFile to fd 1 ...
I0918 19:55:34.661908   71783 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0918 19:55:34.661921   71783 out.go:358] Setting ErrFile to fd 2...
I0918 19:55:34.661927   71783 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0918 19:55:34.662250   71783 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19667-7499/.minikube/bin
I0918 19:55:34.664654   71783 config.go:182] Loaded profile config "functional-180257": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.31.1
I0918 19:55:34.665512   71783 config.go:182] Loaded profile config "functional-180257": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.31.1
I0918 19:55:34.666093   71783 cli_runner.go:164] Run: docker container inspect functional-180257 --format={{.State.Status}}
I0918 19:55:34.685377   71783 ssh_runner.go:195] Run: systemctl --version
I0918 19:55:34.685427   71783 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-180257
I0918 19:55:34.702167   71783 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32778 SSHKeyPath:/home/jenkins/minikube-integration/19667-7499/.minikube/machines/functional-180257/id_rsa Username:docker}
I0918 19:55:34.809556   71783 build_images.go:161] Building image from path: /tmp/build.3934220034.tar
I0918 19:55:34.809631   71783 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/build
I0918 19:55:34.819979   71783 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/build/build.3934220034.tar
I0918 19:55:34.823615   71783 ssh_runner.go:352] existence check for /var/lib/minikube/build/build.3934220034.tar: stat -c "%s %y" /var/lib/minikube/build/build.3934220034.tar: Process exited with status 1
stdout:

stderr:
stat: cannot statx '/var/lib/minikube/build/build.3934220034.tar': No such file or directory
I0918 19:55:34.823662   71783 ssh_runner.go:362] scp /tmp/build.3934220034.tar --> /var/lib/minikube/build/build.3934220034.tar (3072 bytes)
I0918 19:55:34.852214   71783 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/build/build.3934220034
I0918 19:55:34.862027   71783 ssh_runner.go:195] Run: sudo tar -C /var/lib/minikube/build/build.3934220034 -xf /var/lib/minikube/build/build.3934220034.tar
I0918 19:55:34.871602   71783 docker.go:360] Building image: /var/lib/minikube/build/build.3934220034
I0918 19:55:34.871673   71783 ssh_runner.go:195] Run: docker build -t localhost/my-image:functional-180257 /var/lib/minikube/build/build.3934220034
#0 building with "default" instance using docker driver

#1 [internal] load build definition from Dockerfile
#1 transferring dockerfile: 97B done
#1 DONE 0.0s

#2 [internal] load metadata for gcr.io/k8s-minikube/busybox:latest
#2 DONE 1.5s

#3 [internal] load .dockerignore
#3 transferring context: 2B done
#3 DONE 0.0s

#4 [internal] load build context
#4 transferring context: 62B done
#4 DONE 0.0s

#5 [1/3] FROM gcr.io/k8s-minikube/busybox:latest@sha256:ca5ae90100d50772da31f3b5016209e25ad61972404e2ccd83d44f10dee7e79b
#5 resolve gcr.io/k8s-minikube/busybox:latest@sha256:ca5ae90100d50772da31f3b5016209e25ad61972404e2ccd83d44f10dee7e79b 0.0s done
#5 sha256:62ffc2ed7554e4c6d360bce40bbcf196573dd27c4ce080641a2c59867e732dee 527B / 527B done
#5 sha256:beae173ccac6ad749f76713cf4440fe3d21d1043fe616dfbe30775815d1d0f6a 1.46kB / 1.46kB done
#5 sha256:5cc84ad355aaa64f46ea9c7bbcc319a9d808ab15088a27209c9e70ef86e5a2aa 0B / 772.79kB 0.1s
#5 sha256:ca5ae90100d50772da31f3b5016209e25ad61972404e2ccd83d44f10dee7e79b 770B / 770B done
#5 extracting sha256:5cc84ad355aaa64f46ea9c7bbcc319a9d808ab15088a27209c9e70ef86e5a2aa
#5 sha256:5cc84ad355aaa64f46ea9c7bbcc319a9d808ab15088a27209c9e70ef86e5a2aa 772.79kB / 772.79kB 0.5s done
#5 extracting sha256:5cc84ad355aaa64f46ea9c7bbcc319a9d808ab15088a27209c9e70ef86e5a2aa done
#5 DONE 0.6s

#6 [2/3] RUN true
#6 DONE 0.2s

#7 [3/3] ADD content.txt /
#7 DONE 0.0s

#8 exporting to image
#8 exporting layers 0.0s done
#8 writing image sha256:e0412c9e4915fac976e486e3f5992daa58c763bcc8c745ab9c374fe6381cefce done
#8 naming to localhost/my-image:functional-180257 done
#8 DONE 0.0s
I0918 19:55:37.564283   71783 ssh_runner.go:235] Completed: docker build -t localhost/my-image:functional-180257 /var/lib/minikube/build/build.3934220034: (2.692583615s)
I0918 19:55:37.564363   71783 ssh_runner.go:195] Run: sudo rm -rf /var/lib/minikube/build/build.3934220034
I0918 19:55:37.574299   71783 ssh_runner.go:195] Run: sudo rm -f /var/lib/minikube/build/build.3934220034.tar
I0918 19:55:37.583520   71783 build_images.go:217] Built localhost/my-image:functional-180257 from /tmp/build.3934220034.tar
I0918 19:55:37.583555   71783 build_images.go:133] succeeded building to: functional-180257
I0918 19:55:37.583562   71783 build_images.go:134] failed building to: 
functional_test.go:451: (dbg) Run:  out/minikube-linux-amd64 -p functional-180257 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageBuild (3.46s)

TestFunctional/parallel/ImageCommands/Setup (1.94s)

=== RUN   TestFunctional/parallel/ImageCommands/Setup
functional_test.go:342: (dbg) Run:  docker pull kicbase/echo-server:1.0
functional_test.go:342: (dbg) Done: docker pull kicbase/echo-server:1.0: (1.912130237s)
functional_test.go:347: (dbg) Run:  docker tag kicbase/echo-server:1.0 kicbase/echo-server:functional-180257
--- PASS: TestFunctional/parallel/ImageCommands/Setup (1.94s)

TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP (0.05s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP
functional_test_tunnel_test.go:234: (dbg) Run:  kubectl --context functional-180257 get svc nginx-svc -o jsonpath={.status.loadBalancer.ingress[0].ip}
--- PASS: TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP (0.05s)

TestFunctional/parallel/TunnelCmd/serial/AccessDirect (0s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/AccessDirect
functional_test_tunnel_test.go:299: tunnel at http://10.104.29.57 is working!
--- PASS: TestFunctional/parallel/TunnelCmd/serial/AccessDirect (0.00s)

TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel (0.11s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel
functional_test_tunnel_test.go:434: (dbg) stopping [out/minikube-linux-amd64 -p functional-180257 tunnel --alsologtostderr] ...
--- PASS: TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel (0.11s)

TestFunctional/parallel/ProfileCmd/profile_not_create (0.41s)

=== RUN   TestFunctional/parallel/ProfileCmd/profile_not_create
functional_test.go:1270: (dbg) Run:  out/minikube-linux-amd64 profile lis
functional_test.go:1275: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestFunctional/parallel/ProfileCmd/profile_not_create (0.41s)

TestFunctional/parallel/ProfileCmd/profile_list (0.39s)

=== RUN   TestFunctional/parallel/ProfileCmd/profile_list
functional_test.go:1310: (dbg) Run:  out/minikube-linux-amd64 profile list
functional_test.go:1315: Took "325.235439ms" to run "out/minikube-linux-amd64 profile list"
functional_test.go:1324: (dbg) Run:  out/minikube-linux-amd64 profile list -l
functional_test.go:1329: Took "60.401286ms" to run "out/minikube-linux-amd64 profile list -l"
--- PASS: TestFunctional/parallel/ProfileCmd/profile_list (0.39s)

TestFunctional/parallel/MountCmd/any-port (7.96s)

=== RUN   TestFunctional/parallel/MountCmd/any-port
functional_test_mount_test.go:73: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-180257 /tmp/TestFunctionalparallelMountCmdany-port2260529591/001:/mount-9p --alsologtostderr -v=1]
functional_test_mount_test.go:107: wrote "test-1726689322278403617" to /tmp/TestFunctionalparallelMountCmdany-port2260529591/001/created-by-test
functional_test_mount_test.go:107: wrote "test-1726689322278403617" to /tmp/TestFunctionalparallelMountCmdany-port2260529591/001/created-by-test-removed-by-pod
functional_test_mount_test.go:107: wrote "test-1726689322278403617" to /tmp/TestFunctionalparallelMountCmdany-port2260529591/001/test-1726689322278403617
functional_test_mount_test.go:115: (dbg) Run:  out/minikube-linux-amd64 -p functional-180257 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:115: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-180257 ssh "findmnt -T /mount-9p | grep 9p": exit status 1 (297.407818ms)
** stderr ** 
	ssh: Process exited with status 1
** /stderr **
I0918 19:55:22.576162   14329 retry.go:31] will retry after 565.38201ms: exit status 1
functional_test_mount_test.go:115: (dbg) Run:  out/minikube-linux-amd64 -p functional-180257 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:129: (dbg) Run:  out/minikube-linux-amd64 -p functional-180257 ssh -- ls -la /mount-9p
functional_test_mount_test.go:133: guest mount directory contents
total 2
-rw-r--r-- 1 docker docker 24 Sep 18 19:55 created-by-test
-rw-r--r-- 1 docker docker 24 Sep 18 19:55 created-by-test-removed-by-pod
-rw-r--r-- 1 docker docker 24 Sep 18 19:55 test-1726689322278403617
functional_test_mount_test.go:137: (dbg) Run:  out/minikube-linux-amd64 -p functional-180257 ssh cat /mount-9p/test-1726689322278403617
functional_test_mount_test.go:148: (dbg) Run:  kubectl --context functional-180257 replace --force -f testdata/busybox-mount-test.yaml
functional_test_mount_test.go:153: (dbg) TestFunctional/parallel/MountCmd/any-port: waiting 4m0s for pods matching "integration-test=busybox-mount" in namespace "default" ...
helpers_test.go:344: "busybox-mount" [bf49b15c-7eb8-47fc-a1b1-3ad130e85585] Pending / Ready:ContainersNotReady (containers with unready status: [mount-munger]) / ContainersReady:ContainersNotReady (containers with unready status: [mount-munger])
helpers_test.go:344: "busybox-mount" [bf49b15c-7eb8-47fc-a1b1-3ad130e85585] Pending / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
helpers_test.go:344: "busybox-mount" [bf49b15c-7eb8-47fc-a1b1-3ad130e85585] Succeeded / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
functional_test_mount_test.go:153: (dbg) TestFunctional/parallel/MountCmd/any-port: integration-test=busybox-mount healthy within 5.01453973s
functional_test_mount_test.go:169: (dbg) Run:  kubectl --context functional-180257 logs busybox-mount
functional_test_mount_test.go:181: (dbg) Run:  out/minikube-linux-amd64 -p functional-180257 ssh stat /mount-9p/created-by-test
functional_test_mount_test.go:181: (dbg) Run:  out/minikube-linux-amd64 -p functional-180257 ssh stat /mount-9p/created-by-pod
functional_test_mount_test.go:90: (dbg) Run:  out/minikube-linux-amd64 -p functional-180257 ssh "sudo umount -f /mount-9p"
functional_test_mount_test.go:94: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-180257 /tmp/TestFunctionalparallelMountCmdany-port2260529591/001:/mount-9p --alsologtostderr -v=1] ...
--- PASS: TestFunctional/parallel/MountCmd/any-port (7.96s)
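The `retry.go:31` lines above come from the harness re-running a failed check after a short delay ("will retry after 565.38201ms: exit status 1"). A minimal shell sketch of that retry-with-backoff pattern; the `retry` helper and its delay schedule are hypothetical, not minikube's actual implementation:

```shell
#!/bin/sh
# Hypothetical retry-with-backoff helper: re-run a command until it
# succeeds or the attempt budget is exhausted, doubling the delay
# between attempts.
retry() {
  attempts=$1; shift
  delay=0.1
  i=1
  while [ "$i" -le "$attempts" ]; do
    if "$@"; then
      return 0                        # command succeeded; stop retrying
    fi
    status=$?
    if [ "$i" -lt "$attempts" ]; then
      echo "will retry after ${delay}s: exit status $status" >&2
      sleep "$delay"
      delay=$(awk -v d="$delay" 'BEGIN { print d * 2 }')  # back off
    fi
    i=$((i + 1))
  done
  return 1                            # attempt budget exhausted
}
```

In the mount test, the check being retried is `findmnt -T /mount-9p | grep 9p`, which fails until the 9p mount appears in the guest.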

TestFunctional/parallel/ProfileCmd/profile_json_output (0.41s)

=== RUN   TestFunctional/parallel/ProfileCmd/profile_json_output
functional_test.go:1361: (dbg) Run:  out/minikube-linux-amd64 profile list -o json
functional_test.go:1366: Took "360.63689ms" to run "out/minikube-linux-amd64 profile list -o json"
functional_test.go:1374: (dbg) Run:  out/minikube-linux-amd64 profile list -o json --light
functional_test.go:1379: Took "46.484076ms" to run "out/minikube-linux-amd64 profile list -o json --light"
--- PASS: TestFunctional/parallel/ProfileCmd/profile_json_output (0.41s)

TestFunctional/parallel/ImageCommands/ImageLoadDaemon (0.99s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageLoadDaemon
functional_test.go:355: (dbg) Run:  out/minikube-linux-amd64 -p functional-180257 image load --daemon kicbase/echo-server:functional-180257 --alsologtostderr
functional_test.go:451: (dbg) Run:  out/minikube-linux-amd64 -p functional-180257 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageLoadDaemon (0.99s)

TestFunctional/parallel/ImageCommands/ImageReloadDaemon (0.82s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageReloadDaemon
functional_test.go:365: (dbg) Run:  out/minikube-linux-amd64 -p functional-180257 image load --daemon kicbase/echo-server:functional-180257 --alsologtostderr
functional_test.go:451: (dbg) Run:  out/minikube-linux-amd64 -p functional-180257 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageReloadDaemon (0.82s)

TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon (1.66s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon
functional_test.go:235: (dbg) Run:  docker pull kicbase/echo-server:latest
functional_test.go:240: (dbg) Run:  docker tag kicbase/echo-server:latest kicbase/echo-server:functional-180257
functional_test.go:245: (dbg) Run:  out/minikube-linux-amd64 -p functional-180257 image load --daemon kicbase/echo-server:functional-180257 --alsologtostderr
functional_test.go:451: (dbg) Run:  out/minikube-linux-amd64 -p functional-180257 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon (1.66s)

TestFunctional/parallel/ImageCommands/ImageSaveToFile (0.29s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageSaveToFile
functional_test.go:380: (dbg) Run:  out/minikube-linux-amd64 -p functional-180257 image save kicbase/echo-server:functional-180257 /home/jenkins/workspace/Docker_Linux_integration/echo-server-save.tar --alsologtostderr
--- PASS: TestFunctional/parallel/ImageCommands/ImageSaveToFile (0.29s)

TestFunctional/parallel/ImageCommands/ImageRemove (0.39s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageRemove
functional_test.go:392: (dbg) Run:  out/minikube-linux-amd64 -p functional-180257 image rm kicbase/echo-server:functional-180257 --alsologtostderr
functional_test.go:451: (dbg) Run:  out/minikube-linux-amd64 -p functional-180257 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageRemove (0.39s)

TestFunctional/parallel/ImageCommands/ImageLoadFromFile (0.61s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageLoadFromFile
functional_test.go:409: (dbg) Run:  out/minikube-linux-amd64 -p functional-180257 image load /home/jenkins/workspace/Docker_Linux_integration/echo-server-save.tar --alsologtostderr
functional_test.go:451: (dbg) Run:  out/minikube-linux-amd64 -p functional-180257 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageLoadFromFile (0.61s)

TestFunctional/parallel/ImageCommands/ImageSaveDaemon (0.33s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageSaveDaemon
functional_test.go:419: (dbg) Run:  docker rmi kicbase/echo-server:functional-180257
functional_test.go:424: (dbg) Run:  out/minikube-linux-amd64 -p functional-180257 image save --daemon kicbase/echo-server:functional-180257 --alsologtostderr
functional_test.go:432: (dbg) Run:  docker image inspect kicbase/echo-server:functional-180257
--- PASS: TestFunctional/parallel/ImageCommands/ImageSaveDaemon (0.33s)

TestFunctional/parallel/DockerEnv/bash (1.13s)

=== RUN   TestFunctional/parallel/DockerEnv/bash
functional_test.go:499: (dbg) Run:  /bin/bash -c "eval $(out/minikube-linux-amd64 -p functional-180257 docker-env) && out/minikube-linux-amd64 status -p functional-180257"
functional_test.go:522: (dbg) Run:  /bin/bash -c "eval $(out/minikube-linux-amd64 -p functional-180257 docker-env) && docker images"
--- PASS: TestFunctional/parallel/DockerEnv/bash (1.13s)
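The `docker-env` test works because the subcommand prints `export` statements that the calling shell `eval`s, pointing subsequent `docker` invocations at the cluster's daemon instead of the host's. A canned sketch of that pattern; the variable values below are illustrative, not output captured from this run:

```shell
#!/bin/sh
# `minikube docker-env` emits shell code along these lines; eval-ing it
# redirects later `docker` calls to the cluster's Docker daemon.
# (Addresses and paths here are hypothetical.)
docker_env_output='export DOCKER_TLS_VERIFY="1"
export DOCKER_HOST="tcp://192.168.49.2:2376"
export DOCKER_CERT_PATH="/home/user/.minikube/certs"'

eval "$docker_env_output"
echo "docker now talks to: $DOCKER_HOST"
```

This is why the test wraps everything in one `bash -c "eval $(... docker-env) && docker images"`: the exports only affect that single shell's environment.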

TestFunctional/parallel/MountCmd/specific-port (1.8s)

=== RUN   TestFunctional/parallel/MountCmd/specific-port
functional_test_mount_test.go:213: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-180257 /tmp/TestFunctionalparallelMountCmdspecific-port1674879090/001:/mount-9p --alsologtostderr -v=1 --port 46464]
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-linux-amd64 -p functional-180257 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:243: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-180257 ssh "findmnt -T /mount-9p | grep 9p": exit status 1 (374.827303ms)
** stderr ** 
	ssh: Process exited with status 1
** /stderr **
I0918 19:55:30.615992   14329 retry.go:31] will retry after 329.930984ms: exit status 1
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-linux-amd64 -p functional-180257 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:257: (dbg) Run:  out/minikube-linux-amd64 -p functional-180257 ssh -- ls -la /mount-9p
functional_test_mount_test.go:261: guest mount directory contents
total 0
functional_test_mount_test.go:263: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-180257 /tmp/TestFunctionalparallelMountCmdspecific-port1674879090/001:/mount-9p --alsologtostderr -v=1 --port 46464] ...
functional_test_mount_test.go:264: reading mount text
functional_test_mount_test.go:278: done reading mount text
functional_test_mount_test.go:230: (dbg) Run:  out/minikube-linux-amd64 -p functional-180257 ssh "sudo umount -f /mount-9p"
functional_test_mount_test.go:230: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-180257 ssh "sudo umount -f /mount-9p": exit status 1 (280.00798ms)
-- stdout --
	umount: /mount-9p: not mounted.
-- /stdout --
** stderr ** 
	ssh: Process exited with status 32
** /stderr **
functional_test_mount_test.go:232: "out/minikube-linux-amd64 -p functional-180257 ssh \"sudo umount -f /mount-9p\"": exit status 1
functional_test_mount_test.go:234: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-180257 /tmp/TestFunctionalparallelMountCmdspecific-port1674879090/001:/mount-9p --alsologtostderr -v=1 --port 46464] ...
--- PASS: TestFunctional/parallel/MountCmd/specific-port (1.80s)

TestFunctional/parallel/UpdateContextCmd/no_changes (0.14s)

=== RUN   TestFunctional/parallel/UpdateContextCmd/no_changes
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_changes
=== CONT  TestFunctional/parallel/UpdateContextCmd/no_changes
functional_test.go:2119: (dbg) Run:  out/minikube-linux-amd64 -p functional-180257 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_changes (0.14s)

TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster (0.12s)

=== RUN   TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
=== CONT  TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
functional_test.go:2119: (dbg) Run:  out/minikube-linux-amd64 -p functional-180257 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster (0.12s)

TestFunctional/parallel/UpdateContextCmd/no_clusters (0.12s)

=== RUN   TestFunctional/parallel/UpdateContextCmd/no_clusters
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_clusters
=== CONT  TestFunctional/parallel/UpdateContextCmd/no_clusters
functional_test.go:2119: (dbg) Run:  out/minikube-linux-amd64 -p functional-180257 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_clusters (0.12s)

TestFunctional/parallel/MountCmd/VerifyCleanup (1.71s)

=== RUN   TestFunctional/parallel/MountCmd/VerifyCleanup
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-180257 /tmp/TestFunctionalparallelMountCmdVerifyCleanup958046022/001:/mount1 --alsologtostderr -v=1]
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-180257 /tmp/TestFunctionalparallelMountCmdVerifyCleanup958046022/001:/mount2 --alsologtostderr -v=1]
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-180257 /tmp/TestFunctionalparallelMountCmdVerifyCleanup958046022/001:/mount3 --alsologtostderr -v=1]
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-amd64 -p functional-180257 ssh "findmnt -T" /mount1
functional_test_mount_test.go:325: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-180257 ssh "findmnt -T" /mount1: exit status 1 (373.123472ms)
** stderr ** 
	ssh: Process exited with status 1
** /stderr **
I0918 19:55:32.416062   14329 retry.go:31] will retry after 520.95912ms: exit status 1
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-amd64 -p functional-180257 ssh "findmnt -T" /mount1
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-amd64 -p functional-180257 ssh "findmnt -T" /mount2
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-amd64 -p functional-180257 ssh "findmnt -T" /mount3
2024/09/18 19:55:33 [DEBUG] GET http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/
functional_test_mount_test.go:370: (dbg) Run:  out/minikube-linux-amd64 mount -p functional-180257 --kill=true
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-180257 /tmp/TestFunctionalparallelMountCmdVerifyCleanup958046022/001:/mount1 --alsologtostderr -v=1] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-180257 /tmp/TestFunctionalparallelMountCmdVerifyCleanup958046022/001:/mount2 --alsologtostderr -v=1] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-180257 /tmp/TestFunctionalparallelMountCmdVerifyCleanup958046022/001:/mount3 --alsologtostderr -v=1] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
--- PASS: TestFunctional/parallel/MountCmd/VerifyCleanup (1.71s)

TestFunctional/delete_echo-server_images (0.04s)

=== RUN   TestFunctional/delete_echo-server_images
functional_test.go:190: (dbg) Run:  docker rmi -f kicbase/echo-server:1.0
functional_test.go:190: (dbg) Run:  docker rmi -f kicbase/echo-server:functional-180257
--- PASS: TestFunctional/delete_echo-server_images (0.04s)

TestFunctional/delete_my-image_image (0.02s)

=== RUN   TestFunctional/delete_my-image_image
functional_test.go:198: (dbg) Run:  docker rmi -f localhost/my-image:functional-180257
--- PASS: TestFunctional/delete_my-image_image (0.02s)

TestFunctional/delete_minikube_cached_images (0.02s)

=== RUN   TestFunctional/delete_minikube_cached_images
functional_test.go:206: (dbg) Run:  docker rmi -f minikube-local-cache-test:functional-180257
--- PASS: TestFunctional/delete_minikube_cached_images (0.02s)

TestMultiControlPlane/serial/StartCluster (102.52s)

=== RUN   TestMultiControlPlane/serial/StartCluster
ha_test.go:101: (dbg) Run:  out/minikube-linux-amd64 start -p ha-926046 --wait=true --memory=2200 --ha -v=7 --alsologtostderr --driver=docker  --container-runtime=docker
E0918 19:56:51.889190   14329 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19667-7499/.minikube/profiles/addons-457129/client.crt: no such file or directory" logger="UnhandledError"
E0918 19:56:51.895614   14329 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19667-7499/.minikube/profiles/addons-457129/client.crt: no such file or directory" logger="UnhandledError"
E0918 19:56:51.906981   14329 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19667-7499/.minikube/profiles/addons-457129/client.crt: no such file or directory" logger="UnhandledError"
E0918 19:56:51.928378   14329 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19667-7499/.minikube/profiles/addons-457129/client.crt: no such file or directory" logger="UnhandledError"
E0918 19:56:51.970609   14329 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19667-7499/.minikube/profiles/addons-457129/client.crt: no such file or directory" logger="UnhandledError"
E0918 19:56:52.052081   14329 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19667-7499/.minikube/profiles/addons-457129/client.crt: no such file or directory" logger="UnhandledError"
E0918 19:56:52.213569   14329 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19667-7499/.minikube/profiles/addons-457129/client.crt: no such file or directory" logger="UnhandledError"
E0918 19:56:52.535292   14329 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19667-7499/.minikube/profiles/addons-457129/client.crt: no such file or directory" logger="UnhandledError"
E0918 19:56:53.176661   14329 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19667-7499/.minikube/profiles/addons-457129/client.crt: no such file or directory" logger="UnhandledError"
E0918 19:56:54.458242   14329 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19667-7499/.minikube/profiles/addons-457129/client.crt: no such file or directory" logger="UnhandledError"
E0918 19:56:57.019897   14329 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19667-7499/.minikube/profiles/addons-457129/client.crt: no such file or directory" logger="UnhandledError"
E0918 19:57:02.141818   14329 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19667-7499/.minikube/profiles/addons-457129/client.crt: no such file or directory" logger="UnhandledError"
E0918 19:57:12.383765   14329 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19667-7499/.minikube/profiles/addons-457129/client.crt: no such file or directory" logger="UnhandledError"
E0918 19:57:32.866035   14329 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19667-7499/.minikube/profiles/addons-457129/client.crt: no such file or directory" logger="UnhandledError"
ha_test.go:101: (dbg) Done: out/minikube-linux-amd64 start -p ha-926046 --wait=true --memory=2200 --ha -v=7 --alsologtostderr --driver=docker  --container-runtime=docker: (1m41.852915811s)
ha_test.go:107: (dbg) Run:  out/minikube-linux-amd64 -p ha-926046 status -v=7 --alsologtostderr
--- PASS: TestMultiControlPlane/serial/StartCluster (102.52s)

TestMultiControlPlane/serial/DeployApp (6.41s)

=== RUN   TestMultiControlPlane/serial/DeployApp
ha_test.go:128: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-926046 -- apply -f ./testdata/ha/ha-pod-dns-test.yaml
ha_test.go:133: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-926046 -- rollout status deployment/busybox
ha_test.go:133: (dbg) Done: out/minikube-linux-amd64 kubectl -p ha-926046 -- rollout status deployment/busybox: (4.522259573s)
ha_test.go:140: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-926046 -- get pods -o jsonpath='{.items[*].status.podIP}'
ha_test.go:163: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-926046 -- get pods -o jsonpath='{.items[*].metadata.name}'
ha_test.go:171: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-926046 -- exec busybox-7dff88458-t4cn7 -- nslookup kubernetes.io
ha_test.go:171: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-926046 -- exec busybox-7dff88458-xtzr5 -- nslookup kubernetes.io
ha_test.go:171: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-926046 -- exec busybox-7dff88458-xwlbm -- nslookup kubernetes.io
ha_test.go:181: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-926046 -- exec busybox-7dff88458-t4cn7 -- nslookup kubernetes.default
ha_test.go:181: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-926046 -- exec busybox-7dff88458-xtzr5 -- nslookup kubernetes.default
ha_test.go:181: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-926046 -- exec busybox-7dff88458-xwlbm -- nslookup kubernetes.default
ha_test.go:189: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-926046 -- exec busybox-7dff88458-t4cn7 -- nslookup kubernetes.default.svc.cluster.local
ha_test.go:189: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-926046 -- exec busybox-7dff88458-xtzr5 -- nslookup kubernetes.default.svc.cluster.local
ha_test.go:189: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-926046 -- exec busybox-7dff88458-xwlbm -- nslookup kubernetes.default.svc.cluster.local
--- PASS: TestMultiControlPlane/serial/DeployApp (6.41s)

TestMultiControlPlane/serial/PingHostFromPods (1.06s)

=== RUN   TestMultiControlPlane/serial/PingHostFromPods
ha_test.go:199: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-926046 -- get pods -o jsonpath='{.items[*].metadata.name}'
ha_test.go:207: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-926046 -- exec busybox-7dff88458-t4cn7 -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
ha_test.go:218: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-926046 -- exec busybox-7dff88458-t4cn7 -- sh -c "ping -c 1 192.168.49.1"
ha_test.go:207: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-926046 -- exec busybox-7dff88458-xtzr5 -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
ha_test.go:218: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-926046 -- exec busybox-7dff88458-xtzr5 -- sh -c "ping -c 1 192.168.49.1"
ha_test.go:207: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-926046 -- exec busybox-7dff88458-xwlbm -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
ha_test.go:218: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-926046 -- exec busybox-7dff88458-xwlbm -- sh -c "ping -c 1 192.168.49.1"
--- PASS: TestMultiControlPlane/serial/PingHostFromPods (1.06s)
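The host-ping test above extracts the gateway IP from `nslookup` output by taking field 3 of line 5 (`awk 'NR==5' | cut -d' ' -f3`). A sketch of that pipeline against canned input, since the real lookup needs the cluster's DNS; the sample text below is illustrative, not actual busybox output:

```shell
#!/bin/sh
# Canned nslookup-style output: line 5 is the "Address 1:" line whose
# third space-separated field is the resolved host IP.
nslookup_output='Server:    10.96.0.10
Address:   10.96.0.10:53

Name:      host.minikube.internal
Address 1: 192.168.49.1 host.minikube.internal'

# Same extraction the test runs inside the busybox pod.
ip=$(printf '%s\n' "$nslookup_output" | awk 'NR==5' | cut -d" " -f3)
echo "$ip"   # prints 192.168.49.1
```

The test then feeds that IP to `ping -c 1` to confirm each pod can reach the host.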

TestMultiControlPlane/serial/AddWorkerNode (19.99s)

=== RUN   TestMultiControlPlane/serial/AddWorkerNode
ha_test.go:228: (dbg) Run:  out/minikube-linux-amd64 node add -p ha-926046 -v=7 --alsologtostderr
ha_test.go:228: (dbg) Done: out/minikube-linux-amd64 node add -p ha-926046 -v=7 --alsologtostderr: (19.161800454s)
ha_test.go:234: (dbg) Run:  out/minikube-linux-amd64 -p ha-926046 status -v=7 --alsologtostderr
--- PASS: TestMultiControlPlane/serial/AddWorkerNode (19.99s)

TestMultiControlPlane/serial/NodeLabels (0.06s)

=== RUN   TestMultiControlPlane/serial/NodeLabels
ha_test.go:255: (dbg) Run:  kubectl --context ha-926046 get nodes -o "jsonpath=[{range .items[*]}{.metadata.labels},{end}]"
--- PASS: TestMultiControlPlane/serial/NodeLabels (0.06s)

TestMultiControlPlane/serial/HAppyAfterClusterStart (0.85s)

=== RUN   TestMultiControlPlane/serial/HAppyAfterClusterStart
ha_test.go:281: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiControlPlane/serial/HAppyAfterClusterStart (0.85s)

TestMultiControlPlane/serial/CopyFile (15.61s)

=== RUN   TestMultiControlPlane/serial/CopyFile
ha_test.go:326: (dbg) Run:  out/minikube-linux-amd64 -p ha-926046 status --output json -v=7 --alsologtostderr
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-926046 cp testdata/cp-test.txt ha-926046:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-926046 ssh -n ha-926046 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-926046 cp ha-926046:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile384644534/001/cp-test_ha-926046.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-926046 ssh -n ha-926046 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-926046 cp ha-926046:/home/docker/cp-test.txt ha-926046-m02:/home/docker/cp-test_ha-926046_ha-926046-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-926046 ssh -n ha-926046 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-926046 ssh -n ha-926046-m02 "sudo cat /home/docker/cp-test_ha-926046_ha-926046-m02.txt"
E0918 19:58:13.827876   14329 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19667-7499/.minikube/profiles/addons-457129/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-926046 cp ha-926046:/home/docker/cp-test.txt ha-926046-m03:/home/docker/cp-test_ha-926046_ha-926046-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-926046 ssh -n ha-926046 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-926046 ssh -n ha-926046-m03 "sudo cat /home/docker/cp-test_ha-926046_ha-926046-m03.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-926046 cp ha-926046:/home/docker/cp-test.txt ha-926046-m04:/home/docker/cp-test_ha-926046_ha-926046-m04.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-926046 ssh -n ha-926046 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-926046 ssh -n ha-926046-m04 "sudo cat /home/docker/cp-test_ha-926046_ha-926046-m04.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-926046 cp testdata/cp-test.txt ha-926046-m02:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-926046 ssh -n ha-926046-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-926046 cp ha-926046-m02:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile384644534/001/cp-test_ha-926046-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-926046 ssh -n ha-926046-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-926046 cp ha-926046-m02:/home/docker/cp-test.txt ha-926046:/home/docker/cp-test_ha-926046-m02_ha-926046.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-926046 ssh -n ha-926046-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-926046 ssh -n ha-926046 "sudo cat /home/docker/cp-test_ha-926046-m02_ha-926046.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-926046 cp ha-926046-m02:/home/docker/cp-test.txt ha-926046-m03:/home/docker/cp-test_ha-926046-m02_ha-926046-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-926046 ssh -n ha-926046-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-926046 ssh -n ha-926046-m03 "sudo cat /home/docker/cp-test_ha-926046-m02_ha-926046-m03.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-926046 cp ha-926046-m02:/home/docker/cp-test.txt ha-926046-m04:/home/docker/cp-test_ha-926046-m02_ha-926046-m04.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-926046 ssh -n ha-926046-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-926046 ssh -n ha-926046-m04 "sudo cat /home/docker/cp-test_ha-926046-m02_ha-926046-m04.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-926046 cp testdata/cp-test.txt ha-926046-m03:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-926046 ssh -n ha-926046-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-926046 cp ha-926046-m03:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile384644534/001/cp-test_ha-926046-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-926046 ssh -n ha-926046-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-926046 cp ha-926046-m03:/home/docker/cp-test.txt ha-926046:/home/docker/cp-test_ha-926046-m03_ha-926046.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-926046 ssh -n ha-926046-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-926046 ssh -n ha-926046 "sudo cat /home/docker/cp-test_ha-926046-m03_ha-926046.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-926046 cp ha-926046-m03:/home/docker/cp-test.txt ha-926046-m02:/home/docker/cp-test_ha-926046-m03_ha-926046-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-926046 ssh -n ha-926046-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-926046 ssh -n ha-926046-m02 "sudo cat /home/docker/cp-test_ha-926046-m03_ha-926046-m02.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-926046 cp ha-926046-m03:/home/docker/cp-test.txt ha-926046-m04:/home/docker/cp-test_ha-926046-m03_ha-926046-m04.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-926046 ssh -n ha-926046-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-926046 ssh -n ha-926046-m04 "sudo cat /home/docker/cp-test_ha-926046-m03_ha-926046-m04.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-926046 cp testdata/cp-test.txt ha-926046-m04:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-926046 ssh -n ha-926046-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-926046 cp ha-926046-m04:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile384644534/001/cp-test_ha-926046-m04.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-926046 ssh -n ha-926046-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-926046 cp ha-926046-m04:/home/docker/cp-test.txt ha-926046:/home/docker/cp-test_ha-926046-m04_ha-926046.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-926046 ssh -n ha-926046-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-926046 ssh -n ha-926046 "sudo cat /home/docker/cp-test_ha-926046-m04_ha-926046.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-926046 cp ha-926046-m04:/home/docker/cp-test.txt ha-926046-m02:/home/docker/cp-test_ha-926046-m04_ha-926046-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-926046 ssh -n ha-926046-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-926046 ssh -n ha-926046-m02 "sudo cat /home/docker/cp-test_ha-926046-m04_ha-926046-m02.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-926046 cp ha-926046-m04:/home/docker/cp-test.txt ha-926046-m03:/home/docker/cp-test_ha-926046-m04_ha-926046-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-926046 ssh -n ha-926046-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-926046 ssh -n ha-926046-m03 "sudo cat /home/docker/cp-test_ha-926046-m04_ha-926046-m03.txt"
--- PASS: TestMultiControlPlane/serial/CopyFile (15.61s)
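Every CopyFile step above is a write-then-read-back check: copy a fixture onto a node with `minikube cp`, read it back via `minikube ssh -n <node> "sudo cat ..."`, and compare against the source. A local simulation of that round trip (plain `cp` standing in for `minikube cp`, and a stand-in payload rather than the real testdata file):

```shell
# Local sketch of the cp-test round trip: write a payload, "copy it to a
# node" (plain cp here), read it back, and verify the contents match.
src=$(mktemp) && dst=$(mktemp)
printf 'cp-test payload\n' > "$src"   # stand-in for testdata/cp-test.txt
cp "$src" "$dst"                      # stands in for `minikube cp`
cmp -s "$src" "$dst" && echo "round trip OK"
```

The test repeats this pattern for every (source node, destination node) pair, which is why the log shows the same cp/ssh couplet dozens of times.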

TestMultiControlPlane/serial/StopSecondaryNode (11.34s)

=== RUN   TestMultiControlPlane/serial/StopSecondaryNode
ha_test.go:363: (dbg) Run:  out/minikube-linux-amd64 -p ha-926046 node stop m02 -v=7 --alsologtostderr
ha_test.go:363: (dbg) Done: out/minikube-linux-amd64 -p ha-926046 node stop m02 -v=7 --alsologtostderr: (10.681045081s)
ha_test.go:369: (dbg) Run:  out/minikube-linux-amd64 -p ha-926046 status -v=7 --alsologtostderr
ha_test.go:369: (dbg) Non-zero exit: out/minikube-linux-amd64 -p ha-926046 status -v=7 --alsologtostderr: exit status 7 (657.067938ms)

-- stdout --
	ha-926046
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-926046-m02
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	ha-926046-m03
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-926046-m04
	type: Worker
	host: Running
	kubelet: Running
	

-- /stdout --
** stderr ** 
	I0918 19:58:37.509721   99881 out.go:345] Setting OutFile to fd 1 ...
	I0918 19:58:37.509829   99881 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0918 19:58:37.509837   99881 out.go:358] Setting ErrFile to fd 2...
	I0918 19:58:37.509842   99881 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0918 19:58:37.510019   99881 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19667-7499/.minikube/bin
	I0918 19:58:37.510174   99881 out.go:352] Setting JSON to false
	I0918 19:58:37.510201   99881 mustload.go:65] Loading cluster: ha-926046
	I0918 19:58:37.510321   99881 notify.go:220] Checking for updates...
	I0918 19:58:37.510765   99881 config.go:182] Loaded profile config "ha-926046": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I0918 19:58:37.510789   99881 status.go:174] checking status of ha-926046 ...
	I0918 19:58:37.511218   99881 cli_runner.go:164] Run: docker container inspect ha-926046 --format={{.State.Status}}
	I0918 19:58:37.528995   99881 status.go:364] ha-926046 host status = "Running" (err=<nil>)
	I0918 19:58:37.529027   99881 host.go:66] Checking if "ha-926046" exists ...
	I0918 19:58:37.529331   99881 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-926046
	I0918 19:58:37.549202   99881 host.go:66] Checking if "ha-926046" exists ...
	I0918 19:58:37.549612   99881 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0918 19:58:37.549667   99881 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-926046
	I0918 19:58:37.568079   99881 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32783 SSHKeyPath:/home/jenkins/minikube-integration/19667-7499/.minikube/machines/ha-926046/id_rsa Username:docker}
	I0918 19:58:37.665845   99881 ssh_runner.go:195] Run: systemctl --version
	I0918 19:58:37.669640   99881 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0918 19:58:37.679637   99881 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0918 19:58:37.728367   99881 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:4 ContainersRunning:3 ContainersPaused:0 ContainersStopped:1 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:54 OomKillDisable:true NGoroutines:72 SystemTime:2024-09-18 19:58:37.7186738 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1069-gcp OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33647943680 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent Labels:[] ExperimentalBuild:false ServerVersion:27.2.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:7f7fdf5fed64eb6a7caf99b3e12efcf9d60e311c Expected:7f7fdf5fed64eb6a7caf99b3e12efcf9d60e311c} RuncCommit:{ID:v1.1.14-0-g2c9f560 Expected:v1.1.14-0-g2c9f560} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.16.2] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.29.2] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I0918 19:58:37.729050   99881 kubeconfig.go:125] found "ha-926046" server: "https://192.168.49.254:8443"
	I0918 19:58:37.729084   99881 api_server.go:166] Checking apiserver status ...
	I0918 19:58:37.729133   99881 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0918 19:58:37.741642   99881 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/2423/cgroup
	I0918 19:58:37.750852   99881 api_server.go:182] apiserver freezer: "9:freezer:/docker/641626af6791d8412ffd2fdc13cbda5f6fb70441fadda56da0a56590a61301b4/kubepods/burstable/pod0a06971a4209b3107734e6a692980b78/c5bc078d418d73c67699ff5c2a209389b3d9f0bd366e0314a78d2ad0c708bb12"
	I0918 19:58:37.750922   99881 ssh_runner.go:195] Run: sudo cat /sys/fs/cgroup/freezer/docker/641626af6791d8412ffd2fdc13cbda5f6fb70441fadda56da0a56590a61301b4/kubepods/burstable/pod0a06971a4209b3107734e6a692980b78/c5bc078d418d73c67699ff5c2a209389b3d9f0bd366e0314a78d2ad0c708bb12/freezer.state
	I0918 19:58:37.758936   99881 api_server.go:204] freezer state: "THAWED"
	I0918 19:58:37.758961   99881 api_server.go:253] Checking apiserver healthz at https://192.168.49.254:8443/healthz ...
	I0918 19:58:37.763101   99881 api_server.go:279] https://192.168.49.254:8443/healthz returned 200:
	ok
	I0918 19:58:37.763126   99881 status.go:456] ha-926046 apiserver status = Running (err=<nil>)
	I0918 19:58:37.763145   99881 status.go:176] ha-926046 status: &{Name:ha-926046 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0918 19:58:37.763169   99881 status.go:174] checking status of ha-926046-m02 ...
	I0918 19:58:37.763411   99881 cli_runner.go:164] Run: docker container inspect ha-926046-m02 --format={{.State.Status}}
	I0918 19:58:37.780458   99881 status.go:364] ha-926046-m02 host status = "Stopped" (err=<nil>)
	I0918 19:58:37.780483   99881 status.go:377] host is not running, skipping remaining checks
	I0918 19:58:37.780490   99881 status.go:176] ha-926046-m02 status: &{Name:ha-926046-m02 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0918 19:58:37.780522   99881 status.go:174] checking status of ha-926046-m03 ...
	I0918 19:58:37.780767   99881 cli_runner.go:164] Run: docker container inspect ha-926046-m03 --format={{.State.Status}}
	I0918 19:58:37.798184   99881 status.go:364] ha-926046-m03 host status = "Running" (err=<nil>)
	I0918 19:58:37.798209   99881 host.go:66] Checking if "ha-926046-m03" exists ...
	I0918 19:58:37.798451   99881 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-926046-m03
	I0918 19:58:37.815676   99881 host.go:66] Checking if "ha-926046-m03" exists ...
	I0918 19:58:37.815936   99881 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0918 19:58:37.815975   99881 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-926046-m03
	I0918 19:58:37.834125   99881 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32793 SSHKeyPath:/home/jenkins/minikube-integration/19667-7499/.minikube/machines/ha-926046-m03/id_rsa Username:docker}
	I0918 19:58:37.925921   99881 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0918 19:58:37.936771   99881 kubeconfig.go:125] found "ha-926046" server: "https://192.168.49.254:8443"
	I0918 19:58:37.936803   99881 api_server.go:166] Checking apiserver status ...
	I0918 19:58:37.936847   99881 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0918 19:58:37.947044   99881 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/2190/cgroup
	I0918 19:58:37.955824   99881 api_server.go:182] apiserver freezer: "9:freezer:/docker/781b2a279a291fb7e24608f66b8ea0b6d888f536451ef436e0829e5462012b9e/kubepods/burstable/pod0bcebcf91b8b277519cbbd8008c95122/b72824ae4da054b740627116bf2826991433f3587bdeeacfb42de150136d813d"
	I0918 19:58:37.955891   99881 ssh_runner.go:195] Run: sudo cat /sys/fs/cgroup/freezer/docker/781b2a279a291fb7e24608f66b8ea0b6d888f536451ef436e0829e5462012b9e/kubepods/burstable/pod0bcebcf91b8b277519cbbd8008c95122/b72824ae4da054b740627116bf2826991433f3587bdeeacfb42de150136d813d/freezer.state
	I0918 19:58:37.963445   99881 api_server.go:204] freezer state: "THAWED"
	I0918 19:58:37.963473   99881 api_server.go:253] Checking apiserver healthz at https://192.168.49.254:8443/healthz ...
	I0918 19:58:37.967019   99881 api_server.go:279] https://192.168.49.254:8443/healthz returned 200:
	ok
	I0918 19:58:37.967039   99881 status.go:456] ha-926046-m03 apiserver status = Running (err=<nil>)
	I0918 19:58:37.967046   99881 status.go:176] ha-926046-m03 status: &{Name:ha-926046-m03 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0918 19:58:37.967061   99881 status.go:174] checking status of ha-926046-m04 ...
	I0918 19:58:37.967298   99881 cli_runner.go:164] Run: docker container inspect ha-926046-m04 --format={{.State.Status}}
	I0918 19:58:37.985699   99881 status.go:364] ha-926046-m04 host status = "Running" (err=<nil>)
	I0918 19:58:37.985720   99881 host.go:66] Checking if "ha-926046-m04" exists ...
	I0918 19:58:37.985993   99881 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-926046-m04
	I0918 19:58:38.002675   99881 host.go:66] Checking if "ha-926046-m04" exists ...
	I0918 19:58:38.002961   99881 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0918 19:58:38.003012   99881 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-926046-m04
	I0918 19:58:38.020272   99881 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32798 SSHKeyPath:/home/jenkins/minikube-integration/19667-7499/.minikube/machines/ha-926046-m04/id_rsa Username:docker}
	I0918 19:58:38.113763   99881 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0918 19:58:38.124269   99881 status.go:176] ha-926046-m04 status: &{Name:ha-926046-m04 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
--- PASS: TestMultiControlPlane/serial/StopSecondaryNode (11.34s)
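The non-zero exit (status 7) from `minikube status` above is expected here: minikube reserves that exit code for a profile with at least one stopped node, which the test then confirms from the text output. A quick sketch of counting stopped hosts from status text like the stdout block above (sample trimmed to the `host:` lines):

```shell
# Count stopped hosts in `minikube status`-style text output; exit
# status 7 above reflects that m02 is the one stopped node.
status='ha-926046
host: Running
ha-926046-m02
host: Stopped
ha-926046-m03
host: Running
ha-926046-m04
host: Running'

stopped=$(printf '%s\n' "$status" | grep -c '^host: Stopped')
echo "$stopped node(s) stopped"
```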

TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop (0.67s)

=== RUN   TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop
ha_test.go:390: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop (0.67s)

TestMultiControlPlane/serial/RestartSecondaryNode (35.61s)

=== RUN   TestMultiControlPlane/serial/RestartSecondaryNode
ha_test.go:420: (dbg) Run:  out/minikube-linux-amd64 -p ha-926046 node start m02 -v=7 --alsologtostderr
ha_test.go:420: (dbg) Done: out/minikube-linux-amd64 -p ha-926046 node start m02 -v=7 --alsologtostderr: (34.708037349s)
ha_test.go:428: (dbg) Run:  out/minikube-linux-amd64 -p ha-926046 status -v=7 --alsologtostderr
ha_test.go:448: (dbg) Run:  kubectl get nodes
--- PASS: TestMultiControlPlane/serial/RestartSecondaryNode (35.61s)

TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart (0.85s)

=== RUN   TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart
ha_test.go:281: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart (0.85s)

TestMultiControlPlane/serial/RestartClusterKeepsNodes (176.52s)

=== RUN   TestMultiControlPlane/serial/RestartClusterKeepsNodes
ha_test.go:456: (dbg) Run:  out/minikube-linux-amd64 node list -p ha-926046 -v=7 --alsologtostderr
ha_test.go:462: (dbg) Run:  out/minikube-linux-amd64 stop -p ha-926046 -v=7 --alsologtostderr
E0918 19:59:35.749557   14329 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19667-7499/.minikube/profiles/addons-457129/client.crt: no such file or directory" logger="UnhandledError"
ha_test.go:462: (dbg) Done: out/minikube-linux-amd64 stop -p ha-926046 -v=7 --alsologtostderr: (33.556628486s)
ha_test.go:467: (dbg) Run:  out/minikube-linux-amd64 start -p ha-926046 --wait=true -v=7 --alsologtostderr
E0918 20:00:08.170194   14329 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19667-7499/.minikube/profiles/functional-180257/client.crt: no such file or directory" logger="UnhandledError"
E0918 20:00:08.176617   14329 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19667-7499/.minikube/profiles/functional-180257/client.crt: no such file or directory" logger="UnhandledError"
E0918 20:00:08.187979   14329 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19667-7499/.minikube/profiles/functional-180257/client.crt: no such file or directory" logger="UnhandledError"
E0918 20:00:08.209343   14329 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19667-7499/.minikube/profiles/functional-180257/client.crt: no such file or directory" logger="UnhandledError"
E0918 20:00:08.250806   14329 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19667-7499/.minikube/profiles/functional-180257/client.crt: no such file or directory" logger="UnhandledError"
E0918 20:00:08.332253   14329 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19667-7499/.minikube/profiles/functional-180257/client.crt: no such file or directory" logger="UnhandledError"
E0918 20:00:08.493881   14329 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19667-7499/.minikube/profiles/functional-180257/client.crt: no such file or directory" logger="UnhandledError"
E0918 20:00:08.815574   14329 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19667-7499/.minikube/profiles/functional-180257/client.crt: no such file or directory" logger="UnhandledError"
E0918 20:00:09.457635   14329 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19667-7499/.minikube/profiles/functional-180257/client.crt: no such file or directory" logger="UnhandledError"
E0918 20:00:10.739207   14329 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19667-7499/.minikube/profiles/functional-180257/client.crt: no such file or directory" logger="UnhandledError"
E0918 20:00:13.301507   14329 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19667-7499/.minikube/profiles/functional-180257/client.crt: no such file or directory" logger="UnhandledError"
E0918 20:00:18.423364   14329 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19667-7499/.minikube/profiles/functional-180257/client.crt: no such file or directory" logger="UnhandledError"
E0918 20:00:28.665067   14329 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19667-7499/.minikube/profiles/functional-180257/client.crt: no such file or directory" logger="UnhandledError"
E0918 20:00:49.146845   14329 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19667-7499/.minikube/profiles/functional-180257/client.crt: no such file or directory" logger="UnhandledError"
E0918 20:01:30.109031   14329 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19667-7499/.minikube/profiles/functional-180257/client.crt: no such file or directory" logger="UnhandledError"
E0918 20:01:51.889107   14329 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19667-7499/.minikube/profiles/addons-457129/client.crt: no such file or directory" logger="UnhandledError"
ha_test.go:467: (dbg) Done: out/minikube-linux-amd64 start -p ha-926046 --wait=true -v=7 --alsologtostderr: (2m22.877715728s)
ha_test.go:472: (dbg) Run:  out/minikube-linux-amd64 node list -p ha-926046
--- PASS: TestMultiControlPlane/serial/RestartClusterKeepsNodes (176.52s)

TestMultiControlPlane/serial/DeleteSecondaryNode (9.44s)

=== RUN   TestMultiControlPlane/serial/DeleteSecondaryNode
ha_test.go:487: (dbg) Run:  out/minikube-linux-amd64 -p ha-926046 node delete m03 -v=7 --alsologtostderr
E0918 20:02:19.590995   14329 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19667-7499/.minikube/profiles/addons-457129/client.crt: no such file or directory" logger="UnhandledError"
ha_test.go:487: (dbg) Done: out/minikube-linux-amd64 -p ha-926046 node delete m03 -v=7 --alsologtostderr: (8.6655674s)
ha_test.go:493: (dbg) Run:  out/minikube-linux-amd64 -p ha-926046 status -v=7 --alsologtostderr
ha_test.go:511: (dbg) Run:  kubectl get nodes
ha_test.go:519: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiControlPlane/serial/DeleteSecondaryNode (9.44s)
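The go-template in the final step prints one line per node's `Ready` condition status, and the test expects every line to read `True`. With three nodes remaining after `node delete m03`, the check on that templated output amounts to counting `True` lines; a sketch against a hypothetical sample of the output:

```shell
# Hypothetical templated output: one " True" line per Ready node
# condition (3 nodes remain after deleting m03).
ready=' True
 True
 True'

ok=$(printf '%s\n' "$ready" | grep -cx ' True')
echo "$ok/3 nodes Ready"
```

Any node reporting `False` or `Unknown` would lower the count and fail the equivalent assertion in the test.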

TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete (0.67s)

=== RUN   TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete
ha_test.go:390: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete (0.67s)

TestMultiControlPlane/serial/StopCluster (32.57s)

=== RUN   TestMultiControlPlane/serial/StopCluster
ha_test.go:531: (dbg) Run:  out/minikube-linux-amd64 -p ha-926046 stop -v=7 --alsologtostderr
E0918 20:02:52.030900   14329 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19667-7499/.minikube/profiles/functional-180257/client.crt: no such file or directory" logger="UnhandledError"
ha_test.go:531: (dbg) Done: out/minikube-linux-amd64 -p ha-926046 stop -v=7 --alsologtostderr: (32.467298651s)
ha_test.go:537: (dbg) Run:  out/minikube-linux-amd64 -p ha-926046 status -v=7 --alsologtostderr
ha_test.go:537: (dbg) Non-zero exit: out/minikube-linux-amd64 -p ha-926046 status -v=7 --alsologtostderr: exit status 7 (99.555774ms)

-- stdout --
	ha-926046
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	ha-926046-m02
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	ha-926046-m04
	type: Worker
	host: Stopped
	kubelet: Stopped
	

-- /stdout --
** stderr ** 
	I0918 20:02:54.394385  129535 out.go:345] Setting OutFile to fd 1 ...
	I0918 20:02:54.394506  129535 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0918 20:02:54.394512  129535 out.go:358] Setting ErrFile to fd 2...
	I0918 20:02:54.394517  129535 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0918 20:02:54.394723  129535 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19667-7499/.minikube/bin
	I0918 20:02:54.394918  129535 out.go:352] Setting JSON to false
	I0918 20:02:54.394948  129535 mustload.go:65] Loading cluster: ha-926046
	I0918 20:02:54.394989  129535 notify.go:220] Checking for updates...
	I0918 20:02:54.395385  129535 config.go:182] Loaded profile config "ha-926046": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I0918 20:02:54.395406  129535 status.go:174] checking status of ha-926046 ...
	I0918 20:02:54.395849  129535 cli_runner.go:164] Run: docker container inspect ha-926046 --format={{.State.Status}}
	I0918 20:02:54.413698  129535 status.go:364] ha-926046 host status = "Stopped" (err=<nil>)
	I0918 20:02:54.413718  129535 status.go:377] host is not running, skipping remaining checks
	I0918 20:02:54.413724  129535 status.go:176] ha-926046 status: &{Name:ha-926046 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0918 20:02:54.413749  129535 status.go:174] checking status of ha-926046-m02 ...
	I0918 20:02:54.413992  129535 cli_runner.go:164] Run: docker container inspect ha-926046-m02 --format={{.State.Status}}
	I0918 20:02:54.431385  129535 status.go:364] ha-926046-m02 host status = "Stopped" (err=<nil>)
	I0918 20:02:54.431429  129535 status.go:377] host is not running, skipping remaining checks
	I0918 20:02:54.431439  129535 status.go:176] ha-926046-m02 status: &{Name:ha-926046-m02 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0918 20:02:54.431464  129535 status.go:174] checking status of ha-926046-m04 ...
	I0918 20:02:54.431727  129535 cli_runner.go:164] Run: docker container inspect ha-926046-m04 --format={{.State.Status}}
	I0918 20:02:54.450202  129535 status.go:364] ha-926046-m04 host status = "Stopped" (err=<nil>)
	I0918 20:02:54.450236  129535 status.go:377] host is not running, skipping remaining checks
	I0918 20:02:54.450242  129535 status.go:176] ha-926046-m04 status: &{Name:ha-926046-m04 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
--- PASS: TestMultiControlPlane/serial/StopCluster (32.57s)

TestMultiControlPlane/serial/RestartCluster (81.68s)

=== RUN   TestMultiControlPlane/serial/RestartCluster
ha_test.go:560: (dbg) Run:  out/minikube-linux-amd64 start -p ha-926046 --wait=true -v=7 --alsologtostderr --driver=docker  --container-runtime=docker
ha_test.go:560: (dbg) Done: out/minikube-linux-amd64 start -p ha-926046 --wait=true -v=7 --alsologtostderr --driver=docker  --container-runtime=docker: (1m20.920856802s)
ha_test.go:566: (dbg) Run:  out/minikube-linux-amd64 -p ha-926046 status -v=7 --alsologtostderr
ha_test.go:584: (dbg) Run:  kubectl get nodes
ha_test.go:592: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiControlPlane/serial/RestartCluster (81.68s)

TestMultiControlPlane/serial/DegradedAfterClusterRestart (0.65s)

=== RUN   TestMultiControlPlane/serial/DegradedAfterClusterRestart
ha_test.go:390: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiControlPlane/serial/DegradedAfterClusterRestart (0.65s)

TestMultiControlPlane/serial/AddSecondaryNode (34.05s)

=== RUN   TestMultiControlPlane/serial/AddSecondaryNode
ha_test.go:605: (dbg) Run:  out/minikube-linux-amd64 node add -p ha-926046 --control-plane -v=7 --alsologtostderr
ha_test.go:605: (dbg) Done: out/minikube-linux-amd64 node add -p ha-926046 --control-plane -v=7 --alsologtostderr: (33.196470939s)
ha_test.go:611: (dbg) Run:  out/minikube-linux-amd64 -p ha-926046 status -v=7 --alsologtostderr
--- PASS: TestMultiControlPlane/serial/AddSecondaryNode (34.05s)

TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd (0.84s)

=== RUN   TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd
ha_test.go:281: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd (0.84s)

TestImageBuild/serial/Setup (25.05s)

=== RUN   TestImageBuild/serial/Setup
image_test.go:69: (dbg) Run:  out/minikube-linux-amd64 start -p image-093989 --driver=docker  --container-runtime=docker
E0918 20:05:08.170162   14329 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19667-7499/.minikube/profiles/functional-180257/client.crt: no such file or directory" logger="UnhandledError"
image_test.go:69: (dbg) Done: out/minikube-linux-amd64 start -p image-093989 --driver=docker  --container-runtime=docker: (25.046015149s)
--- PASS: TestImageBuild/serial/Setup (25.05s)

TestImageBuild/serial/NormalBuild (2.56s)

=== RUN   TestImageBuild/serial/NormalBuild
image_test.go:78: (dbg) Run:  out/minikube-linux-amd64 image build -t aaa:latest ./testdata/image-build/test-normal -p image-093989
image_test.go:78: (dbg) Done: out/minikube-linux-amd64 image build -t aaa:latest ./testdata/image-build/test-normal -p image-093989: (2.559363013s)
--- PASS: TestImageBuild/serial/NormalBuild (2.56s)

TestImageBuild/serial/BuildWithBuildArg (1.03s)

=== RUN   TestImageBuild/serial/BuildWithBuildArg
image_test.go:99: (dbg) Run:  out/minikube-linux-amd64 image build -t aaa:latest --build-opt=build-arg=ENV_A=test_env_str --build-opt=no-cache ./testdata/image-build/test-arg -p image-093989
image_test.go:99: (dbg) Done: out/minikube-linux-amd64 image build -t aaa:latest --build-opt=build-arg=ENV_A=test_env_str --build-opt=no-cache ./testdata/image-build/test-arg -p image-093989: (1.03264662s)
--- PASS: TestImageBuild/serial/BuildWithBuildArg (1.03s)

TestImageBuild/serial/BuildWithDockerIgnore (0.81s)

=== RUN   TestImageBuild/serial/BuildWithDockerIgnore
image_test.go:133: (dbg) Run:  out/minikube-linux-amd64 image build -t aaa:latest ./testdata/image-build/test-normal --build-opt=no-cache -p image-093989
--- PASS: TestImageBuild/serial/BuildWithDockerIgnore (0.81s)

TestImageBuild/serial/BuildWithSpecifiedDockerfile (0.76s)

=== RUN   TestImageBuild/serial/BuildWithSpecifiedDockerfile
image_test.go:88: (dbg) Run:  out/minikube-linux-amd64 image build -t aaa:latest -f inner/Dockerfile ./testdata/image-build/test-f -p image-093989
--- PASS: TestImageBuild/serial/BuildWithSpecifiedDockerfile (0.76s)

TestJSONOutput/start/Command (69.17s)

=== RUN   TestJSONOutput/start/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-amd64 start -p json-output-643467 --output=json --user=testUser --memory=2200 --wait=true --driver=docker  --container-runtime=docker
E0918 20:05:35.872577   14329 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19667-7499/.minikube/profiles/functional-180257/client.crt: no such file or directory" logger="UnhandledError"
json_output_test.go:63: (dbg) Done: out/minikube-linux-amd64 start -p json-output-643467 --output=json --user=testUser --memory=2200 --wait=true --driver=docker  --container-runtime=docker: (1m9.167079399s)
--- PASS: TestJSONOutput/start/Command (69.17s)

TestJSONOutput/start/Audit (0s)

=== RUN   TestJSONOutput/start/Audit
--- PASS: TestJSONOutput/start/Audit (0.00s)

TestJSONOutput/start/parallel/DistinctCurrentSteps (0s)

=== RUN   TestJSONOutput/start/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/start/parallel/DistinctCurrentSteps

=== CONT  TestJSONOutput/start/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/start/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/start/parallel/IncreasingCurrentSteps (0s)

=== RUN   TestJSONOutput/start/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/start/parallel/IncreasingCurrentSteps

=== CONT  TestJSONOutput/start/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/start/parallel/IncreasingCurrentSteps (0.00s)

TestJSONOutput/pause/Command (0.54s)

=== RUN   TestJSONOutput/pause/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-amd64 pause -p json-output-643467 --output=json --user=testUser
--- PASS: TestJSONOutput/pause/Command (0.54s)

TestJSONOutput/pause/Audit (0s)

=== RUN   TestJSONOutput/pause/Audit
--- PASS: TestJSONOutput/pause/Audit (0.00s)

TestJSONOutput/pause/parallel/DistinctCurrentSteps (0s)

=== RUN   TestJSONOutput/pause/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/pause/parallel/DistinctCurrentSteps

=== CONT  TestJSONOutput/pause/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/pause/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/pause/parallel/IncreasingCurrentSteps (0s)

=== RUN   TestJSONOutput/pause/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/pause/parallel/IncreasingCurrentSteps

=== CONT  TestJSONOutput/pause/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/pause/parallel/IncreasingCurrentSteps (0.00s)

TestJSONOutput/unpause/Command (0.41s)

=== RUN   TestJSONOutput/unpause/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-amd64 unpause -p json-output-643467 --output=json --user=testUser
--- PASS: TestJSONOutput/unpause/Command (0.41s)

TestJSONOutput/unpause/Audit (0s)

=== RUN   TestJSONOutput/unpause/Audit
--- PASS: TestJSONOutput/unpause/Audit (0.00s)

TestJSONOutput/unpause/parallel/DistinctCurrentSteps (0s)

=== RUN   TestJSONOutput/unpause/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/unpause/parallel/DistinctCurrentSteps

=== CONT  TestJSONOutput/unpause/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/unpause/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/unpause/parallel/IncreasingCurrentSteps (0s)

=== RUN   TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/unpause/parallel/IncreasingCurrentSteps

=== CONT  TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/unpause/parallel/IncreasingCurrentSteps (0.00s)

TestJSONOutput/stop/Command (5.71s)

=== RUN   TestJSONOutput/stop/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-amd64 stop -p json-output-643467 --output=json --user=testUser
json_output_test.go:63: (dbg) Done: out/minikube-linux-amd64 stop -p json-output-643467 --output=json --user=testUser: (5.706311532s)
--- PASS: TestJSONOutput/stop/Command (5.71s)

TestJSONOutput/stop/Audit (0s)

=== RUN   TestJSONOutput/stop/Audit
--- PASS: TestJSONOutput/stop/Audit (0.00s)

TestJSONOutput/stop/parallel/DistinctCurrentSteps (0s)

=== RUN   TestJSONOutput/stop/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/stop/parallel/DistinctCurrentSteps

=== CONT  TestJSONOutput/stop/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/stop/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/stop/parallel/IncreasingCurrentSteps (0s)

=== RUN   TestJSONOutput/stop/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/stop/parallel/IncreasingCurrentSteps

=== CONT  TestJSONOutput/stop/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/stop/parallel/IncreasingCurrentSteps (0.00s)

TestErrorJSONOutput (0.2s)

=== RUN   TestErrorJSONOutput
json_output_test.go:160: (dbg) Run:  out/minikube-linux-amd64 start -p json-output-error-727399 --memory=2200 --output=json --wait=true --driver=fail
json_output_test.go:160: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p json-output-error-727399 --memory=2200 --output=json --wait=true --driver=fail: exit status 56 (63.865921ms)

                                                
                                                
-- stdout --
	{"specversion":"1.0","id":"b1e2f0c9-d42f-42c4-aa02-f3ca304edddd","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"0","message":"[json-output-error-727399] minikube v1.34.0 on Ubuntu 20.04 (kvm/amd64)","name":"Initial Minikube Setup","totalsteps":"19"}}
	{"specversion":"1.0","id":"8ff348ba-9e01-464a-850e-e0dfca0e5d97","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_LOCATION=19667"}}
	{"specversion":"1.0","id":"0a3be2d2-daf8-48c8-a1c7-e71da4d12cb1","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true"}}
	{"specversion":"1.0","id":"430c493d-4768-4906-a781-9ce750dd3812","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"KUBECONFIG=/home/jenkins/minikube-integration/19667-7499/kubeconfig"}}
	{"specversion":"1.0","id":"0d0e343b-cc46-4325-a385-8027c07b1746","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_HOME=/home/jenkins/minikube-integration/19667-7499/.minikube"}}
	{"specversion":"1.0","id":"e17587d2-85ce-4c25-b901-7367b0e9258b","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_BIN=out/minikube-linux-amd64"}}
	{"specversion":"1.0","id":"a45ff305-0783-4133-aaed-e3a068e5ac6d","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_FORCE_SYSTEMD="}}
	{"specversion":"1.0","id":"a623858d-d361-4081-afc2-de4ac7952c17","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"advice":"","exitcode":"56","issues":"","message":"The driver 'fail' is not supported on linux/amd64","name":"DRV_UNSUPPORTED_OS","url":""}}

                                                
                                                
-- /stdout --
helpers_test.go:175: Cleaning up "json-output-error-727399" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p json-output-error-727399
--- PASS: TestErrorJSONOutput (0.20s)

TestKicCustomNetwork/create_custom_network (26.83s)

=== RUN   TestKicCustomNetwork/create_custom_network
kic_custom_network_test.go:57: (dbg) Run:  out/minikube-linux-amd64 start -p docker-network-841698 --network=
E0918 20:06:51.889916   14329 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19667-7499/.minikube/profiles/addons-457129/client.crt: no such file or directory" logger="UnhandledError"
kic_custom_network_test.go:57: (dbg) Done: out/minikube-linux-amd64 start -p docker-network-841698 --network=: (24.753144121s)
kic_custom_network_test.go:150: (dbg) Run:  docker network ls --format {{.Name}}
helpers_test.go:175: Cleaning up "docker-network-841698" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p docker-network-841698
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p docker-network-841698: (2.061406069s)
--- PASS: TestKicCustomNetwork/create_custom_network (26.83s)

TestKicCustomNetwork/use_default_bridge_network (24.02s)

=== RUN   TestKicCustomNetwork/use_default_bridge_network
kic_custom_network_test.go:57: (dbg) Run:  out/minikube-linux-amd64 start -p docker-network-022365 --network=bridge
kic_custom_network_test.go:57: (dbg) Done: out/minikube-linux-amd64 start -p docker-network-022365 --network=bridge: (22.149898729s)
kic_custom_network_test.go:150: (dbg) Run:  docker network ls --format {{.Name}}
helpers_test.go:175: Cleaning up "docker-network-022365" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p docker-network-022365
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p docker-network-022365: (1.847557083s)
--- PASS: TestKicCustomNetwork/use_default_bridge_network (24.02s)

TestKicExistingNetwork (25.8s)

=== RUN   TestKicExistingNetwork
I0918 20:07:39.971297   14329 cli_runner.go:164] Run: docker network inspect existing-network --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
W0918 20:07:39.988215   14329 cli_runner.go:211] docker network inspect existing-network --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
I0918 20:07:39.988291   14329 network_create.go:284] running [docker network inspect existing-network] to gather additional debugging logs...
I0918 20:07:39.988308   14329 cli_runner.go:164] Run: docker network inspect existing-network
W0918 20:07:40.004151   14329 cli_runner.go:211] docker network inspect existing-network returned with exit code 1
I0918 20:07:40.004179   14329 network_create.go:287] error running [docker network inspect existing-network]: docker network inspect existing-network: exit status 1
stdout:
[]

                                                
                                                
stderr:
Error response from daemon: network existing-network not found
I0918 20:07:40.004191   14329 network_create.go:289] output of [docker network inspect existing-network]: -- stdout --
[]

                                                
                                                
-- /stdout --
** stderr ** 
Error response from daemon: network existing-network not found

                                                
                                                
** /stderr **
I0918 20:07:40.004301   14329 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
I0918 20:07:40.021653   14329 network.go:211] skipping subnet 192.168.49.0/24 that is taken: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName:br-c1dbce424831 IfaceIPv4:192.168.49.1 IfaceMTU:1500 IfaceMAC:02:42:90:02:73:e1} reservation:<nil>}
I0918 20:07:40.022089   14329 network.go:206] using free private subnet 192.168.58.0/24: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc0013c15e0}
I0918 20:07:40.022115   14329 network_create.go:124] attempt to create docker network existing-network 192.168.58.0/24 with gateway 192.168.58.1 and MTU of 1500 ...
I0918 20:07:40.022153   14329 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.58.0/24 --gateway=192.168.58.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=existing-network existing-network
I0918 20:07:40.082379   14329 network_create.go:108] docker network existing-network 192.168.58.0/24 created
kic_custom_network_test.go:150: (dbg) Run:  docker network ls --format {{.Name}}
kic_custom_network_test.go:93: (dbg) Run:  out/minikube-linux-amd64 start -p existing-network-402009 --network=existing-network
kic_custom_network_test.go:93: (dbg) Done: out/minikube-linux-amd64 start -p existing-network-402009 --network=existing-network: (23.834966893s)
helpers_test.go:175: Cleaning up "existing-network-402009" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p existing-network-402009
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p existing-network-402009: (1.825231868s)
I0918 20:08:05.758634   14329 cli_runner.go:164] Run: docker network ls --filter=label=existing-network --format {{.Name}}
--- PASS: TestKicExistingNetwork (25.80s)

TestKicCustomSubnet (26.38s)

=== RUN   TestKicCustomSubnet
kic_custom_network_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p custom-subnet-976572 --subnet=192.168.60.0/24
kic_custom_network_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p custom-subnet-976572 --subnet=192.168.60.0/24: (24.341174337s)
kic_custom_network_test.go:161: (dbg) Run:  docker network inspect custom-subnet-976572 --format "{{(index .IPAM.Config 0).Subnet}}"
helpers_test.go:175: Cleaning up "custom-subnet-976572" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p custom-subnet-976572
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p custom-subnet-976572: (2.013718842s)
--- PASS: TestKicCustomSubnet (26.38s)

TestKicStaticIP (26.1s)

=== RUN   TestKicStaticIP
kic_custom_network_test.go:132: (dbg) Run:  out/minikube-linux-amd64 start -p static-ip-372822 --static-ip=192.168.200.200
kic_custom_network_test.go:132: (dbg) Done: out/minikube-linux-amd64 start -p static-ip-372822 --static-ip=192.168.200.200: (24.041250921s)
kic_custom_network_test.go:138: (dbg) Run:  out/minikube-linux-amd64 -p static-ip-372822 ip
helpers_test.go:175: Cleaning up "static-ip-372822" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p static-ip-372822
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p static-ip-372822: (1.941662153s)
--- PASS: TestKicStaticIP (26.10s)

TestMainNoArgs (0.04s)

=== RUN   TestMainNoArgs
main_test.go:68: (dbg) Run:  out/minikube-linux-amd64
--- PASS: TestMainNoArgs (0.04s)

TestMinikubeProfile (49.81s)

=== RUN   TestMinikubeProfile
minikube_profile_test.go:44: (dbg) Run:  out/minikube-linux-amd64 start -p first-762963 --driver=docker  --container-runtime=docker
minikube_profile_test.go:44: (dbg) Done: out/minikube-linux-amd64 start -p first-762963 --driver=docker  --container-runtime=docker: (20.597573555s)
minikube_profile_test.go:44: (dbg) Run:  out/minikube-linux-amd64 start -p second-776924 --driver=docker  --container-runtime=docker
minikube_profile_test.go:44: (dbg) Done: out/minikube-linux-amd64 start -p second-776924 --driver=docker  --container-runtime=docker: (24.012086118s)
minikube_profile_test.go:51: (dbg) Run:  out/minikube-linux-amd64 profile first-762963
minikube_profile_test.go:55: (dbg) Run:  out/minikube-linux-amd64 profile list -ojson
minikube_profile_test.go:51: (dbg) Run:  out/minikube-linux-amd64 profile second-776924
minikube_profile_test.go:55: (dbg) Run:  out/minikube-linux-amd64 profile list -ojson
helpers_test.go:175: Cleaning up "second-776924" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p second-776924
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p second-776924: (2.020972226s)
helpers_test.go:175: Cleaning up "first-762963" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p first-762963
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p first-762963: (2.052017457s)
--- PASS: TestMinikubeProfile (49.81s)

TestMountStart/serial/StartWithMountFirst (7.69s)

=== RUN   TestMountStart/serial/StartWithMountFirst
mount_start_test.go:98: (dbg) Run:  out/minikube-linux-amd64 start -p mount-start-1-658796 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=docker  --container-runtime=docker
mount_start_test.go:98: (dbg) Done: out/minikube-linux-amd64 start -p mount-start-1-658796 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=docker  --container-runtime=docker: (6.69400225s)
--- PASS: TestMountStart/serial/StartWithMountFirst (7.69s)

                                                
                                    
TestMountStart/serial/VerifyMountFirst (0.24s)

                                                
                                                
=== RUN   TestMountStart/serial/VerifyMountFirst
mount_start_test.go:114: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-1-658796 ssh -- ls /minikube-host
--- PASS: TestMountStart/serial/VerifyMountFirst (0.24s)

                                                
                                    
TestMountStart/serial/StartWithMountSecond (7.57s)

                                                
                                                
=== RUN   TestMountStart/serial/StartWithMountSecond
mount_start_test.go:98: (dbg) Run:  out/minikube-linux-amd64 start -p mount-start-2-671022 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46465 --mount-uid 0 --no-kubernetes --driver=docker  --container-runtime=docker
mount_start_test.go:98: (dbg) Done: out/minikube-linux-amd64 start -p mount-start-2-671022 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46465 --mount-uid 0 --no-kubernetes --driver=docker  --container-runtime=docker: (6.567583867s)
--- PASS: TestMountStart/serial/StartWithMountSecond (7.57s)

                                                
                                    
TestMountStart/serial/VerifyMountSecond (0.25s)

                                                
                                                
=== RUN   TestMountStart/serial/VerifyMountSecond
mount_start_test.go:114: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-671022 ssh -- ls /minikube-host
--- PASS: TestMountStart/serial/VerifyMountSecond (0.25s)

                                                
                                    
TestMountStart/serial/DeleteFirst (1.46s)

                                                
                                                
=== RUN   TestMountStart/serial/DeleteFirst
pause_test.go:132: (dbg) Run:  out/minikube-linux-amd64 delete -p mount-start-1-658796 --alsologtostderr -v=5
pause_test.go:132: (dbg) Done: out/minikube-linux-amd64 delete -p mount-start-1-658796 --alsologtostderr -v=5: (1.461672013s)
--- PASS: TestMountStart/serial/DeleteFirst (1.46s)

                                                
                                    
TestMountStart/serial/VerifyMountPostDelete (0.25s)

                                                
                                                
=== RUN   TestMountStart/serial/VerifyMountPostDelete
mount_start_test.go:114: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-671022 ssh -- ls /minikube-host
--- PASS: TestMountStart/serial/VerifyMountPostDelete (0.25s)

                                                
                                    
TestMountStart/serial/Stop (1.17s)

                                                
                                                
=== RUN   TestMountStart/serial/Stop
mount_start_test.go:155: (dbg) Run:  out/minikube-linux-amd64 stop -p mount-start-2-671022
mount_start_test.go:155: (dbg) Done: out/minikube-linux-amd64 stop -p mount-start-2-671022: (1.171712428s)
--- PASS: TestMountStart/serial/Stop (1.17s)

                                                
                                    
TestMountStart/serial/RestartStopped (8.88s)

                                                
                                                
=== RUN   TestMountStart/serial/RestartStopped
mount_start_test.go:166: (dbg) Run:  out/minikube-linux-amd64 start -p mount-start-2-671022
E0918 20:10:08.169558   14329 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19667-7499/.minikube/profiles/functional-180257/client.crt: no such file or directory" logger="UnhandledError"
mount_start_test.go:166: (dbg) Done: out/minikube-linux-amd64 start -p mount-start-2-671022: (7.875722628s)
--- PASS: TestMountStart/serial/RestartStopped (8.88s)

                                                
                                    
TestMountStart/serial/VerifyMountPostStop (0.24s)

                                                
                                                
=== RUN   TestMountStart/serial/VerifyMountPostStop
mount_start_test.go:114: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-671022 ssh -- ls /minikube-host
--- PASS: TestMountStart/serial/VerifyMountPostStop (0.24s)

                                                
                                    
TestMultiNode/serial/FreshStart2Nodes (72.96s)

                                                
                                                
=== RUN   TestMultiNode/serial/FreshStart2Nodes
multinode_test.go:96: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-063138 --wait=true --memory=2200 --nodes=2 -v=8 --alsologtostderr --driver=docker  --container-runtime=docker
multinode_test.go:96: (dbg) Done: out/minikube-linux-amd64 start -p multinode-063138 --wait=true --memory=2200 --nodes=2 -v=8 --alsologtostderr --driver=docker  --container-runtime=docker: (1m12.48339109s)
multinode_test.go:102: (dbg) Run:  out/minikube-linux-amd64 -p multinode-063138 status --alsologtostderr
--- PASS: TestMultiNode/serial/FreshStart2Nodes (72.96s)

                                                
                                    
TestMultiNode/serial/DeployApp2Nodes (39.31s)

                                                
                                                
=== RUN   TestMultiNode/serial/DeployApp2Nodes
multinode_test.go:493: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-063138 -- apply -f ./testdata/multinodes/multinode-pod-dns-test.yaml
multinode_test.go:498: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-063138 -- rollout status deployment/busybox
multinode_test.go:498: (dbg) Done: out/minikube-linux-amd64 kubectl -p multinode-063138 -- rollout status deployment/busybox: (3.295044733s)
multinode_test.go:505: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-063138 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:514: expected 2 Pod IPs but got 1 (may be temporary), output: "\n-- stdout --\n\t'10.244.0.3'\n\n-- /stdout --"
I0918 20:11:34.013556   14329 retry.go:31] will retry after 1.361678136s: expected 2 Pod IPs but got 1 (may be temporary), output: "\n-- stdout --\n\t'10.244.0.3'\n\n-- /stdout --"
multinode_test.go:505: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-063138 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:514: expected 2 Pod IPs but got 1 (may be temporary), output: "\n-- stdout --\n\t'10.244.0.3'\n\n-- /stdout --"
I0918 20:11:35.486210   14329 retry.go:31] will retry after 1.601308077s: expected 2 Pod IPs but got 1 (may be temporary), output: "\n-- stdout --\n\t'10.244.0.3'\n\n-- /stdout --"
multinode_test.go:505: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-063138 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:514: expected 2 Pod IPs but got 1 (may be temporary), output: "\n-- stdout --\n\t'10.244.0.3'\n\n-- /stdout --"
I0918 20:11:37.196313   14329 retry.go:31] will retry after 2.876577172s: expected 2 Pod IPs but got 1 (may be temporary), output: "\n-- stdout --\n\t'10.244.0.3'\n\n-- /stdout --"
multinode_test.go:505: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-063138 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:514: expected 2 Pod IPs but got 1 (may be temporary), output: "\n-- stdout --\n\t'10.244.0.3'\n\n-- /stdout --"
I0918 20:11:40.178535   14329 retry.go:31] will retry after 3.824175359s: expected 2 Pod IPs but got 1 (may be temporary), output: "\n-- stdout --\n\t'10.244.0.3'\n\n-- /stdout --"
multinode_test.go:505: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-063138 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:514: expected 2 Pod IPs but got 1 (may be temporary), output: "\n-- stdout --\n\t'10.244.0.3'\n\n-- /stdout --"
I0918 20:11:44.112170   14329 retry.go:31] will retry after 7.119738516s: expected 2 Pod IPs but got 1 (may be temporary), output: "\n-- stdout --\n\t'10.244.0.3'\n\n-- /stdout --"
multinode_test.go:505: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-063138 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:514: expected 2 Pod IPs but got 1 (may be temporary), output: "\n-- stdout --\n\t'10.244.0.3'\n\n-- /stdout --"
I0918 20:11:51.341631   14329 retry.go:31] will retry after 10.610806877s: expected 2 Pod IPs but got 1 (may be temporary), output: "\n-- stdout --\n\t'10.244.0.3'\n\n-- /stdout --"
E0918 20:11:51.889358   14329 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19667-7499/.minikube/profiles/addons-457129/client.crt: no such file or directory" logger="UnhandledError"
multinode_test.go:505: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-063138 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:514: expected 2 Pod IPs but got 1 (may be temporary), output: "\n-- stdout --\n\t'10.244.0.3'\n\n-- /stdout --"
I0918 20:12:02.061497   14329 retry.go:31] will retry after 6.471672922s: expected 2 Pod IPs but got 1 (may be temporary), output: "\n-- stdout --\n\t'10.244.0.3'\n\n-- /stdout --"
multinode_test.go:505: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-063138 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:528: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-063138 -- get pods -o jsonpath='{.items[*].metadata.name}'
multinode_test.go:536: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-063138 -- exec busybox-7dff88458-6qc5l -- nslookup kubernetes.io
multinode_test.go:536: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-063138 -- exec busybox-7dff88458-blt7l -- nslookup kubernetes.io
multinode_test.go:546: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-063138 -- exec busybox-7dff88458-6qc5l -- nslookup kubernetes.default
multinode_test.go:546: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-063138 -- exec busybox-7dff88458-blt7l -- nslookup kubernetes.default
multinode_test.go:554: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-063138 -- exec busybox-7dff88458-6qc5l -- nslookup kubernetes.default.svc.cluster.local
multinode_test.go:554: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-063138 -- exec busybox-7dff88458-blt7l -- nslookup kubernetes.default.svc.cluster.local
--- PASS: TestMultiNode/serial/DeployApp2Nodes (39.31s)
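The retries logged above rerun `kubectl get pods -o jsonpath='{.items[*].status.podIP}'` with backoff until the output contains two space-separated pod IPs, i.e. until the deployment has a pod scheduled on each node. A minimal self-contained sketch of just the counting step (the `count_pod_ips` helper name is hypothetical, not from the test code):

```shell
# Hypothetical helper mirroring the retry condition above: strip the
# quotes the test wraps around the jsonpath output, then count the
# whitespace-separated IPs; 2 means both busybox pods have an IP.
count_pod_ips() {
  echo "$1" | tr -d "'" | wc -w
}
count_pod_ips "'10.244.0.3'"              # one IP: keep retrying
count_pod_ips "'10.244.0.3 10.244.1.2'"   # two IPs: done
```

The backoff intervals in the log (1.3s, 1.6s, 2.8s, ...) come from the test's retry helper, not from this counting step.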

                                                
                                    
TestMultiNode/serial/PingHostFrom2Pods (0.72s)

                                                
                                                
=== RUN   TestMultiNode/serial/PingHostFrom2Pods
multinode_test.go:564: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-063138 -- get pods -o jsonpath='{.items[*].metadata.name}'
multinode_test.go:572: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-063138 -- exec busybox-7dff88458-6qc5l -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
multinode_test.go:583: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-063138 -- exec busybox-7dff88458-6qc5l -- sh -c "ping -c 1 192.168.67.1"
multinode_test.go:572: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-063138 -- exec busybox-7dff88458-blt7l -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
multinode_test.go:583: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-063138 -- exec busybox-7dff88458-blt7l -- sh -c "ping -c 1 192.168.67.1"
--- PASS: TestMultiNode/serial/PingHostFrom2Pods (0.72s)
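The `nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3` pipeline above pulls the host gateway IP out of busybox-style nslookup output: line 5 is the answer record and field 3 is the address, which the test then pings. A self-contained sketch against sample output (the sample text is illustrative, shaped like busybox nslookup, not captured from this run):

```shell
# Illustrative busybox-style nslookup output; line 5 holds the answer.
sample='Server:    10.96.0.10
Address 1: 10.96.0.10 kube-dns.kube-system.svc.cluster.local

Name:      host.minikube.internal
Address 1: 192.168.67.1 host.minikube.internal'
# Same extraction pipeline as the test: take line 5, then field 3.
printf '%s\n' "$sample" | awk 'NR==5' | cut -d' ' -f3   # -> 192.168.67.1
```

Note the pipeline is fragile by design: it assumes busybox's fixed output layout, which is why the test runs it inside the busybox pods rather than on the host.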

                                                
                                    
TestMultiNode/serial/AddNode (18.96s)

                                                
                                                
=== RUN   TestMultiNode/serial/AddNode
multinode_test.go:121: (dbg) Run:  out/minikube-linux-amd64 node add -p multinode-063138 -v 3 --alsologtostderr
multinode_test.go:121: (dbg) Done: out/minikube-linux-amd64 node add -p multinode-063138 -v 3 --alsologtostderr: (18.266136001s)
multinode_test.go:127: (dbg) Run:  out/minikube-linux-amd64 -p multinode-063138 status --alsologtostderr
--- PASS: TestMultiNode/serial/AddNode (18.96s)

                                                
                                    
TestMultiNode/serial/MultiNodeLabels (0.07s)

                                                
                                                
=== RUN   TestMultiNode/serial/MultiNodeLabels
multinode_test.go:221: (dbg) Run:  kubectl --context multinode-063138 get nodes -o "jsonpath=[{range .items[*]}{.metadata.labels},{end}]"
--- PASS: TestMultiNode/serial/MultiNodeLabels (0.07s)

                                                
                                    
TestMultiNode/serial/ProfileList (0.68s)

                                                
                                                
=== RUN   TestMultiNode/serial/ProfileList
multinode_test.go:143: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiNode/serial/ProfileList (0.68s)

                                                
                                    
TestMultiNode/serial/CopyFile (9.11s)

                                                
                                                
=== RUN   TestMultiNode/serial/CopyFile
multinode_test.go:184: (dbg) Run:  out/minikube-linux-amd64 -p multinode-063138 status --output json --alsologtostderr
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-063138 cp testdata/cp-test.txt multinode-063138:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-063138 ssh -n multinode-063138 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-063138 cp multinode-063138:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile3963314680/001/cp-test_multinode-063138.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-063138 ssh -n multinode-063138 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-063138 cp multinode-063138:/home/docker/cp-test.txt multinode-063138-m02:/home/docker/cp-test_multinode-063138_multinode-063138-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-063138 ssh -n multinode-063138 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-063138 ssh -n multinode-063138-m02 "sudo cat /home/docker/cp-test_multinode-063138_multinode-063138-m02.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-063138 cp multinode-063138:/home/docker/cp-test.txt multinode-063138-m03:/home/docker/cp-test_multinode-063138_multinode-063138-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-063138 ssh -n multinode-063138 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-063138 ssh -n multinode-063138-m03 "sudo cat /home/docker/cp-test_multinode-063138_multinode-063138-m03.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-063138 cp testdata/cp-test.txt multinode-063138-m02:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-063138 ssh -n multinode-063138-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-063138 cp multinode-063138-m02:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile3963314680/001/cp-test_multinode-063138-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-063138 ssh -n multinode-063138-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-063138 cp multinode-063138-m02:/home/docker/cp-test.txt multinode-063138:/home/docker/cp-test_multinode-063138-m02_multinode-063138.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-063138 ssh -n multinode-063138-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-063138 ssh -n multinode-063138 "sudo cat /home/docker/cp-test_multinode-063138-m02_multinode-063138.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-063138 cp multinode-063138-m02:/home/docker/cp-test.txt multinode-063138-m03:/home/docker/cp-test_multinode-063138-m02_multinode-063138-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-063138 ssh -n multinode-063138-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-063138 ssh -n multinode-063138-m03 "sudo cat /home/docker/cp-test_multinode-063138-m02_multinode-063138-m03.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-063138 cp testdata/cp-test.txt multinode-063138-m03:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-063138 ssh -n multinode-063138-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-063138 cp multinode-063138-m03:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile3963314680/001/cp-test_multinode-063138-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-063138 ssh -n multinode-063138-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-063138 cp multinode-063138-m03:/home/docker/cp-test.txt multinode-063138:/home/docker/cp-test_multinode-063138-m03_multinode-063138.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-063138 ssh -n multinode-063138-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-063138 ssh -n multinode-063138 "sudo cat /home/docker/cp-test_multinode-063138-m03_multinode-063138.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-063138 cp multinode-063138-m03:/home/docker/cp-test.txt multinode-063138-m02:/home/docker/cp-test_multinode-063138-m03_multinode-063138-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-063138 ssh -n multinode-063138-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-063138 ssh -n multinode-063138-m02 "sudo cat /home/docker/cp-test_multinode-063138-m03_multinode-063138-m02.txt"
--- PASS: TestMultiNode/serial/CopyFile (9.11s)

                                                
                                    
TestMultiNode/serial/StopNode (2.10s)

                                                
                                                
=== RUN   TestMultiNode/serial/StopNode
multinode_test.go:248: (dbg) Run:  out/minikube-linux-amd64 -p multinode-063138 node stop m03
multinode_test.go:248: (dbg) Done: out/minikube-linux-amd64 -p multinode-063138 node stop m03: (1.173811627s)
multinode_test.go:254: (dbg) Run:  out/minikube-linux-amd64 -p multinode-063138 status
multinode_test.go:254: (dbg) Non-zero exit: out/minikube-linux-amd64 -p multinode-063138 status: exit status 7 (453.242318ms)

                                                
                                                
-- stdout --
	multinode-063138
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	multinode-063138-m02
	type: Worker
	host: Running
	kubelet: Running
	
	multinode-063138-m03
	type: Worker
	host: Stopped
	kubelet: Stopped
	

                                                
                                                
-- /stdout --
multinode_test.go:261: (dbg) Run:  out/minikube-linux-amd64 -p multinode-063138 status --alsologtostderr
multinode_test.go:261: (dbg) Non-zero exit: out/minikube-linux-amd64 -p multinode-063138 status --alsologtostderr: exit status 7 (467.568327ms)

                                                
                                                
-- stdout --
	multinode-063138
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	multinode-063138-m02
	type: Worker
	host: Running
	kubelet: Running
	
	multinode-063138-m03
	type: Worker
	host: Stopped
	kubelet: Stopped
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0918 20:12:40.949688  216335 out.go:345] Setting OutFile to fd 1 ...
	I0918 20:12:40.949810  216335 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0918 20:12:40.949819  216335 out.go:358] Setting ErrFile to fd 2...
	I0918 20:12:40.949823  216335 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0918 20:12:40.950018  216335 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19667-7499/.minikube/bin
	I0918 20:12:40.950170  216335 out.go:352] Setting JSON to false
	I0918 20:12:40.950195  216335 mustload.go:65] Loading cluster: multinode-063138
	I0918 20:12:40.950338  216335 notify.go:220] Checking for updates...
	I0918 20:12:40.950580  216335 config.go:182] Loaded profile config "multinode-063138": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I0918 20:12:40.950598  216335 status.go:174] checking status of multinode-063138 ...
	I0918 20:12:40.951011  216335 cli_runner.go:164] Run: docker container inspect multinode-063138 --format={{.State.Status}}
	I0918 20:12:40.969360  216335 status.go:364] multinode-063138 host status = "Running" (err=<nil>)
	I0918 20:12:40.969382  216335 host.go:66] Checking if "multinode-063138" exists ...
	I0918 20:12:40.969620  216335 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-063138
	I0918 20:12:40.987265  216335 host.go:66] Checking if "multinode-063138" exists ...
	I0918 20:12:40.987585  216335 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0918 20:12:40.987644  216335 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-063138
	I0918 20:12:41.005263  216335 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32908 SSHKeyPath:/home/jenkins/minikube-integration/19667-7499/.minikube/machines/multinode-063138/id_rsa Username:docker}
	I0918 20:12:41.101859  216335 ssh_runner.go:195] Run: systemctl --version
	I0918 20:12:41.106479  216335 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0918 20:12:41.117333  216335 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0918 20:12:41.167410  216335 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:3 ContainersRunning:2 ContainersPaused:0 ContainersStopped:1 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:42 OomKillDisable:true NGoroutines:62 SystemTime:2024-09-18 20:12:41.15644277 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1069-gcp OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33647943680 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent Labels:[] ExperimentalBuild:false ServerVersion:27.2.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:7f7fdf5fed64eb6a7caf99b3e12efcf9d60e311c Expected:7f7fdf5fed64eb6a7caf99b3e12efcf9d60e311c} RuncCommit:{ID:v1.1.14-0-g2c9f560 Expected:v1.1.14-0-g2c9f560} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.16.2] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.29.2] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I0918 20:12:41.168117  216335 kubeconfig.go:125] found "multinode-063138" server: "https://192.168.67.2:8443"
	I0918 20:12:41.168158  216335 api_server.go:166] Checking apiserver status ...
	I0918 20:12:41.168201  216335 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0918 20:12:41.179580  216335 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/2371/cgroup
	I0918 20:12:41.188424  216335 api_server.go:182] apiserver freezer: "9:freezer:/docker/85001cafd7ef9639c413489bb94de7a91a111897a980fd7bf41d13236cab5428/kubepods/burstable/pod518a01d6f7eba020571cbfaedf006b01/8e92125b2d6e271ece13c69ad67ce5bc7d0648bc235d72b23c3ca6bc49a2ae73"
	I0918 20:12:41.188494  216335 ssh_runner.go:195] Run: sudo cat /sys/fs/cgroup/freezer/docker/85001cafd7ef9639c413489bb94de7a91a111897a980fd7bf41d13236cab5428/kubepods/burstable/pod518a01d6f7eba020571cbfaedf006b01/8e92125b2d6e271ece13c69ad67ce5bc7d0648bc235d72b23c3ca6bc49a2ae73/freezer.state
	I0918 20:12:41.196380  216335 api_server.go:204] freezer state: "THAWED"
	I0918 20:12:41.196411  216335 api_server.go:253] Checking apiserver healthz at https://192.168.67.2:8443/healthz ...
	I0918 20:12:41.200098  216335 api_server.go:279] https://192.168.67.2:8443/healthz returned 200:
	ok
	I0918 20:12:41.200120  216335 status.go:456] multinode-063138 apiserver status = Running (err=<nil>)
	I0918 20:12:41.200129  216335 status.go:176] multinode-063138 status: &{Name:multinode-063138 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0918 20:12:41.200144  216335 status.go:174] checking status of multinode-063138-m02 ...
	I0918 20:12:41.200370  216335 cli_runner.go:164] Run: docker container inspect multinode-063138-m02 --format={{.State.Status}}
	I0918 20:12:41.217433  216335 status.go:364] multinode-063138-m02 host status = "Running" (err=<nil>)
	I0918 20:12:41.217461  216335 host.go:66] Checking if "multinode-063138-m02" exists ...
	I0918 20:12:41.217708  216335 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-063138-m02
	I0918 20:12:41.234036  216335 host.go:66] Checking if "multinode-063138-m02" exists ...
	I0918 20:12:41.234277  216335 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0918 20:12:41.234311  216335 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-063138-m02
	I0918 20:12:41.251255  216335 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32913 SSHKeyPath:/home/jenkins/minikube-integration/19667-7499/.minikube/machines/multinode-063138-m02/id_rsa Username:docker}
	I0918 20:12:41.345718  216335 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0918 20:12:41.356217  216335 status.go:176] multinode-063138-m02 status: &{Name:multinode-063138-m02 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}
	I0918 20:12:41.356260  216335 status.go:174] checking status of multinode-063138-m03 ...
	I0918 20:12:41.356527  216335 cli_runner.go:164] Run: docker container inspect multinode-063138-m03 --format={{.State.Status}}
	I0918 20:12:41.373754  216335 status.go:364] multinode-063138-m03 host status = "Stopped" (err=<nil>)
	I0918 20:12:41.373777  216335 status.go:377] host is not running, skipping remaining checks
	I0918 20:12:41.373789  216335 status.go:176] multinode-063138-m03 status: &{Name:multinode-063138-m03 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
--- PASS: TestMultiNode/serial/StopNode (2.10s)
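The stderr trace above shows how `minikube status` checks each node: it reads the container state via `docker container inspect --format '{{.State.Status}}'`, and only when the host is running does it go on to the kubelet and apiserver checks (for m03 it logs "host is not running, skipping remaining checks"). A minimal sketch of that short-circuit, assuming a hypothetical `node_status` helper rather than minikube's actual code:

```shell
# Sketch of the short-circuit in the trace: $1 stands in for the value
# of `docker container inspect --format '{{.State.Status}}' <node>`.
# Anything other than "running" skips the kubelet/apiserver probes.
node_status() {
  case "$1" in
    running) echo "host: Running" ;;   # continue to kubelet/apiserver checks
    *)       echo "host: Stopped" ;;   # skip remaining checks
  esac
}
node_status running   # -> host: Running
node_status exited    # -> host: Stopped
```

For the running control plane the trace additionally resolves the apiserver PID's freezer cgroup and probes `https://<ip>:8443/healthz`, which returned 200 here.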

                                                
                                    
TestMultiNode/serial/StartAfterStop (9.84s)

                                                
                                                
=== RUN   TestMultiNode/serial/StartAfterStop
multinode_test.go:282: (dbg) Run:  out/minikube-linux-amd64 -p multinode-063138 node start m03 -v=7 --alsologtostderr
multinode_test.go:282: (dbg) Done: out/minikube-linux-amd64 -p multinode-063138 node start m03 -v=7 --alsologtostderr: (9.171019988s)
multinode_test.go:290: (dbg) Run:  out/minikube-linux-amd64 -p multinode-063138 status -v=7 --alsologtostderr
multinode_test.go:306: (dbg) Run:  kubectl get nodes
--- PASS: TestMultiNode/serial/StartAfterStop (9.84s)

                                                
                                    
TestMultiNode/serial/RestartKeepsNodes (96.51s)

                                                
                                                
=== RUN   TestMultiNode/serial/RestartKeepsNodes
multinode_test.go:314: (dbg) Run:  out/minikube-linux-amd64 node list -p multinode-063138
multinode_test.go:321: (dbg) Run:  out/minikube-linux-amd64 stop -p multinode-063138
multinode_test.go:321: (dbg) Done: out/minikube-linux-amd64 stop -p multinode-063138: (22.299058582s)
multinode_test.go:326: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-063138 --wait=true -v=8 --alsologtostderr
E0918 20:13:14.953230   14329 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19667-7499/.minikube/profiles/addons-457129/client.crt: no such file or directory" logger="UnhandledError"
multinode_test.go:326: (dbg) Done: out/minikube-linux-amd64 start -p multinode-063138 --wait=true -v=8 --alsologtostderr: (1m14.114634884s)
multinode_test.go:331: (dbg) Run:  out/minikube-linux-amd64 node list -p multinode-063138
--- PASS: TestMultiNode/serial/RestartKeepsNodes (96.51s)

                                                
                                    
TestMultiNode/serial/DeleteNode (5.17s)

                                                
                                                
=== RUN   TestMultiNode/serial/DeleteNode
multinode_test.go:416: (dbg) Run:  out/minikube-linux-amd64 -p multinode-063138 node delete m03
multinode_test.go:416: (dbg) Done: out/minikube-linux-amd64 -p multinode-063138 node delete m03: (4.610846124s)
multinode_test.go:422: (dbg) Run:  out/minikube-linux-amd64 -p multinode-063138 status --alsologtostderr
multinode_test.go:436: (dbg) Run:  kubectl get nodes
multinode_test.go:444: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiNode/serial/DeleteNode (5.17s)
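The go-template passed to `kubectl get nodes` above emits the `.status` of every node condition whose type is `Ready`. A minimal Python sketch of the same check, run against sample data shaped like `kubectl get nodes -o json` output (the sample is illustrative, not captured from this run):

```python
import json

# Illustrative sample shaped like `kubectl get nodes -o json` output.
sample = json.dumps({
    "items": [
        {"status": {"conditions": [
            {"type": "MemoryPressure", "status": "False"},
            {"type": "Ready", "status": "True"},
        ]}},
        {"status": {"conditions": [
            {"type": "Ready", "status": "True"},
        ]}},
    ]
})

def ready_statuses(nodes_json):
    """Mirror the go-template: collect .status for every condition of type Ready."""
    doc = json.loads(nodes_json)
    return [c["status"]
            for item in doc.get("items", [])
            for c in item["status"]["conditions"]
            if c["type"] == "Ready"]

print(ready_statuses(sample))  # one entry per node: ['True', 'True']
```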

                                                
                                    
TestMultiNode/serial/StopMultiNode (21.4s)

                                                
                                                
=== RUN   TestMultiNode/serial/StopMultiNode
multinode_test.go:345: (dbg) Run:  out/minikube-linux-amd64 -p multinode-063138 stop
multinode_test.go:345: (dbg) Done: out/minikube-linux-amd64 -p multinode-063138 stop: (21.235726441s)
multinode_test.go:351: (dbg) Run:  out/minikube-linux-amd64 -p multinode-063138 status
multinode_test.go:351: (dbg) Non-zero exit: out/minikube-linux-amd64 -p multinode-063138 status: exit status 7 (81.858162ms)

                                                
                                                
-- stdout --
	multinode-063138
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	multinode-063138-m02
	type: Worker
	host: Stopped
	kubelet: Stopped
	

                                                
                                                
-- /stdout --
multinode_test.go:358: (dbg) Run:  out/minikube-linux-amd64 -p multinode-063138 status --alsologtostderr
multinode_test.go:358: (dbg) Non-zero exit: out/minikube-linux-amd64 -p multinode-063138 status --alsologtostderr: exit status 7 (81.775469ms)

                                                
                                                
-- stdout --
	multinode-063138
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	multinode-063138-m02
	type: Worker
	host: Stopped
	kubelet: Stopped
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0918 20:14:54.243545  231647 out.go:345] Setting OutFile to fd 1 ...
	I0918 20:14:54.243785  231647 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0918 20:14:54.243794  231647 out.go:358] Setting ErrFile to fd 2...
	I0918 20:14:54.243798  231647 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0918 20:14:54.243973  231647 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19667-7499/.minikube/bin
	I0918 20:14:54.244146  231647 out.go:352] Setting JSON to false
	I0918 20:14:54.244172  231647 mustload.go:65] Loading cluster: multinode-063138
	I0918 20:14:54.244224  231647 notify.go:220] Checking for updates...
	I0918 20:14:54.244570  231647 config.go:182] Loaded profile config "multinode-063138": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I0918 20:14:54.244586  231647 status.go:174] checking status of multinode-063138 ...
	I0918 20:14:54.245014  231647 cli_runner.go:164] Run: docker container inspect multinode-063138 --format={{.State.Status}}
	I0918 20:14:54.263127  231647 status.go:364] multinode-063138 host status = "Stopped" (err=<nil>)
	I0918 20:14:54.263173  231647 status.go:377] host is not running, skipping remaining checks
	I0918 20:14:54.263182  231647 status.go:176] multinode-063138 status: &{Name:multinode-063138 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0918 20:14:54.263213  231647 status.go:174] checking status of multinode-063138-m02 ...
	I0918 20:14:54.263572  231647 cli_runner.go:164] Run: docker container inspect multinode-063138-m02 --format={{.State.Status}}
	I0918 20:14:54.281689  231647 status.go:364] multinode-063138-m02 host status = "Stopped" (err=<nil>)
	I0918 20:14:54.281711  231647 status.go:377] host is not running, skipping remaining checks
	I0918 20:14:54.281719  231647 status.go:176] multinode-063138-m02 status: &{Name:multinode-063138-m02 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
--- PASS: TestMultiNode/serial/StopMultiNode (21.40s)
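The plain-text `status` output above is line-oriented `key: value` records separated by blank lines, with a bare node name opening each record. A rough Python parser for that shape (an assumption based only on the output shown here, not on minikube's actual formatter):

```python
def parse_status(text):
    """Split per-node records on blank lines; the first line of each is the node name."""
    nodes = {}
    for block in text.strip().split("\n\n"):
        lines = [l.strip() for l in block.splitlines() if l.strip()]
        if not lines:
            continue
        name, fields = lines[0], {}
        for line in lines[1:]:
            key, _, value = line.partition(":")
            fields[key.strip()] = value.strip()
        nodes[name] = fields
    return nodes

sample = """
multinode-063138
type: Control Plane
host: Stopped
kubelet: Stopped
apiserver: Stopped
kubeconfig: Stopped

multinode-063138-m02
type: Worker
host: Stopped
kubelet: Stopped
"""

statuses = parse_status(sample)
print(statuses["multinode-063138"]["host"])  # Stopped
```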

                                                
                                    
TestMultiNode/serial/RestartMultiNode (55.71s)

                                                
                                                
=== RUN   TestMultiNode/serial/RestartMultiNode
multinode_test.go:376: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-063138 --wait=true -v=8 --alsologtostderr --driver=docker  --container-runtime=docker
E0918 20:15:08.169423   14329 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19667-7499/.minikube/profiles/functional-180257/client.crt: no such file or directory" logger="UnhandledError"
multinode_test.go:376: (dbg) Done: out/minikube-linux-amd64 start -p multinode-063138 --wait=true -v=8 --alsologtostderr --driver=docker  --container-runtime=docker: (55.127594221s)
multinode_test.go:382: (dbg) Run:  out/minikube-linux-amd64 -p multinode-063138 status --alsologtostderr
multinode_test.go:396: (dbg) Run:  kubectl get nodes
multinode_test.go:404: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiNode/serial/RestartMultiNode (55.71s)

                                                
                                    
TestMultiNode/serial/ValidateNameConflict (24.11s)

                                                
                                                
=== RUN   TestMultiNode/serial/ValidateNameConflict
multinode_test.go:455: (dbg) Run:  out/minikube-linux-amd64 node list -p multinode-063138
multinode_test.go:464: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-063138-m02 --driver=docker  --container-runtime=docker
multinode_test.go:464: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p multinode-063138-m02 --driver=docker  --container-runtime=docker: exit status 14 (63.20804ms)

                                                
                                                
-- stdout --
	* [multinode-063138-m02] minikube v1.34.0 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=19667
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/19667-7499/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/19667-7499/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	! Profile name 'multinode-063138-m02' is duplicated with machine name 'multinode-063138-m02' in profile 'multinode-063138'
	X Exiting due to MK_USAGE: Profile name should be unique

                                                
                                                
** /stderr **
multinode_test.go:472: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-063138-m03 --driver=docker  --container-runtime=docker
multinode_test.go:472: (dbg) Done: out/minikube-linux-amd64 start -p multinode-063138-m03 --driver=docker  --container-runtime=docker: (21.700745601s)
multinode_test.go:479: (dbg) Run:  out/minikube-linux-amd64 node add -p multinode-063138
multinode_test.go:479: (dbg) Non-zero exit: out/minikube-linux-amd64 node add -p multinode-063138: exit status 80 (280.737741ms)

                                                
                                                
-- stdout --
	* Adding node m03 to cluster multinode-063138 as [worker]
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to GUEST_NODE_ADD: failed to add node: Node multinode-063138-m03 already exists in multinode-063138-m03 profile
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_node_040ea7097fd6ed71e65be9a474587f81f0ccd21d_0.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
multinode_test.go:484: (dbg) Run:  out/minikube-linux-amd64 delete -p multinode-063138-m03
multinode_test.go:484: (dbg) Done: out/minikube-linux-amd64 delete -p multinode-063138-m03: (2.018279983s)
--- PASS: TestMultiNode/serial/ValidateNameConflict (24.11s)
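The exit-14 failure above comes from minikube rejecting a profile name that collides with a machine name inside an existing profile. That validation rule can be sketched as follows (the data model and function name are hypothetical; only the collision rule and error wording are taken from the output above):

```python
def validate_profile_name(candidate, profiles):
    """Return an error string if `candidate` duplicates a profile or machine name.

    `profiles` maps profile name -> list of machine names in that profile
    (a hypothetical stand-in for minikube's stored profile configs).
    """
    if candidate in profiles:
        return f"Profile name '{candidate}' already exists"
    for profile, machines in profiles.items():
        if candidate in machines:
            return (f"Profile name '{candidate}' is duplicated with machine name "
                    f"'{candidate}' in profile '{profile}'")
    return None  # unique: allowed

existing = {"multinode-063138": ["multinode-063138", "multinode-063138-m02"]}
print(validate_profile_name("multinode-063138-m02", existing))  # collision message
print(validate_profile_name("multinode-063138-m03", existing))  # None: allowed
```

This matches the test's behaviour above: starting a profile named `multinode-063138-m02` fails, while `multinode-063138-m03` (not yet a machine in any profile) is accepted.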

                                                
                                    
TestPreload (114.64s)

                                                
                                                
=== RUN   TestPreload
preload_test.go:44: (dbg) Run:  out/minikube-linux-amd64 start -p test-preload-773198 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=docker --kubernetes-version=v1.24.4
E0918 20:16:31.234091   14329 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19667-7499/.minikube/profiles/functional-180257/client.crt: no such file or directory" logger="UnhandledError"
E0918 20:16:51.891077   14329 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19667-7499/.minikube/profiles/addons-457129/client.crt: no such file or directory" logger="UnhandledError"
preload_test.go:44: (dbg) Done: out/minikube-linux-amd64 start -p test-preload-773198 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=docker --kubernetes-version=v1.24.4: (55.933678728s)
preload_test.go:52: (dbg) Run:  out/minikube-linux-amd64 -p test-preload-773198 image pull gcr.io/k8s-minikube/busybox
preload_test.go:52: (dbg) Done: out/minikube-linux-amd64 -p test-preload-773198 image pull gcr.io/k8s-minikube/busybox: (2.123841982s)
preload_test.go:58: (dbg) Run:  out/minikube-linux-amd64 stop -p test-preload-773198
preload_test.go:58: (dbg) Done: out/minikube-linux-amd64 stop -p test-preload-773198: (10.75703152s)
preload_test.go:66: (dbg) Run:  out/minikube-linux-amd64 start -p test-preload-773198 --memory=2200 --alsologtostderr -v=1 --wait=true --driver=docker  --container-runtime=docker
preload_test.go:66: (dbg) Done: out/minikube-linux-amd64 start -p test-preload-773198 --memory=2200 --alsologtostderr -v=1 --wait=true --driver=docker  --container-runtime=docker: (43.470505598s)
preload_test.go:71: (dbg) Run:  out/minikube-linux-amd64 -p test-preload-773198 image list
helpers_test.go:175: Cleaning up "test-preload-773198" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p test-preload-773198
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p test-preload-773198: (2.150753481s)
--- PASS: TestPreload (114.64s)

                                                
                                    
TestScheduledStopUnix (93.57s)

                                                
                                                
=== RUN   TestScheduledStopUnix
scheduled_stop_test.go:128: (dbg) Run:  out/minikube-linux-amd64 start -p scheduled-stop-713782 --memory=2048 --driver=docker  --container-runtime=docker
scheduled_stop_test.go:128: (dbg) Done: out/minikube-linux-amd64 start -p scheduled-stop-713782 --memory=2048 --driver=docker  --container-runtime=docker: (20.690059606s)
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-amd64 stop -p scheduled-stop-713782 --schedule 5m
scheduled_stop_test.go:191: (dbg) Run:  out/minikube-linux-amd64 status --format={{.TimeToStop}} -p scheduled-stop-713782 -n scheduled-stop-713782
scheduled_stop_test.go:169: signal error was:  <nil>
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-amd64 stop -p scheduled-stop-713782 --schedule 15s
scheduled_stop_test.go:169: signal error was:  os: process already finished
I0918 20:18:33.681689   14329 retry.go:31] will retry after 71.627µs: open /home/jenkins/minikube-integration/19667-7499/.minikube/profiles/scheduled-stop-713782/pid: no such file or directory
I0918 20:18:33.682873   14329 retry.go:31] will retry after 144.515µs: open /home/jenkins/minikube-integration/19667-7499/.minikube/profiles/scheduled-stop-713782/pid: no such file or directory
I0918 20:18:33.683988   14329 retry.go:31] will retry after 196.947µs: open /home/jenkins/minikube-integration/19667-7499/.minikube/profiles/scheduled-stop-713782/pid: no such file or directory
I0918 20:18:33.685148   14329 retry.go:31] will retry after 281.955µs: open /home/jenkins/minikube-integration/19667-7499/.minikube/profiles/scheduled-stop-713782/pid: no such file or directory
I0918 20:18:33.686287   14329 retry.go:31] will retry after 698.361µs: open /home/jenkins/minikube-integration/19667-7499/.minikube/profiles/scheduled-stop-713782/pid: no such file or directory
I0918 20:18:33.687409   14329 retry.go:31] will retry after 593.116µs: open /home/jenkins/minikube-integration/19667-7499/.minikube/profiles/scheduled-stop-713782/pid: no such file or directory
I0918 20:18:33.688536   14329 retry.go:31] will retry after 1.152136ms: open /home/jenkins/minikube-integration/19667-7499/.minikube/profiles/scheduled-stop-713782/pid: no such file or directory
I0918 20:18:33.690752   14329 retry.go:31] will retry after 2.152024ms: open /home/jenkins/minikube-integration/19667-7499/.minikube/profiles/scheduled-stop-713782/pid: no such file or directory
I0918 20:18:33.693971   14329 retry.go:31] will retry after 2.384785ms: open /home/jenkins/minikube-integration/19667-7499/.minikube/profiles/scheduled-stop-713782/pid: no such file or directory
I0918 20:18:33.697167   14329 retry.go:31] will retry after 4.150757ms: open /home/jenkins/minikube-integration/19667-7499/.minikube/profiles/scheduled-stop-713782/pid: no such file or directory
I0918 20:18:33.702372   14329 retry.go:31] will retry after 4.438631ms: open /home/jenkins/minikube-integration/19667-7499/.minikube/profiles/scheduled-stop-713782/pid: no such file or directory
I0918 20:18:33.707601   14329 retry.go:31] will retry after 9.896724ms: open /home/jenkins/minikube-integration/19667-7499/.minikube/profiles/scheduled-stop-713782/pid: no such file or directory
I0918 20:18:33.718086   14329 retry.go:31] will retry after 12.670649ms: open /home/jenkins/minikube-integration/19667-7499/.minikube/profiles/scheduled-stop-713782/pid: no such file or directory
I0918 20:18:33.731348   14329 retry.go:31] will retry after 20.213593ms: open /home/jenkins/minikube-integration/19667-7499/.minikube/profiles/scheduled-stop-713782/pid: no such file or directory
I0918 20:18:33.752612   14329 retry.go:31] will retry after 38.393292ms: open /home/jenkins/minikube-integration/19667-7499/.minikube/profiles/scheduled-stop-713782/pid: no such file or directory
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-amd64 stop -p scheduled-stop-713782 --cancel-scheduled
scheduled_stop_test.go:176: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p scheduled-stop-713782 -n scheduled-stop-713782
scheduled_stop_test.go:205: (dbg) Run:  out/minikube-linux-amd64 status -p scheduled-stop-713782
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-amd64 stop -p scheduled-stop-713782 --schedule 15s
scheduled_stop_test.go:169: signal error was:  os: process already finished
scheduled_stop_test.go:205: (dbg) Run:  out/minikube-linux-amd64 status -p scheduled-stop-713782
scheduled_stop_test.go:205: (dbg) Non-zero exit: out/minikube-linux-amd64 status -p scheduled-stop-713782: exit status 7 (63.214588ms)

                                                
                                                
-- stdout --
	scheduled-stop-713782
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

                                                
                                                
-- /stdout --
scheduled_stop_test.go:176: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p scheduled-stop-713782 -n scheduled-stop-713782
scheduled_stop_test.go:176: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p scheduled-stop-713782 -n scheduled-stop-713782: exit status 7 (61.191074ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
scheduled_stop_test.go:176: status error: exit status 7 (may be ok)
helpers_test.go:175: Cleaning up "scheduled-stop-713782" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p scheduled-stop-713782
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p scheduled-stop-713782: (1.58828884s)
--- PASS: TestScheduledStopUnix (93.57s)
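The burst of `retry.go:31` lines above shows a retry loop whose wait roughly doubles each attempt, with jitter, until the pid file appears. A self-contained sketch of that pattern (this does not reproduce minikube's actual retry.go policy; it only illustrates doubling-with-jitter):

```python
import random
import time

def retry_with_backoff(op, attempts=15, base=0.0001):
    """Call op() until it succeeds or attempts run out, sleeping ~2x longer each time."""
    delay = base
    for _ in range(attempts):
        try:
            op()
            return True
        except OSError:
            jittered = delay * random.uniform(0.5, 1.5)  # jitter spreads retries apart
            time.sleep(jittered)
            delay *= 2  # exponential growth, as in the log's escalating waits
    return False

calls = {"n": 0}
def flaky():
    # Fails four times, then succeeds -- stands in for the pid file appearing.
    calls["n"] += 1
    if calls["n"] < 5:
        raise OSError("pid file not yet written")

print(retry_with_backoff(flaky))  # True
```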

                                                
                                    
TestSkaffold (106.59s)

                                                
                                                
=== RUN   TestSkaffold
skaffold_test.go:59: (dbg) Run:  /tmp/skaffold.exe2204320492 version
skaffold_test.go:63: skaffold version: v2.13.2
skaffold_test.go:66: (dbg) Run:  out/minikube-linux-amd64 start -p skaffold-715264 --memory=2600 --driver=docker  --container-runtime=docker
E0918 20:20:08.170193   14329 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19667-7499/.minikube/profiles/functional-180257/client.crt: no such file or directory" logger="UnhandledError"
skaffold_test.go:66: (dbg) Done: out/minikube-linux-amd64 start -p skaffold-715264 --memory=2600 --driver=docker  --container-runtime=docker: (24.87733126s)
skaffold_test.go:86: copying out/minikube-linux-amd64 to /home/jenkins/workspace/Docker_Linux_integration/out/minikube
skaffold_test.go:105: (dbg) Run:  /tmp/skaffold.exe2204320492 run --minikube-profile skaffold-715264 --kube-context skaffold-715264 --status-check=true --port-forward=false --interactive=false
skaffold_test.go:105: (dbg) Done: /tmp/skaffold.exe2204320492 run --minikube-profile skaffold-715264 --kube-context skaffold-715264 --status-check=true --port-forward=false --interactive=false: (1m5.02009375s)
skaffold_test.go:111: (dbg) TestSkaffold: waiting 1m0s for pods matching "app=leeroy-app" in namespace "default" ...
helpers_test.go:344: "leeroy-app-86855f989-vvh7k" [c093d00a-525f-423a-8fcc-4ab63c79df67] Running
skaffold_test.go:111: (dbg) TestSkaffold: app=leeroy-app healthy within 6.003506387s
skaffold_test.go:114: (dbg) TestSkaffold: waiting 1m0s for pods matching "app=leeroy-web" in namespace "default" ...
helpers_test.go:344: "leeroy-web-6f64d8d54d-qp2r2" [dc632015-5018-42cb-bf03-10a845cee32f] Running
skaffold_test.go:114: (dbg) TestSkaffold: app=leeroy-web healthy within 5.003515685s
helpers_test.go:175: Cleaning up "skaffold-715264" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p skaffold-715264
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p skaffold-715264: (2.761740762s)
--- PASS: TestSkaffold (106.59s)

                                                
                                    
TestInsufficientStorage (12.92s)

                                                
                                                
=== RUN   TestInsufficientStorage
status_test.go:50: (dbg) Run:  out/minikube-linux-amd64 start -p insufficient-storage-420346 --memory=2048 --output=json --wait=true --driver=docker  --container-runtime=docker
status_test.go:50: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p insufficient-storage-420346 --memory=2048 --output=json --wait=true --driver=docker  --container-runtime=docker: exit status 26 (10.77219955s)

                                                
                                                
-- stdout --
	{"specversion":"1.0","id":"5b05931f-ec7b-4776-b3c5-f91c50bf234c","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"0","message":"[insufficient-storage-420346] minikube v1.34.0 on Ubuntu 20.04 (kvm/amd64)","name":"Initial Minikube Setup","totalsteps":"19"}}
	{"specversion":"1.0","id":"e19a6b1e-a4f9-4f88-9c72-ff2597bf3b50","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_LOCATION=19667"}}
	{"specversion":"1.0","id":"d2464af9-2119-4bef-8e7e-bb5f78957660","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true"}}
	{"specversion":"1.0","id":"22929ee5-fe16-4b4c-b1df-194f4634b7b3","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"KUBECONFIG=/home/jenkins/minikube-integration/19667-7499/kubeconfig"}}
	{"specversion":"1.0","id":"7ab62a5a-9ca1-450d-ad10-f62b049abedc","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_HOME=/home/jenkins/minikube-integration/19667-7499/.minikube"}}
	{"specversion":"1.0","id":"87043241-1f09-4e92-9ff6-fdd2acf99914","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_BIN=out/minikube-linux-amd64"}}
	{"specversion":"1.0","id":"828f3c0a-82b8-4af3-b062-5d5b0b45cb9c","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_FORCE_SYSTEMD="}}
	{"specversion":"1.0","id":"250b12a2-1c17-4a2e-9c23-43490f24d669","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_TEST_STORAGE_CAPACITY=100"}}
	{"specversion":"1.0","id":"9655e30f-9e17-436a-8f09-60bdbda73dee","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_TEST_AVAILABLE_STORAGE=19"}}
	{"specversion":"1.0","id":"126fef20-d6c8-4807-bb2e-252066f49b55","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"1","message":"Using the docker driver based on user configuration","name":"Selecting Driver","totalsteps":"19"}}
	{"specversion":"1.0","id":"a0b81640-cc94-4873-8bc5-9bd25abc2f96","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"Using Docker driver with root privileges"}}
	{"specversion":"1.0","id":"b9f46600-018d-4bec-a226-41d2e0db54e1","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"3","message":"Starting \"insufficient-storage-420346\" primary control-plane node in \"insufficient-storage-420346\" cluster","name":"Starting Node","totalsteps":"19"}}
	{"specversion":"1.0","id":"6b394f97-8b1f-4ce6-a173-5197b4f03d3b","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"5","message":"Pulling base image v0.0.45-1726589491-19662 ...","name":"Pulling Base Image","totalsteps":"19"}}
	{"specversion":"1.0","id":"b9eeb474-df0e-474e-b4ed-144c6cd7a8d1","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"8","message":"Creating docker container (CPUs=2, Memory=2048MB) ...","name":"Creating Container","totalsteps":"19"}}
	{"specversion":"1.0","id":"110976a4-12d5-45b9-ba71-2fd23d07bfde","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"advice":"Try one or more of the following to free up space on the device:\n\t\n\t\t\t1. Run \"docker system prune\" to remove unused Docker data (optionally with \"-a\")\n\t\t\t2. Increase the storage allocated to Docker for Desktop by clicking on:\n\t\t\t\tDocker icon \u003e Preferences \u003e Resources \u003e Disk Image Size\n\t\t\t3. Run \"minikube ssh -- docker system prune\" if using the Docker container runtime","exitcode":"26","issues":"https://github.com/kubernetes/minikube/issues/9024","message":"Docker is out of disk space! (/var is at 100% of capacity). You can pass '--force' to skip this check.","name":"RSRC_DOCKER_STORAGE","url":""}}

                                                
                                                
-- /stdout --
status_test.go:76: (dbg) Run:  out/minikube-linux-amd64 status -p insufficient-storage-420346 --output=json --layout=cluster
status_test.go:76: (dbg) Non-zero exit: out/minikube-linux-amd64 status -p insufficient-storage-420346 --output=json --layout=cluster: exit status 7 (264.713136ms)

                                                
                                                
-- stdout --
	{"Name":"insufficient-storage-420346","StatusCode":507,"StatusName":"InsufficientStorage","StatusDetail":"/var is almost out of disk space","Step":"Creating Container","StepDetail":"Creating docker container (CPUs=2, Memory=2048MB) ...","BinaryVersion":"v1.34.0","Components":{"kubeconfig":{"Name":"kubeconfig","StatusCode":500,"StatusName":"Error"}},"Nodes":[{"Name":"insufficient-storage-420346","StatusCode":507,"StatusName":"InsufficientStorage","Components":{"apiserver":{"Name":"apiserver","StatusCode":405,"StatusName":"Stopped"},"kubelet":{"Name":"kubelet","StatusCode":405,"StatusName":"Stopped"}}}]}

                                                
                                                
-- /stdout --
** stderr ** 
	E0918 20:21:43.792531  272421 status.go:451] kubeconfig endpoint: get endpoint: "insufficient-storage-420346" does not appear in /home/jenkins/minikube-integration/19667-7499/kubeconfig

                                                
                                                
** /stderr **
status_test.go:76: (dbg) Run:  out/minikube-linux-amd64 status -p insufficient-storage-420346 --output=json --layout=cluster
status_test.go:76: (dbg) Non-zero exit: out/minikube-linux-amd64 status -p insufficient-storage-420346 --output=json --layout=cluster: exit status 7 (257.3929ms)

                                                
                                                
-- stdout --
	{"Name":"insufficient-storage-420346","StatusCode":507,"StatusName":"InsufficientStorage","StatusDetail":"/var is almost out of disk space","BinaryVersion":"v1.34.0","Components":{"kubeconfig":{"Name":"kubeconfig","StatusCode":500,"StatusName":"Error"}},"Nodes":[{"Name":"insufficient-storage-420346","StatusCode":507,"StatusName":"InsufficientStorage","Components":{"apiserver":{"Name":"apiserver","StatusCode":405,"StatusName":"Stopped"},"kubelet":{"Name":"kubelet","StatusCode":405,"StatusName":"Stopped"}}}]}

                                                
                                                
-- /stdout --
** stderr ** 
	E0918 20:21:44.051621  272519 status.go:451] kubeconfig endpoint: get endpoint: "insufficient-storage-420346" does not appear in /home/jenkins/minikube-integration/19667-7499/kubeconfig
	E0918 20:21:44.061023  272519 status.go:258] unable to read event log: stat: stat /home/jenkins/minikube-integration/19667-7499/.minikube/profiles/insufficient-storage-420346/events.json: no such file or directory

                                                
                                                
** /stderr **
helpers_test.go:175: Cleaning up "insufficient-storage-420346" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p insufficient-storage-420346
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p insufficient-storage-420346: (1.622406064s)
--- PASS: TestInsufficientStorage (12.92s)
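With `--output=json`, minikube emits one CloudEvents-formatted JSON object per line, as seen in the stdout above. A small Python sketch that scans such a stream for `io.k8s.sigs.minikube.error` events and extracts their payload (field names are taken from the lines shown above; the sample below is abbreviated):

```python
import json

def find_errors(stream):
    """Return the `data` payload of every minikube error event in a JSON-lines stream."""
    errors = []
    for line in stream.splitlines():
        line = line.strip()
        if not line:
            continue
        event = json.loads(line)
        if event.get("type") == "io.k8s.sigs.minikube.error":
            errors.append(event.get("data", {}))
    return errors

# Abbreviated sample modeled on the events printed above.
sample = "\n".join([
    '{"specversion":"1.0","type":"io.k8s.sigs.minikube.info",'
    '"data":{"message":"MINIKUBE_LOCATION=19667"}}',
    '{"specversion":"1.0","type":"io.k8s.sigs.minikube.error",'
    '"data":{"exitcode":"26","name":"RSRC_DOCKER_STORAGE"}}',
])

errs = find_errors(sample)
print(errs[0]["exitcode"])  # "26"
```

Note that `exitcode` arrives as a JSON string, not a number, so a consumer comparing it against the process exit status (26 here) must convert it first.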

                                                
                                    
TestRunningBinaryUpgrade (101.65s)

                                                
                                                
=== RUN   TestRunningBinaryUpgrade
=== PAUSE TestRunningBinaryUpgrade

                                                
                                                

                                                
                                                
=== CONT  TestRunningBinaryUpgrade
version_upgrade_test.go:120: (dbg) Run:  /tmp/minikube-v1.26.0.1513093557 start -p running-upgrade-827111 --memory=2200 --vm-driver=docker  --container-runtime=docker
version_upgrade_test.go:120: (dbg) Done: /tmp/minikube-v1.26.0.1513093557 start -p running-upgrade-827111 --memory=2200 --vm-driver=docker  --container-runtime=docker: (1m11.619745394s)
version_upgrade_test.go:130: (dbg) Run:  out/minikube-linux-amd64 start -p running-upgrade-827111 --memory=2200 --alsologtostderr -v=1 --driver=docker  --container-runtime=docker
version_upgrade_test.go:130: (dbg) Done: out/minikube-linux-amd64 start -p running-upgrade-827111 --memory=2200 --alsologtostderr -v=1 --driver=docker  --container-runtime=docker: (24.838125264s)
helpers_test.go:175: Cleaning up "running-upgrade-827111" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p running-upgrade-827111
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p running-upgrade-827111: (2.768699997s)
--- PASS: TestRunningBinaryUpgrade (101.65s)

                                                
                                    
TestKubernetesUpgrade (340.58s)

                                                
                                                
=== RUN   TestKubernetesUpgrade
=== PAUSE TestKubernetesUpgrade

                                                
                                                

                                                
                                                
=== CONT  TestKubernetesUpgrade
version_upgrade_test.go:222: (dbg) Run:  out/minikube-linux-amd64 start -p kubernetes-upgrade-044649 --memory=2200 --kubernetes-version=v1.20.0 --alsologtostderr -v=1 --driver=docker  --container-runtime=docker
version_upgrade_test.go:222: (dbg) Done: out/minikube-linux-amd64 start -p kubernetes-upgrade-044649 --memory=2200 --kubernetes-version=v1.20.0 --alsologtostderr -v=1 --driver=docker  --container-runtime=docker: (34.462115632s)
version_upgrade_test.go:227: (dbg) Run:  out/minikube-linux-amd64 stop -p kubernetes-upgrade-044649
version_upgrade_test.go:227: (dbg) Done: out/minikube-linux-amd64 stop -p kubernetes-upgrade-044649: (10.647431474s)
version_upgrade_test.go:232: (dbg) Run:  out/minikube-linux-amd64 -p kubernetes-upgrade-044649 status --format={{.Host}}
version_upgrade_test.go:232: (dbg) Non-zero exit: out/minikube-linux-amd64 -p kubernetes-upgrade-044649 status --format={{.Host}}: exit status 7 (65.788819ms)

-- stdout --
	Stopped

-- /stdout --
version_upgrade_test.go:234: status error: exit status 7 (may be ok)
version_upgrade_test.go:243: (dbg) Run:  out/minikube-linux-amd64 start -p kubernetes-upgrade-044649 --memory=2200 --kubernetes-version=v1.31.1 --alsologtostderr -v=1 --driver=docker  --container-runtime=docker
version_upgrade_test.go:243: (dbg) Done: out/minikube-linux-amd64 start -p kubernetes-upgrade-044649 --memory=2200 --kubernetes-version=v1.31.1 --alsologtostderr -v=1 --driver=docker  --container-runtime=docker: (4m27.925078415s)
version_upgrade_test.go:248: (dbg) Run:  kubectl --context kubernetes-upgrade-044649 version --output=json
version_upgrade_test.go:267: Attempting to downgrade Kubernetes (should fail)
version_upgrade_test.go:269: (dbg) Run:  out/minikube-linux-amd64 start -p kubernetes-upgrade-044649 --memory=2200 --kubernetes-version=v1.20.0 --driver=docker  --container-runtime=docker
version_upgrade_test.go:269: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p kubernetes-upgrade-044649 --memory=2200 --kubernetes-version=v1.20.0 --driver=docker  --container-runtime=docker: exit status 106 (73.128448ms)

-- stdout --
	* [kubernetes-upgrade-044649] minikube v1.34.0 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=19667
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/19667-7499/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/19667-7499/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	
	

-- /stdout --
** stderr ** 
	X Exiting due to K8S_DOWNGRADE_UNSUPPORTED: Unable to safely downgrade existing Kubernetes v1.31.1 cluster to v1.20.0
	* Suggestion: 
	
	    1) Recreate the cluster with Kubernetes 1.20.0, by running:
	    
	    minikube delete -p kubernetes-upgrade-044649
	    minikube start -p kubernetes-upgrade-044649 --kubernetes-version=v1.20.0
	    
	    2) Create a second cluster with Kubernetes 1.20.0, by running:
	    
	    minikube start -p kubernetes-upgrade-0446492 --kubernetes-version=v1.20.0
	    
	    3) Use the existing cluster at version Kubernetes 1.31.1, by running:
	    
	    minikube start -p kubernetes-upgrade-044649 --kubernetes-version=v1.31.1
	    

** /stderr **
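The exit-status-106 refusal above is minikube's K8S_DOWNGRADE_UNSUPPORTED guard: the requested v1.20.0 is older than the cluster's existing v1.31.1. minikube's real check is implemented in Go; as a rough illustration only, the same ordering decision can be sketched in shell with `sort -V` (an assumption that GNU version sort is available):

```shell
# Illustrative sketch of downgrade detection by version comparison.
# Not minikube's actual implementation; `sort -V` stands in for its
# Go semver check. Versions taken from the log above.
current=v1.31.1
requested=v1.20.0
lowest=$(printf '%s\n%s\n' "$current" "$requested" | sort -V | head -n1)
if [ "$requested" = "$lowest" ] && [ "$requested" != "$current" ]; then
  echo "downgrade from $current to $requested refused"
fi
```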
version_upgrade_test.go:273: Attempting restart after unsuccessful downgrade
version_upgrade_test.go:275: (dbg) Run:  out/minikube-linux-amd64 start -p kubernetes-upgrade-044649 --memory=2200 --kubernetes-version=v1.31.1 --alsologtostderr -v=1 --driver=docker  --container-runtime=docker
version_upgrade_test.go:275: (dbg) Done: out/minikube-linux-amd64 start -p kubernetes-upgrade-044649 --memory=2200 --kubernetes-version=v1.31.1 --alsologtostderr -v=1 --driver=docker  --container-runtime=docker: (23.37343513s)
helpers_test.go:175: Cleaning up "kubernetes-upgrade-044649" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p kubernetes-upgrade-044649
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p kubernetes-upgrade-044649: (3.962225832s)
--- PASS: TestKubernetesUpgrade (340.58s)

TestMissingContainerUpgrade (98.8s)
=== RUN   TestMissingContainerUpgrade
=== PAUSE TestMissingContainerUpgrade
=== CONT  TestMissingContainerUpgrade
version_upgrade_test.go:309: (dbg) Run:  /tmp/minikube-v1.26.0.3927960569 start -p missing-upgrade-477519 --memory=2200 --driver=docker  --container-runtime=docker
version_upgrade_test.go:309: (dbg) Done: /tmp/minikube-v1.26.0.3927960569 start -p missing-upgrade-477519 --memory=2200 --driver=docker  --container-runtime=docker: (28.942747019s)
version_upgrade_test.go:318: (dbg) Run:  docker stop missing-upgrade-477519
version_upgrade_test.go:318: (dbg) Done: docker stop missing-upgrade-477519: (10.457019564s)
version_upgrade_test.go:323: (dbg) Run:  docker rm missing-upgrade-477519
version_upgrade_test.go:329: (dbg) Run:  out/minikube-linux-amd64 start -p missing-upgrade-477519 --memory=2200 --alsologtostderr -v=1 --driver=docker  --container-runtime=docker
E0918 20:25:08.169765   14329 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19667-7499/.minikube/profiles/functional-180257/client.crt: no such file or directory" logger="UnhandledError"
version_upgrade_test.go:329: (dbg) Done: out/minikube-linux-amd64 start -p missing-upgrade-477519 --memory=2200 --alsologtostderr -v=1 --driver=docker  --container-runtime=docker: (54.804048533s)
helpers_test.go:175: Cleaning up "missing-upgrade-477519" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p missing-upgrade-477519
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p missing-upgrade-477519: (2.128632512s)
--- PASS: TestMissingContainerUpgrade (98.80s)

TestStoppedBinaryUpgrade/Setup (2.56s)
=== RUN   TestStoppedBinaryUpgrade/Setup
--- PASS: TestStoppedBinaryUpgrade/Setup (2.56s)

TestNoKubernetes/serial/StartNoK8sWithVersion (0.07s)
=== RUN   TestNoKubernetes/serial/StartNoK8sWithVersion
no_kubernetes_test.go:83: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-268643 --no-kubernetes --kubernetes-version=1.20 --driver=docker  --container-runtime=docker
no_kubernetes_test.go:83: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p NoKubernetes-268643 --no-kubernetes --kubernetes-version=1.20 --driver=docker  --container-runtime=docker: exit status 14 (74.687546ms)

-- stdout --
	* [NoKubernetes-268643] minikube v1.34.0 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=19667
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/19667-7499/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/19667-7499/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	
	

-- /stdout --
** stderr ** 
	X Exiting due to MK_USAGE: cannot specify --kubernetes-version with --no-kubernetes,
	to unset a global config run:
	
	$ minikube config unset kubernetes-version

** /stderr **
--- PASS: TestNoKubernetes/serial/StartNoK8sWithVersion (0.07s)

TestNoKubernetes/serial/StartWithK8s (30.86s)
=== RUN   TestNoKubernetes/serial/StartWithK8s
no_kubernetes_test.go:95: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-268643 --driver=docker  --container-runtime=docker
no_kubernetes_test.go:95: (dbg) Done: out/minikube-linux-amd64 start -p NoKubernetes-268643 --driver=docker  --container-runtime=docker: (30.473128958s)
no_kubernetes_test.go:200: (dbg) Run:  out/minikube-linux-amd64 -p NoKubernetes-268643 status -o json
--- PASS: TestNoKubernetes/serial/StartWithK8s (30.86s)

TestStoppedBinaryUpgrade/Upgrade (152.17s)
=== RUN   TestStoppedBinaryUpgrade/Upgrade
version_upgrade_test.go:183: (dbg) Run:  /tmp/minikube-v1.26.0.655565934 start -p stopped-upgrade-300527 --memory=2200 --vm-driver=docker  --container-runtime=docker
version_upgrade_test.go:183: (dbg) Done: /tmp/minikube-v1.26.0.655565934 start -p stopped-upgrade-300527 --memory=2200 --vm-driver=docker  --container-runtime=docker: (1m49.047768839s)
version_upgrade_test.go:192: (dbg) Run:  /tmp/minikube-v1.26.0.655565934 -p stopped-upgrade-300527 stop
version_upgrade_test.go:192: (dbg) Done: /tmp/minikube-v1.26.0.655565934 -p stopped-upgrade-300527 stop: (11.064670607s)
version_upgrade_test.go:198: (dbg) Run:  out/minikube-linux-amd64 start -p stopped-upgrade-300527 --memory=2200 --alsologtostderr -v=1 --driver=docker  --container-runtime=docker
version_upgrade_test.go:198: (dbg) Done: out/minikube-linux-amd64 start -p stopped-upgrade-300527 --memory=2200 --alsologtostderr -v=1 --driver=docker  --container-runtime=docker: (32.053223891s)
--- PASS: TestStoppedBinaryUpgrade/Upgrade (152.17s)

TestNoKubernetes/serial/StartWithStopK8s (20.62s)
=== RUN   TestNoKubernetes/serial/StartWithStopK8s
no_kubernetes_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-268643 --no-kubernetes --driver=docker  --container-runtime=docker
no_kubernetes_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p NoKubernetes-268643 --no-kubernetes --driver=docker  --container-runtime=docker: (18.272217066s)
no_kubernetes_test.go:200: (dbg) Run:  out/minikube-linux-amd64 -p NoKubernetes-268643 status -o json
no_kubernetes_test.go:200: (dbg) Non-zero exit: out/minikube-linux-amd64 -p NoKubernetes-268643 status -o json: exit status 2 (321.92011ms)

-- stdout --
	{"Name":"NoKubernetes-268643","Host":"Running","Kubelet":"Stopped","APIServer":"Stopped","Kubeconfig":"Configured","Worker":false}

-- /stdout --
no_kubernetes_test.go:124: (dbg) Run:  out/minikube-linux-amd64 delete -p NoKubernetes-268643
no_kubernetes_test.go:124: (dbg) Done: out/minikube-linux-amd64 delete -p NoKubernetes-268643: (2.025897639s)
--- PASS: TestNoKubernetes/serial/StartWithStopK8s (20.62s)
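The exit-status-2 `status -o json` above is expected here: the host container is running but Kubernetes components are stopped, and `minikube status` signals that mix through its exit code. As a minimal sketch (not how the test itself parses it, and using sed rather than assuming `jq` is installed), the captured JSON line can be picked apart like this:

```shell
# Parse the status JSON captured in the log above with POSIX tools.
# The literal is copied from the log; sed-based field extraction is
# illustrative only.
status='{"Name":"NoKubernetes-268643","Host":"Running","Kubelet":"Stopped","APIServer":"Stopped","Kubeconfig":"Configured","Worker":false}'

field() {  # field <name> -> value of a top-level string field
  printf '%s' "$status" | sed -n "s/.*\"$1\":\"\([^\"]*\)\".*/\1/p"
}

echo "Host=$(field Host) Kubelet=$(field Kubelet)"
```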

TestNoKubernetes/serial/Start (9.11s)
=== RUN   TestNoKubernetes/serial/Start
no_kubernetes_test.go:136: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-268643 --no-kubernetes --driver=docker  --container-runtime=docker
no_kubernetes_test.go:136: (dbg) Done: out/minikube-linux-amd64 start -p NoKubernetes-268643 --no-kubernetes --driver=docker  --container-runtime=docker: (9.112016973s)
--- PASS: TestNoKubernetes/serial/Start (9.11s)

TestNoKubernetes/serial/VerifyK8sNotRunning (0.24s)
=== RUN   TestNoKubernetes/serial/VerifyK8sNotRunning
no_kubernetes_test.go:147: (dbg) Run:  out/minikube-linux-amd64 ssh -p NoKubernetes-268643 "sudo systemctl is-active --quiet service kubelet"
no_kubernetes_test.go:147: (dbg) Non-zero exit: out/minikube-linux-amd64 ssh -p NoKubernetes-268643 "sudo systemctl is-active --quiet service kubelet": exit status 1 (243.530435ms)

** stderr ** 
	ssh: Process exited with status 3

** /stderr **
--- PASS: TestNoKubernetes/serial/VerifyK8sNotRunning (0.24s)
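The "Process exited with status 3" in the stderr above is the exit-code contract this check relies on: `systemctl is-active --quiet` exits 0 only when the unit is active and conventionally 3 when it is inactive, which `minikube ssh` surfaces as its own non-zero exit. A stand-in sketch of that contract (using `sh -c 'exit 3'` in place of a real systemd unit, since no kubelet exists here):

```shell
# Sketch of the exit-code check the test performs over ssh.
# `sh -c 'exit 3'` stands in for `systemctl is-active --quiet kubelet`
# on a node where kubelet is stopped (inactive units exit 3).
check_not_running() {
  sh -c 'exit 3'
  rc=$?
  [ "$rc" -ne 0 ] && echo "kubelet not running (rc=$rc)"
}
check_not_running
```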

TestNoKubernetes/serial/ProfileList (0.95s)
=== RUN   TestNoKubernetes/serial/ProfileList
no_kubernetes_test.go:169: (dbg) Run:  out/minikube-linux-amd64 profile list
no_kubernetes_test.go:179: (dbg) Run:  out/minikube-linux-amd64 profile list --output=json
--- PASS: TestNoKubernetes/serial/ProfileList (0.95s)

TestNoKubernetes/serial/Stop (1.18s)
=== RUN   TestNoKubernetes/serial/Stop
no_kubernetes_test.go:158: (dbg) Run:  out/minikube-linux-amd64 stop -p NoKubernetes-268643
no_kubernetes_test.go:158: (dbg) Done: out/minikube-linux-amd64 stop -p NoKubernetes-268643: (1.180526097s)
--- PASS: TestNoKubernetes/serial/Stop (1.18s)

TestNoKubernetes/serial/StartNoArgs (8.47s)
=== RUN   TestNoKubernetes/serial/StartNoArgs
no_kubernetes_test.go:191: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-268643 --driver=docker  --container-runtime=docker
no_kubernetes_test.go:191: (dbg) Done: out/minikube-linux-amd64 start -p NoKubernetes-268643 --driver=docker  --container-runtime=docker: (8.47206668s)
--- PASS: TestNoKubernetes/serial/StartNoArgs (8.47s)

TestNoKubernetes/serial/VerifyK8sNotRunningSecond (0.27s)
=== RUN   TestNoKubernetes/serial/VerifyK8sNotRunningSecond
no_kubernetes_test.go:147: (dbg) Run:  out/minikube-linux-amd64 ssh -p NoKubernetes-268643 "sudo systemctl is-active --quiet service kubelet"
no_kubernetes_test.go:147: (dbg) Non-zero exit: out/minikube-linux-amd64 ssh -p NoKubernetes-268643 "sudo systemctl is-active --quiet service kubelet": exit status 1 (274.714067ms)

** stderr ** 
	ssh: Process exited with status 3

** /stderr **
--- PASS: TestNoKubernetes/serial/VerifyK8sNotRunningSecond (0.27s)

TestPause/serial/Start (38.98s)
=== RUN   TestPause/serial/Start
pause_test.go:80: (dbg) Run:  out/minikube-linux-amd64 start -p pause-633939 --memory=2048 --install-addons=false --wait=all --driver=docker  --container-runtime=docker
pause_test.go:80: (dbg) Done: out/minikube-linux-amd64 start -p pause-633939 --memory=2048 --install-addons=false --wait=all --driver=docker  --container-runtime=docker: (38.975302904s)
--- PASS: TestPause/serial/Start (38.98s)

TestStoppedBinaryUpgrade/MinikubeLogs (1.17s)
=== RUN   TestStoppedBinaryUpgrade/MinikubeLogs
version_upgrade_test.go:206: (dbg) Run:  out/minikube-linux-amd64 logs -p stopped-upgrade-300527
version_upgrade_test.go:206: (dbg) Done: out/minikube-linux-amd64 logs -p stopped-upgrade-300527: (1.169244472s)
--- PASS: TestStoppedBinaryUpgrade/MinikubeLogs (1.17s)

TestPause/serial/SecondStartNoReconfiguration (30.42s)
=== RUN   TestPause/serial/SecondStartNoReconfiguration
pause_test.go:92: (dbg) Run:  out/minikube-linux-amd64 start -p pause-633939 --alsologtostderr -v=1 --driver=docker  --container-runtime=docker
pause_test.go:92: (dbg) Done: out/minikube-linux-amd64 start -p pause-633939 --alsologtostderr -v=1 --driver=docker  --container-runtime=docker: (30.410909644s)
--- PASS: TestPause/serial/SecondStartNoReconfiguration (30.42s)

TestPause/serial/Pause (0.49s)
=== RUN   TestPause/serial/Pause
pause_test.go:110: (dbg) Run:  out/minikube-linux-amd64 pause -p pause-633939 --alsologtostderr -v=5
--- PASS: TestPause/serial/Pause (0.49s)

TestPause/serial/VerifyStatus (0.29s)
=== RUN   TestPause/serial/VerifyStatus
status_test.go:76: (dbg) Run:  out/minikube-linux-amd64 status -p pause-633939 --output=json --layout=cluster
status_test.go:76: (dbg) Non-zero exit: out/minikube-linux-amd64 status -p pause-633939 --output=json --layout=cluster: exit status 2 (284.972819ms)

-- stdout --
	{"Name":"pause-633939","StatusCode":418,"StatusName":"Paused","Step":"Done","StepDetail":"* Paused 12 containers in: kube-system, kubernetes-dashboard, storage-gluster, istio-operator","BinaryVersion":"v1.34.0","Components":{"kubeconfig":{"Name":"kubeconfig","StatusCode":200,"StatusName":"OK"}},"Nodes":[{"Name":"pause-633939","StatusCode":200,"StatusName":"OK","Components":{"apiserver":{"Name":"apiserver","StatusCode":418,"StatusName":"Paused"},"kubelet":{"Name":"kubelet","StatusCode":405,"StatusName":"Stopped"}}}]}

-- /stdout --
--- PASS: TestPause/serial/VerifyStatus (0.29s)
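The `--layout=cluster` JSON above encodes state as HTTP-like status codes (200 OK, 405 Stopped, 418 Paused), so the exit-status-2 is the expected signal for a paused cluster. A small sketch extracting the top-level code from that line (the literal is trimmed to the top-level fields so a greedy sed match does not hit the nested per-component `StatusCode` entries; sed parsing is illustrative, not the test's method):

```shell
# Extract the top-level StatusCode from the layout JSON in the log
# above (trimmed to top-level fields for this illustration).
layout='{"Name":"pause-633939","StatusCode":418,"StatusName":"Paused"}'
code=$(printf '%s' "$layout" | sed -n 's/.*"StatusCode":\([0-9]*\).*/\1/p')
echo "$code"   # 418 == "Paused"
```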

TestPause/serial/Unpause (0.44s)
=== RUN   TestPause/serial/Unpause
pause_test.go:121: (dbg) Run:  out/minikube-linux-amd64 unpause -p pause-633939 --alsologtostderr -v=5
--- PASS: TestPause/serial/Unpause (0.44s)

TestPause/serial/PauseAgain (0.59s)
=== RUN   TestPause/serial/PauseAgain
pause_test.go:110: (dbg) Run:  out/minikube-linux-amd64 pause -p pause-633939 --alsologtostderr -v=5
--- PASS: TestPause/serial/PauseAgain (0.59s)

TestPause/serial/DeletePaused (2.04s)
=== RUN   TestPause/serial/DeletePaused
pause_test.go:132: (dbg) Run:  out/minikube-linux-amd64 delete -p pause-633939 --alsologtostderr -v=5
pause_test.go:132: (dbg) Done: out/minikube-linux-amd64 delete -p pause-633939 --alsologtostderr -v=5: (2.044689733s)
--- PASS: TestPause/serial/DeletePaused (2.04s)

TestPause/serial/VerifyDeletedResources (2.3s)
=== RUN   TestPause/serial/VerifyDeletedResources
pause_test.go:142: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
pause_test.go:142: (dbg) Done: out/minikube-linux-amd64 profile list --output json: (2.253725757s)
pause_test.go:168: (dbg) Run:  docker ps -a
pause_test.go:173: (dbg) Run:  docker volume inspect pause-633939
pause_test.go:173: (dbg) Non-zero exit: docker volume inspect pause-633939: exit status 1 (15.437385ms)

-- stdout --
	[]

-- /stdout --
** stderr ** 
	Error response from daemon: get pause-633939: no such volume

** /stderr **
pause_test.go:178: (dbg) Run:  docker network ls
--- PASS: TestPause/serial/VerifyDeletedResources (2.30s)

TestNetworkPlugins/group/auto/Start (43s)
=== RUN   TestNetworkPlugins/group/auto/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p auto-617796 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --driver=docker  --container-runtime=docker
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p auto-617796 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --driver=docker  --container-runtime=docker: (42.99671297s)
--- PASS: TestNetworkPlugins/group/auto/Start (43.00s)

TestNetworkPlugins/group/kindnet/Start (58.52s)
=== RUN   TestNetworkPlugins/group/kindnet/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p kindnet-617796 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=kindnet --driver=docker  --container-runtime=docker
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p kindnet-617796 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=kindnet --driver=docker  --container-runtime=docker: (58.522448892s)
--- PASS: TestNetworkPlugins/group/kindnet/Start (58.52s)

TestNetworkPlugins/group/auto/KubeletFlags (0.28s)
=== RUN   TestNetworkPlugins/group/auto/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p auto-617796 "pgrep -a kubelet"
I0918 20:26:04.257761   14329 config.go:182] Loaded profile config "auto-617796": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.31.1
--- PASS: TestNetworkPlugins/group/auto/KubeletFlags (0.28s)

TestNetworkPlugins/group/auto/NetCatPod (10.22s)
=== RUN   TestNetworkPlugins/group/auto/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context auto-617796 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/auto/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-6fc964789b-rzgpn" [865f3c1b-7574-47bc-a952-7cec2c75d5fa] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-6fc964789b-rzgpn" [865f3c1b-7574-47bc-a952-7cec2c75d5fa] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/auto/NetCatPod: app=netcat healthy within 10.004172445s
--- PASS: TestNetworkPlugins/group/auto/NetCatPod (10.22s)

TestNetworkPlugins/group/auto/DNS (0.13s)
=== RUN   TestNetworkPlugins/group/auto/DNS
net_test.go:175: (dbg) Run:  kubectl --context auto-617796 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/auto/DNS (0.13s)

TestNetworkPlugins/group/auto/Localhost (0.11s)
=== RUN   TestNetworkPlugins/group/auto/Localhost
net_test.go:194: (dbg) Run:  kubectl --context auto-617796 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/auto/Localhost (0.11s)

TestNetworkPlugins/group/auto/HairPin (0.12s)
=== RUN   TestNetworkPlugins/group/auto/HairPin
net_test.go:264: (dbg) Run:  kubectl --context auto-617796 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/auto/HairPin (0.12s)

TestNetworkPlugins/group/calico/Start (67.45s)
=== RUN   TestNetworkPlugins/group/calico/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p calico-617796 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=calico --driver=docker  --container-runtime=docker
E0918 20:26:39.481101   14329 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19667-7499/.minikube/profiles/skaffold-715264/client.crt: no such file or directory" logger="UnhandledError"
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p calico-617796 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=calico --driver=docker  --container-runtime=docker: (1m7.45444976s)
--- PASS: TestNetworkPlugins/group/calico/Start (67.45s)

TestNetworkPlugins/group/false/Start (73.49s)
=== RUN   TestNetworkPlugins/group/false/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p false-617796 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=false --driver=docker  --container-runtime=docker
E0918 20:26:51.888915   14329 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19667-7499/.minikube/profiles/addons-457129/client.crt: no such file or directory" logger="UnhandledError"
E0918 20:26:59.962628   14329 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19667-7499/.minikube/profiles/skaffold-715264/client.crt: no such file or directory" logger="UnhandledError"
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p false-617796 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=false --driver=docker  --container-runtime=docker: (1m13.48623184s)
--- PASS: TestNetworkPlugins/group/false/Start (73.49s)

TestNetworkPlugins/group/kindnet/ControllerPod (6.01s)
=== RUN   TestNetworkPlugins/group/kindnet/ControllerPod
net_test.go:120: (dbg) TestNetworkPlugins/group/kindnet/ControllerPod: waiting 10m0s for pods matching "app=kindnet" in namespace "kube-system" ...
helpers_test.go:344: "kindnet-dmtlp" [e9ac2cbf-d62a-40e5-ad29-3032f6f819b5] Running
net_test.go:120: (dbg) TestNetworkPlugins/group/kindnet/ControllerPod: app=kindnet healthy within 6.003967857s
--- PASS: TestNetworkPlugins/group/kindnet/ControllerPod (6.01s)

TestNetworkPlugins/group/kindnet/KubeletFlags (0.29s)
=== RUN   TestNetworkPlugins/group/kindnet/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p kindnet-617796 "pgrep -a kubelet"
I0918 20:27:07.290744   14329 config.go:182] Loaded profile config "kindnet-617796": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.31.1
--- PASS: TestNetworkPlugins/group/kindnet/KubeletFlags (0.29s)

TestNetworkPlugins/group/kindnet/NetCatPod (11.25s)
=== RUN   TestNetworkPlugins/group/kindnet/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context kindnet-617796 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/kindnet/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-6fc964789b-m7w79" [94b27e02-5cea-41a3-90bc-13a62ce99060] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-6fc964789b-m7w79" [94b27e02-5cea-41a3-90bc-13a62ce99060] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/kindnet/NetCatPod: app=netcat healthy within 11.004831673s
--- PASS: TestNetworkPlugins/group/kindnet/NetCatPod (11.25s)

TestNetworkPlugins/group/kindnet/DNS (0.15s)
=== RUN   TestNetworkPlugins/group/kindnet/DNS
net_test.go:175: (dbg) Run:  kubectl --context kindnet-617796 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/kindnet/DNS (0.15s)

TestNetworkPlugins/group/kindnet/Localhost (0.15s)
=== RUN   TestNetworkPlugins/group/kindnet/Localhost
net_test.go:194: (dbg) Run:  kubectl --context kindnet-617796 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/kindnet/Localhost (0.15s)

TestNetworkPlugins/group/kindnet/HairPin (0.14s)
=== RUN   TestNetworkPlugins/group/kindnet/HairPin
net_test.go:264: (dbg) Run:  kubectl --context kindnet-617796 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/kindnet/HairPin (0.14s)

TestNetworkPlugins/group/enable-default-cni/Start (38.75s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p enable-default-cni-617796 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --enable-default-cni=true --driver=docker  --container-runtime=docker
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p enable-default-cni-617796 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --enable-default-cni=true --driver=docker  --container-runtime=docker: (38.750289999s)
--- PASS: TestNetworkPlugins/group/enable-default-cni/Start (38.75s)

TestNetworkPlugins/group/calico/ControllerPod (6.01s)

=== RUN   TestNetworkPlugins/group/calico/ControllerPod
net_test.go:120: (dbg) TestNetworkPlugins/group/calico/ControllerPod: waiting 10m0s for pods matching "k8s-app=calico-node" in namespace "kube-system" ...
E0918 20:27:40.924762   14329 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19667-7499/.minikube/profiles/skaffold-715264/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:344: "calico-node-xwnz4" [61310787-a196-4e85-8611-d26ace44c37d] Running
net_test.go:120: (dbg) TestNetworkPlugins/group/calico/ControllerPod: k8s-app=calico-node healthy within 6.004042354s
--- PASS: TestNetworkPlugins/group/calico/ControllerPod (6.01s)

TestNetworkPlugins/group/calico/KubeletFlags (0.26s)

=== RUN   TestNetworkPlugins/group/calico/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p calico-617796 "pgrep -a kubelet"
I0918 20:27:47.188474   14329 config.go:182] Loaded profile config "calico-617796": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.31.1
--- PASS: TestNetworkPlugins/group/calico/KubeletFlags (0.26s)

TestNetworkPlugins/group/calico/NetCatPod (8.19s)

=== RUN   TestNetworkPlugins/group/calico/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context calico-617796 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/calico/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-6fc964789b-tmc62" [c46e2a5b-c102-4fe2-94ab-6d42298b6778] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-6fc964789b-tmc62" [c46e2a5b-c102-4fe2-94ab-6d42298b6778] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/calico/NetCatPod: app=netcat healthy within 8.004389729s
--- PASS: TestNetworkPlugins/group/calico/NetCatPod (8.19s)

TestNetworkPlugins/group/calico/DNS (0.13s)

=== RUN   TestNetworkPlugins/group/calico/DNS
net_test.go:175: (dbg) Run:  kubectl --context calico-617796 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/calico/DNS (0.13s)

TestNetworkPlugins/group/calico/Localhost (0.11s)

=== RUN   TestNetworkPlugins/group/calico/Localhost
net_test.go:194: (dbg) Run:  kubectl --context calico-617796 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/calico/Localhost (0.11s)

TestNetworkPlugins/group/calico/HairPin (0.12s)

=== RUN   TestNetworkPlugins/group/calico/HairPin
net_test.go:264: (dbg) Run:  kubectl --context calico-617796 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/calico/HairPin (0.12s)

TestNetworkPlugins/group/false/KubeletFlags (0.28s)

=== RUN   TestNetworkPlugins/group/false/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p false-617796 "pgrep -a kubelet"
I0918 20:28:05.316354   14329 config.go:182] Loaded profile config "false-617796": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.31.1
--- PASS: TestNetworkPlugins/group/false/KubeletFlags (0.28s)

TestNetworkPlugins/group/false/NetCatPod (10.25s)

=== RUN   TestNetworkPlugins/group/false/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context false-617796 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/false/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-6fc964789b-56vf8" [8e1480e8-b7c9-414c-9029-4623a868e7a6] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-6fc964789b-56vf8" [8e1480e8-b7c9-414c-9029-4623a868e7a6] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/false/NetCatPod: app=netcat healthy within 10.003971013s
--- PASS: TestNetworkPlugins/group/false/NetCatPod (10.25s)

TestNetworkPlugins/group/flannel/Start (42.97s)

=== RUN   TestNetworkPlugins/group/flannel/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p flannel-617796 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=flannel --driver=docker  --container-runtime=docker
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p flannel-617796 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=flannel --driver=docker  --container-runtime=docker: (42.966532321s)
--- PASS: TestNetworkPlugins/group/flannel/Start (42.97s)

TestNetworkPlugins/group/false/DNS (0.15s)

=== RUN   TestNetworkPlugins/group/false/DNS
net_test.go:175: (dbg) Run:  kubectl --context false-617796 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/false/DNS (0.15s)

TestNetworkPlugins/group/false/Localhost (0.12s)

=== RUN   TestNetworkPlugins/group/false/Localhost
net_test.go:194: (dbg) Run:  kubectl --context false-617796 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/false/Localhost (0.12s)

TestNetworkPlugins/group/false/HairPin (0.12s)

=== RUN   TestNetworkPlugins/group/false/HairPin
net_test.go:264: (dbg) Run:  kubectl --context false-617796 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/false/HairPin (0.12s)

TestNetworkPlugins/group/enable-default-cni/KubeletFlags (0.29s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p enable-default-cni-617796 "pgrep -a kubelet"
I0918 20:28:16.982875   14329 config.go:182] Loaded profile config "enable-default-cni-617796": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.31.1
--- PASS: TestNetworkPlugins/group/enable-default-cni/KubeletFlags (0.29s)

TestNetworkPlugins/group/enable-default-cni/NetCatPod (12.21s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context enable-default-cni-617796 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/enable-default-cni/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-6fc964789b-4gc29" [cb31b261-53e3-449e-a9cd-214746a2c342] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-6fc964789b-4gc29" [cb31b261-53e3-449e-a9cd-214746a2c342] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/enable-default-cni/NetCatPod: app=netcat healthy within 12.004048962s
--- PASS: TestNetworkPlugins/group/enable-default-cni/NetCatPod (12.21s)

TestNetworkPlugins/group/enable-default-cni/DNS (0.15s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/DNS
net_test.go:175: (dbg) Run:  kubectl --context enable-default-cni-617796 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/enable-default-cni/DNS (0.15s)

TestNetworkPlugins/group/enable-default-cni/Localhost (0.13s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/Localhost
net_test.go:194: (dbg) Run:  kubectl --context enable-default-cni-617796 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/enable-default-cni/Localhost (0.13s)

TestNetworkPlugins/group/enable-default-cni/HairPin (0.13s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/HairPin
net_test.go:264: (dbg) Run:  kubectl --context enable-default-cni-617796 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/enable-default-cni/HairPin (0.13s)

TestNetworkPlugins/group/bridge/Start (75.78s)

=== RUN   TestNetworkPlugins/group/bridge/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p bridge-617796 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=bridge --driver=docker  --container-runtime=docker
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p bridge-617796 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=bridge --driver=docker  --container-runtime=docker: (1m15.77610483s)
--- PASS: TestNetworkPlugins/group/bridge/Start (75.78s)

TestNetworkPlugins/group/kubenet/Start (37.91s)

=== RUN   TestNetworkPlugins/group/kubenet/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p kubenet-617796 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --network-plugin=kubenet --driver=docker  --container-runtime=docker
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p kubenet-617796 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --network-plugin=kubenet --driver=docker  --container-runtime=docker: (37.904906797s)
--- PASS: TestNetworkPlugins/group/kubenet/Start (37.91s)

TestNetworkPlugins/group/flannel/ControllerPod (6.01s)

=== RUN   TestNetworkPlugins/group/flannel/ControllerPod
net_test.go:120: (dbg) TestNetworkPlugins/group/flannel/ControllerPod: waiting 10m0s for pods matching "app=flannel" in namespace "kube-flannel" ...
helpers_test.go:344: "kube-flannel-ds-vqbcs" [022178ea-a4ef-4f84-b09e-20cd3019678a] Running
E0918 20:29:02.848694   14329 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19667-7499/.minikube/profiles/skaffold-715264/client.crt: no such file or directory" logger="UnhandledError"
net_test.go:120: (dbg) TestNetworkPlugins/group/flannel/ControllerPod: app=flannel healthy within 6.004800434s
--- PASS: TestNetworkPlugins/group/flannel/ControllerPod (6.01s)

TestNetworkPlugins/group/flannel/KubeletFlags (0.28s)

=== RUN   TestNetworkPlugins/group/flannel/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p flannel-617796 "pgrep -a kubelet"
I0918 20:29:04.708639   14329 config.go:182] Loaded profile config "flannel-617796": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.31.1
--- PASS: TestNetworkPlugins/group/flannel/KubeletFlags (0.28s)

TestNetworkPlugins/group/flannel/NetCatPod (10.20s)

=== RUN   TestNetworkPlugins/group/flannel/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context flannel-617796 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/flannel/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-6fc964789b-gm82k" [7a2050be-ad9d-4ee1-9fd4-2f312e9d7dd5] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-6fc964789b-gm82k" [7a2050be-ad9d-4ee1-9fd4-2f312e9d7dd5] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/flannel/NetCatPod: app=netcat healthy within 10.004215332s
--- PASS: TestNetworkPlugins/group/flannel/NetCatPod (10.20s)

                                                
                                    
x
+
TestNetworkPlugins/group/flannel/DNS (0.15s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/DNS
net_test.go:175: (dbg) Run:  kubectl --context flannel-617796 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/flannel/DNS (0.15s)

TestNetworkPlugins/group/flannel/Localhost (0.12s)

=== RUN   TestNetworkPlugins/group/flannel/Localhost
net_test.go:194: (dbg) Run:  kubectl --context flannel-617796 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/flannel/Localhost (0.12s)

TestNetworkPlugins/group/flannel/HairPin (0.12s)

=== RUN   TestNetworkPlugins/group/flannel/HairPin
net_test.go:264: (dbg) Run:  kubectl --context flannel-617796 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/flannel/HairPin (0.12s)

TestNetworkPlugins/group/kubenet/KubeletFlags (0.28s)

=== RUN   TestNetworkPlugins/group/kubenet/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p kubenet-617796 "pgrep -a kubelet"
I0918 20:29:27.689799   14329 config.go:182] Loaded profile config "kubenet-617796": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.31.1
--- PASS: TestNetworkPlugins/group/kubenet/KubeletFlags (0.28s)

TestNetworkPlugins/group/kubenet/NetCatPod (10.21s)

=== RUN   TestNetworkPlugins/group/kubenet/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context kubenet-617796 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/kubenet/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-6fc964789b-9ht7w" [be6829e4-fc0b-468e-8744-8d68486de416] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-6fc964789b-9ht7w" [be6829e4-fc0b-468e-8744-8d68486de416] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/kubenet/NetCatPod: app=netcat healthy within 10.003642179s
--- PASS: TestNetworkPlugins/group/kubenet/NetCatPod (10.21s)

TestNetworkPlugins/group/custom-flannel/Start (47.69s)

=== RUN   TestNetworkPlugins/group/custom-flannel/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p custom-flannel-617796 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=testdata/kube-flannel.yaml --driver=docker  --container-runtime=docker
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p custom-flannel-617796 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=testdata/kube-flannel.yaml --driver=docker  --container-runtime=docker: (47.685252782s)
--- PASS: TestNetworkPlugins/group/custom-flannel/Start (47.69s)

TestNetworkPlugins/group/kubenet/DNS (0.14s)

=== RUN   TestNetworkPlugins/group/kubenet/DNS
net_test.go:175: (dbg) Run:  kubectl --context kubenet-617796 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/kubenet/DNS (0.14s)

TestNetworkPlugins/group/kubenet/Localhost (0.12s)

=== RUN   TestNetworkPlugins/group/kubenet/Localhost
net_test.go:194: (dbg) Run:  kubectl --context kubenet-617796 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/kubenet/Localhost (0.12s)

TestNetworkPlugins/group/kubenet/HairPin (0.12s)

=== RUN   TestNetworkPlugins/group/kubenet/HairPin
net_test.go:264: (dbg) Run:  kubectl --context kubenet-617796 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/kubenet/HairPin (0.12s)

TestNetworkPlugins/group/bridge/KubeletFlags (0.34s)

=== RUN   TestNetworkPlugins/group/bridge/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p bridge-617796 "pgrep -a kubelet"
I0918 20:29:50.869900   14329 config.go:182] Loaded profile config "bridge-617796": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.31.1
--- PASS: TestNetworkPlugins/group/bridge/KubeletFlags (0.34s)

TestNetworkPlugins/group/bridge/NetCatPod (9.23s)

=== RUN   TestNetworkPlugins/group/bridge/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context bridge-617796 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/bridge/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-6fc964789b-mcdk4" [b15aafd4-6359-46dd-bffb-47d199dd24e2] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
E0918 20:29:54.955203   14329 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19667-7499/.minikube/profiles/addons-457129/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:344: "netcat-6fc964789b-mcdk4" [b15aafd4-6359-46dd-bffb-47d199dd24e2] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/bridge/NetCatPod: app=netcat healthy within 9.004708033s
--- PASS: TestNetworkPlugins/group/bridge/NetCatPod (9.23s)

TestStartStop/group/old-k8s-version/serial/FirstStart (135.86s)

=== RUN   TestStartStop/group/old-k8s-version/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-linux-amd64 start -p old-k8s-version-427176 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=docker --kubernetes-version=v1.20.0
start_stop_delete_test.go:186: (dbg) Done: out/minikube-linux-amd64 start -p old-k8s-version-427176 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=docker --kubernetes-version=v1.20.0: (2m15.856922693s)
--- PASS: TestStartStop/group/old-k8s-version/serial/FirstStart (135.86s)

TestNetworkPlugins/group/bridge/DNS (0.16s)

=== RUN   TestNetworkPlugins/group/bridge/DNS
net_test.go:175: (dbg) Run:  kubectl --context bridge-617796 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/bridge/DNS (0.16s)

TestNetworkPlugins/group/bridge/Localhost (0.13s)

=== RUN   TestNetworkPlugins/group/bridge/Localhost
net_test.go:194: (dbg) Run:  kubectl --context bridge-617796 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/bridge/Localhost (0.13s)

TestNetworkPlugins/group/bridge/HairPin (0.14s)

=== RUN   TestNetworkPlugins/group/bridge/HairPin
net_test.go:264: (dbg) Run:  kubectl --context bridge-617796 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/bridge/HairPin (0.14s)

TestStartStop/group/no-preload/serial/FirstStart (72.91s)

=== RUN   TestStartStop/group/no-preload/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-linux-amd64 start -p no-preload-613707 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=docker --kubernetes-version=v1.31.1
start_stop_delete_test.go:186: (dbg) Done: out/minikube-linux-amd64 start -p no-preload-613707 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=docker --kubernetes-version=v1.31.1: (1m12.911481766s)
--- PASS: TestStartStop/group/no-preload/serial/FirstStart (72.91s)

TestNetworkPlugins/group/custom-flannel/KubeletFlags (0.32s)

=== RUN   TestNetworkPlugins/group/custom-flannel/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p custom-flannel-617796 "pgrep -a kubelet"
I0918 20:30:22.639229   14329 config.go:182] Loaded profile config "custom-flannel-617796": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.31.1
--- PASS: TestNetworkPlugins/group/custom-flannel/KubeletFlags (0.32s)

TestNetworkPlugins/group/custom-flannel/NetCatPod (13.21s)

=== RUN   TestNetworkPlugins/group/custom-flannel/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context custom-flannel-617796 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/custom-flannel/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-6fc964789b-ng6kb" [02c014b3-8c3a-4bde-9565-fd90686284a4] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-6fc964789b-ng6kb" [02c014b3-8c3a-4bde-9565-fd90686284a4] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/custom-flannel/NetCatPod: app=netcat healthy within 13.002847986s
--- PASS: TestNetworkPlugins/group/custom-flannel/NetCatPod (13.21s)

TestStartStop/group/embed-certs/serial/FirstStart (40.71s)

=== RUN   TestStartStop/group/embed-certs/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-linux-amd64 start -p embed-certs-279224 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=docker --kubernetes-version=v1.31.1
start_stop_delete_test.go:186: (dbg) Done: out/minikube-linux-amd64 start -p embed-certs-279224 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=docker --kubernetes-version=v1.31.1: (40.711315832s)
--- PASS: TestStartStop/group/embed-certs/serial/FirstStart (40.71s)

TestNetworkPlugins/group/custom-flannel/DNS (0.14s)

=== RUN   TestNetworkPlugins/group/custom-flannel/DNS
net_test.go:175: (dbg) Run:  kubectl --context custom-flannel-617796 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/custom-flannel/DNS (0.14s)

TestNetworkPlugins/group/custom-flannel/Localhost (0.16s)

=== RUN   TestNetworkPlugins/group/custom-flannel/Localhost
net_test.go:194: (dbg) Run:  kubectl --context custom-flannel-617796 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/custom-flannel/Localhost (0.16s)

TestNetworkPlugins/group/custom-flannel/HairPin (0.13s)

=== RUN   TestNetworkPlugins/group/custom-flannel/HairPin
net_test.go:264: (dbg) Run:  kubectl --context custom-flannel-617796 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/custom-flannel/HairPin (0.13s)
E0918 20:34:33.020962   14329 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19667-7499/.minikube/profiles/kubenet-617796/client.crt: no such file or directory" logger="UnhandledError"
E0918 20:34:38.142372   14329 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19667-7499/.minikube/profiles/kubenet-617796/client.crt: no such file or directory" logger="UnhandledError"
E0918 20:34:39.115915   14329 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19667-7499/.minikube/profiles/enable-default-cni-617796/client.crt: no such file or directory" logger="UnhandledError"
E0918 20:34:39.406731   14329 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19667-7499/.minikube/profiles/flannel-617796/client.crt: no such file or directory" logger="UnhandledError"
E0918 20:34:44.862468   14329 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19667-7499/.minikube/profiles/kindnet-617796/client.crt: no such file or directory" logger="UnhandledError"
E0918 20:34:48.384110   14329 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19667-7499/.minikube/profiles/kubenet-617796/client.crt: no such file or directory" logger="UnhandledError"
E0918 20:34:51.090793   14329 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19667-7499/.minikube/profiles/bridge-617796/client.crt: no such file or directory" logger="UnhandledError"
E0918 20:34:51.097197   14329 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19667-7499/.minikube/profiles/bridge-617796/client.crt: no such file or directory" logger="UnhandledError"
E0918 20:34:51.108561   14329 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19667-7499/.minikube/profiles/bridge-617796/client.crt: no such file or directory" logger="UnhandledError"
E0918 20:34:51.130797   14329 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19667-7499/.minikube/profiles/bridge-617796/client.crt: no such file or directory" logger="UnhandledError"
E0918 20:34:51.172868   14329 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19667-7499/.minikube/profiles/bridge-617796/client.crt: no such file or directory" logger="UnhandledError"
E0918 20:34:51.254357   14329 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19667-7499/.minikube/profiles/bridge-617796/client.crt: no such file or directory" logger="UnhandledError"
E0918 20:34:51.415887   14329 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19667-7499/.minikube/profiles/bridge-617796/client.crt: no such file or directory" logger="UnhandledError"
E0918 20:34:51.737602   14329 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19667-7499/.minikube/profiles/bridge-617796/client.crt: no such file or directory" logger="UnhandledError"
E0918 20:34:52.379762   14329 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19667-7499/.minikube/profiles/bridge-617796/client.crt: no such file or directory" logger="UnhandledError"
E0918 20:34:53.661434   14329 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19667-7499/.minikube/profiles/bridge-617796/client.crt: no such file or directory" logger="UnhandledError"
E0918 20:34:56.223239   14329 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19667-7499/.minikube/profiles/bridge-617796/client.crt: no such file or directory" logger="UnhandledError"
E0918 20:35:01.345055   14329 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19667-7499/.minikube/profiles/bridge-617796/client.crt: no such file or directory" logger="UnhandledError"
E0918 20:35:08.170186   14329 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19667-7499/.minikube/profiles/functional-180257/client.crt: no such file or directory" logger="UnhandledError"
E0918 20:35:08.866132   14329 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19667-7499/.minikube/profiles/kubenet-617796/client.crt: no such file or directory" logger="UnhandledError"
E0918 20:35:11.586547   14329 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19667-7499/.minikube/profiles/bridge-617796/client.crt: no such file or directory" logger="UnhandledError"
E0918 20:35:20.368744   14329 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19667-7499/.minikube/profiles/flannel-617796/client.crt: no such file or directory" logger="UnhandledError"
E0918 20:35:22.841027   14329 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19667-7499/.minikube/profiles/custom-flannel-617796/client.crt: no such file or directory" logger="UnhandledError"
E0918 20:35:22.847462   14329 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19667-7499/.minikube/profiles/custom-flannel-617796/client.crt: no such file or directory" logger="UnhandledError"
E0918 20:35:22.858903   14329 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19667-7499/.minikube/profiles/custom-flannel-617796/client.crt: no such file or directory" logger="UnhandledError"
E0918 20:35:22.880358   14329 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19667-7499/.minikube/profiles/custom-flannel-617796/client.crt: no such file or directory" logger="UnhandledError"
E0918 20:35:22.921745   14329 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19667-7499/.minikube/profiles/custom-flannel-617796/client.crt: no such file or directory" logger="UnhandledError"
E0918 20:35:23.003236   14329 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19667-7499/.minikube/profiles/custom-flannel-617796/client.crt: no such file or directory" logger="UnhandledError"
E0918 20:35:23.164846   14329 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19667-7499/.minikube/profiles/custom-flannel-617796/client.crt: no such file or directory" logger="UnhandledError"
E0918 20:35:23.486550   14329 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19667-7499/.minikube/profiles/custom-flannel-617796/client.crt: no such file or directory" logger="UnhandledError"
E0918 20:35:24.128450   14329 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19667-7499/.minikube/profiles/custom-flannel-617796/client.crt: no such file or directory" logger="UnhandledError"
E0918 20:35:24.782174   14329 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19667-7499/.minikube/profiles/calico-617796/client.crt: no such file or directory" logger="UnhandledError"
E0918 20:35:25.410215   14329 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19667-7499/.minikube/profiles/custom-flannel-617796/client.crt: no such file or directory" logger="UnhandledError"
E0918 20:35:27.972274   14329 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19667-7499/.minikube/profiles/custom-flannel-617796/client.crt: no such file or directory" logger="UnhandledError"
E0918 20:35:32.068282   14329 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19667-7499/.minikube/profiles/bridge-617796/client.crt: no such file or directory" logger="UnhandledError"
E0918 20:35:33.094632   14329 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19667-7499/.minikube/profiles/custom-flannel-617796/client.crt: no such file or directory" logger="UnhandledError"
E0918 20:35:43.336514   14329 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19667-7499/.minikube/profiles/custom-flannel-617796/client.crt: no such file or directory" logger="UnhandledError"

TestStartStop/group/default-k8s-diff-port/serial/FirstStart (61.86s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-linux-amd64 start -p default-k8s-diff-port-471717 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=docker --kubernetes-version=v1.31.1
start_stop_delete_test.go:186: (dbg) Done: out/minikube-linux-amd64 start -p default-k8s-diff-port-471717 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=docker --kubernetes-version=v1.31.1: (1m1.857001181s)
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/FirstStart (61.86s)

TestStartStop/group/embed-certs/serial/DeployApp (10.26s)

=== RUN   TestStartStop/group/embed-certs/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context embed-certs-279224 create -f testdata/busybox.yaml
start_stop_delete_test.go:196: (dbg) TestStartStop/group/embed-certs/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:344: "busybox" [c6203a51-1cbb-4f3e-b8d8-31014a8baba7] Pending
E0918 20:31:04.462486   14329 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19667-7499/.minikube/profiles/auto-617796/client.crt: no such file or directory" logger="UnhandledError"
E0918 20:31:04.468844   14329 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19667-7499/.minikube/profiles/auto-617796/client.crt: no such file or directory" logger="UnhandledError"
E0918 20:31:04.480351   14329 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19667-7499/.minikube/profiles/auto-617796/client.crt: no such file or directory" logger="UnhandledError"
E0918 20:31:04.501740   14329 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19667-7499/.minikube/profiles/auto-617796/client.crt: no such file or directory" logger="UnhandledError"
E0918 20:31:04.543328   14329 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19667-7499/.minikube/profiles/auto-617796/client.crt: no such file or directory" logger="UnhandledError"
E0918 20:31:04.624836   14329 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19667-7499/.minikube/profiles/auto-617796/client.crt: no such file or directory" logger="UnhandledError"
E0918 20:31:04.787161   14329 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19667-7499/.minikube/profiles/auto-617796/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:344: "busybox" [c6203a51-1cbb-4f3e-b8d8-31014a8baba7] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
E0918 20:31:05.109278   14329 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19667-7499/.minikube/profiles/auto-617796/client.crt: no such file or directory" logger="UnhandledError"
E0918 20:31:05.750674   14329 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19667-7499/.minikube/profiles/auto-617796/client.crt: no such file or directory" logger="UnhandledError"
E0918 20:31:07.032797   14329 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19667-7499/.minikube/profiles/auto-617796/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:344: "busybox" [c6203a51-1cbb-4f3e-b8d8-31014a8baba7] Running
E0918 20:31:09.595000   14329 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19667-7499/.minikube/profiles/auto-617796/client.crt: no such file or directory" logger="UnhandledError"
start_stop_delete_test.go:196: (dbg) TestStartStop/group/embed-certs/serial/DeployApp: integration-test=busybox healthy within 10.002806058s
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context embed-certs-279224 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/embed-certs/serial/DeployApp (10.26s)

TestStartStop/group/embed-certs/serial/EnableAddonWhileActive (0.99s)

=== RUN   TestStartStop/group/embed-certs/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p embed-certs-279224 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
E0918 20:31:14.717182   14329 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19667-7499/.minikube/profiles/auto-617796/client.crt: no such file or directory" logger="UnhandledError"
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context embed-certs-279224 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/embed-certs/serial/EnableAddonWhileActive (0.99s)

TestStartStop/group/embed-certs/serial/Stop (10.75s)

=== RUN   TestStartStop/group/embed-certs/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-linux-amd64 stop -p embed-certs-279224 --alsologtostderr -v=3
E0918 20:31:18.986772   14329 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19667-7499/.minikube/profiles/skaffold-715264/client.crt: no such file or directory" logger="UnhandledError"
E0918 20:31:24.959407   14329 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19667-7499/.minikube/profiles/auto-617796/client.crt: no such file or directory" logger="UnhandledError"
start_stop_delete_test.go:228: (dbg) Done: out/minikube-linux-amd64 stop -p embed-certs-279224 --alsologtostderr -v=3: (10.751950894s)
--- PASS: TestStartStop/group/embed-certs/serial/Stop (10.75s)

TestStartStop/group/embed-certs/serial/EnableAddonAfterStop (0.17s)

=== RUN   TestStartStop/group/embed-certs/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-279224 -n embed-certs-279224
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-279224 -n embed-certs-279224: exit status 7 (64.013263ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p embed-certs-279224 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/embed-certs/serial/EnableAddonAfterStop (0.17s)

TestStartStop/group/embed-certs/serial/SecondStart (263.23s)

=== RUN   TestStartStop/group/embed-certs/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-linux-amd64 start -p embed-certs-279224 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=docker --kubernetes-version=v1.31.1
start_stop_delete_test.go:256: (dbg) Done: out/minikube-linux-amd64 start -p embed-certs-279224 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=docker --kubernetes-version=v1.31.1: (4m22.927452987s)
start_stop_delete_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-279224 -n embed-certs-279224
--- PASS: TestStartStop/group/embed-certs/serial/SecondStart (263.23s)

TestStartStop/group/no-preload/serial/DeployApp (9.25s)

=== RUN   TestStartStop/group/no-preload/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context no-preload-613707 create -f testdata/busybox.yaml
start_stop_delete_test.go:196: (dbg) TestStartStop/group/no-preload/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:344: "busybox" [af191ab8-f44f-4aac-9379-4f196d041dd5] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:344: "busybox" [af191ab8-f44f-4aac-9379-4f196d041dd5] Running
start_stop_delete_test.go:196: (dbg) TestStartStop/group/no-preload/serial/DeployApp: integration-test=busybox healthy within 9.003238248s
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context no-preload-613707 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/no-preload/serial/DeployApp (9.25s)

TestStartStop/group/no-preload/serial/EnableAddonWhileActive (1.10s)

=== RUN   TestStartStop/group/no-preload/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p no-preload-613707 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:205: (dbg) Done: out/minikube-linux-amd64 addons enable metrics-server -p no-preload-613707 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: (1.010009196s)
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context no-preload-613707 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/no-preload/serial/EnableAddonWhileActive (1.10s)

TestStartStop/group/no-preload/serial/Stop (10.69s)

=== RUN   TestStartStop/group/no-preload/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-linux-amd64 stop -p no-preload-613707 --alsologtostderr -v=3
E0918 20:31:45.441418   14329 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19667-7499/.minikube/profiles/auto-617796/client.crt: no such file or directory" logger="UnhandledError"
E0918 20:31:46.690328   14329 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19667-7499/.minikube/profiles/skaffold-715264/client.crt: no such file or directory" logger="UnhandledError"
start_stop_delete_test.go:228: (dbg) Done: out/minikube-linux-amd64 stop -p no-preload-613707 --alsologtostderr -v=3: (10.689995966s)
--- PASS: TestStartStop/group/no-preload/serial/Stop (10.69s)

TestStartStop/group/no-preload/serial/EnableAddonAfterStop (0.25s)

=== RUN   TestStartStop/group/no-preload/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-613707 -n no-preload-613707
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-613707 -n no-preload-613707: exit status 7 (141.337486ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p no-preload-613707 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/no-preload/serial/EnableAddonAfterStop (0.25s)

TestStartStop/group/no-preload/serial/SecondStart (262.78s)

=== RUN   TestStartStop/group/no-preload/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-linux-amd64 start -p no-preload-613707 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=docker --kubernetes-version=v1.31.1
E0918 20:31:51.888622   14329 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19667-7499/.minikube/profiles/addons-457129/client.crt: no such file or directory" logger="UnhandledError"
start_stop_delete_test.go:256: (dbg) Done: out/minikube-linux-amd64 start -p no-preload-613707 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=docker --kubernetes-version=v1.31.1: (4m22.475698319s)
start_stop_delete_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-613707 -n no-preload-613707
--- PASS: TestStartStop/group/no-preload/serial/SecondStart (262.78s)

TestStartStop/group/default-k8s-diff-port/serial/DeployApp (8.28s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context default-k8s-diff-port-471717 create -f testdata/busybox.yaml
start_stop_delete_test.go:196: (dbg) TestStartStop/group/default-k8s-diff-port/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:344: "busybox" [49f090e7-886e-4764-bd35-eb89c7cc8460] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
E0918 20:32:01.001157   14329 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19667-7499/.minikube/profiles/kindnet-617796/client.crt: no such file or directory" logger="UnhandledError"
E0918 20:32:01.007596   14329 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19667-7499/.minikube/profiles/kindnet-617796/client.crt: no such file or directory" logger="UnhandledError"
E0918 20:32:01.018991   14329 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19667-7499/.minikube/profiles/kindnet-617796/client.crt: no such file or directory" logger="UnhandledError"
E0918 20:32:01.040420   14329 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19667-7499/.minikube/profiles/kindnet-617796/client.crt: no such file or directory" logger="UnhandledError"
E0918 20:32:01.081850   14329 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19667-7499/.minikube/profiles/kindnet-617796/client.crt: no such file or directory" logger="UnhandledError"
E0918 20:32:01.163470   14329 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19667-7499/.minikube/profiles/kindnet-617796/client.crt: no such file or directory" logger="UnhandledError"
E0918 20:32:01.325395   14329 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19667-7499/.minikube/profiles/kindnet-617796/client.crt: no such file or directory" logger="UnhandledError"
E0918 20:32:01.647119   14329 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19667-7499/.minikube/profiles/kindnet-617796/client.crt: no such file or directory" logger="UnhandledError"
E0918 20:32:02.288573   14329 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19667-7499/.minikube/profiles/kindnet-617796/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:344: "busybox" [49f090e7-886e-4764-bd35-eb89c7cc8460] Running
E0918 20:32:03.570593   14329 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19667-7499/.minikube/profiles/kindnet-617796/client.crt: no such file or directory" logger="UnhandledError"
E0918 20:32:06.132537   14329 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19667-7499/.minikube/profiles/kindnet-617796/client.crt: no such file or directory" logger="UnhandledError"
start_stop_delete_test.go:196: (dbg) TestStartStop/group/default-k8s-diff-port/serial/DeployApp: integration-test=busybox healthy within 8.00460851s
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context default-k8s-diff-port-471717 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/DeployApp (8.28s)

TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive (0.95s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p default-k8s-diff-port-471717 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context default-k8s-diff-port-471717 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive (0.95s)

TestStartStop/group/default-k8s-diff-port/serial/Stop (10.73s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-linux-amd64 stop -p default-k8s-diff-port-471717 --alsologtostderr -v=3
E0918 20:32:11.254210   14329 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19667-7499/.minikube/profiles/kindnet-617796/client.crt: no such file or directory" logger="UnhandledError"
start_stop_delete_test.go:228: (dbg) Done: out/minikube-linux-amd64 stop -p default-k8s-diff-port-471717 --alsologtostderr -v=3: (10.734356378s)
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/Stop (10.73s)

TestStartStop/group/old-k8s-version/serial/DeployApp (9.38s)

=== RUN   TestStartStop/group/old-k8s-version/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context old-k8s-version-427176 create -f testdata/busybox.yaml
start_stop_delete_test.go:196: (dbg) TestStartStop/group/old-k8s-version/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:344: "busybox" [841fa689-c7d6-41e7-9b15-ce5214f7bade] Pending
helpers_test.go:344: "busybox" [841fa689-c7d6-41e7-9b15-ce5214f7bade] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:344: "busybox" [841fa689-c7d6-41e7-9b15-ce5214f7bade] Running
start_stop_delete_test.go:196: (dbg) TestStartStop/group/old-k8s-version/serial/DeployApp: integration-test=busybox healthy within 9.003711634s
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context old-k8s-version-427176 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/old-k8s-version/serial/DeployApp (9.38s)

TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop (0.25s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-471717 -n default-k8s-diff-port-471717
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-471717 -n default-k8s-diff-port-471717: exit status 7 (137.430789ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p default-k8s-diff-port-471717 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop (0.25s)

TestStartStop/group/default-k8s-diff-port/serial/SecondStart (262.85s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-linux-amd64 start -p default-k8s-diff-port-471717 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=docker --kubernetes-version=v1.31.1
E0918 20:32:21.496486   14329 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19667-7499/.minikube/profiles/kindnet-617796/client.crt: no such file or directory" logger="UnhandledError"
start_stop_delete_test.go:256: (dbg) Done: out/minikube-linux-amd64 start -p default-k8s-diff-port-471717 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=docker --kubernetes-version=v1.31.1: (4m22.552285494s)
start_stop_delete_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-471717 -n default-k8s-diff-port-471717
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/SecondStart (262.85s)

TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive (0.74s)

=== RUN   TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p old-k8s-version-427176 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context old-k8s-version-427176 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive (0.74s)

TestStartStop/group/old-k8s-version/serial/Stop (10.87s)

=== RUN   TestStartStop/group/old-k8s-version/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-linux-amd64 stop -p old-k8s-version-427176 --alsologtostderr -v=3
E0918 20:32:26.403454   14329 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19667-7499/.minikube/profiles/auto-617796/client.crt: no such file or directory" logger="UnhandledError"
start_stop_delete_test.go:228: (dbg) Done: out/minikube-linux-amd64 stop -p old-k8s-version-427176 --alsologtostderr -v=3: (10.87062521s)
--- PASS: TestStartStop/group/old-k8s-version/serial/Stop (10.87s)

TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop (0.25s)

=== RUN   TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-427176 -n old-k8s-version-427176
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-427176 -n old-k8s-version-427176: exit status 7 (100.079154ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p old-k8s-version-427176 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop (0.25s)

TestStartStop/group/old-k8s-version/serial/SecondStart (23.49s)

=== RUN   TestStartStop/group/old-k8s-version/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-linux-amd64 start -p old-k8s-version-427176 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=docker --kubernetes-version=v1.20.0
E0918 20:32:40.920724   14329 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19667-7499/.minikube/profiles/calico-617796/client.crt: no such file or directory" logger="UnhandledError"
E0918 20:32:40.927083   14329 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19667-7499/.minikube/profiles/calico-617796/client.crt: no such file or directory" logger="UnhandledError"
E0918 20:32:40.938699   14329 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19667-7499/.minikube/profiles/calico-617796/client.crt: no such file or directory" logger="UnhandledError"
E0918 20:32:40.960761   14329 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19667-7499/.minikube/profiles/calico-617796/client.crt: no such file or directory" logger="UnhandledError"
E0918 20:32:41.002655   14329 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19667-7499/.minikube/profiles/calico-617796/client.crt: no such file or directory" logger="UnhandledError"
E0918 20:32:41.084138   14329 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19667-7499/.minikube/profiles/calico-617796/client.crt: no such file or directory" logger="UnhandledError"
E0918 20:32:41.245770   14329 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19667-7499/.minikube/profiles/calico-617796/client.crt: no such file or directory" logger="UnhandledError"
E0918 20:32:41.568031   14329 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19667-7499/.minikube/profiles/calico-617796/client.crt: no such file or directory" logger="UnhandledError"
E0918 20:32:41.978203   14329 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19667-7499/.minikube/profiles/kindnet-617796/client.crt: no such file or directory" logger="UnhandledError"
E0918 20:32:42.210150   14329 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19667-7499/.minikube/profiles/calico-617796/client.crt: no such file or directory" logger="UnhandledError"
E0918 20:32:43.492118   14329 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19667-7499/.minikube/profiles/calico-617796/client.crt: no such file or directory" logger="UnhandledError"
E0918 20:32:46.053607   14329 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19667-7499/.minikube/profiles/calico-617796/client.crt: no such file or directory" logger="UnhandledError"
E0918 20:32:51.174962   14329 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19667-7499/.minikube/profiles/calico-617796/client.crt: no such file or directory" logger="UnhandledError"
start_stop_delete_test.go:256: (dbg) Done: out/minikube-linux-amd64 start -p old-k8s-version-427176 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=docker --kubernetes-version=v1.20.0: (23.176532971s)
start_stop_delete_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-427176 -n old-k8s-version-427176
--- PASS: TestStartStop/group/old-k8s-version/serial/SecondStart (23.49s)
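The `cert_rotation.go:171` error bursts interleaved above appear to come from client certificates of already-deleted profiles (`calico-617796`, `kindnet-617796`, ...) that the cert watcher still tracks; they are noise relative to the test results. One quick way to summarize them per profile is a shell filter; this is a sketch over three sample lines copied from this log (in practice you would pipe the full report through the same pipeline instead of the heredoc-style variable):

```shell
# Tally "Unhandled Error" cert_rotation lines by minikube profile.
# $log holds three lines copied verbatim from this report.
log='E0918 20:32:40.920724   14329 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19667-7499/.minikube/profiles/calico-617796/client.crt: no such file or directory" logger="UnhandledError"
E0918 20:32:41.978203   14329 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19667-7499/.minikube/profiles/kindnet-617796/client.crt: no such file or directory" logger="UnhandledError"
E0918 20:32:42.210150   14329 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19667-7499/.minikube/profiles/calico-617796/client.crt: no such file or directory" logger="UnhandledError"'

printf '%s\n' "$log" \
  | grep -o 'profiles/[a-z0-9-]*' \
  | sort | uniq -c | sort -rn
# counts per profile: 2 for calico-617796, 1 for kindnet-617796
```

The `grep -o` keeps only the `profiles/<name>` fragment of each path, so `sort | uniq -c | sort -rn` yields a per-profile count with the noisiest profile first.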

TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop (27.01s)

=== RUN   TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop
start_stop_delete_test.go:274: (dbg) TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
E0918 20:33:01.416631   14329 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19667-7499/.minikube/profiles/calico-617796/client.crt: no such file or directory" logger="UnhandledError"
E0918 20:33:05.550943   14329 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19667-7499/.minikube/profiles/false-617796/client.crt: no such file or directory" logger="UnhandledError"
E0918 20:33:05.557304   14329 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19667-7499/.minikube/profiles/false-617796/client.crt: no such file or directory" logger="UnhandledError"
E0918 20:33:05.568749   14329 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19667-7499/.minikube/profiles/false-617796/client.crt: no such file or directory" logger="UnhandledError"
E0918 20:33:05.590158   14329 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19667-7499/.minikube/profiles/false-617796/client.crt: no such file or directory" logger="UnhandledError"
E0918 20:33:05.631597   14329 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19667-7499/.minikube/profiles/false-617796/client.crt: no such file or directory" logger="UnhandledError"
E0918 20:33:05.713538   14329 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19667-7499/.minikube/profiles/false-617796/client.crt: no such file or directory" logger="UnhandledError"
E0918 20:33:05.875096   14329 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19667-7499/.minikube/profiles/false-617796/client.crt: no such file or directory" logger="UnhandledError"
E0918 20:33:06.196829   14329 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19667-7499/.minikube/profiles/false-617796/client.crt: no such file or directory" logger="UnhandledError"
E0918 20:33:06.838132   14329 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19667-7499/.minikube/profiles/false-617796/client.crt: no such file or directory" logger="UnhandledError"
E0918 20:33:08.119750   14329 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19667-7499/.minikube/profiles/false-617796/client.crt: no such file or directory" logger="UnhandledError"
E0918 20:33:10.681470   14329 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19667-7499/.minikube/profiles/false-617796/client.crt: no such file or directory" logger="UnhandledError"
E0918 20:33:11.235839   14329 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19667-7499/.minikube/profiles/functional-180257/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:344: "kubernetes-dashboard-cd95d586-dwkxw" [2c897df3-4f03-47f0-ac33-3b0e3cf1c0ab] Pending / Ready:ContainersNotReady (containers with unready status: [kubernetes-dashboard]) / ContainersReady:ContainersNotReady (containers with unready status: [kubernetes-dashboard])
E0918 20:33:15.803277   14329 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19667-7499/.minikube/profiles/false-617796/client.crt: no such file or directory" logger="UnhandledError"
E0918 20:33:17.178434   14329 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19667-7499/.minikube/profiles/enable-default-cni-617796/client.crt: no such file or directory" logger="UnhandledError"
E0918 20:33:17.184857   14329 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19667-7499/.minikube/profiles/enable-default-cni-617796/client.crt: no such file or directory" logger="UnhandledError"
E0918 20:33:17.196275   14329 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19667-7499/.minikube/profiles/enable-default-cni-617796/client.crt: no such file or directory" logger="UnhandledError"
E0918 20:33:17.217729   14329 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19667-7499/.minikube/profiles/enable-default-cni-617796/client.crt: no such file or directory" logger="UnhandledError"
E0918 20:33:17.259109   14329 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19667-7499/.minikube/profiles/enable-default-cni-617796/client.crt: no such file or directory" logger="UnhandledError"
E0918 20:33:17.340529   14329 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19667-7499/.minikube/profiles/enable-default-cni-617796/client.crt: no such file or directory" logger="UnhandledError"
E0918 20:33:17.502086   14329 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19667-7499/.minikube/profiles/enable-default-cni-617796/client.crt: no such file or directory" logger="UnhandledError"
E0918 20:33:17.824099   14329 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19667-7499/.minikube/profiles/enable-default-cni-617796/client.crt: no such file or directory" logger="UnhandledError"
E0918 20:33:18.465577   14329 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19667-7499/.minikube/profiles/enable-default-cni-617796/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:344: "kubernetes-dashboard-cd95d586-dwkxw" [2c897df3-4f03-47f0-ac33-3b0e3cf1c0ab] Running
E0918 20:33:19.747716   14329 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19667-7499/.minikube/profiles/enable-default-cni-617796/client.crt: no such file or directory" logger="UnhandledError"
E0918 20:33:21.898364   14329 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19667-7499/.minikube/profiles/calico-617796/client.crt: no such file or directory" logger="UnhandledError"
E0918 20:33:22.309044   14329 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19667-7499/.minikube/profiles/enable-default-cni-617796/client.crt: no such file or directory" logger="UnhandledError"
E0918 20:33:22.940399   14329 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19667-7499/.minikube/profiles/kindnet-617796/client.crt: no such file or directory" logger="UnhandledError"
start_stop_delete_test.go:274: (dbg) TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 27.003940522s
--- PASS: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop (27.01s)

TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop (5.07s)

=== RUN   TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop
start_stop_delete_test.go:287: (dbg) TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-cd95d586-dwkxw" [2c897df3-4f03-47f0-ac33-3b0e3cf1c0ab] Running
E0918 20:33:26.045522   14329 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19667-7499/.minikube/profiles/false-617796/client.crt: no such file or directory" logger="UnhandledError"
E0918 20:33:27.430506   14329 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19667-7499/.minikube/profiles/enable-default-cni-617796/client.crt: no such file or directory" logger="UnhandledError"
start_stop_delete_test.go:287: (dbg) TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.003892302s
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context old-k8s-version-427176 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop (5.07s)

TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages (0.21s)

=== RUN   TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-linux-amd64 -p old-k8s-version-427176 image list --format=json
start_stop_delete_test.go:304: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
--- PASS: TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages (0.21s)

TestStartStop/group/old-k8s-version/serial/Pause (2.39s)

=== RUN   TestStartStop/group/old-k8s-version/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 pause -p old-k8s-version-427176 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-427176 -n old-k8s-version-427176
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-427176 -n old-k8s-version-427176: exit status 2 (290.823167ms)

-- stdout --
	Paused

-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p old-k8s-version-427176 -n old-k8s-version-427176
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Kubelet}} -p old-k8s-version-427176 -n old-k8s-version-427176: exit status 2 (291.571789ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 unpause -p old-k8s-version-427176 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-427176 -n old-k8s-version-427176
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p old-k8s-version-427176 -n old-k8s-version-427176
--- PASS: TestStartStop/group/old-k8s-version/serial/Pause (2.39s)
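The Pause sequence above verifies a pause by reading two status fields: after `minikube pause`, `{{.APIServer}}` reports `Paused` and `{{.Kubelet}}` reports `Stopped`, each with exit status 2, which the harness again treats as "(may be ok)". A stub-based sketch of that double probe (`paused_status` is a hypothetical stand-in; no `minikube` binary is assumed):

```shell
# After "minikube pause", both status probes exit 2:
#   status --format={{.APIServer}} -> "Paused",  exit 2
#   status --format={{.Kubelet}}   -> "Stopped", exit 2
paused_status() {   # stand-in for the real status probe on a paused profile
  case $1 in
    APIServer) echo "Paused" ;;
    Kubelet)   echo "Stopped" ;;
  esac
  return 2
}

for field in APIServer Kubelet; do
  state=$(paused_status "$field")
  rc=$?
  [ "$rc" -eq 2 ] && echo "$field=$state: status error: exit status $rc (may be ok)"
done
```

Both iterations hit the exit-2 branch, matching the two "status error: exit status 2 (may be ok)" lines the test logs before it runs `unpause` and re-checks.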

TestStartStop/group/newest-cni/serial/FirstStart (27.93s)

=== RUN   TestStartStop/group/newest-cni/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-linux-amd64 start -p newest-cni-209424 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=docker --kubernetes-version=v1.31.1
E0918 20:33:37.672596   14329 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19667-7499/.minikube/profiles/enable-default-cni-617796/client.crt: no such file or directory" logger="UnhandledError"
E0918 20:33:46.527413   14329 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19667-7499/.minikube/profiles/false-617796/client.crt: no such file or directory" logger="UnhandledError"
E0918 20:33:48.325507   14329 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19667-7499/.minikube/profiles/auto-617796/client.crt: no such file or directory" logger="UnhandledError"
E0918 20:33:58.154212   14329 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19667-7499/.minikube/profiles/enable-default-cni-617796/client.crt: no such file or directory" logger="UnhandledError"
E0918 20:33:58.428771   14329 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19667-7499/.minikube/profiles/flannel-617796/client.crt: no such file or directory" logger="UnhandledError"
E0918 20:33:58.435203   14329 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19667-7499/.minikube/profiles/flannel-617796/client.crt: no such file or directory" logger="UnhandledError"
E0918 20:33:58.446606   14329 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19667-7499/.minikube/profiles/flannel-617796/client.crt: no such file or directory" logger="UnhandledError"
E0918 20:33:58.468729   14329 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19667-7499/.minikube/profiles/flannel-617796/client.crt: no such file or directory" logger="UnhandledError"
E0918 20:33:58.510116   14329 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19667-7499/.minikube/profiles/flannel-617796/client.crt: no such file or directory" logger="UnhandledError"
E0918 20:33:58.591757   14329 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19667-7499/.minikube/profiles/flannel-617796/client.crt: no such file or directory" logger="UnhandledError"
E0918 20:33:58.753314   14329 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19667-7499/.minikube/profiles/flannel-617796/client.crt: no such file or directory" logger="UnhandledError"
E0918 20:33:59.074907   14329 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19667-7499/.minikube/profiles/flannel-617796/client.crt: no such file or directory" logger="UnhandledError"
E0918 20:33:59.716785   14329 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19667-7499/.minikube/profiles/flannel-617796/client.crt: no such file or directory" logger="UnhandledError"
E0918 20:34:00.998758   14329 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19667-7499/.minikube/profiles/flannel-617796/client.crt: no such file or directory" logger="UnhandledError"
E0918 20:34:02.860364   14329 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19667-7499/.minikube/profiles/calico-617796/client.crt: no such file or directory" logger="UnhandledError"
E0918 20:34:03.559950   14329 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19667-7499/.minikube/profiles/flannel-617796/client.crt: no such file or directory" logger="UnhandledError"
start_stop_delete_test.go:186: (dbg) Done: out/minikube-linux-amd64 start -p newest-cni-209424 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=docker --kubernetes-version=v1.31.1: (27.930393388s)
--- PASS: TestStartStop/group/newest-cni/serial/FirstStart (27.93s)

TestStartStop/group/newest-cni/serial/DeployApp (0s)

=== RUN   TestStartStop/group/newest-cni/serial/DeployApp
--- PASS: TestStartStop/group/newest-cni/serial/DeployApp (0.00s)

TestStartStop/group/newest-cni/serial/EnableAddonWhileActive (0.95s)

=== RUN   TestStartStop/group/newest-cni/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p newest-cni-209424 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:211: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/EnableAddonWhileActive (0.95s)

TestStartStop/group/newest-cni/serial/Stop (5.71s)

=== RUN   TestStartStop/group/newest-cni/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-linux-amd64 stop -p newest-cni-209424 --alsologtostderr -v=3
E0918 20:34:08.682960   14329 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19667-7499/.minikube/profiles/flannel-617796/client.crt: no such file or directory" logger="UnhandledError"
start_stop_delete_test.go:228: (dbg) Done: out/minikube-linux-amd64 stop -p newest-cni-209424 --alsologtostderr -v=3: (5.71329923s)
--- PASS: TestStartStop/group/newest-cni/serial/Stop (5.71s)

TestStartStop/group/newest-cni/serial/EnableAddonAfterStop (0.23s)

=== RUN   TestStartStop/group/newest-cni/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p newest-cni-209424 -n newest-cni-209424
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p newest-cni-209424 -n newest-cni-209424: exit status 7 (120.81803ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p newest-cni-209424 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/newest-cni/serial/EnableAddonAfterStop (0.23s)

TestStartStop/group/newest-cni/serial/SecondStart (15.02s)

=== RUN   TestStartStop/group/newest-cni/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-linux-amd64 start -p newest-cni-209424 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=docker --kubernetes-version=v1.31.1
E0918 20:34:18.924871   14329 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19667-7499/.minikube/profiles/flannel-617796/client.crt: no such file or directory" logger="UnhandledError"
start_stop_delete_test.go:256: (dbg) Done: out/minikube-linux-amd64 start -p newest-cni-209424 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=docker --kubernetes-version=v1.31.1: (14.676917501s)
start_stop_delete_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p newest-cni-209424 -n newest-cni-209424
--- PASS: TestStartStop/group/newest-cni/serial/SecondStart (15.02s)

TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop (0s)

=== RUN   TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop
start_stop_delete_test.go:273: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop (0.00s)

TestStartStop/group/newest-cni/serial/AddonExistsAfterStop (0s)

=== RUN   TestStartStop/group/newest-cni/serial/AddonExistsAfterStop
start_stop_delete_test.go:284: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/AddonExistsAfterStop (0.00s)

TestStartStop/group/newest-cni/serial/VerifyKubernetesImages (0.22s)

=== RUN   TestStartStop/group/newest-cni/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-linux-amd64 -p newest-cni-209424 image list --format=json
--- PASS: TestStartStop/group/newest-cni/serial/VerifyKubernetesImages (0.22s)

TestStartStop/group/newest-cni/serial/Pause (2.7s)

=== RUN   TestStartStop/group/newest-cni/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 pause -p newest-cni-209424 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p newest-cni-209424 -n newest-cni-209424
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p newest-cni-209424 -n newest-cni-209424: exit status 2 (292.682792ms)

-- stdout --
	Paused

-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p newest-cni-209424 -n newest-cni-209424
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Kubelet}} -p newest-cni-209424 -n newest-cni-209424: exit status 2 (287.426147ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 unpause -p newest-cni-209424 --alsologtostderr -v=1
E0918 20:34:27.489684   14329 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19667-7499/.minikube/profiles/false-617796/client.crt: no such file or directory" logger="UnhandledError"
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p newest-cni-209424 -n newest-cni-209424
E0918 20:34:27.889136   14329 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19667-7499/.minikube/profiles/kubenet-617796/client.crt: no such file or directory" logger="UnhandledError"
E0918 20:34:27.895513   14329 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19667-7499/.minikube/profiles/kubenet-617796/client.crt: no such file or directory" logger="UnhandledError"
E0918 20:34:27.906944   14329 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19667-7499/.minikube/profiles/kubenet-617796/client.crt: no such file or directory" logger="UnhandledError"
E0918 20:34:27.928560   14329 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19667-7499/.minikube/profiles/kubenet-617796/client.crt: no such file or directory" logger="UnhandledError"
E0918 20:34:27.969987   14329 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19667-7499/.minikube/profiles/kubenet-617796/client.crt: no such file or directory" logger="UnhandledError"
E0918 20:34:28.051443   14329 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19667-7499/.minikube/profiles/kubenet-617796/client.crt: no such file or directory" logger="UnhandledError"
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p newest-cni-209424 -n newest-cni-209424
E0918 20:34:28.213679   14329 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19667-7499/.minikube/profiles/kubenet-617796/client.crt: no such file or directory" logger="UnhandledError"
--- PASS: TestStartStop/group/newest-cni/serial/Pause (2.70s)

TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop (6s)

=== RUN   TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop
start_stop_delete_test.go:274: (dbg) TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-695b96c756-r4459" [ca6583e1-a28b-4981-a0d4-cdebcebd2bce] Running
E0918 20:35:49.411812   14329 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19667-7499/.minikube/profiles/false-617796/client.crt: no such file or directory" logger="UnhandledError"
E0918 20:35:49.827518   14329 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19667-7499/.minikube/profiles/kubenet-617796/client.crt: no such file or directory" logger="UnhandledError"
start_stop_delete_test.go:274: (dbg) TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 6.003005441s
--- PASS: TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop (6.00s)

TestStartStop/group/embed-certs/serial/AddonExistsAfterStop (5.07s)

=== RUN   TestStartStop/group/embed-certs/serial/AddonExistsAfterStop
start_stop_delete_test.go:287: (dbg) TestStartStop/group/embed-certs/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-695b96c756-r4459" [ca6583e1-a28b-4981-a0d4-cdebcebd2bce] Running
start_stop_delete_test.go:287: (dbg) TestStartStop/group/embed-certs/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.004386937s
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context embed-certs-279224 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/embed-certs/serial/AddonExistsAfterStop (5.07s)

TestStartStop/group/embed-certs/serial/VerifyKubernetesImages (0.21s)

=== RUN   TestStartStop/group/embed-certs/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-linux-amd64 -p embed-certs-279224 image list --format=json
start_stop_delete_test.go:304: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
--- PASS: TestStartStop/group/embed-certs/serial/VerifyKubernetesImages (0.21s)

TestStartStop/group/embed-certs/serial/Pause (2.34s)

=== RUN   TestStartStop/group/embed-certs/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 pause -p embed-certs-279224 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p embed-certs-279224 -n embed-certs-279224
E0918 20:36:01.037494   14329 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19667-7499/.minikube/profiles/enable-default-cni-617796/client.crt: no such file or directory" logger="UnhandledError"
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p embed-certs-279224 -n embed-certs-279224: exit status 2 (291.628491ms)

-- stdout --
	Paused

-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p embed-certs-279224 -n embed-certs-279224
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Kubelet}} -p embed-certs-279224 -n embed-certs-279224: exit status 2 (287.835724ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 unpause -p embed-certs-279224 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p embed-certs-279224 -n embed-certs-279224
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p embed-certs-279224 -n embed-certs-279224
--- PASS: TestStartStop/group/embed-certs/serial/Pause (2.34s)

TestStartStop/group/no-preload/serial/UserAppExistsAfterStop (6s)

=== RUN   TestStartStop/group/no-preload/serial/UserAppExistsAfterStop
start_stop_delete_test.go:274: (dbg) TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-695b96c756-gctnn" [b96e3465-f87f-4750-bc50-65dcbcaa8c75] Running
E0918 20:36:18.986550   14329 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19667-7499/.minikube/profiles/skaffold-715264/client.crt: no such file or directory" logger="UnhandledError"
start_stop_delete_test.go:274: (dbg) TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 6.003658759s
--- PASS: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop (6.00s)

TestStartStop/group/no-preload/serial/AddonExistsAfterStop (5.07s)

=== RUN   TestStartStop/group/no-preload/serial/AddonExistsAfterStop
start_stop_delete_test.go:287: (dbg) TestStartStop/group/no-preload/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-695b96c756-gctnn" [b96e3465-f87f-4750-bc50-65dcbcaa8c75] Running
start_stop_delete_test.go:287: (dbg) TestStartStop/group/no-preload/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.004328877s
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context no-preload-613707 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/no-preload/serial/AddonExistsAfterStop (5.07s)

TestStartStop/group/no-preload/serial/VerifyKubernetesImages (0.21s)

=== RUN   TestStartStop/group/no-preload/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-linux-amd64 -p no-preload-613707 image list --format=json
start_stop_delete_test.go:304: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
--- PASS: TestStartStop/group/no-preload/serial/VerifyKubernetesImages (0.21s)

TestStartStop/group/no-preload/serial/Pause (2.34s)

=== RUN   TestStartStop/group/no-preload/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 pause -p no-preload-613707 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p no-preload-613707 -n no-preload-613707
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p no-preload-613707 -n no-preload-613707: exit status 2 (283.917017ms)

-- stdout --
	Paused

-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p no-preload-613707 -n no-preload-613707
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Kubelet}} -p no-preload-613707 -n no-preload-613707: exit status 2 (291.080516ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 unpause -p no-preload-613707 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p no-preload-613707 -n no-preload-613707
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p no-preload-613707 -n no-preload-613707
--- PASS: TestStartStop/group/no-preload/serial/Pause (2.34s)

TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop (6.01s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop
start_stop_delete_test.go:274: (dbg) TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-695b96c756-vz76k" [931e8bb7-fb65-4cf3-8954-c7682c17998d] Running
E0918 20:36:44.779145   14329 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19667-7499/.minikube/profiles/custom-flannel-617796/client.crt: no such file or directory" logger="UnhandledError"
start_stop_delete_test.go:274: (dbg) TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 6.004221806s
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop (6.01s)

TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop (5.07s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop
start_stop_delete_test.go:287: (dbg) TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-695b96c756-vz76k" [931e8bb7-fb65-4cf3-8954-c7682c17998d] Running
E0918 20:36:50.355955   14329 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19667-7499/.minikube/profiles/no-preload-613707/client.crt: no such file or directory" logger="UnhandledError"
E0918 20:36:51.888563   14329 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19667-7499/.minikube/profiles/addons-457129/client.crt: no such file or directory" logger="UnhandledError"
start_stop_delete_test.go:287: (dbg) TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.004334018s
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context default-k8s-diff-port-471717 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop (5.07s)

TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages (0.21s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-linux-amd64 -p default-k8s-diff-port-471717 image list --format=json
start_stop_delete_test.go:304: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages (0.21s)

TestStartStop/group/default-k8s-diff-port/serial/Pause (2.33s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 pause -p default-k8s-diff-port-471717 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p default-k8s-diff-port-471717 -n default-k8s-diff-port-471717
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p default-k8s-diff-port-471717 -n default-k8s-diff-port-471717: exit status 2 (288.294005ms)

-- stdout --
	Paused

-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p default-k8s-diff-port-471717 -n default-k8s-diff-port-471717
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Kubelet}} -p default-k8s-diff-port-471717 -n default-k8s-diff-port-471717: exit status 2 (278.837347ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 unpause -p default-k8s-diff-port-471717 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p default-k8s-diff-port-471717 -n default-k8s-diff-port-471717
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p default-k8s-diff-port-471717 -n default-k8s-diff-port-471717
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/Pause (2.33s)

Test skip (20/343)

TestDownloadOnly/v1.20.0/cached-images (0s)

=== RUN   TestDownloadOnly/v1.20.0/cached-images
aaa_download_only_test.go:129: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.20.0/cached-images (0.00s)

TestDownloadOnly/v1.20.0/binaries (0s)

=== RUN   TestDownloadOnly/v1.20.0/binaries
aaa_download_only_test.go:151: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.20.0/binaries (0.00s)

TestDownloadOnly/v1.20.0/kubectl (0s)

=== RUN   TestDownloadOnly/v1.20.0/kubectl
aaa_download_only_test.go:167: Test for darwin and windows
--- SKIP: TestDownloadOnly/v1.20.0/kubectl (0.00s)

TestDownloadOnly/v1.31.1/cached-images (0s)

=== RUN   TestDownloadOnly/v1.31.1/cached-images
aaa_download_only_test.go:129: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.31.1/cached-images (0.00s)

TestDownloadOnly/v1.31.1/binaries (0s)

=== RUN   TestDownloadOnly/v1.31.1/binaries
aaa_download_only_test.go:151: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.31.1/binaries (0.00s)

TestDownloadOnly/v1.31.1/kubectl (0s)

=== RUN   TestDownloadOnly/v1.31.1/kubectl
aaa_download_only_test.go:167: Test for darwin and windows
--- SKIP: TestDownloadOnly/v1.31.1/kubectl (0.00s)

TestAddons/parallel/Olm (0s)

=== RUN   TestAddons/parallel/Olm
=== PAUSE TestAddons/parallel/Olm

=== CONT  TestAddons/parallel/Olm
addons_test.go:500: Skipping OLM addon test until https://github.com/operator-framework/operator-lifecycle-manager/issues/2534 is resolved
--- SKIP: TestAddons/parallel/Olm (0.00s)

TestDockerEnvContainerd (0s)

=== RUN   TestDockerEnvContainerd
docker_test.go:170: running with docker true linux amd64
docker_test.go:172: skipping: TestDockerEnvContainerd can only be run with the containerd runtime on Docker driver
--- SKIP: TestDockerEnvContainerd (0.00s)

TestHyperKitDriverInstallOrUpdate (0s)

=== RUN   TestHyperKitDriverInstallOrUpdate
driver_install_or_update_test.go:105: Skip if not darwin.
--- SKIP: TestHyperKitDriverInstallOrUpdate (0.00s)

TestHyperkitDriverSkipUpgrade (0s)

=== RUN   TestHyperkitDriverSkipUpgrade
driver_install_or_update_test.go:169: Skip if not darwin.
--- SKIP: TestHyperkitDriverSkipUpgrade (0.00s)

TestFunctional/parallel/PodmanEnv (0s)

=== RUN   TestFunctional/parallel/PodmanEnv
=== PAUSE TestFunctional/parallel/PodmanEnv

=== CONT  TestFunctional/parallel/PodmanEnv
functional_test.go:550: only validate podman env with docker container runtime, currently testing docker
--- SKIP: TestFunctional/parallel/PodmanEnv (0.00s)

TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig (0s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig
functional_test_tunnel_test.go:99: DNS forwarding is only supported for Hyperkit on Darwin, skipping test DNS forwarding
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig (0.00s)

TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil
functional_test_tunnel_test.go:99: DNS forwarding is only supported for Hyperkit on Darwin, skipping test DNS forwarding
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0.00s)

TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS (0s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS
functional_test_tunnel_test.go:99: DNS forwarding is only supported for Hyperkit on Darwin, skipping test DNS forwarding
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS (0.00s)

TestGvisorAddon (0s)

=== RUN   TestGvisorAddon
gvisor_addon_test.go:34: skipping test because --gvisor=false
--- SKIP: TestGvisorAddon (0.00s)

TestImageBuild/serial/validateImageBuildWithBuildEnv (0s)

=== RUN   TestImageBuild/serial/validateImageBuildWithBuildEnv
image_test.go:114: skipping due to https://github.com/kubernetes/minikube/issues/12431
--- SKIP: TestImageBuild/serial/validateImageBuildWithBuildEnv (0.00s)

TestChangeNoneUser (0s)

=== RUN   TestChangeNoneUser
none_test.go:38: Test requires none driver and SUDO_USER env to not be empty
--- SKIP: TestChangeNoneUser (0.00s)

TestScheduledStopWindows (0s)

=== RUN   TestScheduledStopWindows
scheduled_stop_test.go:42: test only runs on windows
--- SKIP: TestScheduledStopWindows (0.00s)

TestNetworkPlugins/group/cilium (3.9s)

=== RUN   TestNetworkPlugins/group/cilium
net_test.go:102: Skipping the test as it's interfering with other tests and is outdated
panic.go:629: 
----------------------- debugLogs start: cilium-617796 [pass: true] --------------------------------
>>> netcat: nslookup kubernetes.default:
Error in configuration: context was not found for specified context: cilium-617796

                                                
                                                

                                                
                                                
>>> netcat: nslookup debug kubernetes.default a-records:
Error in configuration: context was not found for specified context: cilium-617796

                                                
                                                

                                                
                                                
>>> netcat: dig search kubernetes.default:
Error in configuration: context was not found for specified context: cilium-617796

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local udp/53:
Error in configuration: context was not found for specified context: cilium-617796

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local tcp/53:
Error in configuration: context was not found for specified context: cilium-617796

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 udp/53:
Error in configuration: context was not found for specified context: cilium-617796

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 tcp/53:
Error in configuration: context was not found for specified context: cilium-617796

                                                
                                                

                                                
                                                
>>> netcat: /etc/nsswitch.conf:
Error in configuration: context was not found for specified context: cilium-617796

                                                
                                                

                                                
                                                
>>> netcat: /etc/hosts:
Error in configuration: context was not found for specified context: cilium-617796

                                                
                                                

                                                
                                                
>>> netcat: /etc/resolv.conf:
Error in configuration: context was not found for specified context: cilium-617796

                                                
                                                

                                                
                                                
>>> host: /etc/nsswitch.conf:
* Profile "cilium-617796" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-617796"

                                                
                                                

                                                
                                                
>>> host: /etc/hosts:
* Profile "cilium-617796" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-617796"

                                                
                                                

                                                
                                                
>>> host: /etc/resolv.conf:
* Profile "cilium-617796" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-617796"

                                                
                                                

                                                
                                                
>>> k8s: nodes, services, endpoints, daemon sets, deployments and pods, :
Error in configuration: context was not found for specified context: cilium-617796

                                                
                                                

                                                
                                                
>>> host: crictl pods:
* Profile "cilium-617796" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-617796"

>>> host: crictl containers:
* Profile "cilium-617796" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-617796"

>>> k8s: describe netcat deployment:
error: context "cilium-617796" does not exist

>>> k8s: describe netcat pod(s):
error: context "cilium-617796" does not exist

>>> k8s: netcat logs:
error: context "cilium-617796" does not exist

>>> k8s: describe coredns deployment:
error: context "cilium-617796" does not exist

>>> k8s: describe coredns pods:
error: context "cilium-617796" does not exist

>>> k8s: coredns logs:
error: context "cilium-617796" does not exist

>>> k8s: describe api server pod(s):
error: context "cilium-617796" does not exist

>>> k8s: api server logs:
error: context "cilium-617796" does not exist

>>> host: /etc/cni:
* Profile "cilium-617796" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-617796"

>>> host: ip a s:
* Profile "cilium-617796" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-617796"

>>> host: ip r s:
* Profile "cilium-617796" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-617796"

>>> host: iptables-save:
* Profile "cilium-617796" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-617796"

>>> host: iptables table nat:
* Profile "cilium-617796" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-617796"

>>> k8s: describe cilium daemon set:
Error in configuration: context was not found for specified context: cilium-617796

>>> k8s: describe cilium daemon set pod(s):
Error in configuration: context was not found for specified context: cilium-617796

>>> k8s: cilium daemon set container(s) logs (current):
error: context "cilium-617796" does not exist

>>> k8s: cilium daemon set container(s) logs (previous):
error: context "cilium-617796" does not exist

>>> k8s: describe cilium deployment:
Error in configuration: context was not found for specified context: cilium-617796

>>> k8s: describe cilium deployment pod(s):
Error in configuration: context was not found for specified context: cilium-617796

>>> k8s: cilium deployment container(s) logs (current):
error: context "cilium-617796" does not exist

>>> k8s: cilium deployment container(s) logs (previous):
error: context "cilium-617796" does not exist

>>> k8s: describe kube-proxy daemon set:
error: context "cilium-617796" does not exist

>>> k8s: describe kube-proxy pod(s):
error: context "cilium-617796" does not exist

>>> k8s: kube-proxy logs:
error: context "cilium-617796" does not exist

>>> host: kubelet daemon status:
* Profile "cilium-617796" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-617796"

>>> host: kubelet daemon config:
* Profile "cilium-617796" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-617796"

>>> k8s: kubelet logs:
* Profile "cilium-617796" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-617796"

>>> host: /etc/kubernetes/kubelet.conf:
* Profile "cilium-617796" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-617796"

>>> host: /var/lib/kubelet/config.yaml:
* Profile "cilium-617796" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-617796"

>>> k8s: kubectl config:
apiVersion: v1
clusters: null
contexts: null
current-context: ""
kind: Config
preferences: {}
users: null

>>> k8s: cms:
Error in configuration: context was not found for specified context: cilium-617796

>>> host: docker daemon status:
* Profile "cilium-617796" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-617796"

>>> host: docker daemon config:
* Profile "cilium-617796" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-617796"

>>> host: /etc/docker/daemon.json:
* Profile "cilium-617796" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-617796"

>>> host: docker system info:
* Profile "cilium-617796" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-617796"

>>> host: cri-docker daemon status:
* Profile "cilium-617796" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-617796"

>>> host: cri-docker daemon config:
* Profile "cilium-617796" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-617796"

>>> host: /etc/systemd/system/cri-docker.service.d/10-cni.conf:
* Profile "cilium-617796" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-617796"

>>> host: /usr/lib/systemd/system/cri-docker.service:
* Profile "cilium-617796" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-617796"

>>> host: cri-dockerd version:
* Profile "cilium-617796" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-617796"

>>> host: containerd daemon status:
* Profile "cilium-617796" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-617796"

>>> host: containerd daemon config:
* Profile "cilium-617796" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-617796"

>>> host: /lib/systemd/system/containerd.service:
* Profile "cilium-617796" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-617796"

>>> host: /etc/containerd/config.toml:
* Profile "cilium-617796" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-617796"

>>> host: containerd config dump:
* Profile "cilium-617796" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-617796"

>>> host: crio daemon status:
* Profile "cilium-617796" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-617796"

>>> host: crio daemon config:
* Profile "cilium-617796" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-617796"

>>> host: /etc/crio:
* Profile "cilium-617796" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-617796"

>>> host: crio config:
* Profile "cilium-617796" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-617796"

----------------------- debugLogs end: cilium-617796 [took: 3.757008391s] --------------------------------
helpers_test.go:175: Cleaning up "cilium-617796" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p cilium-617796
--- SKIP: TestNetworkPlugins/group/cilium (3.90s)

TestStartStop/group/disable-driver-mounts (0.16s)

=== RUN   TestStartStop/group/disable-driver-mounts
=== PAUSE TestStartStop/group/disable-driver-mounts

=== CONT  TestStartStop/group/disable-driver-mounts
start_stop_delete_test.go:103: skipping TestStartStop/group/disable-driver-mounts - only runs on virtualbox
helpers_test.go:175: Cleaning up "disable-driver-mounts-822004" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p disable-driver-mounts-822004
--- SKIP: TestStartStop/group/disable-driver-mounts (0.16s)