Test Report: Docker_Linux 19662

3f64d3c641e64b460ff7a3cff080aebef74ca5ca:2024-09-17:36258

Tests failed (1/343)

| Order | Failed test                  | Duration |
|-------|------------------------------|----------|
| 33    | TestAddons/parallel/Registry | 72.63s   |
TestAddons/parallel/Registry (72.63s)
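To re-run only this subtest against a local build, the standard go test -run selector applies (a sketch: the ./test/integration path follows the upstream kubernetes/minikube layout, and any suite-specific flags beyond plain go test are assumptions to verify against the repo's docs):

    # Sketch: run just the failing subtest from a kubernetes/minikube checkout,
    # with out/minikube-linux-amd64 already built for the docker driver.
    go test -v -timeout 30m ./test/integration -run 'TestAddons/parallel/Registry'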

=== RUN   TestAddons/parallel/Registry
=== PAUSE TestAddons/parallel/Registry

=== CONT  TestAddons/parallel/Registry
addons_test.go:332: registry stabilized in 2.440974ms
addons_test.go:334: (dbg) TestAddons/parallel/Registry: waiting 6m0s for pods matching "actual-registry=true" in namespace "kube-system" ...
helpers_test.go:344: "registry-66c9cd494c-xnftt" [87171e43-6b56-423a-ac20-6b46a3583197] Running
addons_test.go:334: (dbg) TestAddons/parallel/Registry: actual-registry=true healthy within 5.002463391s
addons_test.go:337: (dbg) TestAddons/parallel/Registry: waiting 10m0s for pods matching "registry-proxy=true" in namespace "kube-system" ...
helpers_test.go:344: "registry-proxy-9ztsk" [de43c7a6-1992-4444-969d-d41949e06cdb] Running
addons_test.go:337: (dbg) TestAddons/parallel/Registry: registry-proxy=true healthy within 5.002763958s
addons_test.go:342: (dbg) Run:  kubectl --context addons-163060 delete po -l run=registry-test --now
addons_test.go:347: (dbg) Run:  kubectl --context addons-163060 run --rm registry-test --restart=Never --image=gcr.io/k8s-minikube/busybox -it -- sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local"
addons_test.go:347: (dbg) Non-zero exit: kubectl --context addons-163060 run --rm registry-test --restart=Never --image=gcr.io/k8s-minikube/busybox -it -- sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local": exit status 1 (1m0.077642698s)

-- stdout --
	pod "registry-test" deleted

-- /stdout --
** stderr ** 
	error: timed out waiting for the condition

** /stderr **
addons_test.go:349: failed to hit registry.kube-system.svc.cluster.local. args "kubectl --context addons-163060 run --rm registry-test --restart=Never --image=gcr.io/k8s-minikube/busybox -it -- sh -c \"wget --spider -S http://registry.kube-system.svc.cluster.local\"" failed: exit status 1
addons_test.go:353: expected curl response be "HTTP/1.1 200", but got *pod "registry-test" deleted
*
addons_test.go:361: (dbg) Run:  out/minikube-linux-amd64 -p addons-163060 ip
addons_test.go:390: (dbg) Run:  out/minikube-linux-amd64 -p addons-163060 addons disable registry --alsologtostderr -v=1
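The failing step is the in-cluster wget probe above: the registry pods report Running, yet HTTP to registry.kube-system.svc.cluster.local times out after 1m0s. Before wading through the post-mortem dump below, the probe can be replayed by hand against the same profile (a sketch; the Service name "registry" in kube-system is inferred from the DNS name the test hits):

    # Replay the exact probe from the log above.
    kubectl --context addons-163060 run --rm registry-test --restart=Never \
      --image=gcr.io/k8s-minikube/busybox -it -- \
      sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local"

    # If it times out again, check that the Service has ready endpoints...
    kubectl --context addons-163060 -n kube-system get svc,endpoints registry

    # ...and that in-cluster DNS resolves the name at all.
    kubectl --context addons-163060 run --rm dns-test --restart=Never \
      --image=gcr.io/k8s-minikube/busybox -it -- \
      nslookup registry.kube-system.svc.cluster.local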
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestAddons/parallel/Registry]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect addons-163060
helpers_test.go:235: (dbg) docker inspect addons-163060:

-- stdout --
	[
	    {
	        "Id": "7a802f428c5e52f6745f399055a29e1c0f2dbe4f3db2c58b1b9fecaed240bff3",
	        "Created": "2024-09-17T16:56:27.16599634Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 20900,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2024-09-17T16:56:27.297240995Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:bb3bcbaabeeeadbf6b43ae7d1d07e504b3c8a94ec024df89bcb237eba4f5e9b3",
	        "ResolvConfPath": "/var/lib/docker/containers/7a802f428c5e52f6745f399055a29e1c0f2dbe4f3db2c58b1b9fecaed240bff3/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/7a802f428c5e52f6745f399055a29e1c0f2dbe4f3db2c58b1b9fecaed240bff3/hostname",
	        "HostsPath": "/var/lib/docker/containers/7a802f428c5e52f6745f399055a29e1c0f2dbe4f3db2c58b1b9fecaed240bff3/hosts",
	        "LogPath": "/var/lib/docker/containers/7a802f428c5e52f6745f399055a29e1c0f2dbe4f3db2c58b1b9fecaed240bff3/7a802f428c5e52f6745f399055a29e1c0f2dbe4f3db2c58b1b9fecaed240bff3-json.log",
	        "Name": "/addons-163060",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "addons-163060:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {
	                    "max-size": "100m"
	                }
	            },
	            "NetworkMode": "addons-163060",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 4194304000,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 8388608000,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "LowerDir": "/var/lib/docker/overlay2/b64b7857ceff83d215ea9bfa6f3ef683bfe63f9513851f53c908ce23e4bdb801-init/diff:/var/lib/docker/overlay2/03f685b8c3eedc410fe49fd5865e32dca92633e19bab382ce7cf454aa3c4e4e2/diff",
	                "MergedDir": "/var/lib/docker/overlay2/b64b7857ceff83d215ea9bfa6f3ef683bfe63f9513851f53c908ce23e4bdb801/merged",
	                "UpperDir": "/var/lib/docker/overlay2/b64b7857ceff83d215ea9bfa6f3ef683bfe63f9513851f53c908ce23e4bdb801/diff",
	                "WorkDir": "/var/lib/docker/overlay2/b64b7857ceff83d215ea9bfa6f3ef683bfe63f9513851f53c908ce23e4bdb801/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "addons-163060",
	                "Source": "/var/lib/docker/volumes/addons-163060/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "addons-163060",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726589491-19662@sha256:6370b9fec173944088c2d87d44b01819c0ec611a83d9e2f38d36352dff8121a4",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "addons-163060",
	                "name.minikube.sigs.k8s.io": "addons-163060",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "1baaaa9a9aefb5e6ad0de626340cf38f2a1dbaffaa513a1fa3fbe0b65e3c2f1c",
	            "SandboxKey": "/var/run/docker/netns/1baaaa9a9aef",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32768"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32769"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32772"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32770"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32771"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "addons-163060": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.49.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "02:42:c0:a8:31:02",
	                    "DriverOpts": null,
	                    "NetworkID": "e13fb16c4676d315fab78c48ec3fa5ecd124d207fc37bae5ebb0dd7c50aa3999",
	                    "EndpointID": "940717b186675aeb8ea59b0912119d652d86a69479c103cee94f6f00614b5331",
	                    "Gateway": "192.168.49.1",
	                    "IPAddress": "192.168.49.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "addons-163060",
	                        "7a802f428c5e"
	                    ]
	                }
	            }
	        }
	    }
	]

-- /stdout --
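The inspect dump above shows the node container itself is healthy (State.Status "running", static IP 192.168.49.2, all five ports published to 127.0.0.1). When only a few fields matter, docker inspect's standard --format/-f Go-template flag pulls them directly, e.g.:

    # Container state and PID of the minikube node container.
    docker inspect -f '{{.State.Status}} pid={{.State.Pid}}' addons-163060

    # Host ports published for 22, 2376, 5000, 8443 and 32443.
    docker inspect -f '{{json .NetworkSettings.Ports}}' addons-163060

    # Static IP on the addons-163060 network (192.168.49.2 above).
    docker inspect -f '{{range .NetworkSettings.Networks}}{{.IPAddress}}{{end}}' addons-163060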
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p addons-163060 -n addons-163060
helpers_test.go:244: <<< TestAddons/parallel/Registry FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestAddons/parallel/Registry]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p addons-163060 logs -n 25
helpers_test.go:252: TestAddons/parallel/Registry logs: 
-- stdout --
	
	==> Audit <==
	|---------|---------------------------------------------------------------------------------------------|------------------------|---------|---------|---------------------|---------------------|
	| Command |                                            Args                                             |        Profile         |  User   | Version |     Start Time      |      End Time       |
	|---------|---------------------------------------------------------------------------------------------|------------------------|---------|---------|---------------------|---------------------|
	| delete  | -p download-docker-967799                                                                   | download-docker-967799 | jenkins | v1.34.0 | 17 Sep 24 16:56 UTC | 17 Sep 24 16:56 UTC |
	| start   | --download-only -p                                                                          | binary-mirror-225195   | jenkins | v1.34.0 | 17 Sep 24 16:56 UTC |                     |
	|         | binary-mirror-225195                                                                        |                        |         |         |                     |                     |
	|         | --alsologtostderr                                                                           |                        |         |         |                     |                     |
	|         | --binary-mirror                                                                             |                        |         |         |                     |                     |
	|         | http://127.0.0.1:45015                                                                      |                        |         |         |                     |                     |
	|         | --driver=docker                                                                             |                        |         |         |                     |                     |
	|         | --container-runtime=docker                                                                  |                        |         |         |                     |                     |
	| delete  | -p binary-mirror-225195                                                                     | binary-mirror-225195   | jenkins | v1.34.0 | 17 Sep 24 16:56 UTC | 17 Sep 24 16:56 UTC |
	| addons  | enable dashboard -p                                                                         | addons-163060          | jenkins | v1.34.0 | 17 Sep 24 16:56 UTC |                     |
	|         | addons-163060                                                                               |                        |         |         |                     |                     |
	| addons  | disable dashboard -p                                                                        | addons-163060          | jenkins | v1.34.0 | 17 Sep 24 16:56 UTC |                     |
	|         | addons-163060                                                                               |                        |         |         |                     |                     |
	| start   | -p addons-163060 --wait=true                                                                | addons-163060          | jenkins | v1.34.0 | 17 Sep 24 16:56 UTC | 17 Sep 24 16:59 UTC |
	|         | --memory=4000 --alsologtostderr                                                             |                        |         |         |                     |                     |
	|         | --addons=registry                                                                           |                        |         |         |                     |                     |
	|         | --addons=metrics-server                                                                     |                        |         |         |                     |                     |
	|         | --addons=volumesnapshots                                                                    |                        |         |         |                     |                     |
	|         | --addons=csi-hostpath-driver                                                                |                        |         |         |                     |                     |
	|         | --addons=gcp-auth                                                                           |                        |         |         |                     |                     |
	|         | --addons=cloud-spanner                                                                      |                        |         |         |                     |                     |
	|         | --addons=inspektor-gadget                                                                   |                        |         |         |                     |                     |
	|         | --addons=storage-provisioner-rancher                                                        |                        |         |         |                     |                     |
	|         | --addons=nvidia-device-plugin                                                               |                        |         |         |                     |                     |
	|         | --addons=yakd --addons=volcano                                                              |                        |         |         |                     |                     |
	|         | --driver=docker                                                                             |                        |         |         |                     |                     |
	|         | --container-runtime=docker                                                                  |                        |         |         |                     |                     |
	|         | --addons=ingress                                                                            |                        |         |         |                     |                     |
	|         | --addons=ingress-dns                                                                        |                        |         |         |                     |                     |
	|         | --addons=helm-tiller                                                                        |                        |         |         |                     |                     |
	| addons  | addons-163060 addons disable                                                                | addons-163060          | jenkins | v1.34.0 | 17 Sep 24 17:00 UTC | 17 Sep 24 17:00 UTC |
	|         | volcano --alsologtostderr -v=1                                                              |                        |         |         |                     |                     |
	| addons  | disable inspektor-gadget -p                                                                 | addons-163060          | jenkins | v1.34.0 | 17 Sep 24 17:08 UTC | 17 Sep 24 17:08 UTC |
	|         | addons-163060                                                                               |                        |         |         |                     |                     |
	| addons  | addons-163060 addons                                                                        | addons-163060          | jenkins | v1.34.0 | 17 Sep 24 17:08 UTC | 17 Sep 24 17:08 UTC |
	|         | disable metrics-server                                                                      |                        |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                        |         |         |                     |                     |
	| addons  | addons-163060 addons disable                                                                | addons-163060          | jenkins | v1.34.0 | 17 Sep 24 17:08 UTC | 17 Sep 24 17:08 UTC |
	|         | yakd --alsologtostderr -v=1                                                                 |                        |         |         |                     |                     |
	| addons  | disable nvidia-device-plugin                                                                | addons-163060          | jenkins | v1.34.0 | 17 Sep 24 17:08 UTC | 17 Sep 24 17:08 UTC |
	|         | -p addons-163060                                                                            |                        |         |         |                     |                     |
	| addons  | disable cloud-spanner -p                                                                    | addons-163060          | jenkins | v1.34.0 | 17 Sep 24 17:08 UTC | 17 Sep 24 17:08 UTC |
	|         | addons-163060                                                                               |                        |         |         |                     |                     |
	| addons  | enable headlamp                                                                             | addons-163060          | jenkins | v1.34.0 | 17 Sep 24 17:08 UTC | 17 Sep 24 17:08 UTC |
	|         | -p addons-163060                                                                            |                        |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                        |         |         |                     |                     |
	| ssh     | addons-163060 ssh cat                                                                       | addons-163060          | jenkins | v1.34.0 | 17 Sep 24 17:08 UTC | 17 Sep 24 17:08 UTC |
	|         | /opt/local-path-provisioner/pvc-6b40e24e-ff27-49e1-a0af-4a3320a2542e_default_test-pvc/file1 |                        |         |         |                     |                     |
	| addons  | addons-163060 addons disable                                                                | addons-163060          | jenkins | v1.34.0 | 17 Sep 24 17:08 UTC | 17 Sep 24 17:09 UTC |
	|         | storage-provisioner-rancher                                                                 |                        |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                        |         |         |                     |                     |
	| addons  | addons-163060 addons disable                                                                | addons-163060          | jenkins | v1.34.0 | 17 Sep 24 17:08 UTC | 17 Sep 24 17:08 UTC |
	|         | headlamp --alsologtostderr                                                                  |                        |         |         |                     |                     |
	|         | -v=1                                                                                        |                        |         |         |                     |                     |
	| ssh     | addons-163060 ssh curl -s                                                                   | addons-163060          | jenkins | v1.34.0 | 17 Sep 24 17:09 UTC | 17 Sep 24 17:09 UTC |
	|         | http://127.0.0.1/ -H 'Host:                                                                 |                        |         |         |                     |                     |
	|         | nginx.example.com'                                                                          |                        |         |         |                     |                     |
	| ip      | addons-163060 ip                                                                            | addons-163060          | jenkins | v1.34.0 | 17 Sep 24 17:09 UTC | 17 Sep 24 17:09 UTC |
	| addons  | addons-163060 addons disable                                                                | addons-163060          | jenkins | v1.34.0 | 17 Sep 24 17:09 UTC | 17 Sep 24 17:09 UTC |
	|         | ingress-dns --alsologtostderr                                                               |                        |         |         |                     |                     |
	|         | -v=1                                                                                        |                        |         |         |                     |                     |
	| addons  | addons-163060 addons disable                                                                | addons-163060          | jenkins | v1.34.0 | 17 Sep 24 17:09 UTC | 17 Sep 24 17:09 UTC |
	|         | ingress --alsologtostderr -v=1                                                              |                        |         |         |                     |                     |
	| addons  | addons-163060 addons                                                                        | addons-163060          | jenkins | v1.34.0 | 17 Sep 24 17:09 UTC | 17 Sep 24 17:09 UTC |
	|         | disable csi-hostpath-driver                                                                 |                        |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                        |         |         |                     |                     |
	| addons  | addons-163060 addons                                                                        | addons-163060          | jenkins | v1.34.0 | 17 Sep 24 17:09 UTC | 17 Sep 24 17:09 UTC |
	|         | disable volumesnapshots                                                                     |                        |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                        |         |         |                     |                     |
	| ip      | addons-163060 ip                                                                            | addons-163060          | jenkins | v1.34.0 | 17 Sep 24 17:09 UTC | 17 Sep 24 17:09 UTC |
	| addons  | addons-163060 addons disable                                                                | addons-163060          | jenkins | v1.34.0 | 17 Sep 24 17:09 UTC | 17 Sep 24 17:09 UTC |
	|         | helm-tiller --alsologtostderr                                                               |                        |         |         |                     |                     |
	|         | -v=1                                                                                        |                        |         |         |                     |                     |
	| addons  | addons-163060 addons disable                                                                | addons-163060          | jenkins | v1.34.0 | 17 Sep 24 17:09 UTC | 17 Sep 24 17:09 UTC |
	|         | registry --alsologtostderr                                                                  |                        |         |         |                     |                     |
	|         | -v=1                                                                                        |                        |         |         |                     |                     |
	|---------|---------------------------------------------------------------------------------------------|------------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/09/17 16:56:05
	Running on machine: ubuntu-20-agent-10
	Binary: Built with gc go1.23.0 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0917 16:56:05.429771   20150 out.go:345] Setting OutFile to fd 1 ...
	I0917 16:56:05.429871   20150 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0917 16:56:05.429882   20150 out.go:358] Setting ErrFile to fd 2...
	I0917 16:56:05.429889   20150 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0917 16:56:05.430090   20150 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19662-12004/.minikube/bin
	I0917 16:56:05.430715   20150 out.go:352] Setting JSON to false
	I0917 16:56:05.431645   20150 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-10","uptime":2306,"bootTime":1726589859,"procs":170,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1069-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0917 16:56:05.431741   20150 start.go:139] virtualization: kvm guest
	I0917 16:56:05.433780   20150 out.go:177] * [addons-163060] minikube v1.34.0 on Ubuntu 20.04 (kvm/amd64)
	I0917 16:56:05.435191   20150 notify.go:220] Checking for updates...
	I0917 16:56:05.435203   20150 out.go:177]   - MINIKUBE_LOCATION=19662
	I0917 16:56:05.436538   20150 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0917 16:56:05.437862   20150 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19662-12004/kubeconfig
	I0917 16:56:05.439080   20150 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19662-12004/.minikube
	I0917 16:56:05.440324   20150 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0917 16:56:05.441547   20150 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0917 16:56:05.442990   20150 driver.go:394] Setting default libvirt URI to qemu:///system
	I0917 16:56:05.463903   20150 docker.go:123] docker version: linux-27.2.1:Docker Engine - Community
	I0917 16:56:05.463988   20150 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0917 16:56:05.508885   20150 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:26 OomKillDisable:true NGoroutines:45 SystemTime:2024-09-17 16:56:05.500245979 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1069-gcp OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33647943680 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-10 Labels:[] ExperimentalBuild:false ServerVersion:27.2.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:7f7fdf5fed64eb6a7caf99b3e12efcf9d60e311c Expected:7f7fdf5fed64eb6a7caf99b3e12efcf9d60e311c} RuncCommit:{ID:v1.1.14-0-g2c9f560 Expected:v1.1.14-0-g2c9f560} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.16.2] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.29.2] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I0917 16:56:05.508992   20150 docker.go:318] overlay module found
	I0917 16:56:05.510795   20150 out.go:177] * Using the docker driver based on user configuration
	I0917 16:56:05.511849   20150 start.go:297] selected driver: docker
	I0917 16:56:05.511863   20150 start.go:901] validating driver "docker" against <nil>
	I0917 16:56:05.511876   20150 start.go:912] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0917 16:56:05.512818   20150 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0917 16:56:05.554582   20150 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:26 OomKillDisable:true NGoroutines:45 SystemTime:2024-09-17 16:56:05.546303107 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1069-gcp OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33647943680 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-10 Labels:[] ExperimentalBuild:false ServerVersion:27.2.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:7f7fdf5fed64eb6a7caf99b3e12efcf9d60e311c Expected:7f7fdf5fed64eb6a7caf99b3e12efcf9d60e311c} RuncCommit:{ID:v1.1.14-0-g2c9f560 Expected:v1.1.14-0-g2c9f560} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.16.2] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.29.2] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I0917 16:56:05.554745   20150 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0917 16:56:05.554985   20150 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0917 16:56:05.556575   20150 out.go:177] * Using Docker driver with root privileges
	I0917 16:56:05.557944   20150 cni.go:84] Creating CNI manager for ""
	I0917 16:56:05.557994   20150 cni.go:158] "docker" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0917 16:56:05.558005   20150 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0917 16:56:05.558058   20150 start.go:340] cluster config:
	{Name:addons-163060 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726589491-19662@sha256:6370b9fec173944088c2d87d44b01819c0ec611a83d9e2f38d36352dff8121a4 Memory:4000 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:addons-163060 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0917 16:56:05.559264   20150 out.go:177] * Starting "addons-163060" primary control-plane node in "addons-163060" cluster
	I0917 16:56:05.560282   20150 cache.go:121] Beginning downloading kic base image for docker with docker
	I0917 16:56:05.561402   20150 out.go:177] * Pulling base image v0.0.45-1726589491-19662 ...
	I0917 16:56:05.562546   20150 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime docker
	I0917 16:56:05.562571   20150 preload.go:146] Found local preload: /home/jenkins/minikube-integration/19662-12004/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-docker-overlay2-amd64.tar.lz4
	I0917 16:56:05.562588   20150 cache.go:56] Caching tarball of preloaded images
	I0917 16:56:05.562636   20150 image.go:79] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726589491-19662@sha256:6370b9fec173944088c2d87d44b01819c0ec611a83d9e2f38d36352dff8121a4 in local docker daemon
	I0917 16:56:05.562670   20150 preload.go:172] Found /home/jenkins/minikube-integration/19662-12004/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I0917 16:56:05.562681   20150 cache.go:59] Finished verifying existence of preloaded tar for v1.31.1 on docker
	I0917 16:56:05.563063   20150 profile.go:143] Saving config to /home/jenkins/minikube-integration/19662-12004/.minikube/profiles/addons-163060/config.json ...
	I0917 16:56:05.563089   20150 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19662-12004/.minikube/profiles/addons-163060/config.json: {Name:mkf8815af0780b232816101da29cb7accc3725b1 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0917 16:56:05.578629   20150 cache.go:149] Downloading gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726589491-19662@sha256:6370b9fec173944088c2d87d44b01819c0ec611a83d9e2f38d36352dff8121a4 to local cache
	I0917 16:56:05.578739   20150 image.go:63] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726589491-19662@sha256:6370b9fec173944088c2d87d44b01819c0ec611a83d9e2f38d36352dff8121a4 in local cache directory
	I0917 16:56:05.578761   20150 image.go:66] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726589491-19662@sha256:6370b9fec173944088c2d87d44b01819c0ec611a83d9e2f38d36352dff8121a4 in local cache directory, skipping pull
	I0917 16:56:05.578768   20150 image.go:135] gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726589491-19662@sha256:6370b9fec173944088c2d87d44b01819c0ec611a83d9e2f38d36352dff8121a4 exists in cache, skipping pull
	I0917 16:56:05.578780   20150 cache.go:152] successfully saved gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726589491-19662@sha256:6370b9fec173944088c2d87d44b01819c0ec611a83d9e2f38d36352dff8121a4 as a tarball
	I0917 16:56:05.578789   20150 cache.go:162] Loading gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726589491-19662@sha256:6370b9fec173944088c2d87d44b01819c0ec611a83d9e2f38d36352dff8121a4 from local cache
	I0917 16:56:17.946938   20150 cache.go:164] successfully loaded and using gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726589491-19662@sha256:6370b9fec173944088c2d87d44b01819c0ec611a83d9e2f38d36352dff8121a4 from cached tarball
	I0917 16:56:17.946992   20150 cache.go:194] Successfully downloaded all kic artifacts
	I0917 16:56:17.947029   20150 start.go:360] acquireMachinesLock for addons-163060: {Name:mk5177c6da83c393abee9e5f56591d5371bba180 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0917 16:56:17.947135   20150 start.go:364] duration metric: took 83.16µs to acquireMachinesLock for "addons-163060"
	I0917 16:56:17.947166   20150 start.go:93] Provisioning new machine with config: &{Name:addons-163060 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726589491-19662@sha256:6370b9fec173944088c2d87d44b01819c0ec611a83d9e2f38d36352dff8121a4 Memory:4000 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:addons-163060 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0917 16:56:17.947249   20150 start.go:125] createHost starting for "" (driver="docker")
	I0917 16:56:17.950307   20150 out.go:235] * Creating docker container (CPUs=2, Memory=4000MB) ...
	I0917 16:56:17.950564   20150 start.go:159] libmachine.API.Create for "addons-163060" (driver="docker")
	I0917 16:56:17.950622   20150 client.go:168] LocalClient.Create starting
	I0917 16:56:17.950722   20150 main.go:141] libmachine: Creating CA: /home/jenkins/minikube-integration/19662-12004/.minikube/certs/ca.pem
	I0917 16:56:18.222813   20150 main.go:141] libmachine: Creating client certificate: /home/jenkins/minikube-integration/19662-12004/.minikube/certs/cert.pem
	I0917 16:56:18.282861   20150 cli_runner.go:164] Run: docker network inspect addons-163060 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W0917 16:56:18.298058   20150 cli_runner.go:211] docker network inspect addons-163060 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I0917 16:56:18.298118   20150 network_create.go:284] running [docker network inspect addons-163060] to gather additional debugging logs...
	I0917 16:56:18.298137   20150 cli_runner.go:164] Run: docker network inspect addons-163060
	W0917 16:56:18.312759   20150 cli_runner.go:211] docker network inspect addons-163060 returned with exit code 1
	I0917 16:56:18.312790   20150 network_create.go:287] error running [docker network inspect addons-163060]: docker network inspect addons-163060: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network addons-163060 not found
	I0917 16:56:18.312803   20150 network_create.go:289] output of [docker network inspect addons-163060]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network addons-163060 not found
	
	** /stderr **
	I0917 16:56:18.312905   20150 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0917 16:56:18.327849   20150 network.go:206] using free private subnet 192.168.49.0/24: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc001358570}
	I0917 16:56:18.327892   20150 network_create.go:124] attempt to create docker network addons-163060 192.168.49.0/24 with gateway 192.168.49.1 and MTU of 1500 ...
	I0917 16:56:18.327932   20150 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.49.0/24 --gateway=192.168.49.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=addons-163060 addons-163060
	I0917 16:56:18.385644   20150 network_create.go:108] docker network addons-163060 192.168.49.0/24 created
	I0917 16:56:18.385678   20150 kic.go:121] calculated static IP "192.168.49.2" for the "addons-163060" container
	I0917 16:56:18.385751   20150 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I0917 16:56:18.399845   20150 cli_runner.go:164] Run: docker volume create addons-163060 --label name.minikube.sigs.k8s.io=addons-163060 --label created_by.minikube.sigs.k8s.io=true
	I0917 16:56:18.416457   20150 oci.go:103] Successfully created a docker volume addons-163060
	I0917 16:56:18.416541   20150 cli_runner.go:164] Run: docker run --rm --name addons-163060-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=addons-163060 --entrypoint /usr/bin/test -v addons-163060:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726589491-19662@sha256:6370b9fec173944088c2d87d44b01819c0ec611a83d9e2f38d36352dff8121a4 -d /var/lib
	I0917 16:56:23.304261   20150 cli_runner.go:217] Completed: docker run --rm --name addons-163060-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=addons-163060 --entrypoint /usr/bin/test -v addons-163060:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726589491-19662@sha256:6370b9fec173944088c2d87d44b01819c0ec611a83d9e2f38d36352dff8121a4 -d /var/lib: (4.887680591s)
	I0917 16:56:23.304302   20150 oci.go:107] Successfully prepared a docker volume addons-163060
	I0917 16:56:23.304322   20150 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime docker
	I0917 16:56:23.304347   20150 kic.go:194] Starting extracting preloaded images to volume ...
	I0917 16:56:23.304415   20150 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/19662-12004/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-docker-overlay2-amd64.tar.lz4:/preloaded.tar:ro -v addons-163060:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726589491-19662@sha256:6370b9fec173944088c2d87d44b01819c0ec611a83d9e2f38d36352dff8121a4 -I lz4 -xf /preloaded.tar -C /extractDir
	I0917 16:56:27.107561   20150 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/19662-12004/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-docker-overlay2-amd64.tar.lz4:/preloaded.tar:ro -v addons-163060:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726589491-19662@sha256:6370b9fec173944088c2d87d44b01819c0ec611a83d9e2f38d36352dff8121a4 -I lz4 -xf /preloaded.tar -C /extractDir: (3.80310416s)
	I0917 16:56:27.107590   20150 kic.go:203] duration metric: took 3.803241274s to extract preloaded images to volume ...
	W0917 16:56:27.107695   20150 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	I0917 16:56:27.107777   20150 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I0917 16:56:27.152206   20150 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname addons-163060 --name addons-163060 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=addons-163060 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=addons-163060 --network addons-163060 --ip 192.168.49.2 --volume addons-163060:/var --security-opt apparmor=unconfined --memory=4000mb --cpus=2 -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726589491-19662@sha256:6370b9fec173944088c2d87d44b01819c0ec611a83d9e2f38d36352dff8121a4
	I0917 16:56:27.452182   20150 cli_runner.go:164] Run: docker container inspect addons-163060 --format={{.State.Running}}
	I0917 16:56:27.471401   20150 cli_runner.go:164] Run: docker container inspect addons-163060 --format={{.State.Status}}
	I0917 16:56:27.489466   20150 cli_runner.go:164] Run: docker exec addons-163060 stat /var/lib/dpkg/alternatives/iptables
	I0917 16:56:27.528788   20150 oci.go:144] the created container "addons-163060" has a running status.
	I0917 16:56:27.528821   20150 kic.go:225] Creating ssh key for kic: /home/jenkins/minikube-integration/19662-12004/.minikube/machines/addons-163060/id_rsa...
	I0917 16:56:27.851877   20150 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/19662-12004/.minikube/machines/addons-163060/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I0917 16:56:27.873995   20150 cli_runner.go:164] Run: docker container inspect addons-163060 --format={{.State.Status}}
	I0917 16:56:27.898655   20150 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I0917 16:56:27.898677   20150 kic_runner.go:114] Args: [docker exec --privileged addons-163060 chown docker:docker /home/docker/.ssh/authorized_keys]
	I0917 16:56:27.957061   20150 cli_runner.go:164] Run: docker container inspect addons-163060 --format={{.State.Status}}
	I0917 16:56:27.974688   20150 machine.go:93] provisionDockerMachine start ...
	I0917 16:56:27.974767   20150 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-163060
	I0917 16:56:27.992628   20150 main.go:141] libmachine: Using SSH client type: native
	I0917 16:56:27.992891   20150 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x86c560] 0x86f240 <nil>  [] 0s} 127.0.0.1 32768 <nil> <nil>}
	I0917 16:56:27.992912   20150 main.go:141] libmachine: About to run SSH command:
	hostname
	I0917 16:56:28.130251   20150 main.go:141] libmachine: SSH cmd err, output: <nil>: addons-163060
	
	I0917 16:56:28.130278   20150 ubuntu.go:169] provisioning hostname "addons-163060"
	I0917 16:56:28.130345   20150 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-163060
	I0917 16:56:28.147498   20150 main.go:141] libmachine: Using SSH client type: native
	I0917 16:56:28.147703   20150 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x86c560] 0x86f240 <nil>  [] 0s} 127.0.0.1 32768 <nil> <nil>}
	I0917 16:56:28.147720   20150 main.go:141] libmachine: About to run SSH command:
	sudo hostname addons-163060 && echo "addons-163060" | sudo tee /etc/hostname
	I0917 16:56:28.288715   20150 main.go:141] libmachine: SSH cmd err, output: <nil>: addons-163060
	
	I0917 16:56:28.288790   20150 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-163060
	I0917 16:56:28.304385   20150 main.go:141] libmachine: Using SSH client type: native
	I0917 16:56:28.304592   20150 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x86c560] 0x86f240 <nil>  [] 0s} 127.0.0.1 32768 <nil> <nil>}
	I0917 16:56:28.304612   20150 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\saddons-163060' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 addons-163060/g' /etc/hosts;
				else 
					echo '127.0.1.1 addons-163060' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0917 16:56:28.434664   20150 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0917 16:56:28.434690   20150 ubuntu.go:175] set auth options {CertDir:/home/jenkins/minikube-integration/19662-12004/.minikube CaCertPath:/home/jenkins/minikube-integration/19662-12004/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19662-12004/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19662-12004/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19662-12004/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19662-12004/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19662-12004/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19662-12004/.minikube}
	I0917 16:56:28.434730   20150 ubuntu.go:177] setting up certificates
	I0917 16:56:28.434744   20150 provision.go:84] configureAuth start
	I0917 16:56:28.434795   20150 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" addons-163060
	I0917 16:56:28.449572   20150 provision.go:143] copyHostCerts
	I0917 16:56:28.449660   20150 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19662-12004/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19662-12004/.minikube/ca.pem (1082 bytes)
	I0917 16:56:28.449778   20150 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19662-12004/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19662-12004/.minikube/cert.pem (1123 bytes)
	I0917 16:56:28.449854   20150 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19662-12004/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19662-12004/.minikube/key.pem (1679 bytes)
	I0917 16:56:28.449923   20150 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19662-12004/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19662-12004/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19662-12004/.minikube/certs/ca-key.pem org=jenkins.addons-163060 san=[127.0.0.1 192.168.49.2 addons-163060 localhost minikube]
	I0917 16:56:28.742193   20150 provision.go:177] copyRemoteCerts
	I0917 16:56:28.742249   20150 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0917 16:56:28.742283   20150 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-163060
	I0917 16:56:28.758516   20150 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/19662-12004/.minikube/machines/addons-163060/id_rsa Username:docker}
	I0917 16:56:28.851213   20150 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19662-12004/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0917 16:56:28.871585   20150 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19662-12004/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I0917 16:56:28.891338   20150 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19662-12004/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0917 16:56:28.911541   20150 provision.go:87] duration metric: took 476.784156ms to configureAuth
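configureAuth generated a server certificate whose SANs cover 127.0.0.1, 192.168.49.2, addons-163060, localhost and minikube (the san=[...] list above) and copied it to /etc/docker. A hedged sketch for verifying the SANs on the node, assuming that remote path:

	# Inside the node: print the Subject Alternative Names of the Docker server cert
	sudo openssl x509 -in /etc/docker/server.pem -noout -text | grep -A1 'Subject Alternative Name'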
	I0917 16:56:28.911571   20150 ubuntu.go:193] setting minikube options for container-runtime
	I0917 16:56:28.911740   20150 config.go:182] Loaded profile config "addons-163060": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I0917 16:56:28.911796   20150 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-163060
	I0917 16:56:28.927724   20150 main.go:141] libmachine: Using SSH client type: native
	I0917 16:56:28.927917   20150 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x86c560] 0x86f240 <nil>  [] 0s} 127.0.0.1 32768 <nil> <nil>}
	I0917 16:56:28.927936   20150 main.go:141] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I0917 16:56:29.059042   20150 main.go:141] libmachine: SSH cmd err, output: <nil>: overlay
	
	I0917 16:56:29.059071   20150 ubuntu.go:71] root file system type: overlay
	I0917 16:56:29.059199   20150 provision.go:314] Updating docker unit: /lib/systemd/system/docker.service ...
	I0917 16:56:29.059269   20150 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-163060
	I0917 16:56:29.075003   20150 main.go:141] libmachine: Using SSH client type: native
	I0917 16:56:29.075168   20150 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x86c560] 0x86f240 <nil>  [] 0s} 127.0.0.1 32768 <nil> <nil>}
	I0917 16:56:29.075223   20150 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %s "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	BindsTo=containerd.service
	After=network-online.target firewalld.service containerd.service
	Wants=network-online.target
	Requires=docker.socket
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I0917 16:56:29.217003   20150 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	BindsTo=containerd.service
	After=network-online.target firewalld.service containerd.service
	Wants=network-online.target
	Requires=docker.socket
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I0917 16:56:29.217079   20150 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-163060
	I0917 16:56:29.234540   20150 main.go:141] libmachine: Using SSH client type: native
	I0917 16:56:29.234744   20150 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x86c560] 0x86f240 <nil>  [] 0s} 127.0.0.1 32768 <nil> <nil>}
	I0917 16:56:29.234768   20150 main.go:141] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I0917 16:56:29.890754   20150 main.go:141] libmachine: SSH cmd err, output: <nil>: --- /lib/systemd/system/docker.service	2024-09-06 12:06:41.000000000 +0000
	+++ /lib/systemd/system/docker.service.new	2024-09-17 16:56:29.211337128 +0000
	@@ -1,46 +1,49 @@
	 [Unit]
	 Description=Docker Application Container Engine
	 Documentation=https://docs.docker.com
	-After=network-online.target docker.socket firewalld.service containerd.service time-set.target
	-Wants=network-online.target containerd.service
	+BindsTo=containerd.service
	+After=network-online.target firewalld.service containerd.service
	+Wants=network-online.target
	 Requires=docker.socket
	+StartLimitBurst=3
	+StartLimitIntervalSec=60
	 
	 [Service]
	 Type=notify
	-# the default is not to use systemd for cgroups because the delegate issues still
	-# exists and systemd currently does not support the cgroup feature set required
	-# for containers run by docker
	-ExecStart=/usr/bin/dockerd -H fd:// --containerd=/run/containerd/containerd.sock
	-ExecReload=/bin/kill -s HUP $MAINPID
	-TimeoutStartSec=0
	-RestartSec=2
	-Restart=always
	+Restart=on-failure
	 
	-# Note that StartLimit* options were moved from "Service" to "Unit" in systemd 229.
	-# Both the old, and new location are accepted by systemd 229 and up, so using the old location
	-# to make them work for either version of systemd.
	-StartLimitBurst=3
	 
	-# Note that StartLimitInterval was renamed to StartLimitIntervalSec in systemd 230.
	-# Both the old, and new name are accepted by systemd 230 and up, so using the old name to make
	-# this option work for either version of systemd.
	-StartLimitInterval=60s
	+
	+# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	+# The base configuration already specifies an 'ExecStart=...' command. The first directive
	+# here is to clear out that command inherited from the base configuration. Without this,
	+# the command from the base configuration and the command specified here are treated as
	+# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	+# will catch this invalid input and refuse to start the service with an error like:
	+#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	+
	+# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	+# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	+ExecStart=
	+ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	+ExecReload=/bin/kill -s HUP $MAINPID
	 
	 # Having non-zero Limit*s causes performance problems due to accounting overhead
	 # in the kernel. We recommend using cgroups to do container-local accounting.
	+LimitNOFILE=infinity
	 LimitNPROC=infinity
	 LimitCORE=infinity
	 
	-# Comment TasksMax if your systemd version does not support it.
	-# Only systemd 226 and above support this option.
	+# Uncomment TasksMax if your systemd version supports it.
	+# Only systemd 226 and above support this version.
	 TasksMax=infinity
	+TimeoutStartSec=0
	 
	 # set delegate yes so that systemd does not reset the cgroups of docker containers
	 Delegate=yes
	 
	 # kill only the docker process, not all processes in the cgroup
	 KillMode=process
	-OOMScoreAdjust=-500
	 
	 [Install]
	 WantedBy=multi-user.target
	Synchronizing state of docker.service with SysV service script with /lib/systemd/systemd-sysv-install.
	Executing: /lib/systemd/systemd-sysv-install enable docker
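The "diff ... || { mv ...; restart; }" command above only installs the new unit and restarts Docker when the rendered file actually differs, which keeps repeated provisioning runs idempotent; the diff printed here shows the first install. A small sketch for confirming the override took effect, with "minikube ssh" as one (assumed) way to reach the node:

	# Show the ExecStart that systemd loaded for docker.service after the swap:
	minikube -p addons-163060 ssh -- systemctl show docker -p ExecStart --no-pager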
	
	I0917 16:56:29.890802   20150 machine.go:96] duration metric: took 1.916092594s to provisionDockerMachine
	I0917 16:56:29.890816   20150 client.go:171] duration metric: took 11.940183267s to LocalClient.Create
	I0917 16:56:29.890838   20150 start.go:167] duration metric: took 11.940274061s to libmachine.API.Create "addons-163060"
	I0917 16:56:29.890853   20150 start.go:293] postStartSetup for "addons-163060" (driver="docker")
	I0917 16:56:29.890866   20150 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0917 16:56:29.890930   20150 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0917 16:56:29.890990   20150 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-163060
	I0917 16:56:29.907544   20150 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/19662-12004/.minikube/machines/addons-163060/id_rsa Username:docker}
	I0917 16:56:29.999136   20150 ssh_runner.go:195] Run: cat /etc/os-release
	I0917 16:56:30.002070   20150 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I0917 16:56:30.002095   20150 main.go:141] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
	I0917 16:56:30.002111   20150 main.go:141] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
	I0917 16:56:30.002118   20150 info.go:137] Remote host: Ubuntu 22.04.5 LTS
	I0917 16:56:30.002130   20150 filesync.go:126] Scanning /home/jenkins/minikube-integration/19662-12004/.minikube/addons for local assets ...
	I0917 16:56:30.002187   20150 filesync.go:126] Scanning /home/jenkins/minikube-integration/19662-12004/.minikube/files for local assets ...
	I0917 16:56:30.002211   20150 start.go:296] duration metric: took 111.352738ms for postStartSetup
	I0917 16:56:30.002478   20150 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" addons-163060
	I0917 16:56:30.018671   20150 profile.go:143] Saving config to /home/jenkins/minikube-integration/19662-12004/.minikube/profiles/addons-163060/config.json ...
	I0917 16:56:30.018906   20150 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0917 16:56:30.018944   20150 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-163060
	I0917 16:56:30.034855   20150 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/19662-12004/.minikube/machines/addons-163060/id_rsa Username:docker}
	I0917 16:56:30.123548   20150 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0917 16:56:30.127394   20150 start.go:128] duration metric: took 12.180131932s to createHost
	I0917 16:56:30.127420   20150 start.go:83] releasing machines lock for "addons-163060", held for 12.180267807s
	I0917 16:56:30.127483   20150 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" addons-163060
	I0917 16:56:30.143030   20150 ssh_runner.go:195] Run: cat /version.json
	I0917 16:56:30.143046   20150 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0917 16:56:30.143072   20150 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-163060
	I0917 16:56:30.143099   20150 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-163060
	I0917 16:56:30.159564   20150 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/19662-12004/.minikube/machines/addons-163060/id_rsa Username:docker}
	I0917 16:56:30.159577   20150 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/19662-12004/.minikube/machines/addons-163060/id_rsa Username:docker}
	I0917 16:56:30.246233   20150 ssh_runner.go:195] Run: systemctl --version
	I0917 16:56:30.318995   20150 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I0917 16:56:30.323074   20150 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *loopback.conf* -not -name *.mk_disabled -exec sh -c "grep -q loopback {} && ( grep -q name {} || sudo sed -i '/"type": "loopback"/i \ \ \ \ "name": "loopback",' {} ) && sudo sed -i 's|"cniVersion": ".*"|"cniVersion": "1.0.0"|g' {}" ;
	I0917 16:56:30.344528   20150 cni.go:230] loopback cni configuration patched: "/etc/cni/net.d/*loopback.conf*" found
	I0917 16:56:30.344604   20150 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0917 16:56:30.368056   20150 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist, /etc/cni/net.d/100-crio-bridge.conf] bridge cni config(s)
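The two find commands patch the loopback CNI config in place and rename any bridge/podman configs to "*.mk_disabled", leaving minikube's own network plugin as the only active one. A quick, assumed check of what remains active on the node:

	# Active configs keep their names; disabled ones carry the .mk_disabled suffix:
	minikube -p addons-163060 ssh -- ls -la /etc/cni/net.d/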
	I0917 16:56:30.368079   20150 start.go:495] detecting cgroup driver to use...
	I0917 16:56:30.368105   20150 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I0917 16:56:30.368193   20150 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0917 16:56:30.381510   20150 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.10"|' /etc/containerd/config.toml"
	I0917 16:56:30.389611   20150 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0917 16:56:30.397746   20150 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I0917 16:56:30.397805   20150 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I0917 16:56:30.406011   20150 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0917 16:56:30.414191   20150 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0917 16:56:30.422091   20150 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0917 16:56:30.430305   20150 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0917 16:56:30.437856   20150 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I0917 16:56:30.445746   20150 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I0917 16:56:30.453400   20150 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
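The sed series above rewrites /etc/containerd/config.toml in place: cgroupfs instead of SystemdCgroup, the runc v2 runtime, the pause:3.10 sandbox image, the standard CNI conf dir, and unprivileged ports enabled. A minimal spot-check of the result, assuming the same file path on the node:

	# Verify the rewritten containerd settings before the restart a few lines down:
	sudo grep -nE 'SystemdCgroup|sandbox_image|conf_dir|enable_unprivileged_ports' /etc/containerd/config.toml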
	I0917 16:56:30.461364   20150 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0917 16:56:30.467904   20150 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0917 16:56:30.474358   20150 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0917 16:56:30.547009   20150 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I0917 16:56:30.620508   20150 start.go:495] detecting cgroup driver to use...
	I0917 16:56:30.620554   20150 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I0917 16:56:30.620611   20150 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I0917 16:56:30.631834   20150 cruntime.go:279] skipping containerd shutdown because we are bound to it
	I0917 16:56:30.631894   20150 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0917 16:56:30.642568   20150 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0917 16:56:30.658145   20150 ssh_runner.go:195] Run: which cri-dockerd
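/etc/crictl.yaml was just rewritten to point crictl at cri-dockerd rather than containerd, so crictl commands need no endpoint flag from here on. A hypothetical smoke test on the node:

	# crictl reads /etc/crictl.yaml and talks to /var/run/cri-dockerd.sock:
	sudo crictl info
	sudo crictl ps -a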
	I0917 16:56:30.661310   20150 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I0917 16:56:30.669963   20150 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (190 bytes)
	I0917 16:56:30.686273   20150 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I0917 16:56:30.779526   20150 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I0917 16:56:30.869492   20150 docker.go:574] configuring docker to use "cgroupfs" as cgroup driver...
	I0917 16:56:30.869632   20150 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (130 bytes)
	I0917 16:56:30.885620   20150 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0917 16:56:30.963831   20150 ssh_runner.go:195] Run: sudo systemctl restart docker
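The 130-byte /etc/docker/daemon.json written above pins Docker's cgroup driver to cgroupfs, matching the kubelet configuration rendered later, and the daemon-reload/restart pair applies it. A one-line check after the restart (the provisioner runs the same query itself further down):

	# Should print "cgroupfs":
	docker info --format '{{.CgroupDriver}}'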
	I0917 16:56:31.209579   20150 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.socket
	I0917 16:56:31.219971   20150 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0917 16:56:31.230249   20150 ssh_runner.go:195] Run: sudo systemctl unmask cri-docker.socket
	I0917 16:56:31.307371   20150 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I0917 16:56:31.372523   20150 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0917 16:56:31.444647   20150 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.socket
	I0917 16:56:31.456498   20150 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0917 16:56:31.466266   20150 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0917 16:56:31.536469   20150 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.service
	I0917 16:56:31.595660   20150 start.go:542] Will wait 60s for socket path /var/run/cri-dockerd.sock
	I0917 16:56:31.595736   20150 ssh_runner.go:195] Run: stat /var/run/cri-dockerd.sock
	I0917 16:56:31.598912   20150 start.go:563] Will wait 60s for crictl version
	I0917 16:56:31.598964   20150 ssh_runner.go:195] Run: which crictl
	I0917 16:56:31.602022   20150 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0917 16:56:31.632611   20150 start.go:579] Version:  0.1.0
	RuntimeName:  docker
	RuntimeVersion:  27.2.1
	RuntimeApiVersion:  v1
	I0917 16:56:31.632669   20150 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0917 16:56:31.655252   20150 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0917 16:56:31.679568   20150 out.go:235] * Preparing Kubernetes v1.31.1 on Docker 27.2.1 ...
	I0917 16:56:31.679647   20150 cli_runner.go:164] Run: docker network inspect addons-163060 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0917 16:56:31.695986   20150 ssh_runner.go:195] Run: grep 192.168.49.1	host.minikube.internal$ /etc/hosts
	I0917 16:56:31.699375   20150 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.49.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
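The bash one-liner updates /etc/hosts atomically: it filters out any stale host.minikube.internal entry into a temp file, appends the gateway mapping, and copies the file back with sudo. An assumed quick check that the alias resolves inside the node:

	# host.minikube.internal should map to the network gateway, 192.168.49.1 here:
	minikube -p addons-163060 ssh -- getent hosts host.minikube.internal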
	I0917 16:56:31.709335   20150 kubeadm.go:883] updating cluster {Name:addons-163060 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726589491-19662@sha256:6370b9fec173944088c2d87d44b01819c0ec611a83d9e2f38d36352dff8121a4 Memory:4000 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:addons-163060 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0917 16:56:31.709487   20150 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime docker
	I0917 16:56:31.709546   20150 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0917 16:56:31.727647   20150 docker.go:685] Got preloaded images: -- stdout --
	registry.k8s.io/kube-apiserver:v1.31.1
	registry.k8s.io/kube-scheduler:v1.31.1
	registry.k8s.io/kube-controller-manager:v1.31.1
	registry.k8s.io/kube-proxy:v1.31.1
	registry.k8s.io/coredns/coredns:v1.11.3
	registry.k8s.io/etcd:3.5.15-0
	registry.k8s.io/pause:3.10
	gcr.io/k8s-minikube/storage-provisioner:v5
	
	-- /stdout --
	I0917 16:56:31.727669   20150 docker.go:615] Images already preloaded, skipping extraction
	I0917 16:56:31.727713   20150 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0917 16:56:31.745244   20150 docker.go:685] Got preloaded images: -- stdout --
	registry.k8s.io/kube-apiserver:v1.31.1
	registry.k8s.io/kube-controller-manager:v1.31.1
	registry.k8s.io/kube-scheduler:v1.31.1
	registry.k8s.io/kube-proxy:v1.31.1
	registry.k8s.io/coredns/coredns:v1.11.3
	registry.k8s.io/etcd:3.5.15-0
	registry.k8s.io/pause:3.10
	gcr.io/k8s-minikube/storage-provisioner:v5
	
	-- /stdout --
	I0917 16:56:31.745271   20150 cache_images.go:84] Images are preloaded, skipping loading
	I0917 16:56:31.745282   20150 kubeadm.go:934] updating node { 192.168.49.2 8443 v1.31.1 docker true true} ...
	I0917 16:56:31.745388   20150 kubeadm.go:946] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=addons-163060 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.49.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.1 ClusterName:addons-163060 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
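The drop-in above clears the packaged ExecStart and relaunches kubelet from the version-pinned binary with the node IP and hostname override; it is written to /etc/systemd/system/kubelet.service.d/10-kubeadm.conf by the scp step below. One assumed way to inspect the merged unit afterwards:

	# Show kubelet.service together with the 10-kubeadm.conf drop-in:
	minikube -p addons-163060 ssh -- systemctl cat kubelet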
	I0917 16:56:31.745455   20150 ssh_runner.go:195] Run: docker info --format {{.CgroupDriver}}
	I0917 16:56:31.787082   20150 cni.go:84] Creating CNI manager for ""
	I0917 16:56:31.787108   20150 cni.go:158] "docker" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0917 16:56:31.787120   20150 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0917 16:56:31.787140   20150 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.49.2 APIServerPort:8443 KubernetesVersion:v1.31.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:addons-163060 NodeName:addons-163060 DNSDomain:cluster.local CRISocket:/var/run/cri-dockerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.49.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.49.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/cri-dockerd.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0917 16:56:31.787281   20150 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.49.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/cri-dockerd.sock
	  name: "addons-163060"
	  kubeletExtraArgs:
	    node-ip: 192.168.49.2
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.49.2"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.31.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/cri-dockerd.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
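The three YAML documents above form the rendered kubeadm config: InitConfiguration (node registration through cri-dockerd), ClusterConfiguration (cert SANs, admission plugins, control-plane endpoint), plus KubeletConfiguration and KubeProxyConfiguration tuned for CI (disk eviction disabled, conntrack sysctls skipped). Recent kubeadm releases can lint such a file before init; a hedged sketch against the staged copy written below:

	# Validate the rendered config with the same pinned kubeadm binary:
	sudo /var/lib/minikube/binaries/v1.31.1/kubeadm config validate --config /var/tmp/minikube/kubeadm.yaml.new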
	
	I0917 16:56:31.787336   20150 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.1
	I0917 16:56:31.795157   20150 binaries.go:44] Found k8s binaries, skipping transfer
	I0917 16:56:31.795217   20150 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0917 16:56:31.802487   20150 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (312 bytes)
	I0917 16:56:31.817343   20150 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0917 16:56:31.832712   20150 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2155 bytes)
	I0917 16:56:31.848161   20150 ssh_runner.go:195] Run: grep 192.168.49.2	control-plane.minikube.internal$ /etc/hosts
	I0917 16:56:31.851254   20150 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.49.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0917 16:56:31.860684   20150 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0917 16:56:31.942358   20150 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0917 16:56:31.954317   20150 certs.go:68] Setting up /home/jenkins/minikube-integration/19662-12004/.minikube/profiles/addons-163060 for IP: 192.168.49.2
	I0917 16:56:31.954338   20150 certs.go:194] generating shared ca certs ...
	I0917 16:56:31.954357   20150 certs.go:226] acquiring lock for ca certs: {Name:mk4ca4c6226173ad89ccc5d68ab139f394e65c2b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0917 16:56:31.954477   20150 certs.go:240] generating "minikubeCA" ca cert: /home/jenkins/minikube-integration/19662-12004/.minikube/ca.key
	I0917 16:56:32.210257   20150 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19662-12004/.minikube/ca.crt ...
	I0917 16:56:32.210282   20150 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19662-12004/.minikube/ca.crt: {Name:mkcd85584212dd22809c5b18f4d8bf6f30c0f290 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0917 16:56:32.210447   20150 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19662-12004/.minikube/ca.key ...
	I0917 16:56:32.210457   20150 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19662-12004/.minikube/ca.key: {Name:mkac9f869d50ca3b2cdbb23a9bdcaf276175f328 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0917 16:56:32.210525   20150 certs.go:240] generating "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19662-12004/.minikube/proxy-client-ca.key
	I0917 16:56:32.614297   20150 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19662-12004/.minikube/proxy-client-ca.crt ...
	I0917 16:56:32.614328   20150 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19662-12004/.minikube/proxy-client-ca.crt: {Name:mked56b68cea3b3d464890eada72309bc1972dd4 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0917 16:56:32.614521   20150 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19662-12004/.minikube/proxy-client-ca.key ...
	I0917 16:56:32.614535   20150 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19662-12004/.minikube/proxy-client-ca.key: {Name:mk5bbfb3bcb7fa41ec180bd0390ac3aafc116b78 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
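At this point two self-signed CAs exist under the .minikube dir: minikubeCA (the cluster CA) and proxyClientCA (the front-proxy CA). A small sketch for inspecting either one, using the host path from the log:

	# Print subject and expiry of the freshly generated cluster CA:
	openssl x509 -in /home/jenkins/minikube-integration/19662-12004/.minikube/ca.crt -noout -subject -enddate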
	I0917 16:56:32.614627   20150 certs.go:256] generating profile certs ...
	I0917 16:56:32.614696   20150 certs.go:363] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/19662-12004/.minikube/profiles/addons-163060/client.key
	I0917 16:56:32.614725   20150 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19662-12004/.minikube/profiles/addons-163060/client.crt with IP's: []
	I0917 16:56:32.916445   20150 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19662-12004/.minikube/profiles/addons-163060/client.crt ...
	I0917 16:56:32.916474   20150 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19662-12004/.minikube/profiles/addons-163060/client.crt: {Name:mk715bf18722721c21feb7cba118db233f5b44c5 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0917 16:56:32.916643   20150 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19662-12004/.minikube/profiles/addons-163060/client.key ...
	I0917 16:56:32.916655   20150 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19662-12004/.minikube/profiles/addons-163060/client.key: {Name:mk57b6fada5a9413c2083d5d507a5d4114b17dae Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0917 16:56:32.916726   20150 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/19662-12004/.minikube/profiles/addons-163060/apiserver.key.cbaf8388
	I0917 16:56:32.916744   20150 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19662-12004/.minikube/profiles/addons-163060/apiserver.crt.cbaf8388 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.49.2]
	I0917 16:56:33.026966   20150 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19662-12004/.minikube/profiles/addons-163060/apiserver.crt.cbaf8388 ...
	I0917 16:56:33.027011   20150 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19662-12004/.minikube/profiles/addons-163060/apiserver.crt.cbaf8388: {Name:mk7cbe5d81af6af0ed46a0d2d5badc08e21cddb8 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0917 16:56:33.027178   20150 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19662-12004/.minikube/profiles/addons-163060/apiserver.key.cbaf8388 ...
	I0917 16:56:33.027191   20150 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19662-12004/.minikube/profiles/addons-163060/apiserver.key.cbaf8388: {Name:mk558d2263b705bda7be53e33b015fab9d3bf695 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0917 16:56:33.027260   20150 certs.go:381] copying /home/jenkins/minikube-integration/19662-12004/.minikube/profiles/addons-163060/apiserver.crt.cbaf8388 -> /home/jenkins/minikube-integration/19662-12004/.minikube/profiles/addons-163060/apiserver.crt
	I0917 16:56:33.027333   20150 certs.go:385] copying /home/jenkins/minikube-integration/19662-12004/.minikube/profiles/addons-163060/apiserver.key.cbaf8388 -> /home/jenkins/minikube-integration/19662-12004/.minikube/profiles/addons-163060/apiserver.key
	I0917 16:56:33.027375   20150 certs.go:363] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/19662-12004/.minikube/profiles/addons-163060/proxy-client.key
	I0917 16:56:33.027393   20150 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19662-12004/.minikube/profiles/addons-163060/proxy-client.crt with IP's: []
	I0917 16:56:33.189625   20150 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19662-12004/.minikube/profiles/addons-163060/proxy-client.crt ...
	I0917 16:56:33.189652   20150 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19662-12004/.minikube/profiles/addons-163060/proxy-client.crt: {Name:mk20a5e30ba07c3e5cd538b38350eef35358ea2c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0917 16:56:33.189801   20150 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19662-12004/.minikube/profiles/addons-163060/proxy-client.key ...
	I0917 16:56:33.189811   20150 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19662-12004/.minikube/profiles/addons-163060/proxy-client.key: {Name:mk951d9efe5308aefe8366c2fa40fbbfc872c9d3 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0917 16:56:33.189970   20150 certs.go:484] found cert: /home/jenkins/minikube-integration/19662-12004/.minikube/certs/ca-key.pem (1675 bytes)
	I0917 16:56:33.190004   20150 certs.go:484] found cert: /home/jenkins/minikube-integration/19662-12004/.minikube/certs/ca.pem (1082 bytes)
	I0917 16:56:33.190027   20150 certs.go:484] found cert: /home/jenkins/minikube-integration/19662-12004/.minikube/certs/cert.pem (1123 bytes)
	I0917 16:56:33.190048   20150 certs.go:484] found cert: /home/jenkins/minikube-integration/19662-12004/.minikube/certs/key.pem (1679 bytes)
	I0917 16:56:33.190647   20150 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19662-12004/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0917 16:56:33.212184   20150 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19662-12004/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0917 16:56:33.233677   20150 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19662-12004/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0917 16:56:33.254791   20150 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19662-12004/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0917 16:56:33.275287   20150 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19662-12004/.minikube/profiles/addons-163060/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1419 bytes)
	I0917 16:56:33.295588   20150 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19662-12004/.minikube/profiles/addons-163060/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0917 16:56:33.315968   20150 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19662-12004/.minikube/profiles/addons-163060/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0917 16:56:33.336105   20150 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19662-12004/.minikube/profiles/addons-163060/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0917 16:56:33.355999   20150 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19662-12004/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0917 16:56:33.376341   20150 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0917 16:56:33.391105   20150 ssh_runner.go:195] Run: openssl version
	I0917 16:56:33.395774   20150 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0917 16:56:33.403910   20150 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0917 16:56:33.406963   20150 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Sep 17 16:56 /usr/share/ca-certificates/minikubeCA.pem
	I0917 16:56:33.407027   20150 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0917 16:56:33.413750   20150 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
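The symlink name b5213941.0 is not arbitrary: it is the OpenSSL subject hash of minikubeCA, which is how OpenSSL looks up trusted CAs in /etc/ssl/certs (the same scheme c_rehash uses). A sketch reproducing the name:

	# The hash printed by the command above is the "b5213941" in the link name:
	openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	ls -l /etc/ssl/certs/b5213941.0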
	I0917 16:56:33.422203   20150 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0917 16:56:33.425364   20150 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0917 16:56:33.425410   20150 kubeadm.go:392] StartCluster: {Name:addons-163060 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726589491-19662@sha256:6370b9fec173944088c2d87d44b01819c0ec611a83d9e2f38d36352dff8121a4 Memory:4000 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:addons-163060 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0917 16:56:33.425534   20150 ssh_runner.go:195] Run: docker ps --filter status=paused --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I0917 16:56:33.443333   20150 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0917 16:56:33.452025   20150 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0917 16:56:33.459386   20150 kubeadm.go:214] ignoring SystemVerification for kubeadm because of docker driver
	I0917 16:56:33.459445   20150 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0917 16:56:33.466478   20150 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0917 16:56:33.466492   20150 kubeadm.go:157] found existing configuration files:
	
	I0917 16:56:33.466522   20150 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0917 16:56:33.473909   20150 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0917 16:56:33.473960   20150 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0917 16:56:33.480958   20150 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0917 16:56:33.487882   20150 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0917 16:56:33.487920   20150 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0917 16:56:33.494930   20150 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0917 16:56:33.502056   20150 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0917 16:56:33.502098   20150 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0917 16:56:33.508925   20150 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0917 16:56:33.516081   20150 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0917 16:56:33.516119   20150 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0917 16:56:33.522923   20150 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I0917 16:56:33.556343   20150 kubeadm.go:310] [init] Using Kubernetes version: v1.31.1
	I0917 16:56:33.556429   20150 kubeadm.go:310] [preflight] Running pre-flight checks
	I0917 16:56:33.574365   20150 kubeadm.go:310] [preflight] The system verification failed. Printing the output from the verification:
	I0917 16:56:33.574442   20150 kubeadm.go:310] KERNEL_VERSION: 5.15.0-1069-gcp
	I0917 16:56:33.574511   20150 kubeadm.go:310] OS: Linux
	I0917 16:56:33.574602   20150 kubeadm.go:310] CGROUPS_CPU: enabled
	I0917 16:56:33.574677   20150 kubeadm.go:310] CGROUPS_CPUACCT: enabled
	I0917 16:56:33.574749   20150 kubeadm.go:310] CGROUPS_CPUSET: enabled
	I0917 16:56:33.574826   20150 kubeadm.go:310] CGROUPS_DEVICES: enabled
	I0917 16:56:33.574875   20150 kubeadm.go:310] CGROUPS_FREEZER: enabled
	I0917 16:56:33.574918   20150 kubeadm.go:310] CGROUPS_MEMORY: enabled
	I0917 16:56:33.574961   20150 kubeadm.go:310] CGROUPS_PIDS: enabled
	I0917 16:56:33.575030   20150 kubeadm.go:310] CGROUPS_HUGETLB: enabled
	I0917 16:56:33.575075   20150 kubeadm.go:310] CGROUPS_BLKIO: enabled
	I0917 16:56:33.622617   20150 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0917 16:56:33.622757   20150 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0917 16:56:33.622901   20150 kubeadm.go:310] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I0917 16:56:33.632420   20150 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0917 16:56:33.635235   20150 out.go:235]   - Generating certificates and keys ...
	I0917 16:56:33.635344   20150 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0917 16:56:33.635412   20150 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0917 16:56:33.897665   20150 kubeadm.go:310] [certs] Generating "apiserver-kubelet-client" certificate and key
	I0917 16:56:34.231585   20150 kubeadm.go:310] [certs] Generating "front-proxy-ca" certificate and key
	I0917 16:56:34.506446   20150 kubeadm.go:310] [certs] Generating "front-proxy-client" certificate and key
	I0917 16:56:34.664847   20150 kubeadm.go:310] [certs] Generating "etcd/ca" certificate and key
	I0917 16:56:34.753535   20150 kubeadm.go:310] [certs] Generating "etcd/server" certificate and key
	I0917 16:56:34.753659   20150 kubeadm.go:310] [certs] etcd/server serving cert is signed for DNS names [addons-163060 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
	I0917 16:56:35.125631   20150 kubeadm.go:310] [certs] Generating "etcd/peer" certificate and key
	I0917 16:56:35.125768   20150 kubeadm.go:310] [certs] etcd/peer serving cert is signed for DNS names [addons-163060 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
	I0917 16:56:35.414806   20150 kubeadm.go:310] [certs] Generating "etcd/healthcheck-client" certificate and key
	I0917 16:56:35.551977   20150 kubeadm.go:310] [certs] Generating "apiserver-etcd-client" certificate and key
	I0917 16:56:35.720509   20150 kubeadm.go:310] [certs] Generating "sa" key and public key
	I0917 16:56:35.720586   20150 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0917 16:56:35.961015   20150 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0917 16:56:36.164219   20150 kubeadm.go:310] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I0917 16:56:36.317872   20150 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0917 16:56:36.552734   20150 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0917 16:56:36.652410   20150 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0917 16:56:36.652771   20150 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0917 16:56:36.655149   20150 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0917 16:56:36.657223   20150 out.go:235]   - Booting up control plane ...
	I0917 16:56:36.657359   20150 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0917 16:56:36.657448   20150 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0917 16:56:36.657505   20150 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0917 16:56:36.665959   20150 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0917 16:56:36.671265   20150 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0917 16:56:36.671330   20150 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0917 16:56:36.753740   20150 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I0917 16:56:36.753923   20150 kubeadm.go:310] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I0917 16:56:37.755069   20150 kubeadm.go:310] [kubelet-check] The kubelet is healthy after 1.001343664s
	I0917 16:56:37.755200   20150 kubeadm.go:310] [api-check] Waiting for a healthy API server. This can take up to 4m0s
	I0917 16:56:41.756540   20150 kubeadm.go:310] [api-check] The API server is healthy after 4.001525789s
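The kubelet-check and api-check phases above are plain HTTP health polls against well-known local endpoints. Under the ports shown in this log (10248 for the kubelet, 8443 for the API server), the same checks can be run by hand; a minimal sketch:

    # Kubelet liveness endpoint polled by kubeadm's kubelet-check phase
    curl -sf http://127.0.0.1:10248/healthz && echo kubelet healthy
    # API server health endpoint; -k because the serving cert is cluster-signed
    curl -sfk https://127.0.0.1:8443/livez && echo apiserver healthy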
	I0917 16:56:41.768130   20150 kubeadm.go:310] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0917 16:56:41.779673   20150 kubeadm.go:310] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0917 16:56:41.794250   20150 kubeadm.go:310] [upload-certs] Skipping phase. Please see --upload-certs
	I0917 16:56:41.794526   20150 kubeadm.go:310] [mark-control-plane] Marking the node addons-163060 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0917 16:56:41.801108   20150 kubeadm.go:310] [bootstrap-token] Using token: hbcvk0.770ivrsaqb5vjd99
	I0917 16:56:41.802554   20150 out.go:235]   - Configuring RBAC rules ...
	I0917 16:56:41.802702   20150 kubeadm.go:310] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0917 16:56:41.805524   20150 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0917 16:56:41.811609   20150 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0917 16:56:41.814193   20150 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0917 16:56:41.816587   20150 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0917 16:56:41.818630   20150 kubeadm.go:310] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0917 16:56:42.162174   20150 kubeadm.go:310] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0917 16:56:42.588059   20150 kubeadm.go:310] [addons] Applied essential addon: CoreDNS
	I0917 16:56:43.161784   20150 kubeadm.go:310] [addons] Applied essential addon: kube-proxy
	I0917 16:56:43.163099   20150 kubeadm.go:310] 
	I0917 16:56:43.163258   20150 kubeadm.go:310] Your Kubernetes control-plane has initialized successfully!
	I0917 16:56:43.163270   20150 kubeadm.go:310] 
	I0917 16:56:43.163368   20150 kubeadm.go:310] To start using your cluster, you need to run the following as a regular user:
	I0917 16:56:43.163377   20150 kubeadm.go:310] 
	I0917 16:56:43.163413   20150 kubeadm.go:310]   mkdir -p $HOME/.kube
	I0917 16:56:43.163507   20150 kubeadm.go:310]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0917 16:56:43.163584   20150 kubeadm.go:310]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0917 16:56:43.163592   20150 kubeadm.go:310] 
	I0917 16:56:43.163667   20150 kubeadm.go:310] Alternatively, if you are the root user, you can run:
	I0917 16:56:43.163676   20150 kubeadm.go:310] 
	I0917 16:56:43.163754   20150 kubeadm.go:310]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0917 16:56:43.163763   20150 kubeadm.go:310] 
	I0917 16:56:43.163845   20150 kubeadm.go:310] You should now deploy a pod network to the cluster.
	I0917 16:56:43.163953   20150 kubeadm.go:310] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0917 16:56:43.164051   20150 kubeadm.go:310]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0917 16:56:43.164059   20150 kubeadm.go:310] 
	I0917 16:56:43.164186   20150 kubeadm.go:310] You can now join any number of control-plane nodes by copying certificate authorities
	I0917 16:56:43.164300   20150 kubeadm.go:310] and service account keys on each node and then running the following as root:
	I0917 16:56:43.164311   20150 kubeadm.go:310] 
	I0917 16:56:43.164414   20150 kubeadm.go:310]   kubeadm join control-plane.minikube.internal:8443 --token hbcvk0.770ivrsaqb5vjd99 \
	I0917 16:56:43.164564   20150 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:39ed9eaf7eb785cbd3eb1cdde44a12cb8b03a8530d08ba6cf08757c75b478eb2 \
	I0917 16:56:43.164598   20150 kubeadm.go:310] 	--control-plane 
	I0917 16:56:43.164607   20150 kubeadm.go:310] 
	I0917 16:56:43.164714   20150 kubeadm.go:310] Then you can join any number of worker nodes by running the following on each as root:
	I0917 16:56:43.164722   20150 kubeadm.go:310] 
	I0917 16:56:43.164830   20150 kubeadm.go:310] kubeadm join control-plane.minikube.internal:8443 --token hbcvk0.770ivrsaqb5vjd99 \
	I0917 16:56:43.164965   20150 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:39ed9eaf7eb785cbd3eb1cdde44a12cb8b03a8530d08ba6cf08757c75b478eb2 
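The --discovery-token-ca-cert-hash in the join commands above is a SHA-256 digest of the cluster CA's public key. If the printed value is lost, it can be recomputed on the control-plane node from the CA certificate in the certificateDir noted earlier (/var/lib/minikube/certs); this is the standard kubeadm recipe:

    openssl x509 -pubkey -in /var/lib/minikube/certs/ca.crt \
      | openssl rsa -pubin -outform der 2>/dev/null \
      | openssl dgst -sha256 -hex | sed 's/^.* //'
    # Or have kubeadm print a fresh, complete join line:
    kubeadm token create --print-join-command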
	I0917 16:56:43.166763   20150 kubeadm.go:310] W0917 16:56:33.553909    1929 common.go:101] your configuration file uses a deprecated API spec: "kubeadm.k8s.io/v1beta3" (kind: "ClusterConfiguration"). Please use 'kubeadm config migrate --old-config old.yaml --new-config new.yaml', which will write the new, similar spec using a newer API version.
	I0917 16:56:43.167195   20150 kubeadm.go:310] W0917 16:56:33.554517    1929 common.go:101] your configuration file uses a deprecated API spec: "kubeadm.k8s.io/v1beta3" (kind: "InitConfiguration"). Please use 'kubeadm config migrate --old-config old.yaml --new-config new.yaml', which will write the new, similar spec using a newer API version.
	I0917 16:56:43.167545   20150 kubeadm.go:310] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1069-gcp\n", err: exit status 1
	I0917 16:56:43.167728   20150 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
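The two deprecation warnings are benign here but point at the fix themselves: migrate the v1beta3 kubeadm config forward. A sketch, assuming the standard layout in which kubeadm stores the active ClusterConfiguration under the kubeadm-config ConfigMap:

    kubectl -n kube-system get configmap kubeadm-config \
      -o jsonpath='{.data.ClusterConfiguration}' > old.yaml
    kubeadm config migrate --old-config old.yaml --new-config new.yaml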
	I0917 16:56:43.167746   20150 cni.go:84] Creating CNI manager for ""
	I0917 16:56:43.167763   20150 cni.go:158] "docker" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0917 16:56:43.169703   20150 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0917 16:56:43.171054   20150 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0917 16:56:43.179013   20150 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
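The 496-byte conflist scp'd above is minikube's bridge CNI configuration; its exact contents are not shown in this log. For illustration only, a minimal bridge conflist of the same general shape (names, subnet, and plugin options here are assumptions, not the real 1-k8s.conflist):

    cat <<'EOF' | sudo tee /etc/cni/net.d/1-k8s.conflist
    {
      "cniVersion": "0.3.1",
      "name": "bridge",
      "plugins": [
        {
          "type": "bridge",
          "bridge": "bridge",
          "isDefaultGateway": true,
          "ipMasq": true,
          "hairpinMode": true,
          "ipam": { "type": "host-local", "subnet": "10.244.0.0/16" }
        },
        { "type": "portmap", "capabilities": { "portMappings": true } }
      ]
    }
    EOF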
	I0917 16:56:43.194659   20150 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0917 16:56:43.194734   20150 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0917 16:56:43.194754   20150 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes addons-163060 minikube.k8s.io/updated_at=2024_09_17T16_56_43_0700 minikube.k8s.io/version=v1.34.0 minikube.k8s.io/commit=825de77780746e57a7948604e1eea9da920a46ce minikube.k8s.io/name=addons-163060 minikube.k8s.io/primary=true
	I0917 16:56:43.201390   20150 ops.go:34] apiserver oom_adj: -16
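Two post-init fixups run back to back here: reading the API server's OOM score adjustment (-16 makes the kernel much less likely to OOM-kill it) and binding cluster-admin to kube-system:default so the addon manifests applied later succeed. Restated as standalone commands, as the log runs them:

    cat /proc/$(pgrep kube-apiserver)/oom_adj
    kubectl create clusterrolebinding minikube-rbac \
      --clusterrole=cluster-admin \
      --serviceaccount=kube-system:default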
	I0917 16:56:43.262928   20150 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0917 16:56:43.763127   20150 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0917 16:56:44.263092   20150 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0917 16:56:44.763569   20150 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0917 16:56:45.263637   20150 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0917 16:56:45.763995   20150 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0917 16:56:46.263767   20150 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0917 16:56:46.763326   20150 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0917 16:56:47.263598   20150 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0917 16:56:47.763139   20150 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0917 16:56:48.263102   20150 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0917 16:56:48.358854   20150 kubeadm.go:1113] duration metric: took 5.164174214s to wait for elevateKubeSystemPrivileges
	I0917 16:56:48.358890   20150 kubeadm.go:394] duration metric: took 14.93348494s to StartCluster
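The burst of `kubectl get sa default` runs above is a readiness poll: minikube retries on a roughly 500 ms cadence until the default service account exists, which is the signal that the service-account controller and the RBAC elevation have taken effect. The same wait in shell, assuming a kubectl already pointed at the cluster:

    until kubectl -n default get serviceaccount default >/dev/null 2>&1; do
      sleep 0.5   # matches the ~500 ms spacing visible in the timestamps above
    done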
	I0917 16:56:48.358908   20150 settings.go:142] acquiring lock: {Name:mkb8576b2f39f9923d5cc12f8cc85696a352bae4 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0917 16:56:48.359023   20150 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/19662-12004/kubeconfig
	I0917 16:56:48.359360   20150 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19662-12004/kubeconfig: {Name:mk0b336c0df7435007d298d52b4ddaa46513b06b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0917 16:56:48.359522   20150 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0917 16:56:48.359537   20150 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0917 16:56:48.359595   20150 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:true csi-hostpath-driver:true dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:true gvisor:false headlamp:false helm-tiller:true inaccel:false ingress:true ingress-dns:true inspektor-gadget:true istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:true nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:true registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:true volcano:true volumesnapshots:true yakd:true]
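Each `true` entry in the toEnable map fans out into one of the `Setting addon ... in "addons-163060"` goroutines below. The same switches can be flipped from the CLI after startup; for example (addon names taken from the map above):

    minikube -p addons-163060 addons enable registry
    minikube -p addons-163060 addons enable metrics-server
    minikube -p addons-163060 addons list    # per-addon enabled/disabled state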
	I0917 16:56:48.359717   20150 addons.go:69] Setting yakd=true in profile "addons-163060"
	I0917 16:56:48.359737   20150 addons.go:234] Setting addon yakd=true in "addons-163060"
	I0917 16:56:48.359740   20150 addons.go:69] Setting gcp-auth=true in profile "addons-163060"
	I0917 16:56:48.359743   20150 config.go:182] Loaded profile config "addons-163060": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I0917 16:56:48.359755   20150 addons.go:69] Setting ingress-dns=true in profile "addons-163060"
	I0917 16:56:48.359766   20150 mustload.go:65] Loading cluster: addons-163060
	I0917 16:56:48.359771   20150 addons.go:69] Setting helm-tiller=true in profile "addons-163060"
	I0917 16:56:48.359773   20150 addons.go:69] Setting storage-provisioner=true in profile "addons-163060"
	I0917 16:56:48.359785   20150 addons.go:234] Setting addon ingress-dns=true in "addons-163060"
	I0917 16:56:48.359789   20150 addons.go:69] Setting volcano=true in profile "addons-163060"
	I0917 16:56:48.359792   20150 addons.go:69] Setting nvidia-device-plugin=true in profile "addons-163060"
	I0917 16:56:48.359800   20150 addons.go:69] Setting metrics-server=true in profile "addons-163060"
	I0917 16:56:48.359806   20150 addons.go:234] Setting addon nvidia-device-plugin=true in "addons-163060"
	I0917 16:56:48.359810   20150 addons.go:69] Setting volumesnapshots=true in profile "addons-163060"
	I0917 16:56:48.359812   20150 addons.go:69] Setting csi-hostpath-driver=true in profile "addons-163060"
	I0917 16:56:48.359815   20150 addons.go:234] Setting addon metrics-server=true in "addons-163060"
	I0917 16:56:48.359820   20150 addons.go:234] Setting addon volumesnapshots=true in "addons-163060"
	I0917 16:56:48.359826   20150 host.go:66] Checking if "addons-163060" exists ...
	I0917 16:56:48.359825   20150 addons.go:69] Setting default-storageclass=true in profile "addons-163060"
	I0917 16:56:48.359836   20150 host.go:66] Checking if "addons-163060" exists ...
	I0917 16:56:48.359841   20150 host.go:66] Checking if "addons-163060" exists ...
	I0917 16:56:48.359840   20150 addons.go:69] Setting cloud-spanner=true in profile "addons-163060"
	I0917 16:56:48.359846   20150 addons.go:234] Setting addon csi-hostpath-driver=true in "addons-163060"
	I0917 16:56:48.359781   20150 addons.go:69] Setting storage-provisioner-rancher=true in profile "addons-163060"
	I0917 16:56:48.359855   20150 addons.go:234] Setting addon cloud-spanner=true in "addons-163060"
	I0917 16:56:48.359860   20150 addons_storage_classes.go:33] enableOrDisableStorageClasses storage-provisioner-rancher=true on "addons-163060"
	I0917 16:56:48.359871   20150 host.go:66] Checking if "addons-163060" exists ...
	I0917 16:56:48.359882   20150 host.go:66] Checking if "addons-163060" exists ...
	I0917 16:56:48.359942   20150 config.go:182] Loaded profile config "addons-163060": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I0917 16:56:48.359773   20150 host.go:66] Checking if "addons-163060" exists ...
	I0917 16:56:48.359754   20150 addons.go:69] Setting ingress=true in profile "addons-163060"
	I0917 16:56:48.359985   20150 addons.go:234] Setting addon ingress=true in "addons-163060"
	I0917 16:56:48.360007   20150 host.go:66] Checking if "addons-163060" exists ...
	I0917 16:56:48.360216   20150 cli_runner.go:164] Run: docker container inspect addons-163060 --format={{.State.Status}}
	I0917 16:56:48.360257   20150 cli_runner.go:164] Run: docker container inspect addons-163060 --format={{.State.Status}}
	I0917 16:56:48.360357   20150 cli_runner.go:164] Run: docker container inspect addons-163060 --format={{.State.Status}}
	I0917 16:56:48.360370   20150 cli_runner.go:164] Run: docker container inspect addons-163060 --format={{.State.Status}}
	I0917 16:56:48.360386   20150 cli_runner.go:164] Run: docker container inspect addons-163060 --format={{.State.Status}}
	I0917 16:56:48.360410   20150 cli_runner.go:164] Run: docker container inspect addons-163060 --format={{.State.Status}}
	I0917 16:56:48.360426   20150 cli_runner.go:164] Run: docker container inspect addons-163060 --format={{.State.Status}}
	I0917 16:56:48.360520   20150 cli_runner.go:164] Run: docker container inspect addons-163060 --format={{.State.Status}}
	I0917 16:56:48.359733   20150 addons.go:69] Setting inspektor-gadget=true in profile "addons-163060"
	I0917 16:56:48.360606   20150 addons.go:234] Setting addon inspektor-gadget=true in "addons-163060"
	I0917 16:56:48.360675   20150 host.go:66] Checking if "addons-163060" exists ...
	I0917 16:56:48.359790   20150 addons.go:234] Setting addon helm-tiller=true in "addons-163060"
	I0917 16:56:48.360918   20150 host.go:66] Checking if "addons-163060" exists ...
	I0917 16:56:48.361130   20150 cli_runner.go:164] Run: docker container inspect addons-163060 --format={{.State.Status}}
	I0917 16:56:48.361305   20150 cli_runner.go:164] Run: docker container inspect addons-163060 --format={{.State.Status}}
	I0917 16:56:48.359802   20150 addons.go:234] Setting addon volcano=true in "addons-163060"
	I0917 16:56:48.361345   20150 host.go:66] Checking if "addons-163060" exists ...
	I0917 16:56:48.359830   20150 host.go:66] Checking if "addons-163060" exists ...
	I0917 16:56:48.359845   20150 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "addons-163060"
	I0917 16:56:48.361422   20150 cli_runner.go:164] Run: docker container inspect addons-163060 --format={{.State.Status}}
	I0917 16:56:48.359802   20150 addons.go:69] Setting registry=true in profile "addons-163060"
	I0917 16:56:48.361817   20150 addons.go:234] Setting addon registry=true in "addons-163060"
	I0917 16:56:48.361848   20150 host.go:66] Checking if "addons-163060" exists ...
	I0917 16:56:48.359794   20150 addons.go:234] Setting addon storage-provisioner=true in "addons-163060"
	I0917 16:56:48.362156   20150 host.go:66] Checking if "addons-163060" exists ...
	I0917 16:56:48.363818   20150 out.go:177] * Verifying Kubernetes components...
	I0917 16:56:48.365228   20150 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0917 16:56:48.379628   20150 cli_runner.go:164] Run: docker container inspect addons-163060 --format={{.State.Status}}
	I0917 16:56:48.379628   20150 cli_runner.go:164] Run: docker container inspect addons-163060 --format={{.State.Status}}
	I0917 16:56:48.380081   20150 cli_runner.go:164] Run: docker container inspect addons-163060 --format={{.State.Status}}
	I0917 16:56:48.380470   20150 cli_runner.go:164] Run: docker container inspect addons-163060 --format={{.State.Status}}
	I0917 16:56:48.380951   20150 cli_runner.go:164] Run: docker container inspect addons-163060 --format={{.State.Status}}
	I0917 16:56:48.393958   20150 host.go:66] Checking if "addons-163060" exists ...
	I0917 16:56:48.408574   20150 addons.go:234] Setting addon storage-provisioner-rancher=true in "addons-163060"
	I0917 16:56:48.408624   20150 host.go:66] Checking if "addons-163060" exists ...
	I0917 16:56:48.409176   20150 cli_runner.go:164] Run: docker container inspect addons-163060 --format={{.State.Status}}
	I0917 16:56:48.414622   20150 out.go:177]   - Using image registry.k8s.io/sig-storage/snapshot-controller:v6.1.0
	I0917 16:56:48.414757   20150 out.go:177]   - Using image docker.io/marcnuri/yakd:0.0.5
	I0917 16:56:48.416323   20150 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml
	I0917 16:56:48.416343   20150 ssh_runner.go:362] scp volumesnapshots/csi-hostpath-snapshotclass.yaml --> /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml (934 bytes)
	I0917 16:56:48.416412   20150 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-163060
	I0917 16:56:48.416694   20150 addons.go:431] installing /etc/kubernetes/addons/yakd-ns.yaml
	I0917 16:56:48.416707   20150 ssh_runner.go:362] scp yakd/yakd-ns.yaml --> /etc/kubernetes/addons/yakd-ns.yaml (171 bytes)
	I0917 16:56:48.416769   20150 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-163060
	I0917 16:56:48.443127   20150 out.go:177]   - Using image gcr.io/cloud-spanner-emulator/emulator:1.5.23
	I0917 16:56:48.443127   20150 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-snapshotter:v6.1.0
	I0917 16:56:48.447260   20150 addons.go:431] installing /etc/kubernetes/addons/deployment.yaml
	I0917 16:56:48.447280   20150 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/deployment.yaml (1004 bytes)
	I0917 16:56:48.447336   20150 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-163060
	I0917 16:56:48.455044   20150 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-provisioner:v3.3.0
	I0917 16:56:48.457009   20150 out.go:177]   - Using image ghcr.io/helm/tiller:v2.17.0
	I0917 16:56:48.458444   20150 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-attacher:v4.0.0
	I0917 16:56:48.458769   20150 addons.go:431] installing /etc/kubernetes/addons/helm-tiller-dp.yaml
	I0917 16:56:48.458786   20150 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/helm-tiller-dp.yaml (2422 bytes)
	I0917 16:56:48.458852   20150 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-163060
	I0917 16:56:48.461013   20150 out.go:177]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.4.3
	I0917 16:56:48.461124   20150 out.go:177]   - Using image nvcr.io/nvidia/k8s-device-plugin:v0.16.2
	I0917 16:56:48.461169   20150 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-external-health-monitor-controller:v0.7.0
	I0917 16:56:48.462311   20150 addons.go:431] installing /etc/kubernetes/addons/nvidia-device-plugin.yaml
	I0917 16:56:48.462324   20150 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/nvidia-device-plugin.yaml (1966 bytes)
	I0917 16:56:48.462374   20150 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-163060
	I0917 16:56:48.462829   20150 out.go:177]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.4.3
	I0917 16:56:48.464049   20150 out.go:177]   - Using image registry.k8s.io/ingress-nginx/controller:v1.11.2
	I0917 16:56:48.464132   20150 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-node-driver-registrar:v2.6.0
	I0917 16:56:48.465248   20150 out.go:177]   - Using image registry.k8s.io/metrics-server/metrics-server:v0.7.2
	I0917 16:56:48.466045   20150 addons.go:234] Setting addon default-storageclass=true in "addons-163060"
	I0917 16:56:48.466081   20150 host.go:66] Checking if "addons-163060" exists ...
	I0917 16:56:48.466869   20150 addons.go:431] installing /etc/kubernetes/addons/ingress-deploy.yaml
	I0917 16:56:48.466885   20150 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-deploy.yaml (16078 bytes)
	I0917 16:56:48.467071   20150 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-163060
	I0917 16:56:48.467373   20150 addons.go:431] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0917 16:56:48.467386   20150 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0917 16:56:48.467429   20150 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-163060
	I0917 16:56:48.467731   20150 cli_runner.go:164] Run: docker container inspect addons-163060 --format={{.State.Status}}
	I0917 16:56:48.469342   20150 out.go:177]   - Using image registry.k8s.io/sig-storage/hostpathplugin:v1.9.0
	I0917 16:56:48.470124   20150 out.go:177]   - Using image docker.io/volcanosh/vc-controller-manager:v1.9.0
	I0917 16:56:48.472364   20150 out.go:177]   - Using image docker.io/volcanosh/vc-scheduler:v1.9.0
	I0917 16:56:48.472475   20150 out.go:177]   - Using image registry.k8s.io/sig-storage/livenessprobe:v2.8.0
	I0917 16:56:48.474568   20150 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-resizer:v1.6.0
	I0917 16:56:48.474676   20150 out.go:177]   - Using image docker.io/volcanosh/vc-webhook-manager:v1.9.0
	I0917 16:56:48.474893   20150 out.go:177]   - Using image gcr.io/k8s-minikube/minikube-ingress-dns:0.0.3
	I0917 16:56:48.477537   20150 addons.go:431] installing /etc/kubernetes/addons/ingress-dns-pod.yaml
	I0917 16:56:48.477556   20150 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-dns-pod.yaml (2442 bytes)
	I0917 16:56:48.477606   20150 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-163060
	I0917 16:56:48.478556   20150 addons.go:431] installing /etc/kubernetes/addons/volcano-deployment.yaml
	I0917 16:56:48.478576   20150 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/volcano-deployment.yaml (434001 bytes)
	I0917 16:56:48.478626   20150 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-163060
	I0917 16:56:48.478791   20150 out.go:177]   - Using image docker.io/registry:2.8.3
	I0917 16:56:48.478939   20150 addons.go:431] installing /etc/kubernetes/addons/rbac-external-attacher.yaml
	I0917 16:56:48.478951   20150 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-attacher.yaml --> /etc/kubernetes/addons/rbac-external-attacher.yaml (3073 bytes)
	I0917 16:56:48.479029   20150 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-163060
	I0917 16:56:48.481017   20150 out.go:177]   - Using image gcr.io/k8s-minikube/kube-registry-proxy:0.0.6
	I0917 16:56:48.482648   20150 addons.go:431] installing /etc/kubernetes/addons/registry-rc.yaml
	I0917 16:56:48.482664   20150 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-rc.yaml (860 bytes)
	I0917 16:56:48.482717   20150 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-163060
	I0917 16:56:48.484518   20150 out.go:177]   - Using image docker.io/rancher/local-path-provisioner:v0.0.22
	I0917 16:56:48.485861   20150 out.go:177]   - Using image ghcr.io/inspektor-gadget/inspektor-gadget:v0.32.0
	I0917 16:56:48.485980   20150 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0917 16:56:48.487031   20150 addons.go:431] installing /etc/kubernetes/addons/ig-namespace.yaml
	I0917 16:56:48.487048   20150 ssh_runner.go:362] scp inspektor-gadget/ig-namespace.yaml --> /etc/kubernetes/addons/ig-namespace.yaml (55 bytes)
	I0917 16:56:48.487104   20150 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-163060
	I0917 16:56:48.487232   20150 out.go:177]   - Using image docker.io/busybox:stable
	I0917 16:56:48.487362   20150 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0917 16:56:48.487372   20150 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0917 16:56:48.487411   20150 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-163060
	I0917 16:56:48.487437   20150 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/19662-12004/.minikube/machines/addons-163060/id_rsa Username:docker}
	I0917 16:56:48.488485   20150 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner-rancher.yaml
	I0917 16:56:48.488498   20150 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner-rancher.yaml (3113 bytes)
	I0917 16:56:48.488535   20150 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-163060
	I0917 16:56:48.508389   20150 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/19662-12004/.minikube/machines/addons-163060/id_rsa Username:docker}
	I0917 16:56:48.517938   20150 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/19662-12004/.minikube/machines/addons-163060/id_rsa Username:docker}
	I0917 16:56:48.518882   20150 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/19662-12004/.minikube/machines/addons-163060/id_rsa Username:docker}
	I0917 16:56:48.531973   20150 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/19662-12004/.minikube/machines/addons-163060/id_rsa Username:docker}
	I0917 16:56:48.536535   20150 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/19662-12004/.minikube/machines/addons-163060/id_rsa Username:docker}
	I0917 16:56:48.536943   20150 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/19662-12004/.minikube/machines/addons-163060/id_rsa Username:docker}
	I0917 16:56:48.539073   20150 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/19662-12004/.minikube/machines/addons-163060/id_rsa Username:docker}
	I0917 16:56:48.539266   20150 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/19662-12004/.minikube/machines/addons-163060/id_rsa Username:docker}
	I0917 16:56:48.540561   20150 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/19662-12004/.minikube/machines/addons-163060/id_rsa Username:docker}
	I0917 16:56:48.541334   20150 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/19662-12004/.minikube/machines/addons-163060/id_rsa Username:docker}
	I0917 16:56:48.541896   20150 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/19662-12004/.minikube/machines/addons-163060/id_rsa Username:docker}
	I0917 16:56:48.542667   20150 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I0917 16:56:48.542685   20150 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0917 16:56:48.542729   20150 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-163060
	I0917 16:56:48.544346   20150 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/19662-12004/.minikube/machines/addons-163060/id_rsa Username:docker}
	I0917 16:56:48.546554   20150 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/19662-12004/.minikube/machines/addons-163060/id_rsa Username:docker}
	W0917 16:56:48.557913   20150 sshutil.go:64] dial failure (will retry): ssh: handshake failed: EOF
	I0917 16:56:48.557944   20150 retry.go:31] will retry after 305.266352ms: ssh: handshake failed: EOF
	W0917 16:56:48.563427   20150 sshutil.go:64] dial failure (will retry): ssh: handshake failed: EOF
	I0917 16:56:48.563460   20150 retry.go:31] will retry after 231.410259ms: ssh: handshake failed: EOF
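The two handshake failures above are absorbed by minikube's retry helper, which sleeps a short randomized interval and redials. The equivalent pattern in shell, with illustrative delays (the log's 305 ms and 231 ms come from Go's jittered backoff):

    for delay in 0.23 0.31 0.6; do
      ssh -p 32768 -i /home/jenkins/minikube-integration/19662-12004/.minikube/machines/addons-163060/id_rsa \
        docker@127.0.0.1 true && break
      sleep "$delay"   # back off briefly before redialing
    done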
	I0917 16:56:48.570171   20150 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0917 16:56:48.570385   20150 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.49.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
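The long sed pipeline above edits the CoreDNS Corefile in transit: it inserts a hosts block mapping host.minikube.internal to the host gateway (192.168.49.1) just before the forward directive, enables query logging just before errors, and feeds the result back through kubectl replace. Verifying the result, with the expected fragment reconstructed from the sed expressions (surrounding directives elided):

    kubectl -n kube-system get configmap coredns -o jsonpath='{.data.Corefile}'
    # should now contain, in order:
    #     log
    #     errors
    #     ...
    #     hosts {
    #        192.168.49.1 host.minikube.internal
    #        fallthrough
    #     }
    #     forward . /etc/resolv.conf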
	I0917 16:56:48.579328   20150 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/19662-12004/.minikube/machines/addons-163060/id_rsa Username:docker}
	I0917 16:56:48.959141   20150 addons.go:431] installing /etc/kubernetes/addons/yakd-sa.yaml
	I0917 16:56:48.959184   20150 ssh_runner.go:362] scp yakd/yakd-sa.yaml --> /etc/kubernetes/addons/yakd-sa.yaml (247 bytes)
	I0917 16:56:48.963403   20150 addons.go:431] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0917 16:56:48.963428   20150 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1907 bytes)
	I0917 16:56:48.970601   20150 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/nvidia-device-plugin.yaml
	I0917 16:56:49.054996   20150 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0917 16:56:49.064512   20150 addons.go:431] installing /etc/kubernetes/addons/helm-tiller-rbac.yaml
	I0917 16:56:49.064558   20150 ssh_runner.go:362] scp helm-tiller/helm-tiller-rbac.yaml --> /etc/kubernetes/addons/helm-tiller-rbac.yaml (1188 bytes)
	I0917 16:56:49.064783   20150 addons.go:431] installing /etc/kubernetes/addons/rbac-hostpath.yaml
	I0917 16:56:49.064836   20150 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-hostpath.yaml --> /etc/kubernetes/addons/rbac-hostpath.yaml (4266 bytes)
	I0917 16:56:49.071741   20150 addons.go:431] installing /etc/kubernetes/addons/registry-svc.yaml
	I0917 16:56:49.071764   20150 ssh_runner.go:362] scp registry/registry-svc.yaml --> /etc/kubernetes/addons/registry-svc.yaml (398 bytes)
	I0917 16:56:49.162242   20150 addons.go:431] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml
	I0917 16:56:49.162273   20150 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshotclasses.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml (6471 bytes)
	I0917 16:56:49.168551   20150 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0917 16:56:49.255819   20150 addons.go:431] installing /etc/kubernetes/addons/ig-serviceaccount.yaml
	I0917 16:56:49.255846   20150 ssh_runner.go:362] scp inspektor-gadget/ig-serviceaccount.yaml --> /etc/kubernetes/addons/ig-serviceaccount.yaml (80 bytes)
	I0917 16:56:49.256258   20150 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/volcano-deployment.yaml
	I0917 16:56:49.261469   20150 addons.go:431] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0917 16:56:49.261545   20150 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0917 16:56:49.262759   20150 addons.go:431] installing /etc/kubernetes/addons/yakd-crb.yaml
	I0917 16:56:49.262808   20150 ssh_runner.go:362] scp yakd/yakd-crb.yaml --> /etc/kubernetes/addons/yakd-crb.yaml (422 bytes)
	I0917 16:56:49.347632   20150 addons.go:431] installing /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml
	I0917 16:56:49.347676   20150 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-health-monitor-controller.yaml --> /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml (3038 bytes)
	I0917 16:56:49.351393   20150 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/deployment.yaml
	I0917 16:56:49.353648   20150 addons.go:431] installing /etc/kubernetes/addons/helm-tiller-svc.yaml
	I0917 16:56:49.353666   20150 ssh_runner.go:362] scp helm-tiller/helm-tiller-svc.yaml --> /etc/kubernetes/addons/helm-tiller-svc.yaml (951 bytes)
	I0917 16:56:49.450098   20150 addons.go:431] installing /etc/kubernetes/addons/ig-role.yaml
	I0917 16:56:49.450202   20150 ssh_runner.go:362] scp inspektor-gadget/ig-role.yaml --> /etc/kubernetes/addons/ig-role.yaml (210 bytes)
	I0917 16:56:49.453620   20150 addons.go:431] installing /etc/kubernetes/addons/registry-proxy.yaml
	I0917 16:56:49.453700   20150 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-proxy.yaml (947 bytes)
	I0917 16:56:49.455735   20150 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner-rancher.yaml
	I0917 16:56:49.551832   20150 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml
	I0917 16:56:49.556307   20150 addons.go:431] installing /etc/kubernetes/addons/rbac-external-provisioner.yaml
	I0917 16:56:49.556339   20150 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-provisioner.yaml --> /etc/kubernetes/addons/rbac-external-provisioner.yaml (4442 bytes)
	I0917 16:56:49.648680   20150 addons.go:431] installing /etc/kubernetes/addons/yakd-svc.yaml
	I0917 16:56:49.648713   20150 ssh_runner.go:362] scp yakd/yakd-svc.yaml --> /etc/kubernetes/addons/yakd-svc.yaml (412 bytes)
	I0917 16:56:49.654532   20150 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/helm-tiller-dp.yaml -f /etc/kubernetes/addons/helm-tiller-rbac.yaml -f /etc/kubernetes/addons/helm-tiller-svc.yaml
	I0917 16:56:49.756357   20150 addons.go:431] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml
	I0917 16:56:49.756442   20150 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshotcontents.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml (23126 bytes)
	I0917 16:56:49.759199   20150 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml
	I0917 16:56:49.848389   20150 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml
	I0917 16:56:49.959405   20150 ssh_runner.go:235] Completed: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.49.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -": (1.38899137s)
	I0917 16:56:49.959496   20150 start.go:971] {"host.minikube.internal": 192.168.49.1} host record injected into CoreDNS's ConfigMap
	I0917 16:56:49.959674   20150 ssh_runner.go:235] Completed: sudo systemctl start kubelet: (1.389476446s)
	I0917 16:56:49.961771   20150 node_ready.go:35] waiting up to 6m0s for node "addons-163060" to be "Ready" ...
	I0917 16:56:49.962388   20150 addons.go:431] installing /etc/kubernetes/addons/ig-rolebinding.yaml
	I0917 16:56:49.962450   20150 ssh_runner.go:362] scp inspektor-gadget/ig-rolebinding.yaml --> /etc/kubernetes/addons/ig-rolebinding.yaml (244 bytes)
	I0917 16:56:49.962605   20150 addons.go:431] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0917 16:56:49.962640   20150 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0917 16:56:49.965946   20150 node_ready.go:49] node "addons-163060" has status "Ready":"True"
	I0917 16:56:49.965969   20150 node_ready.go:38] duration metric: took 4.054358ms for node "addons-163060" to be "Ready" ...
	I0917 16:56:49.965981   20150 pod_ready.go:36] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0917 16:56:49.976858   20150 pod_ready.go:79] waiting up to 6m0s for pod "coredns-7c65d6cfc9-f8spg" in "kube-system" namespace to be "Ready" ...
	I0917 16:56:50.056415   20150 addons.go:431] installing /etc/kubernetes/addons/yakd-dp.yaml
	I0917 16:56:50.056443   20150 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/yakd-dp.yaml (2017 bytes)
	I0917 16:56:50.255693   20150 addons.go:431] installing /etc/kubernetes/addons/rbac-external-resizer.yaml
	I0917 16:56:50.255718   20150 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-resizer.yaml --> /etc/kubernetes/addons/rbac-external-resizer.yaml (2943 bytes)
	I0917 16:56:50.453239   20150 addons.go:431] installing /etc/kubernetes/addons/ig-clusterrole.yaml
	I0917 16:56:50.453323   20150 ssh_runner.go:362] scp inspektor-gadget/ig-clusterrole.yaml --> /etc/kubernetes/addons/ig-clusterrole.yaml (1485 bytes)
	I0917 16:56:50.464471   20150 kapi.go:214] "coredns" deployment in "kube-system" namespace and "addons-163060" context rescaled to 1 replicas
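kubeadm brings CoreDNS up with two replicas; minikube scales the deployment down to one, which is why the second coredns pod (coredns-7c65d6cfc9-k87c9, watched below) ends up terminated rather than Ready. The rescale is equivalent to:

    kubectl -n kube-system scale deployment coredns --replicas=1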
	I0917 16:56:50.549453   20150 addons.go:431] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml
	I0917 16:56:50.549531   20150 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshots.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml (19582 bytes)
	I0917 16:56:50.551041   20150 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/nvidia-device-plugin.yaml: (1.580401766s)
	I0917 16:56:50.654594   20150 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/yakd-ns.yaml -f /etc/kubernetes/addons/yakd-sa.yaml -f /etc/kubernetes/addons/yakd-crb.yaml -f /etc/kubernetes/addons/yakd-svc.yaml -f /etc/kubernetes/addons/yakd-dp.yaml
	I0917 16:56:50.669594   20150 addons.go:431] installing /etc/kubernetes/addons/rbac-external-snapshotter.yaml
	I0917 16:56:50.669621   20150 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-snapshotter.yaml --> /etc/kubernetes/addons/rbac-external-snapshotter.yaml (3149 bytes)
	I0917 16:56:50.750105   20150 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0917 16:56:51.054042   20150 addons.go:431] installing /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml
	I0917 16:56:51.054067   20150 ssh_runner.go:362] scp volumesnapshots/rbac-volume-snapshot-controller.yaml --> /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml (3545 bytes)
	I0917 16:56:51.056758   20150 addons.go:431] installing /etc/kubernetes/addons/ig-clusterrolebinding.yaml
	I0917 16:56:51.056783   20150 ssh_runner.go:362] scp inspektor-gadget/ig-clusterrolebinding.yaml --> /etc/kubernetes/addons/ig-clusterrolebinding.yaml (274 bytes)
	I0917 16:56:51.657322   20150 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-attacher.yaml
	I0917 16:56:51.657398   20150 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-attacher.yaml (2143 bytes)
	I0917 16:56:51.672350   20150 addons.go:431] installing /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I0917 16:56:51.672429   20150 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml (1475 bytes)
	I0917 16:56:51.850850   20150 addons.go:431] installing /etc/kubernetes/addons/ig-crd.yaml
	I0917 16:56:51.851015   20150 ssh_runner.go:362] scp inspektor-gadget/ig-crd.yaml --> /etc/kubernetes/addons/ig-crd.yaml (5216 bytes)
	I0917 16:56:52.052032   20150 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml
	I0917 16:56:52.052120   20150 ssh_runner.go:362] scp csi-hostpath-driver/deploy/csi-hostpath-driverinfo.yaml --> /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml (1274 bytes)
	I0917 16:56:52.062254   20150 pod_ready.go:103] pod "coredns-7c65d6cfc9-f8spg" in "kube-system" namespace has status "Ready":"False"
	I0917 16:56:52.251758   20150 addons.go:431] installing /etc/kubernetes/addons/ig-daemonset.yaml
	I0917 16:56:52.251789   20150 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-daemonset.yaml (7735 bytes)
	I0917 16:56:52.259783   20150 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (3.204752477s)
	I0917 16:56:52.259753   20150 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (3.091096436s)
	I0917 16:56:52.368210   20150 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I0917 16:56:52.449313   20150 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-plugin.yaml
	I0917 16:56:52.449361   20150 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-plugin.yaml (8201 bytes)
	I0917 16:56:52.948868   20150 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-resizer.yaml
	I0917 16:56:52.948909   20150 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-resizer.yaml (2191 bytes)
	I0917 16:56:52.951058   20150 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/ig-namespace.yaml -f /etc/kubernetes/addons/ig-serviceaccount.yaml -f /etc/kubernetes/addons/ig-role.yaml -f /etc/kubernetes/addons/ig-rolebinding.yaml -f /etc/kubernetes/addons/ig-clusterrole.yaml -f /etc/kubernetes/addons/ig-clusterrolebinding.yaml -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-daemonset.yaml
	I0917 16:56:53.351553   20150 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
	I0917 16:56:53.351585   20150 ssh_runner.go:362] scp csi-hostpath-driver/deploy/csi-hostpath-storageclass.yaml --> /etc/kubernetes/addons/csi-hostpath-storageclass.yaml (846 bytes)
	I0917 16:56:54.149974   20150 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
	I0917 16:56:54.553681   20150 pod_ready.go:103] pod "coredns-7c65d6cfc9-f8spg" in "kube-system" namespace has status "Ready":"False"
	I0917 16:56:55.062302   20150 pod_ready.go:93] pod "coredns-7c65d6cfc9-f8spg" in "kube-system" namespace has status "Ready":"True"
	I0917 16:56:55.062337   20150 pod_ready.go:82] duration metric: took 5.085453472s for pod "coredns-7c65d6cfc9-f8spg" in "kube-system" namespace to be "Ready" ...
	I0917 16:56:55.062350   20150 pod_ready.go:79] waiting up to 6m0s for pod "coredns-7c65d6cfc9-k87c9" in "kube-system" namespace to be "Ready" ...
	I0917 16:56:55.453246   20150 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_application_credentials.json (162 bytes)
	I0917 16:56:55.453337   20150 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-163060
	I0917 16:56:55.485414   20150 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/19662-12004/.minikube/machines/addons-163060/id_rsa Username:docker}
	I0917 16:56:56.257437   20150 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_cloud_project (12 bytes)
	I0917 16:56:56.469795   20150 addons.go:234] Setting addon gcp-auth=true in "addons-163060"
	I0917 16:56:56.469860   20150 host.go:66] Checking if "addons-163060" exists ...
	I0917 16:56:56.470377   20150 cli_runner.go:164] Run: docker container inspect addons-163060 --format={{.State.Status}}
	I0917 16:56:56.487308   20150 ssh_runner.go:195] Run: cat /var/lib/minikube/google_application_credentials.json
	I0917 16:56:56.487363   20150 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-163060
	I0917 16:56:56.503000   20150 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/19662-12004/.minikube/machines/addons-163060/id_rsa Username:docker}
	I0917 16:56:57.069194   20150 pod_ready.go:103] pod "coredns-7c65d6cfc9-k87c9" in "kube-system" namespace has status "Ready":"False"
	I0917 16:56:59.072816   20150 pod_ready.go:103] pod "coredns-7c65d6cfc9-k87c9" in "kube-system" namespace has status "Ready":"False"
	I0917 16:57:00.451389   20150 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/deployment.yaml: (11.0999625s)
	I0917 16:57:00.451311   20150 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/volcano-deployment.yaml: (11.195013128s)
	I0917 16:57:00.451503   20150 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner-rancher.yaml: (10.995685384s)
	I0917 16:57:00.451870   20150 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml: (10.900008105s)
	I0917 16:57:00.451923   20150 addons.go:475] Verifying addon ingress=true in "addons-163060"
	I0917 16:57:00.452212   20150 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/helm-tiller-dp.yaml -f /etc/kubernetes/addons/helm-tiller-rbac.yaml -f /etc/kubernetes/addons/helm-tiller-svc.yaml: (10.797598726s)
	I0917 16:57:00.452303   20150 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml: (10.693042573s)
	I0917 16:57:00.452354   20150 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml: (10.603883056s)
	I0917 16:57:00.452907   20150 addons.go:475] Verifying addon registry=true in "addons-163060"
	I0917 16:57:00.452401   20150 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/yakd-ns.yaml -f /etc/kubernetes/addons/yakd-sa.yaml -f /etc/kubernetes/addons/yakd-crb.yaml -f /etc/kubernetes/addons/yakd-svc.yaml -f /etc/kubernetes/addons/yakd-dp.yaml: (9.797721945s)
	I0917 16:57:00.452472   20150 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (9.702336214s)
	I0917 16:57:00.453154   20150 addons.go:475] Verifying addon metrics-server=true in "addons-163060"
	I0917 16:57:00.452576   20150 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (8.084333174s)
	W0917 16:57:00.453199   20150 addons.go:457] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
	stdout:
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
	serviceaccount/snapshot-controller created
	clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
	clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
	role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	deployment.apps/snapshot-controller created
	
	stderr:
	error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
	ensure CRDs are installed first
	I0917 16:57:00.453214   20150 retry.go:31] will retry after 309.290798ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
	stdout:
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
	serviceaccount/snapshot-controller created
	clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
	clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
	role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	deployment.apps/snapshot-controller created
	
	stderr:
	error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
	ensure CRDs are installed first
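The failure above is the usual CRD registration race: the same kubectl apply creates the VolumeSnapshotClass CRD and a VolumeSnapshotClass object, and the API server's discovery has not yet registered the new kind when the object is submitted, hence "ensure CRDs are installed first". The retry.go line shows minikube's answer: retry the apply after a short backoff (and, a little later in this log, re-run it with --force). The sketch below illustrates only that retry-with-backoff pattern; retryWithBackoff and applyManifests are hypothetical stand-ins for illustration, not minikube's actual helpers, and the real backoff includes jitter rather than plain doubling.

```go
// A minimal sketch of the retry pattern logged above, assuming a hypothetical
// applyManifests that stands in for the failing kubectl apply. The first
// attempt fails because the CRD is not yet discoverable; a later attempt
// succeeds once registration has settled.
package main

import (
	"errors"
	"fmt"
	"time"
)

func retryWithBackoff(attempts int, initial time.Duration, fn func() error) error {
	delay := initial
	var err error
	for i := 0; i < attempts; i++ {
		if err = fn(); err == nil {
			return nil
		}
		fmt.Printf("will retry after %v: %v\n", delay, err)
		time.Sleep(delay)
		delay *= 2 // simple doubling; the real retry.go adds jitter
	}
	return fmt.Errorf("all %d attempts failed: %w", attempts, err)
}

func main() {
	calls := 0
	// Stand-in for the kubectl apply above; succeeds on the second call.
	applyManifests := func() error {
		calls++
		if calls < 2 {
			return errors.New(`no matches for kind "VolumeSnapshotClass"`)
		}
		return nil
	}
	if err := retryWithBackoff(5, 300*time.Millisecond, applyManifests); err != nil {
		fmt.Println(err)
	}
}
```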
	I0917 16:57:00.452651   20150 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/ig-namespace.yaml -f /etc/kubernetes/addons/ig-serviceaccount.yaml -f /etc/kubernetes/addons/ig-role.yaml -f /etc/kubernetes/addons/ig-rolebinding.yaml -f /etc/kubernetes/addons/ig-clusterrole.yaml -f /etc/kubernetes/addons/ig-clusterrolebinding.yaml -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-daemonset.yaml: (7.501565854s)
	I0917 16:57:00.455019   20150 out.go:177] * Verifying ingress addon...
	I0917 16:57:00.455022   20150 out.go:177] * To access YAKD - Kubernetes Dashboard, wait for Pod to be ready and run the following command:
	
		minikube -p addons-163060 service yakd-dashboard -n yakd-dashboard
	
	I0917 16:57:00.456141   20150 out.go:177] * Verifying registry addon...
	I0917 16:57:00.461253   20150 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=registry" in ns "kube-system" ...
	I0917 16:57:00.461256   20150 kapi.go:75] Waiting for pod with label "app.kubernetes.io/name=ingress-nginx" in ns "ingress-nginx" ...
	I0917 16:57:00.467306   20150 kapi.go:86] Found 2 Pods for label selector kubernetes.io/minikube-addons=registry
	I0917 16:57:00.467332   20150 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0917 16:57:00.467760   20150 kapi.go:86] Found 3 Pods for label selector app.kubernetes.io/name=ingress-nginx
	I0917 16:57:00.467783   20150 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
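The kapi.go lines above are a simple poll: list the pods matching a label selector in a namespace and log their state until they leave Pending. The following client-go sketch shows the shape of that loop under stated assumptions: waitForLabel is a hypothetical name, a kubeconfig is assumed at the default path, and it checks only the pod phase, whereas minikube ultimately waits on the full Ready condition.

```go
// A minimal sketch of polling pods by label selector, as in kapi.go:75/96.
package main

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func waitForLabel(cs *kubernetes.Clientset, ns, selector string, timeout time.Duration) error {
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		pods, err := cs.CoreV1().Pods(ns).List(context.TODO(),
			metav1.ListOptions{LabelSelector: selector})
		if err == nil {
			for _, p := range pods.Items {
				if p.Status.Phase == corev1.PodRunning {
					return nil
				}
			}
		}
		time.Sleep(500 * time.Millisecond) // poll interval, roughly as in the log
	}
	return fmt.Errorf("timed out waiting for %q in %q", selector, ns)
}

func main() {
	config, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		panic(err)
	}
	cs := kubernetes.NewForConfigOrDie(config)
	fmt.Println(waitForLabel(cs, "kube-system", "kubernetes.io/minikube-addons=registry", 6*time.Minute))
}
```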
	I0917 16:57:00.663779   20150 pod_ready.go:98] pod "coredns-7c65d6cfc9-k87c9" in "kube-system" namespace has status phase "Succeeded" (skipping!): {Phase:Succeeded Conditions:[{Type:PodReadyToStartContainers Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2024-09-17 16:57:00 +0000 UTC Reason: Message:} {Type:Initialized Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2024-09-17 16:56:48 +0000 UTC Reason:PodCompleted Message:} {Type:Ready Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2024-09-17 16:56:48 +0000 UTC Reason:PodCompleted Message:} {Type:ContainersReady Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2024-09-17 16:56:48 +0000 UTC Reason:PodCompleted Message:} {Type:PodScheduled Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2024-09-17 16:56:48 +0000 UTC Reason: Message:}] Message: Reason: NominatedNodeName: HostIP:192.168.49.2 HostIPs:[{IP:192.168.49.2}] PodIP:10.244.0.2 PodIPs:[{IP:10.244.0.2}] StartTime:2024-09-17 16:56:48 +0000 UTC InitContainerStatuses:[] ContainerStatuses:[{Name:coredns State:{Waiting:nil Running:nil Terminated:&ContainerStateTerminated{ExitCode:0,Signal:0,Reason:Completed,Message:,StartedAt:2024-09-17 16:56:50 +0000 UTC,FinishedAt:2024-09-17 16:56:58 +0000 UTC,ContainerID:docker://6adc028f05fabe97df4091159f08bb34f70f3dceacb1c09e4a49b77933cd5f3e,}} LastTerminationState:{Waiting:nil Running:nil Terminated:nil} Ready:false RestartCount:0 Image:registry.k8s.io/coredns/coredns:v1.11.3 ImageID:docker-pullable://registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e ContainerID:docker://6adc028f05fabe97df4091159f08bb34f70f3dceacb1c09e4a49b77933cd5f3e Started:0xc00208a500 AllocatedResources:map[] Resources:nil VolumeMounts:[{Name:config-volume MountPath:/etc/coredns ReadOnly:true RecursiveReadOnly:0xc001abff60} {Name:kube-api-access-ff2pg MountPath:/var/run/secrets/kubernetes.io/serviceaccount ReadOnly:true RecursiveReadOnly:0xc001abff70}] User:nil AllocatedResourcesStatus:[]}] QOSClass:Burstable EphemeralContainerStatuses:[] Resize: ResourceClaimStatuses:[]}
	I0917 16:57:00.663909   20150 pod_ready.go:82] duration metric: took 5.601549044s for pod "coredns-7c65d6cfc9-k87c9" in "kube-system" namespace to be "Ready" ...
	E0917 16:57:00.663939   20150 pod_ready.go:67] WaitExtra: waitPodCondition: pod "coredns-7c65d6cfc9-k87c9" in "kube-system" namespace has status phase "Succeeded" (skipping!): {Phase:Succeeded Conditions:[{Type:PodReadyToStartContainers Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2024-09-17 16:57:00 +0000 UTC Reason: Message:} {Type:Initialized Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2024-09-17 16:56:48 +0000 UTC Reason:PodCompleted Message:} {Type:Ready Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2024-09-17 16:56:48 +0000 UTC Reason:PodCompleted Message:} {Type:ContainersReady Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2024-09-17 16:56:48 +0000 UTC Reason:PodCompleted Message:} {Type:PodScheduled Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2024-09-17 16:56:48 +0000 UTC Reason: Message:}] Message: Reason: NominatedNodeName: HostIP:192.168.49.2 HostIPs:[{IP:192.168.49.2}] PodIP:10.244.0.2 PodIPs:[{IP:10.244.0.2}] StartTime:2024-09-17 16:56:48 +0000 UTC InitContainerStatuses:[] ContainerStatuses:[{Name:coredns State:{Waiting:nil Running:nil Terminated:&ContainerStateTerminated{ExitCode:0,Signal:0,Reason:Completed,Message:,StartedAt:2024-09-17 16:56:50 +0000 UTC,FinishedAt:2024-09-17 16:56:58 +0000 UTC,ContainerID:docker://6adc028f05fabe97df4091159f08bb34f70f3dceacb1c09e4a49b77933cd5f3e,}} LastTerminationState:{Waiting:nil Running:nil Terminated:nil} Ready:false RestartCount:0 Image:registry.k8s.io/coredns/coredns:v1.11.3 ImageID:docker-pullable://registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e ContainerID:docker://6adc028f05fabe97df4091159f08bb34f70f3dceacb1c09e4a49b77933cd5f3e Started:0xc00208a500 AllocatedResources:map[] Resources:nil VolumeMounts:[{Name:config-volume MountPath:/etc/coredns ReadOnly:true RecursiveReadOnly:0xc001abff60} {Name:kube-api-access-ff2pg MountPath:/var/run/secrets/kubernetes.io/serviceaccount ReadOnly:true RecursiveReadOnly:0xc001abff70}] User:nil AllocatedResourcesStatus:[]}] QOSClass:Burstable EphemeralContainerStatuses:[] Resize: ResourceClaimStatuses:[]}
	I0917 16:57:00.663960   20150 pod_ready.go:79] waiting up to 6m0s for pod "etcd-addons-163060" in "kube-system" namespace to be "Ready" ...
	I0917 16:57:00.756779   20150 pod_ready.go:93] pod "etcd-addons-163060" in "kube-system" namespace has status "Ready":"True"
	I0917 16:57:00.756867   20150 pod_ready.go:82] duration metric: took 92.885407ms for pod "etcd-addons-163060" in "kube-system" namespace to be "Ready" ...
	I0917 16:57:00.756897   20150 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-addons-163060" in "kube-system" namespace to be "Ready" ...
	I0917 16:57:00.763598   20150 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I0917 16:57:00.852480   20150 pod_ready.go:93] pod "kube-apiserver-addons-163060" in "kube-system" namespace has status "Ready":"True"
	I0917 16:57:00.852571   20150 pod_ready.go:82] duration metric: took 95.653344ms for pod "kube-apiserver-addons-163060" in "kube-system" namespace to be "Ready" ...
	I0917 16:57:00.852600   20150 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-addons-163060" in "kube-system" namespace to be "Ready" ...
	I0917 16:57:00.859572   20150 pod_ready.go:93] pod "kube-controller-manager-addons-163060" in "kube-system" namespace has status "Ready":"True"
	I0917 16:57:00.859648   20150 pod_ready.go:82] duration metric: took 7.028464ms for pod "kube-controller-manager-addons-163060" in "kube-system" namespace to be "Ready" ...
	I0917 16:57:00.859674   20150 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-9xj99" in "kube-system" namespace to be "Ready" ...
	I0917 16:57:00.867948   20150 pod_ready.go:93] pod "kube-proxy-9xj99" in "kube-system" namespace has status "Ready":"True"
	I0917 16:57:00.867980   20150 pod_ready.go:82] duration metric: took 8.287251ms for pod "kube-proxy-9xj99" in "kube-system" namespace to be "Ready" ...
	I0917 16:57:00.867993   20150 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-addons-163060" in "kube-system" namespace to be "Ready" ...
	I0917 16:57:00.965323   20150 pod_ready.go:93] pod "kube-scheduler-addons-163060" in "kube-system" namespace has status "Ready":"True"
	I0917 16:57:00.965365   20150 pod_ready.go:82] duration metric: took 97.361861ms for pod "kube-scheduler-addons-163060" in "kube-system" namespace to be "Ready" ...
	I0917 16:57:00.965375   20150 pod_ready.go:39] duration metric: took 10.999334441s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0917 16:57:00.965401   20150 api_server.go:52] waiting for apiserver process to appear ...
	I0917 16:57:00.965506   20150 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0917 16:57:00.965633   20150 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0917 16:57:00.967443   20150 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0917 16:57:01.468095   20150 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0917 16:57:01.469325   20150 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0917 16:57:01.654093   20150 ssh_runner.go:235] Completed: cat /var/lib/minikube/google_application_credentials.json: (5.166744622s)
	I0917 16:57:01.654091   20150 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml: (7.503844666s)
	I0917 16:57:01.654324   20150 addons.go:475] Verifying addon csi-hostpath-driver=true in "addons-163060"
	I0917 16:57:01.656939   20150 out.go:177]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.4.3
	I0917 16:57:01.656954   20150 out.go:177] * Verifying csi-hostpath-driver addon...
	I0917 16:57:01.658727   20150 out.go:177]   - Using image gcr.io/k8s-minikube/gcp-auth-webhook:v0.1.2
	I0917 16:57:01.659590   20150 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=csi-hostpath-driver" in ns "kube-system" ...
	I0917 16:57:01.660953   20150 addons.go:431] installing /etc/kubernetes/addons/gcp-auth-ns.yaml
	I0917 16:57:01.661003   20150 ssh_runner.go:362] scp gcp-auth/gcp-auth-ns.yaml --> /etc/kubernetes/addons/gcp-auth-ns.yaml (700 bytes)
	I0917 16:57:01.666058   20150 kapi.go:86] Found 3 Pods for label selector kubernetes.io/minikube-addons=csi-hostpath-driver
	I0917 16:57:01.666124   20150 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0917 16:57:01.761915   20150 addons.go:431] installing /etc/kubernetes/addons/gcp-auth-service.yaml
	I0917 16:57:01.761998   20150 ssh_runner.go:362] scp gcp-auth/gcp-auth-service.yaml --> /etc/kubernetes/addons/gcp-auth-service.yaml (788 bytes)
	I0917 16:57:01.856831   20150 addons.go:431] installing /etc/kubernetes/addons/gcp-auth-webhook.yaml
	I0917 16:57:01.856858   20150 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/gcp-auth-webhook.yaml (5421 bytes)
	I0917 16:57:01.947822   20150 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/gcp-auth-ns.yaml -f /etc/kubernetes/addons/gcp-auth-service.yaml -f /etc/kubernetes/addons/gcp-auth-webhook.yaml
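The three addons.go/ssh_runner.go steps above show the addon install pattern visible throughout this log: scp each manifest into /etc/kubernetes/addons/ on the node, then apply the whole set with the version-pinned kubectl binary and the node's kubeconfig. A minimal sketch of that final apply step follows; it mirrors the Run line above and is meant to execute inside the node, so running it anywhere else is illustration only.

```go
// A minimal sketch of the gcp-auth apply above. sudo accepts VAR=value
// assignments before the command, which is how the log passes KUBECONFIG
// through to the pinned kubectl binary.
package main

import (
	"fmt"
	"os/exec"
)

func main() {
	out, err := exec.Command("sudo",
		"KUBECONFIG=/var/lib/minikube/kubeconfig",
		"/var/lib/minikube/binaries/v1.31.1/kubectl", "apply",
		"-f", "/etc/kubernetes/addons/gcp-auth-ns.yaml",
		"-f", "/etc/kubernetes/addons/gcp-auth-service.yaml",
		"-f", "/etc/kubernetes/addons/gcp-auth-webhook.yaml",
	).CombinedOutput()
	fmt.Printf("%s(err=%v)\n", out, err)
}
```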
	I0917 16:57:01.965625   20150 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0917 16:57:01.967455   20150 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0917 16:57:02.166119   20150 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0917 16:57:02.466769   20150 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0917 16:57:02.466831   20150 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0917 16:57:02.667023   20150 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0917 16:57:02.966195   20150 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0917 16:57:02.966903   20150 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0917 16:57:03.166367   20150 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0917 16:57:03.451791   20150 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (2.688080518s)
	I0917 16:57:03.451819   20150 ssh_runner.go:235] Completed: sudo pgrep -xnf kube-apiserver.*minikube.*: (2.486276073s)
	I0917 16:57:03.451903   20150 api_server.go:72] duration metric: took 15.092335753s to wait for apiserver process to appear ...
	I0917 16:57:03.451908   20150 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/gcp-auth-ns.yaml -f /etc/kubernetes/addons/gcp-auth-service.yaml -f /etc/kubernetes/addons/gcp-auth-webhook.yaml: (1.504007862s)
	I0917 16:57:03.451915   20150 api_server.go:88] waiting for apiserver healthz status ...
	I0917 16:57:03.452162   20150 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I0917 16:57:03.453466   20150 addons.go:475] Verifying addon gcp-auth=true in "addons-163060"
	I0917 16:57:03.455394   20150 out.go:177] * Verifying gcp-auth addon...
	I0917 16:57:03.457639   20150 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=gcp-auth" in ns "gcp-auth" ...
	I0917 16:57:03.458055   20150 api_server.go:279] https://192.168.49.2:8443/healthz returned 200:
	ok
	I0917 16:57:03.458819   20150 api_server.go:141] control plane version: v1.31.1
	I0917 16:57:03.458841   20150 api_server.go:131] duration metric: took 6.70063ms to wait for apiserver health ...
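The healthz probe logged above is a plain HTTPS GET against the apiserver: an HTTP 200 with body "ok" counts as healthy. A minimal sketch, with the caveat that it skips TLS verification for brevity where a real client would trust the cluster CA; the URL matches the log but is otherwise an assumption of this example.

```go
// A minimal sketch of the /healthz check at api_server.go:253/279.
package main

import (
	"crypto/tls"
	"fmt"
	"io"
	"net/http"
	"time"
)

func main() {
	client := &http.Client{
		Timeout: 5 * time.Second,
		Transport: &http.Transport{
			// Skipped only in this sketch; production code should load the
			// cluster CA certificate instead.
			TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
		},
	}
	resp, err := client.Get("https://192.168.49.2:8443/healthz")
	if err != nil {
		fmt.Println("healthz check failed:", err)
		return
	}
	defer resp.Body.Close()
	body, _ := io.ReadAll(resp.Body)
	fmt.Printf("returned %d: %s\n", resp.StatusCode, body)
}
```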
	I0917 16:57:03.458851   20150 system_pods.go:43] waiting for kube-system pods to appear ...
	I0917 16:57:03.459717   20150 kapi.go:86] Found 0 Pods for label selector kubernetes.io/minikube-addons=gcp-auth
	I0917 16:57:03.464302   20150 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0917 16:57:03.464536   20150 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0917 16:57:03.467502   20150 system_pods.go:59] 18 kube-system pods found
	I0917 16:57:03.467542   20150 system_pods.go:61] "coredns-7c65d6cfc9-f8spg" [e61f3b7d-af39-440c-a80e-dc94ddb90c07] Running
	I0917 16:57:03.467560   20150 system_pods.go:61] "csi-hostpath-attacher-0" [b2f35197-efe2-4181-ad9d-4fc3a1c5cb47] Pending / Ready:ContainersNotReady (containers with unready status: [csi-attacher]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-attacher])
	I0917 16:57:03.467572   20150 system_pods.go:61] "csi-hostpath-resizer-0" [48a87fd7-c4de-4b57-b089-3d3cd4d802ea] Pending / Ready:ContainersNotReady (containers with unready status: [csi-resizer]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-resizer])
	I0917 16:57:03.467587   20150 system_pods.go:61] "csi-hostpathplugin-kfwr4" [4dad6b37-811d-40dd-81e3-f10b2e9d00b9] Pending / Ready:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter])
	I0917 16:57:03.467597   20150 system_pods.go:61] "etcd-addons-163060" [ab42a568-f4f1-4a0d-964d-030dba5c9eef] Running
	I0917 16:57:03.467604   20150 system_pods.go:61] "kube-apiserver-addons-163060" [e849fea6-23da-4f64-a864-d5e83d161f5f] Running
	I0917 16:57:03.467612   20150 system_pods.go:61] "kube-controller-manager-addons-163060" [66594d00-9610-4b6f-861f-4e049879121e] Running
	I0917 16:57:03.467620   20150 system_pods.go:61] "kube-ingress-dns-minikube" [315f956b-f0bb-433b-9cc2-55581bcebdd4] Pending / Ready:ContainersNotReady (containers with unready status: [minikube-ingress-dns]) / ContainersReady:ContainersNotReady (containers with unready status: [minikube-ingress-dns])
	I0917 16:57:03.467628   20150 system_pods.go:61] "kube-proxy-9xj99" [94e38d82-f813-4c73-ad0f-2b1d5bfd1a97] Running
	I0917 16:57:03.467636   20150 system_pods.go:61] "kube-scheduler-addons-163060" [1a1eaf4b-4adc-4a9f-bcf3-2f8a56738f62] Running
	I0917 16:57:03.467646   20150 system_pods.go:61] "metrics-server-84c5f94fbc-2f2f2" [03a25efb-5c8d-4637-b228-6bb67ccb601f] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0917 16:57:03.467658   20150 system_pods.go:61] "nvidia-device-plugin-daemonset-fvg2d" [69980d79-6040-46a6-92e4-f154f528e261] Pending / Ready:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr]) / ContainersReady:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr])
	I0917 16:57:03.467676   20150 system_pods.go:61] "registry-66c9cd494c-xnftt" [87171e43-6b56-423a-ac20-6b46a3583197] Pending / Ready:ContainersNotReady (containers with unready status: [registry]) / ContainersReady:ContainersNotReady (containers with unready status: [registry])
	I0917 16:57:03.467689   20150 system_pods.go:61] "registry-proxy-9ztsk" [de43c7a6-1992-4444-969d-d41949e06cdb] Pending / Ready:ContainersNotReady (containers with unready status: [registry-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-proxy])
	I0917 16:57:03.467700   20150 system_pods.go:61] "snapshot-controller-56fcc65765-fzkt2" [db3a7532-a6e9-481f-9449-e5e5f81fb4db] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I0917 16:57:03.467711   20150 system_pods.go:61] "snapshot-controller-56fcc65765-t7gvk" [6e3f0826-ee2e-4a50-9797-2c8c8954b6a8] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I0917 16:57:03.467717   20150 system_pods.go:61] "storage-provisioner" [bbad410c-6bb7-404e-bfd5-cb7d0e8f806c] Running
	I0917 16:57:03.467727   20150 system_pods.go:61] "tiller-deploy-b48cc5f79-qd92x" [6de636f4-5713-4439-9d76-756777a66ef2] Pending / Ready:ContainersNotReady (containers with unready status: [tiller]) / ContainersReady:ContainersNotReady (containers with unready status: [tiller])
	I0917 16:57:03.467735   20150 system_pods.go:74] duration metric: took 8.877494ms to wait for pod list to return data ...
	I0917 16:57:03.467747   20150 default_sa.go:34] waiting for default service account to be created ...
	I0917 16:57:03.469710   20150 default_sa.go:45] found service account: "default"
	I0917 16:57:03.469726   20150 default_sa.go:55] duration metric: took 1.973525ms for default service account to be created ...
	I0917 16:57:03.469735   20150 system_pods.go:116] waiting for k8s-apps to be running ...
	I0917 16:57:03.481005   20150 system_pods.go:86] 18 kube-system pods found
	I0917 16:57:03.481031   20150 system_pods.go:89] "coredns-7c65d6cfc9-f8spg" [e61f3b7d-af39-440c-a80e-dc94ddb90c07] Running
	I0917 16:57:03.481040   20150 system_pods.go:89] "csi-hostpath-attacher-0" [b2f35197-efe2-4181-ad9d-4fc3a1c5cb47] Pending / Ready:ContainersNotReady (containers with unready status: [csi-attacher]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-attacher])
	I0917 16:57:03.481046   20150 system_pods.go:89] "csi-hostpath-resizer-0" [48a87fd7-c4de-4b57-b089-3d3cd4d802ea] Pending / Ready:ContainersNotReady (containers with unready status: [csi-resizer]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-resizer])
	I0917 16:57:03.481057   20150 system_pods.go:89] "csi-hostpathplugin-kfwr4" [4dad6b37-811d-40dd-81e3-f10b2e9d00b9] Pending / Ready:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter])
	I0917 16:57:03.481065   20150 system_pods.go:89] "etcd-addons-163060" [ab42a568-f4f1-4a0d-964d-030dba5c9eef] Running
	I0917 16:57:03.481069   20150 system_pods.go:89] "kube-apiserver-addons-163060" [e849fea6-23da-4f64-a864-d5e83d161f5f] Running
	I0917 16:57:03.481073   20150 system_pods.go:89] "kube-controller-manager-addons-163060" [66594d00-9610-4b6f-861f-4e049879121e] Running
	I0917 16:57:03.481082   20150 system_pods.go:89] "kube-ingress-dns-minikube" [315f956b-f0bb-433b-9cc2-55581bcebdd4] Pending / Ready:ContainersNotReady (containers with unready status: [minikube-ingress-dns]) / ContainersReady:ContainersNotReady (containers with unready status: [minikube-ingress-dns])
	I0917 16:57:03.481089   20150 system_pods.go:89] "kube-proxy-9xj99" [94e38d82-f813-4c73-ad0f-2b1d5bfd1a97] Running
	I0917 16:57:03.481094   20150 system_pods.go:89] "kube-scheduler-addons-163060" [1a1eaf4b-4adc-4a9f-bcf3-2f8a56738f62] Running
	I0917 16:57:03.481102   20150 system_pods.go:89] "metrics-server-84c5f94fbc-2f2f2" [03a25efb-5c8d-4637-b228-6bb67ccb601f] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0917 16:57:03.481108   20150 system_pods.go:89] "nvidia-device-plugin-daemonset-fvg2d" [69980d79-6040-46a6-92e4-f154f528e261] Pending / Ready:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr]) / ContainersReady:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr])
	I0917 16:57:03.481125   20150 system_pods.go:89] "registry-66c9cd494c-xnftt" [87171e43-6b56-423a-ac20-6b46a3583197] Pending / Ready:ContainersNotReady (containers with unready status: [registry]) / ContainersReady:ContainersNotReady (containers with unready status: [registry])
	I0917 16:57:03.481133   20150 system_pods.go:89] "registry-proxy-9ztsk" [de43c7a6-1992-4444-969d-d41949e06cdb] Pending / Ready:ContainersNotReady (containers with unready status: [registry-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-proxy])
	I0917 16:57:03.481138   20150 system_pods.go:89] "snapshot-controller-56fcc65765-fzkt2" [db3a7532-a6e9-481f-9449-e5e5f81fb4db] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I0917 16:57:03.481147   20150 system_pods.go:89] "snapshot-controller-56fcc65765-t7gvk" [6e3f0826-ee2e-4a50-9797-2c8c8954b6a8] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I0917 16:57:03.481155   20150 system_pods.go:89] "storage-provisioner" [bbad410c-6bb7-404e-bfd5-cb7d0e8f806c] Running
	I0917 16:57:03.481162   20150 system_pods.go:89] "tiller-deploy-b48cc5f79-qd92x" [6de636f4-5713-4439-9d76-756777a66ef2] Pending / Ready:ContainersNotReady (containers with unready status: [tiller]) / ContainersReady:ContainersNotReady (containers with unready status: [tiller])
	I0917 16:57:03.481168   20150 system_pods.go:126] duration metric: took 11.428374ms to wait for k8s-apps to be running ...
	I0917 16:57:03.481177   20150 system_svc.go:44] waiting for kubelet service to be running ....
	I0917 16:57:03.481215   20150 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0917 16:57:03.494547   20150 system_svc.go:56] duration metric: took 13.362249ms WaitForService to wait for kubelet
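The kubelet check above leans entirely on systemctl's exit status: `systemctl is-active --quiet <unit>` exits 0 exactly when the unit is active, so no output parsing is needed. minikube runs it over SSH inside the node; the local sketch below shows just the exit-code idiom, with serviceActive as a hypothetical helper name.

```go
// A minimal sketch of the kubelet liveness check at system_svc.go:44.
package main

import (
	"fmt"
	"os/exec"
)

func serviceActive(unit string) bool {
	// --quiet suppresses output; the exit status alone carries the answer,
	// and Run returns a non-nil error for any non-zero exit.
	return exec.Command("systemctl", "is-active", "--quiet", unit).Run() == nil
}

func main() {
	fmt.Println("kubelet active:", serviceActive("kubelet"))
}
```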
	I0917 16:57:03.494572   20150 kubeadm.go:582] duration metric: took 15.135006283s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0917 16:57:03.494592   20150 node_conditions.go:102] verifying NodePressure condition ...
	I0917 16:57:03.497623   20150 node_conditions.go:122] node storage ephemeral capacity is 304681132Ki
	I0917 16:57:03.497657   20150 node_conditions.go:123] node cpu capacity is 8
	I0917 16:57:03.497672   20150 node_conditions.go:105] duration metric: took 3.074367ms to run NodePressure ...
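The NodePressure verification above reads the node object: the capacities logged (304681132Ki ephemeral storage, 8 CPUs) come from Node.Status.Capacity, and the check passes when no pressure condition is True. A hedged client-go sketch under the same kubeconfig assumption as earlier:

```go
// A minimal sketch of reading node capacity and pressure conditions, as in
// node_conditions.go:102-123.
package main

import (
	"context"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	config, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		panic(err)
	}
	cs := kubernetes.NewForConfigOrDie(config)
	nodes, err := cs.CoreV1().Nodes().List(context.TODO(), metav1.ListOptions{})
	if err != nil {
		panic(err)
	}
	for _, n := range nodes.Items {
		fmt.Printf("%s cpu=%s ephemeral-storage=%s\n", n.Name,
			n.Status.Capacity.Cpu(), n.Status.Capacity.StorageEphemeral())
		for _, c := range n.Status.Conditions {
			// MemoryPressure, DiskPressure and PIDPressure should all be
			// False on a healthy node; only Ready should be True.
			if c.Type != corev1.NodeReady && c.Status == corev1.ConditionTrue {
				fmt.Printf("  pressure condition %s is True: %s\n", c.Type, c.Message)
			}
		}
	}
}
```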
	I0917 16:57:03.497686   20150 start.go:241] waiting for startup goroutines ...
	I0917 16:57:03.664820   20150 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0917 16:57:03.964200   20150 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0917 16:57:03.964640   20150 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0917 16:57:04.164643   20150 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0917 16:57:04.464649   20150 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0917 16:57:04.464884   20150 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0917 16:57:04.664206   20150 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0917 16:57:04.964808   20150 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0917 16:57:04.964971   20150 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0917 16:57:05.164559   20150 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0917 16:57:05.464466   20150 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0917 16:57:05.465556   20150 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0917 16:57:05.663902   20150 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0917 16:57:05.965020   20150 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0917 16:57:05.965359   20150 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0917 16:57:06.164234   20150 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0917 16:57:06.465288   20150 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0917 16:57:06.465637   20150 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0917 16:57:06.663803   20150 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0917 16:57:06.964941   20150 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0917 16:57:06.965408   20150 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0917 16:57:07.164286   20150 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0917 16:57:07.464991   20150 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0917 16:57:07.465939   20150 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0917 16:57:07.664542   20150 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0917 16:57:07.964303   20150 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0917 16:57:07.964373   20150 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0917 16:57:08.165598   20150 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0917 16:57:08.463971   20150 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0917 16:57:08.464252   20150 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0917 16:57:08.663468   20150 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0917 16:57:08.964405   20150 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0917 16:57:08.964976   20150 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0917 16:57:09.164239   20150 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0917 16:57:09.465338   20150 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0917 16:57:09.465532   20150 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0917 16:57:09.663654   20150 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0917 16:57:09.964213   20150 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0917 16:57:09.964530   20150 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0917 16:57:10.163564   20150 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0917 16:57:10.465044   20150 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0917 16:57:10.465231   20150 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0917 16:57:10.663045   20150 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0917 16:57:10.964019   20150 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0917 16:57:10.964243   20150 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0917 16:57:11.164544   20150 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0917 16:57:11.464895   20150 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0917 16:57:11.465260   20150 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0917 16:57:11.664171   20150 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0917 16:57:11.964755   20150 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0917 16:57:11.966233   20150 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0917 16:57:12.163953   20150 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0917 16:57:12.464959   20150 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0917 16:57:12.466455   20150 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0917 16:57:12.664166   20150 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0917 16:57:12.964029   20150 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0917 16:57:12.964351   20150 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0917 16:57:13.163756   20150 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0917 16:57:13.464211   20150 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0917 16:57:13.464319   20150 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0917 16:57:13.663919   20150 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0917 16:57:13.965040   20150 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0917 16:57:13.965278   20150 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0917 16:57:14.163793   20150 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0917 16:57:14.464800   20150 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0917 16:57:14.464901   20150 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0917 16:57:14.663937   20150 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0917 16:57:14.963975   20150 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0917 16:57:14.964105   20150 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0917 16:57:15.163126   20150 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0917 16:57:15.465034   20150 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0917 16:57:15.465640   20150 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0917 16:57:15.664225   20150 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0917 16:57:15.964599   20150 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0917 16:57:15.965263   20150 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0917 16:57:16.164691   20150 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0917 16:57:16.465011   20150 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0917 16:57:16.465394   20150 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0917 16:57:16.664729   20150 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0917 16:57:16.964873   20150 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0917 16:57:16.965102   20150 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0917 16:57:17.164441   20150 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0917 16:57:17.463869   20150 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0917 16:57:17.464374   20150 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0917 16:57:17.663232   20150 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0917 16:57:17.964155   20150 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0917 16:57:17.965729   20150 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0917 16:57:18.164193   20150 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0917 16:57:18.464826   20150 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0917 16:57:18.465055   20150 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0917 16:57:18.665062   20150 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0917 16:57:18.964257   20150 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0917 16:57:18.964718   20150 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0917 16:57:19.164092   20150 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0917 16:57:19.464412   20150 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0917 16:57:19.465154   20150 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0917 16:57:19.663870   20150 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0917 16:57:19.964088   20150 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0917 16:57:19.964244   20150 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0917 16:57:20.163807   20150 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0917 16:57:20.464528   20150 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0917 16:57:20.464734   20150 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0917 16:57:20.663862   20150 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0917 16:57:20.964172   20150 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0917 16:57:20.964724   20150 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0917 16:57:21.163037   20150 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0917 16:57:21.463909   20150 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0917 16:57:21.464247   20150 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0917 16:57:21.663991   20150 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0917 16:57:21.964667   20150 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0917 16:57:21.964814   20150 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0917 16:57:22.165154   20150 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0917 16:57:22.464490   20150 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0917 16:57:22.465616   20150 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0917 16:57:22.663455   20150 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0917 16:57:22.964368   20150 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0917 16:57:22.964629   20150 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0917 16:57:23.163574   20150 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0917 16:57:23.464542   20150 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0917 16:57:23.464673   20150 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0917 16:57:23.663843   20150 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0917 16:57:23.964417   20150 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0917 16:57:23.965017   20150 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0917 16:57:24.164544   20150 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0917 16:57:24.464044   20150 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0917 16:57:24.464390   20150 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0917 16:57:24.663701   20150 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0917 16:57:24.964882   20150 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0917 16:57:24.965265   20150 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0917 16:57:25.164525   20150 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0917 16:57:25.464073   20150 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0917 16:57:25.465382   20150 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0917 16:57:25.663913   20150 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0917 16:57:25.965522   20150 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0917 16:57:25.965918   20150 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0917 16:57:26.163718   20150 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0917 16:57:26.464374   20150 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0917 16:57:26.464888   20150 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0917 16:57:26.664659   20150 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0917 16:57:26.964176   20150 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0917 16:57:26.964568   20150 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0917 16:57:27.163759   20150 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0917 16:57:27.464637   20150 kapi.go:107] duration metric: took 27.003377641s to wait for kubernetes.io/minikube-addons=registry ...
	I0917 16:57:27.466002   20150 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0917 16:57:27.663874   20150 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0917 16:57:27.965336   20150 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0917 16:57:28.163248   20150 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0917 16:57:28.465073   20150 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0917 16:57:28.663967   20150 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0917 16:57:28.964343   20150 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0917 16:57:29.163137   20150 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0917 16:57:29.464667   20150 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0917 16:57:29.663554   20150 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0917 16:57:29.965194   20150 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0917 16:57:30.164069   20150 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0917 16:57:30.464950   20150 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0917 16:57:30.664040   20150 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0917 16:57:30.965486   20150 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0917 16:57:31.163804   20150 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0917 16:57:31.465692   20150 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0917 16:57:31.663374   20150 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0917 16:57:31.964359   20150 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0917 16:57:32.163317   20150 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0917 16:57:32.465812   20150 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0917 16:57:32.664076   20150 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0917 16:57:32.965154   20150 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0917 16:57:33.164707   20150 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0917 16:57:33.465098   20150 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0917 16:57:33.665134   20150 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0917 16:57:33.964876   20150 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0917 16:57:34.163914   20150 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0917 16:57:34.466326   20150 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0917 16:57:34.664758   20150 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0917 16:57:34.965074   20150 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0917 16:57:35.164860   20150 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0917 16:57:35.464952   20150 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0917 16:57:35.664662   20150 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0917 16:57:35.964857   20150 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0917 16:57:36.164475   20150 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0917 16:57:36.465417   20150 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0917 16:57:36.665414   20150 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0917 16:57:36.964839   20150 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0917 16:57:37.164055   20150 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0917 16:57:37.464759   20150 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0917 16:57:37.664153   20150 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0917 16:57:37.964605   20150 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0917 16:57:38.163148   20150 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0917 16:57:38.465747   20150 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0917 16:57:38.664245   20150 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0917 16:57:38.964934   20150 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0917 16:57:39.164537   20150 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0917 16:57:39.549800   20150 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0917 16:57:39.663883   20150 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	... (kapi.go:96 re-polls both selectors at ~500ms intervals; app.kubernetes.io/name=ingress-nginx and kubernetes.io/minikube-addons=csi-hostpath-driver both remain Pending through 16:57:58) ...
	I0917 16:57:58.663583   20150 kapi.go:107] duration metric: took 57.003993376s to wait for kubernetes.io/minikube-addons=csi-hostpath-driver ...
	I0917 16:57:58.964775   20150 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	... (ingress-nginx selector re-polled at ~500ms intervals, still Pending, through 16:58:06) ...
	I0917 16:58:07.468751   20150 kapi.go:107] duration metric: took 1m7.007491906s to wait for app.kubernetes.io/name=ingress-nginx ...
	I0917 16:58:25.460816   20150 kapi.go:86] Found 1 Pods for label selector kubernetes.io/minikube-addons=gcp-auth
	I0917 16:58:25.460842   20150 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	... (gcp-auth selector re-polled at ~500ms intervals, still Pending, from 16:58:25 through 16:59:34) ...
	I0917 16:59:34.960617   20150 kapi.go:107] duration metric: took 2m31.502971388s to wait for kubernetes.io/minikube-addons=gcp-auth ...
	I0917 16:59:34.962322   20150 out.go:177] * Your GCP credentials will now be mounted into every pod created in the addons-163060 cluster.
	I0917 16:59:34.963591   20150 out.go:177] * If you don't want your credentials mounted into a specific pod, add a label with the `gcp-auth-skip-secret` key to your pod configuration.
	I0917 16:59:34.964898   20150 out.go:177] * If you want existing pods to be mounted with credentials, either recreate them or rerun addons enable with --refresh.
	I0917 16:59:34.966347   20150 out.go:177] * Enabled addons: nvidia-device-plugin, storage-provisioner, default-storageclass, cloud-spanner, volcano, helm-tiller, ingress-dns, metrics-server, inspektor-gadget, yakd, storage-provisioner-rancher, volumesnapshots, registry, csi-hostpath-driver, ingress, gcp-auth
	I0917 16:59:34.967591   20150 addons.go:510] duration metric: took 2m46.608002975s for enable addons: enabled=[nvidia-device-plugin storage-provisioner default-storageclass cloud-spanner volcano helm-tiller ingress-dns metrics-server inspektor-gadget yakd storage-provisioner-rancher volumesnapshots registry csi-hostpath-driver ingress gcp-auth]
	I0917 16:59:34.967631   20150 start.go:246] waiting for cluster config update ...
	I0917 16:59:34.967655   20150 start.go:255] writing updated cluster config ...
	I0917 16:59:34.967917   20150 ssh_runner.go:195] Run: rm -f paused
	I0917 16:59:35.017291   20150 start.go:600] kubectl: 1.31.1, cluster: 1.31.1 (minor skew: 0)
	I0917 16:59:35.019095   20150 out.go:177] * Done! kubectl is now configured to use "addons-163060" cluster and "default" namespace by default
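
The kapi.go:96/107 lines above reflect a label-selector poll loop: list the pods matching a selector, log while any are Pending, and report the total duration once all are Running. A minimal client-go sketch of that pattern (this is illustrative, not minikube's actual implementation; the in-cluster config, namespace, selector, and timeout are assumptions):

// wait_for_pods.go - sketch of polling pods by label selector until every
// match is Running (illustrative only; not minikube's code).
package main

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/rest"
)

func waitForPods(ctx context.Context, cs *kubernetes.Clientset, ns, selector string, timeout time.Duration) error {
	start := time.Now()
	for time.Since(start) < timeout {
		pods, err := cs.CoreV1().Pods(ns).List(ctx, metav1.ListOptions{LabelSelector: selector})
		if err == nil && len(pods.Items) > 0 && allRunning(pods.Items) {
			fmt.Printf("took %s to wait for %s\n", time.Since(start), selector) // cf. kapi.go:107
			return nil
		}
		fmt.Printf("waiting for pod %q, current state: Pending\n", selector) // cf. kapi.go:96
		time.Sleep(500 * time.Millisecond)                                   // matches the ~500ms cadence in the log
	}
	return fmt.Errorf("timed out waiting for %s", selector)
}

func allRunning(pods []corev1.Pod) bool {
	for _, p := range pods {
		if p.Status.Phase != corev1.PodRunning {
			return false
		}
	}
	return true
}

func main() {
	cfg, err := rest.InClusterConfig() // assumption: running inside the cluster
	if err != nil {
		panic(err)
	}
	cs := kubernetes.NewForConfigOrDie(cfg)
	_ = waitForPods(context.Background(), cs, "kube-system",
		"kubernetes.io/minikube-addons=gcp-auth", 6*time.Minute)
}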
	
	
	==> Docker <==
	Sep 17 17:09:12 addons-163060 dockerd[1334]: time="2024-09-17T17:09:12.979865475Z" level=info msg="ignoring event" container=159b1218758378d2a78304e06b1a9f9cda65a9a48611fda7139632437cad6002 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Sep 17 17:09:13 addons-163060 dockerd[1334]: time="2024-09-17T17:09:13.023483710Z" level=info msg="ignoring event" container=bbe0a8b5000e709ac317055de4fd6f66ec767f9084b32c9534aa0e95e1c30015 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Sep 17 17:09:13 addons-163060 dockerd[1334]: time="2024-09-17T17:09:13.060725626Z" level=info msg="ignoring event" container=a330378b13c4d8a8286de11b872dfcfd31f72c382c77b9191b078c2cee380d0f module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Sep 17 17:09:14 addons-163060 dockerd[1334]: time="2024-09-17T17:09:14.201872175Z" level=info msg="Container failed to exit within 2s of signal 15 - using the force" container=c60bf6b9218da6012ad5a301ae71750169dac8a9f5778f37adf7d0947d822e5b
	Sep 17 17:09:14 addons-163060 dockerd[1334]: time="2024-09-17T17:09:14.256313376Z" level=info msg="ignoring event" container=c60bf6b9218da6012ad5a301ae71750169dac8a9f5778f37adf7d0947d822e5b module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Sep 17 17:09:14 addons-163060 dockerd[1334]: time="2024-09-17T17:09:14.376827278Z" level=info msg="ignoring event" container=0e95febb812b67559c1197d1284a23819910087faaf9c8294ea928215ee13cf5 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Sep 17 17:09:15 addons-163060 dockerd[1334]: time="2024-09-17T17:09:15.592676037Z" level=info msg="Container failed to exit within 30s of signal 15 - using the force" container=501703641d76c4630fd3aa356bfc4c081648e5902586e7031b92537dcba107a3
	Sep 17 17:09:15 addons-163060 dockerd[1334]: time="2024-09-17T17:09:15.613029862Z" level=info msg="ignoring event" container=501703641d76c4630fd3aa356bfc4c081648e5902586e7031b92537dcba107a3 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Sep 17 17:09:15 addons-163060 dockerd[1334]: time="2024-09-17T17:09:15.748515162Z" level=info msg="ignoring event" container=ddfe6cdb3b5d0ad00423f7347a01b1020f3ba4cfd250e231e0c0fb37b800aeb2 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Sep 17 17:09:18 addons-163060 dockerd[1334]: time="2024-09-17T17:09:18.975853275Z" level=info msg="ignoring event" container=7894e13279ca91814cdaf3e5682ba1390780bcd4f1a0856788bddb3b2d6959ef module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Sep 17 17:09:18 addons-163060 dockerd[1334]: time="2024-09-17T17:09:18.975900206Z" level=info msg="ignoring event" container=a5ecb62068e2e8adfdc9b1853c1c10d1b70a0270a42009134763948a1ca1d78f module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Sep 17 17:09:19 addons-163060 dockerd[1334]: time="2024-09-17T17:09:19.142914763Z" level=info msg="ignoring event" container=724666754cb92e31aba0afac8138c8cb188158161e05c52a7edc3854418272f9 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Sep 17 17:09:19 addons-163060 dockerd[1334]: time="2024-09-17T17:09:19.172527770Z" level=info msg="ignoring event" container=8416d5e6bce98625407693d614211d8958c9e5d6e13f60c967e3ea3069e47e6c module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Sep 17 17:09:23 addons-163060 cri-dockerd[1599]: time="2024-09-17T17:09:23Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/d282c8307976b4d203712fcd0ea263f9b1d1c72cb05b34225d4e93d26ec3b6a8/resolv.conf as [nameserver 10.96.0.10 search kube-system.svc.cluster.local svc.cluster.local cluster.local europe-west4-a.c.k8s-minikube.internal c.k8s-minikube.internal google.internal options ndots:5]"
	Sep 17 17:09:26 addons-163060 cri-dockerd[1599]: time="2024-09-17T17:09:26Z" level=info msg="Stop pulling image docker.io/alpine/helm:2.16.3: Status: Downloaded newer image for alpine/helm:2.16.3"
	Sep 17 17:09:26 addons-163060 dockerd[1334]: time="2024-09-17T17:09:26.308307043Z" level=info msg="ignoring event" container=426ff9e479ae24f94ec46f9c216265c31a0751314e417bba4df36744c118f5db module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Sep 17 17:09:26 addons-163060 dockerd[1334]: time="2024-09-17T17:09:26.320856652Z" level=warning msg="failed to close stdin: NotFound: task 426ff9e479ae24f94ec46f9c216265c31a0751314e417bba4df36744c118f5db not found: not found"
	Sep 17 17:09:28 addons-163060 dockerd[1334]: time="2024-09-17T17:09:28.118484130Z" level=info msg="ignoring event" container=d282c8307976b4d203712fcd0ea263f9b1d1c72cb05b34225d4e93d26ec3b6a8 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Sep 17 17:09:28 addons-163060 dockerd[1334]: time="2024-09-17T17:09:28.231775378Z" level=info msg="ignoring event" container=be4db51af727a53a81ef4fbba1a26aac753218657cae7bee83736597c6fcab61 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Sep 17 17:09:28 addons-163060 dockerd[1334]: time="2024-09-17T17:09:28.749468631Z" level=info msg="ignoring event" container=60e2ac8735b6f8db5223cfacbbefed38710c9c05ecf7486b442742195fb409a2 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Sep 17 17:09:28 addons-163060 dockerd[1334]: time="2024-09-17T17:09:28.862531501Z" level=info msg="ignoring event" container=8169aec9acef397d81986c1acebc4dc6db221b7003eee2edc865c63c89116960 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Sep 17 17:09:28 addons-163060 dockerd[1334]: time="2024-09-17T17:09:28.964166973Z" level=info msg="ignoring event" container=7dee2065d2703369f696c55170e5055085fe05d7ac3ddee2e865180ecd1bb543 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Sep 17 17:09:29 addons-163060 dockerd[1334]: time="2024-09-17T17:09:29.006289989Z" level=info msg="ignoring event" container=7b882d97f55e803a99df705046e9441d4a0bcf524b38ad4055d314d951df4e82 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Sep 17 17:09:29 addons-163060 dockerd[1334]: time="2024-09-17T17:09:29.130776746Z" level=info msg="ignoring event" container=9c6e8e42c9d4621ab18d22411934a0afd5c6c26daf5c0d366f706adbd17f6ce5 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Sep 17 17:09:29 addons-163060 dockerd[1334]: time="2024-09-17T17:09:29.276544381Z" level=info msg="ignoring event" container=72c0d7bb79141e77f13b705788e2dd8074781bcb99c608c0c75840b47a870ff9 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                                        CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	8e510f831d2fc       kicbase/echo-server@sha256:127ac38a2bb9537b7f252addff209ea6801edcac8a92c8b1104dacd66a583ed6                                  18 seconds ago      Running             hello-world-app           0                   e7653e88a50e5       hello-world-app-55bf9c44b4-g76gh
	d21c6a4f836e8       nginx@sha256:a5127daff3d6f4606be3100a252419bfa84fd6ee5cd74d0feaca1a5068f97dcf                                                28 seconds ago      Running             nginx                     0                   7f7b5a80a1cff       nginx
	46f177b415583       a416a98b71e22                                                                                                                44 seconds ago      Exited              helper-pod                0                   aa8a938484c90       helper-pod-delete-pvc-6b40e24e-ff27-49e1-a0af-4a3320a2542e
	4b0d6e930c47e       busybox@sha256:c230832bd3b0be59a6c47ed64294f9ce71e91b327957920b6929a0caa8353140                                              47 seconds ago      Exited              busybox                   0                   29cd8dd943ecc       test-local-path
	8414040904fe8       gcr.io/k8s-minikube/gcp-auth-webhook@sha256:e6c5b3bc32072ea370d34c27836efd11b3519d25bd444c2a8efc339cff0e20fb                 9 minutes ago       Running             gcp-auth                  0                   44dfe73f155c8       gcp-auth-89d5ffd79-hkv65
	26b380342ba16       registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:a320a50cc91bd15fd2d6fa6de58bd98c1bd64b9a6f926ce23a600d87043455a3   11 minutes ago      Exited              patch                     0                   a874b2ac10db7       ingress-nginx-admission-patch-9tgv9
	08c9200d91012       registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:a320a50cc91bd15fd2d6fa6de58bd98c1bd64b9a6f926ce23a600d87043455a3   11 minutes ago      Exited              create                    0                   6f81e5d9b5303       ingress-nginx-admission-create-x4jwn
	9c6e8e42c9d46       gcr.io/k8s-minikube/kube-registry-proxy@sha256:b3fa0b2df8737fdb85ad5918a7e2652527463e357afff83a5e5bb966bcedc367              12 minutes ago      Exited              registry-proxy            0                   72c0d7bb79141       registry-proxy-9ztsk
	8a3331eedf321       6e38f40d628db                                                                                                                12 minutes ago      Running             storage-provisioner       0                   40046a4413d03       storage-provisioner
	2ecadef50c501       c69fa2e9cbf5f                                                                                                                12 minutes ago      Running             coredns                   0                   72c1b340f2c37       coredns-7c65d6cfc9-f8spg
	793685e5c743a       60c005f310ff3                                                                                                                12 minutes ago      Running             kube-proxy                0                   3559fb5338b2d       kube-proxy-9xj99
	831e94cfe3cec       175ffd71cce3d                                                                                                                12 minutes ago      Running             kube-controller-manager   0                   1e33705e78b5c       kube-controller-manager-addons-163060
	4dc9603c31dc9       6bab7719df100                                                                                                                12 minutes ago      Running             kube-apiserver            0                   9fb26e25b0543       kube-apiserver-addons-163060
	70d4b9a9a5001       9aa1fad941575                                                                                                                12 minutes ago      Running             kube-scheduler            0                   37bf23506c06a       kube-scheduler-addons-163060
	93ab0a3e97510       2e96e5913fc06                                                                                                                12 minutes ago      Running             etcd                      0                   d856cda90c09f       etcd-addons-163060
	
	
	==> coredns [2ecadef50c50] <==
	[INFO] 10.244.0.7:50215 - 18049 "AAAA IN registry.kube-system.svc.cluster.local.svc.cluster.local. udp 74 false 512" NXDOMAIN qr,aa,rd 167 0.00015971s
	[INFO] 10.244.0.7:55659 - 54909 "AAAA IN registry.kube-system.svc.cluster.local.cluster.local. udp 70 false 512" NXDOMAIN qr,aa,rd 163 0.000071428s
	[INFO] 10.244.0.7:55659 - 41599 "A IN registry.kube-system.svc.cluster.local.cluster.local. udp 70 false 512" NXDOMAIN qr,aa,rd 163 0.000100948s
	[INFO] 10.244.0.7:37046 - 44366 "A IN registry.kube-system.svc.cluster.local.europe-west4-a.c.k8s-minikube.internal. udp 95 false 512" NXDOMAIN qr,rd,ra 95 0.003810307s
	[INFO] 10.244.0.7:37046 - 44619 "AAAA IN registry.kube-system.svc.cluster.local.europe-west4-a.c.k8s-minikube.internal. udp 95 false 512" NXDOMAIN qr,rd,ra 95 0.007955276s
	[INFO] 10.244.0.7:42126 - 32729 "AAAA IN registry.kube-system.svc.cluster.local.c.k8s-minikube.internal. udp 80 false 512" NXDOMAIN qr,rd,ra 80 0.005689677s
	[INFO] 10.244.0.7:42126 - 55975 "A IN registry.kube-system.svc.cluster.local.c.k8s-minikube.internal. udp 80 false 512" NXDOMAIN qr,rd,ra 80 0.007592471s
	[INFO] 10.244.0.7:45729 - 47649 "A IN registry.kube-system.svc.cluster.local.google.internal. udp 72 false 512" NXDOMAIN qr,rd,ra 72 0.004609095s
	[INFO] 10.244.0.7:45729 - 41532 "AAAA IN registry.kube-system.svc.cluster.local.google.internal. udp 72 false 512" NXDOMAIN qr,rd,ra 72 0.005486004s
	[INFO] 10.244.0.7:34008 - 60102 "AAAA IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 149 0.00005795s
	[INFO] 10.244.0.7:34008 - 2244 "A IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 110 0.000116305s
	[INFO] 10.244.0.26:37576 - 31457 "AAAA IN storage.googleapis.com.gcp-auth.svc.cluster.local. udp 78 false 1232" NXDOMAIN qr,aa,rd 160 0.000325169s
	[INFO] 10.244.0.26:34152 - 28088 "A IN storage.googleapis.com.gcp-auth.svc.cluster.local. udp 78 false 1232" NXDOMAIN qr,aa,rd 160 0.000427655s
	[INFO] 10.244.0.26:57272 - 35175 "A IN storage.googleapis.com.svc.cluster.local. udp 69 false 1232" NXDOMAIN qr,aa,rd 151 0.000097913s
	[INFO] 10.244.0.26:36498 - 2414 "AAAA IN storage.googleapis.com.svc.cluster.local. udp 69 false 1232" NXDOMAIN qr,aa,rd 151 0.000131391s
	[INFO] 10.244.0.26:57583 - 3586 "AAAA IN storage.googleapis.com.cluster.local. udp 65 false 1232" NXDOMAIN qr,aa,rd 147 0.000082477s
	[INFO] 10.244.0.26:52040 - 10586 "A IN storage.googleapis.com.cluster.local. udp 65 false 1232" NXDOMAIN qr,aa,rd 147 0.000127909s
	[INFO] 10.244.0.26:54644 - 21647 "AAAA IN storage.googleapis.com.europe-west4-a.c.k8s-minikube.internal. udp 90 false 1232" NXDOMAIN qr,rd,ra 79 0.007767534s
	[INFO] 10.244.0.26:46035 - 63590 "A IN storage.googleapis.com.europe-west4-a.c.k8s-minikube.internal. udp 90 false 1232" NXDOMAIN qr,rd,ra 79 0.008630181s
	[INFO] 10.244.0.26:43306 - 16437 "AAAA IN storage.googleapis.com.c.k8s-minikube.internal. udp 75 false 1232" NXDOMAIN qr,rd,ra 64 0.007714913s
	[INFO] 10.244.0.26:56839 - 38082 "A IN storage.googleapis.com.c.k8s-minikube.internal. udp 75 false 1232" NXDOMAIN qr,rd,ra 64 0.008665153s
	[INFO] 10.244.0.26:42970 - 61231 "AAAA IN storage.googleapis.com.google.internal. udp 67 false 1232" NXDOMAIN qr,rd,ra 56 0.006950152s
	[INFO] 10.244.0.26:34126 - 43699 "A IN storage.googleapis.com.google.internal. udp 67 false 1232" NXDOMAIN qr,rd,ra 56 0.007022116s
	[INFO] 10.244.0.26:57758 - 49619 "AAAA IN storage.googleapis.com. udp 51 false 1232" NOERROR qr,rd,ra 240 0.003077072s
	[INFO] 10.244.0.26:57649 - 24889 "A IN storage.googleapis.com. udp 51 false 1232" NOERROR qr,rd,ra 496 0.00320886s
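
The NXDOMAIN bursts above are resolv.conf search-path expansion at work: with ndots:5 (visible in the cri-dockerd resolv.conf rewrite earlier in the Docker log), a name with fewer than five dots is first tried against every search suffix (kube-system.svc.cluster.local, svc.cluster.local, cluster.local, the GCE internal domains) before the bare name resolves. A small sketch of the difference, assuming it runs inside a pod on this cluster; the trailing-dot behavior is standard resolver semantics:

// dns_fqdn.go - shows why each lookup in the coredns log produces several
// NXDOMAINs: search suffixes are tried first. A trailing dot marks the name
// fully qualified and skips the expansion. (Sketch; run inside a pod.)
package main

import (
	"fmt"
	"net"
)

func main() {
	// Four dots < ndots:5, so this is expanded through the search list first
	// (the NXDOMAIN burst seen in the coredns log):
	addrs, err := net.LookupHost("registry.kube-system.svc.cluster.local")
	fmt.Println(addrs, err)

	// Rooted name: resolved as-is, one query per record type:
	addrs, err = net.LookupHost("registry.kube-system.svc.cluster.local.")
	fmt.Println(addrs, err)
}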
	
	
	==> describe nodes <==
	Name:               addons-163060
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=addons-163060
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=825de77780746e57a7948604e1eea9da920a46ce
	                    minikube.k8s.io/name=addons-163060
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_09_17T16_56_43_0700
	                    minikube.k8s.io/version=v1.34.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	                    topology.hostpath.csi/node=addons-163060
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/cri-dockerd.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Tue, 17 Sep 2024 16:56:40 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  addons-163060
	  AcquireTime:     <unset>
	  RenewTime:       Tue, 17 Sep 2024 17:09:28 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Tue, 17 Sep 2024 17:09:19 +0000   Tue, 17 Sep 2024 16:56:38 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Tue, 17 Sep 2024 17:09:19 +0000   Tue, 17 Sep 2024 16:56:38 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Tue, 17 Sep 2024 17:09:19 +0000   Tue, 17 Sep 2024 16:56:38 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Tue, 17 Sep 2024 17:09:19 +0000   Tue, 17 Sep 2024 16:56:40 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.49.2
	  Hostname:    addons-163060
	Capacity:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32859320Ki
	  pods:               110
	Allocatable:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32859320Ki
	  pods:               110
	System Info:
	  Machine ID:                 e6f018d39a5e4007b19afa3ae802fa60
	  System UUID:                0246bf5a-cd72-48db-a092-567c28e61886
	  Boot ID:                    72a5ac5e-36f4-46e5-9bdc-b96891ef9823
	  Kernel Version:             5.15.0-1069-gcp
	  OS Image:                   Ubuntu 22.04.5 LTS
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  docker://27.2.1
	  Kubelet Version:            v1.31.1
	  Kube-Proxy Version:         v1.31.1
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (11 in total)
	  Namespace                   Name                                     CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                     ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         9m14s
	  default                     hello-world-app-55bf9c44b4-g76gh         0 (0%)        0 (0%)      0 (0%)           0 (0%)         20s
	  default                     nginx                                    0 (0%)        0 (0%)      0 (0%)           0 (0%)         32s
	  gcp-auth                    gcp-auth-89d5ffd79-hkv65                 0 (0%)        0 (0%)      0 (0%)           0 (0%)         11m
	  kube-system                 coredns-7c65d6cfc9-f8spg                 100m (1%)     0 (0%)      70Mi (0%)        170Mi (0%)     12m
	  kube-system                 etcd-addons-163060                       100m (1%)     0 (0%)      100Mi (0%)       0 (0%)         12m
	  kube-system                 kube-apiserver-addons-163060             250m (3%)     0 (0%)      0 (0%)           0 (0%)         12m
	  kube-system                 kube-controller-manager-addons-163060    200m (2%)     0 (0%)      0 (0%)           0 (0%)         12m
	  kube-system                 kube-proxy-9xj99                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         12m
	  kube-system                 kube-scheduler-addons-163060             100m (1%)     0 (0%)      0 (0%)           0 (0%)         12m
	  kube-system                 storage-provisioner                      0 (0%)        0 (0%)      0 (0%)           0 (0%)         12m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (9%)   0 (0%)
	  memory             170Mi (0%)  170Mi (0%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type     Reason                   Age   From             Message
	  ----     ------                   ----  ----             -------
	  Normal   Starting                 12m   kube-proxy       
	  Normal   Starting                 12m   kubelet          Starting kubelet.
	  Warning  CgroupV1                 12m   kubelet          Cgroup v1 support is in maintenance mode, please migrate to Cgroup v2.
	  Normal   NodeAllocatableEnforced  12m   kubelet          Updated Node Allocatable limit across pods
	  Normal   NodeHasSufficientMemory  12m   kubelet          Node addons-163060 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    12m   kubelet          Node addons-163060 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     12m   kubelet          Node addons-163060 status is now: NodeHasSufficientPID
	  Normal   RegisteredNode           12m   node-controller  Node addons-163060 event: Registered Node addons-163060 in Controller
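
The Conditions, Capacity/Allocatable, and Events tables above are what `kubectl describe node` renders from the Node object's status. A minimal client-go sketch that reads the same fields directly (assumes KUBECONFIG points at this cluster; the node name is taken from the output above):

// node_status.go - reads the fields behind the "describe nodes" output
// (Conditions, Allocatable); a sketch assuming KUBECONFIG is set.
package main

import (
	"context"
	"fmt"
	"os"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", os.Getenv("KUBECONFIG"))
	if err != nil {
		panic(err)
	}
	cs := kubernetes.NewForConfigOrDie(cfg)
	node, err := cs.CoreV1().Nodes().Get(context.Background(), "addons-163060", metav1.GetOptions{})
	if err != nil {
		panic(err)
	}
	// MemoryPressure, DiskPressure, PIDPressure, Ready - same rows as above.
	for _, c := range node.Status.Conditions {
		fmt.Printf("%-16s %-6s %s\n", c.Type, c.Status, c.Reason)
	}
	fmt.Println("allocatable cpu:", node.Status.Allocatable.Cpu().String(),
		"memory:", node.Status.Allocatable.Memory().String())
}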
	
	
	==> dmesg <==
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff 26 eb fd f2 7e f1 08 06
	[  +2.746902] IPv4: martian source 10.244.0.1 from 10.244.0.17, on dev eth0
	[  +0.000008] ll header: 00000000: ff ff ff ff ff ff a2 9f 8c c1 dd 2b 08 06
	[  +2.611911] IPv4: martian source 10.244.0.1 from 10.244.0.18, on dev eth0
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff 06 44 db 35 1d a3 08 06
	[  +6.069320] IPv4: martian source 10.244.0.1 from 10.244.0.20, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff d2 f5 5d e2 dc f7 08 06
	[  +0.244288] IPv4: martian source 10.244.0.1 from 10.244.0.19, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff fa 77 b5 45 be f9 08 06
	[  +0.019454] IPv4: martian source 10.244.0.1 from 10.244.0.21, on dev eth0
	[  +0.000005] ll header: 00000000: ff ff ff ff ff ff d6 e4 d4 92 e1 7d 08 06
	[Sep17 16:58] IPv4: martian source 10.244.0.1 from 10.244.0.22, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff c2 c5 90 0d e1 85 08 06
	[Sep17 16:59] IPv4: martian source 10.244.0.1 from 10.244.0.25, on dev eth0
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff d6 97 ce f8 5f 17 08 06
	[  +0.023828] IPv4: martian source 10.244.0.1 from 10.244.0.24, on dev eth0
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff 3a 36 1e ef 35 bf 08 06
	[ +27.507234] IPv4: martian source 10.244.0.1 from 10.244.0.26, on dev eth0
	[  +0.000018] ll header: 00000000: ff ff ff ff ff ff 3a 7e 88 27 60 c4 08 06
	[  +0.000576] IPv4: martian source 10.244.0.26 from 10.244.0.3, on dev eth0
	[  +0.000005] ll header: 00000000: ff ff ff ff ff ff ee 1e c7 c9 38 61 08 06
	[Sep17 17:09] IPv4: martian source 10.244.0.35 from 10.244.0.22, on dev eth0
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff c2 c5 90 0d e1 85 08 06
	[ +17.308214] IPv4: martian source 10.244.0.1 from 10.244.0.38, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff 6a 3b 5f b3 33 e6 08 06
	
	
	==> etcd [93ab0a3e9751] <==
	{"level":"info","ts":"2024-09-17T16:56:38.247802Z","caller":"embed/etcd.go:571","msg":"cmux::serve","address":"192.168.49.2:2380"}
	{"level":"info","ts":"2024-09-17T16:56:38.877302Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aec36adc501070cc is starting a new election at term 1"}
	{"level":"info","ts":"2024-09-17T16:56:38.877344Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aec36adc501070cc became pre-candidate at term 1"}
	{"level":"info","ts":"2024-09-17T16:56:38.877365Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aec36adc501070cc received MsgPreVoteResp from aec36adc501070cc at term 1"}
	{"level":"info","ts":"2024-09-17T16:56:38.877390Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aec36adc501070cc became candidate at term 2"}
	{"level":"info","ts":"2024-09-17T16:56:38.877400Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aec36adc501070cc received MsgVoteResp from aec36adc501070cc at term 2"}
	{"level":"info","ts":"2024-09-17T16:56:38.877416Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aec36adc501070cc became leader at term 2"}
	{"level":"info","ts":"2024-09-17T16:56:38.877430Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: aec36adc501070cc elected leader aec36adc501070cc at term 2"}
	{"level":"info","ts":"2024-09-17T16:56:38.878378Z","caller":"etcdserver/server.go:2118","msg":"published local member to cluster through raft","local-member-id":"aec36adc501070cc","local-member-attributes":"{Name:addons-163060 ClientURLs:[https://192.168.49.2:2379]}","request-path":"/0/members/aec36adc501070cc/attributes","cluster-id":"fa54960ea34d58be","publish-timeout":"7s"}
	{"level":"info","ts":"2024-09-17T16:56:38.878392Z","caller":"etcdserver/server.go:2629","msg":"setting up initial cluster version using v2 API","cluster-version":"3.5"}
	{"level":"info","ts":"2024-09-17T16:56:38.878442Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-09-17T16:56:38.878474Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-09-17T16:56:38.878614Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2024-09-17T16:56:38.878652Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2024-09-17T16:56:38.879157Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"fa54960ea34d58be","local-member-id":"aec36adc501070cc","cluster-version":"3.5"}
	{"level":"info","ts":"2024-09-17T16:56:38.879413Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2024-09-17T16:56:38.879444Z","caller":"etcdserver/server.go:2653","msg":"cluster version is updated","cluster-version":"3.5"}
	{"level":"info","ts":"2024-09-17T16:56:38.879659Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2024-09-17T16:56:38.879662Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2024-09-17T16:56:38.880689Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2024-09-17T16:56:38.880790Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.49.2:2379"}
	{"level":"info","ts":"2024-09-17T16:56:58.598768Z","caller":"traceutil/trace.go:171","msg":"trace[1057188115] transaction","detail":"{read_only:false; response_revision:758; number_of_response:1; }","duration":"132.474288ms","start":"2024-09-17T16:56:58.466274Z","end":"2024-09-17T16:56:58.598749Z","steps":["trace[1057188115] 'process raft request'  (duration: 111.016342ms)","trace[1057188115] 'compare'  (duration: 21.262115ms)"],"step_count":2}
	{"level":"info","ts":"2024-09-17T17:06:38.898812Z","caller":"mvcc/index.go:214","msg":"compact tree index","revision":1875}
	{"level":"info","ts":"2024-09-17T17:06:38.925702Z","caller":"mvcc/kvstore_compaction.go:69","msg":"finished scheduled compaction","compact-revision":1875,"took":"26.299488ms","hash":2471208411,"current-db-size-bytes":9146368,"current-db-size":"9.1 MB","current-db-size-in-use-bytes":4919296,"current-db-size-in-use":"4.9 MB"}
	{"level":"info","ts":"2024-09-17T17:06:38.925746Z","caller":"mvcc/hash.go:137","msg":"storing new hash","hash":2471208411,"revision":1875,"compact-revision":-1}
	
	
	==> gcp-auth [8414040904fe] <==
	2024/09/17 17:00:15 Ready to write response ...
	2024/09/17 17:08:28 Ready to marshal response ...
	2024/09/17 17:08:28 Ready to write response ...
	2024/09/17 17:08:33 Ready to marshal response ...
	2024/09/17 17:08:33 Ready to write response ...
	2024/09/17 17:08:34 Ready to marshal response ...
	2024/09/17 17:08:34 Ready to write response ...
	2024/09/17 17:08:34 Ready to marshal response ...
	2024/09/17 17:08:34 Ready to write response ...
	2024/09/17 17:08:40 Ready to marshal response ...
	2024/09/17 17:08:40 Ready to write response ...
	2024/09/17 17:08:40 Ready to marshal response ...
	2024/09/17 17:08:40 Ready to write response ...
	2024/09/17 17:08:40 Ready to marshal response ...
	2024/09/17 17:08:40 Ready to write response ...
	2024/09/17 17:08:44 Ready to marshal response ...
	2024/09/17 17:08:44 Ready to write response ...
	2024/09/17 17:08:57 Ready to marshal response ...
	2024/09/17 17:08:57 Ready to write response ...
	2024/09/17 17:09:02 Ready to marshal response ...
	2024/09/17 17:09:02 Ready to write response ...
	2024/09/17 17:09:09 Ready to marshal response ...
	2024/09/17 17:09:09 Ready to write response ...
	2024/09/17 17:09:23 Ready to marshal response ...
	2024/09/17 17:09:23 Ready to write response ...
	
	
	==> kernel <==
	 17:09:29 up 51 min,  0 users,  load average: 0.13, 0.30, 0.34
	Linux addons-163060 5.15.0-1069-gcp #77~20.04.1-Ubuntu SMP Sun Sep 1 19:39:16 UTC 2024 x86_64 x86_64 x86_64 GNU/Linux
	PRETTY_NAME="Ubuntu 22.04.5 LTS"
	
	
	==> kube-apiserver [4dc9603c31dc] <==
	W0917 17:00:07.951820       1 cacher.go:171] Terminating all watchers from cacher jobs.batch.volcano.sh
	W0917 17:00:08.075079       1 cacher.go:171] Terminating all watchers from cacher jobflows.flow.volcano.sh
	W0917 17:00:08.461289       1 cacher.go:171] Terminating all watchers from cacher jobtemplates.flow.volcano.sh
	I0917 17:08:23.387730       1 handler.go:286] Adding GroupVersion gadget.kinvolk.io v1alpha1 to ResourceManager
	W0917 17:08:24.404465       1 cacher.go:171] Terminating all watchers from cacher traces.gadget.kinvolk.io
	I0917 17:08:33.491569       1 controller.go:129] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Nothing (removed from the queue).
	I0917 17:08:40.595998       1 alloc.go:330] "allocated clusterIPs" service="headlamp/headlamp" clusterIPs={"IPv4":"10.101.151.56"}
	I0917 17:08:41.315137       1 controller.go:615] quota admission added evaluator for: volumesnapshots.snapshot.storage.k8s.io
	I0917 17:08:57.441514       1 controller.go:615] quota admission added evaluator for: ingresses.networking.k8s.io
	I0917 17:08:57.659595       1 alloc.go:330] "allocated clusterIPs" service="default/nginx" clusterIPs={"IPv4":"10.107.204.62"}
	E0917 17:09:00.981873       1 authentication.go:73] "Unable to authenticate the request" err="[invalid bearer token, serviceaccounts \"local-path-provisioner-service-account\" not found]"
	I0917 17:09:09.143474       1 alloc.go:330] "allocated clusterIPs" service="default/hello-world-app" clusterIPs={"IPv4":"10.111.19.10"}
	I0917 17:09:18.829020       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0917 17:09:18.829069       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I0917 17:09:18.841694       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0917 17:09:18.841751       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I0917 17:09:18.842623       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0917 17:09:18.842670       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I0917 17:09:18.853749       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0917 17:09:18.853800       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I0917 17:09:18.863193       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0917 17:09:18.863234       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	W0917 17:09:19.843348       1 cacher.go:171] Terminating all watchers from cacher volumesnapshotclasses.snapshot.storage.k8s.io
	W0917 17:09:19.864241       1 cacher.go:171] Terminating all watchers from cacher volumesnapshotcontents.snapshot.storage.k8s.io
	W0917 17:09:19.873240       1 cacher.go:171] Terminating all watchers from cacher volumesnapshots.snapshot.storage.k8s.io
	
	
	==> kube-controller-manager [831e94cfe3ce] <==
	W0917 17:09:20.827664       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0917 17:09:20.827701       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W0917 17:09:20.890655       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0917 17:09:20.890692       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W0917 17:09:20.997443       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0917 17:09:20.997480       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	I0917 17:09:21.341529       1 namespace_controller.go:187] "Namespace has been deleted" logger="namespace-controller" namespace="ingress-nginx"
	W0917 17:09:23.220170       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0917 17:09:23.220211       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W0917 17:09:23.330268       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0917 17:09:23.330308       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W0917 17:09:23.336615       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0917 17:09:23.336649       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W0917 17:09:23.635908       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0917 17:09:23.635942       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W0917 17:09:24.042630       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0917 17:09:24.042696       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W0917 17:09:26.944453       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0917 17:09:26.944498       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W0917 17:09:28.493896       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0917 17:09:28.493932       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W0917 17:09:28.513863       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0917 17:09:28.513914       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	I0917 17:09:28.657051       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/tiller-deploy-b48cc5f79" duration="9.798µs"
	I0917 17:09:28.780831       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/registry-66c9cd494c" duration="9.78µs"
	
	
	==> kube-proxy [793685e5c743] <==
	I0917 16:56:48.348771       1 server_linux.go:66] "Using iptables proxy"
	I0917 16:56:48.565320       1 server.go:677] "Successfully retrieved node IP(s)" IPs=["192.168.49.2"]
	E0917 16:56:48.565396       1 server.go:234] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I0917 16:56:48.766790       1 server.go:243] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I0917 16:56:48.766855       1 server_linux.go:169] "Using iptables Proxier"
	I0917 16:56:48.772854       1 proxier.go:255] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I0917 16:56:48.773143       1 server.go:483] "Version info" version="v1.31.1"
	I0917 16:56:48.773166       1 server.go:485] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0917 16:56:48.774683       1 config.go:199] "Starting service config controller"
	I0917 16:56:48.774700       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0917 16:56:48.774731       1 config.go:105] "Starting endpoint slice config controller"
	I0917 16:56:48.774736       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0917 16:56:48.774952       1 config.go:328] "Starting node config controller"
	I0917 16:56:48.775061       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0917 16:56:48.947174       1 shared_informer.go:320] Caches are synced for endpoint slice config
	I0917 16:56:48.947247       1 shared_informer.go:320] Caches are synced for service config
	I0917 16:56:48.947633       1 shared_informer.go:320] Caches are synced for node config
	
	
	==> kube-scheduler [70d4b9a9a500] <==
	W0917 16:56:40.448764       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E0917 16:56:40.448781       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W0917 16:56:40.448891       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E0917 16:56:40.449624       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0917 16:56:40.448891       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E0917 16:56:40.450747       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0917 16:56:40.448913       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E0917 16:56:40.450789       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError"
	W0917 16:56:40.448977       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E0917 16:56:40.450818       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0917 16:56:40.448989       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E0917 16:56:40.450856       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0917 16:56:40.449019       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	E0917 16:56:40.450882       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0917 16:56:41.272631       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E0917 16:56:41.272668       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0917 16:56:41.281764       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E0917 16:56:41.281795       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W0917 16:56:41.403776       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E0917 16:56:41.403812       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0917 16:56:41.413003       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E0917 16:56:41.413045       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0917 16:56:41.551492       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E0917 16:56:41.551543       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError"
	I0917 16:56:41.866788       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kubelet <==
	Sep 17 17:09:28 addons-163060 kubelet[2458]: I0917 17:09:28.302460    2458 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"gcp-creds\" (UniqueName: \"kubernetes.io/host-path/bac1bb97-ad10-402c-bcc5-3417b7af8ed6-gcp-creds\") pod \"bac1bb97-ad10-402c-bcc5-3417b7af8ed6\" (UID: \"bac1bb97-ad10-402c-bcc5-3417b7af8ed6\") "
	Sep 17 17:09:28 addons-163060 kubelet[2458]: I0917 17:09:28.302492    2458 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-wfhs4\" (UniqueName: \"kubernetes.io/projected/5ad8a734-ffc8-4d65-8883-944f32116156-kube-api-access-wfhs4\") pod \"5ad8a734-ffc8-4d65-8883-944f32116156\" (UID: \"5ad8a734-ffc8-4d65-8883-944f32116156\") "
	Sep 17 17:09:28 addons-163060 kubelet[2458]: I0917 17:09:28.303102    2458 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/bac1bb97-ad10-402c-bcc5-3417b7af8ed6-gcp-creds" (OuterVolumeSpecName: "gcp-creds") pod "bac1bb97-ad10-402c-bcc5-3417b7af8ed6" (UID: "bac1bb97-ad10-402c-bcc5-3417b7af8ed6"). InnerVolumeSpecName "gcp-creds". PluginName "kubernetes.io/host-path", VolumeGidValue ""
	Sep 17 17:09:28 addons-163060 kubelet[2458]: I0917 17:09:28.350296    2458 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5ad8a734-ffc8-4d65-8883-944f32116156-kube-api-access-wfhs4" (OuterVolumeSpecName: "kube-api-access-wfhs4") pod "5ad8a734-ffc8-4d65-8883-944f32116156" (UID: "5ad8a734-ffc8-4d65-8883-944f32116156"). InnerVolumeSpecName "kube-api-access-wfhs4". PluginName "kubernetes.io/projected", VolumeGidValue ""
	Sep 17 17:09:28 addons-163060 kubelet[2458]: I0917 17:09:28.352561    2458 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bac1bb97-ad10-402c-bcc5-3417b7af8ed6-kube-api-access-4m7pf" (OuterVolumeSpecName: "kube-api-access-4m7pf") pod "bac1bb97-ad10-402c-bcc5-3417b7af8ed6" (UID: "bac1bb97-ad10-402c-bcc5-3417b7af8ed6"). InnerVolumeSpecName "kube-api-access-4m7pf". PluginName "kubernetes.io/projected", VolumeGidValue ""
	Sep 17 17:09:28 addons-163060 kubelet[2458]: I0917 17:09:28.403452    2458 reconciler_common.go:288] "Volume detached for volume \"kube-api-access-4m7pf\" (UniqueName: \"kubernetes.io/projected/bac1bb97-ad10-402c-bcc5-3417b7af8ed6-kube-api-access-4m7pf\") on node \"addons-163060\" DevicePath \"\""
	Sep 17 17:09:28 addons-163060 kubelet[2458]: I0917 17:09:28.403482    2458 reconciler_common.go:288] "Volume detached for volume \"gcp-creds\" (UniqueName: \"kubernetes.io/host-path/bac1bb97-ad10-402c-bcc5-3417b7af8ed6-gcp-creds\") on node \"addons-163060\" DevicePath \"\""
	Sep 17 17:09:28 addons-163060 kubelet[2458]: I0917 17:09:28.403490    2458 reconciler_common.go:288] "Volume detached for volume \"kube-api-access-wfhs4\" (UniqueName: \"kubernetes.io/projected/5ad8a734-ffc8-4d65-8883-944f32116156-kube-api-access-wfhs4\") on node \"addons-163060\" DevicePath \"\""
	Sep 17 17:09:28 addons-163060 kubelet[2458]: I0917 17:09:28.462463    2458 kubelet_pods.go:1007] "Unable to retrieve pull secret, the image pull may not succeed." pod="kube-system/registry-proxy-9ztsk" secret="" err="secret \"gcp-auth\" not found"
	Sep 17 17:09:28 addons-163060 kubelet[2458]: I0917 17:09:28.471941    2458 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5ad8a734-ffc8-4d65-8883-944f32116156" path="/var/lib/kubelet/pods/5ad8a734-ffc8-4d65-8883-944f32116156/volumes"
	Sep 17 17:09:29 addons-163060 kubelet[2458]: I0917 17:09:29.027504    2458 scope.go:117] "RemoveContainer" containerID="426ff9e479ae24f94ec46f9c216265c31a0751314e417bba4df36744c118f5db"
	Sep 17 17:09:29 addons-163060 kubelet[2458]: I0917 17:09:29.042695    2458 scope.go:117] "RemoveContainer" containerID="8169aec9acef397d81986c1acebc4dc6db221b7003eee2edc865c63c89116960"
	Sep 17 17:09:29 addons-163060 kubelet[2458]: I0917 17:09:29.064544    2458 scope.go:117] "RemoveContainer" containerID="60e2ac8735b6f8db5223cfacbbefed38710c9c05ecf7486b442742195fb409a2"
	Sep 17 17:09:29 addons-163060 kubelet[2458]: I0917 17:09:29.082749    2458 scope.go:117] "RemoveContainer" containerID="60e2ac8735b6f8db5223cfacbbefed38710c9c05ecf7486b442742195fb409a2"
	Sep 17 17:09:29 addons-163060 kubelet[2458]: E0917 17:09:29.083626    2458 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = Unknown desc = Error response from daemon: No such container: 60e2ac8735b6f8db5223cfacbbefed38710c9c05ecf7486b442742195fb409a2" containerID="60e2ac8735b6f8db5223cfacbbefed38710c9c05ecf7486b442742195fb409a2"
	Sep 17 17:09:29 addons-163060 kubelet[2458]: I0917 17:09:29.083668    2458 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"docker","ID":"60e2ac8735b6f8db5223cfacbbefed38710c9c05ecf7486b442742195fb409a2"} err="failed to get container status \"60e2ac8735b6f8db5223cfacbbefed38710c9c05ecf7486b442742195fb409a2\": rpc error: code = Unknown desc = Error response from daemon: No such container: 60e2ac8735b6f8db5223cfacbbefed38710c9c05ecf7486b442742195fb409a2"
	Sep 17 17:09:29 addons-163060 kubelet[2458]: I0917 17:09:29.149980    2458 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-scps5\" (UniqueName: \"kubernetes.io/projected/87171e43-6b56-423a-ac20-6b46a3583197-kube-api-access-scps5\") pod \"87171e43-6b56-423a-ac20-6b46a3583197\" (UID: \"87171e43-6b56-423a-ac20-6b46a3583197\") "
	Sep 17 17:09:29 addons-163060 kubelet[2458]: I0917 17:09:29.150045    2458 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-nsf2k\" (UniqueName: \"kubernetes.io/projected/6de636f4-5713-4439-9d76-756777a66ef2-kube-api-access-nsf2k\") pod \"6de636f4-5713-4439-9d76-756777a66ef2\" (UID: \"6de636f4-5713-4439-9d76-756777a66ef2\") "
	Sep 17 17:09:29 addons-163060 kubelet[2458]: I0917 17:09:29.151866    2458 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/87171e43-6b56-423a-ac20-6b46a3583197-kube-api-access-scps5" (OuterVolumeSpecName: "kube-api-access-scps5") pod "87171e43-6b56-423a-ac20-6b46a3583197" (UID: "87171e43-6b56-423a-ac20-6b46a3583197"). InnerVolumeSpecName "kube-api-access-scps5". PluginName "kubernetes.io/projected", VolumeGidValue ""
	Sep 17 17:09:29 addons-163060 kubelet[2458]: I0917 17:09:29.152032    2458 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6de636f4-5713-4439-9d76-756777a66ef2-kube-api-access-nsf2k" (OuterVolumeSpecName: "kube-api-access-nsf2k") pod "6de636f4-5713-4439-9d76-756777a66ef2" (UID: "6de636f4-5713-4439-9d76-756777a66ef2"). InnerVolumeSpecName "kube-api-access-nsf2k". PluginName "kubernetes.io/projected", VolumeGidValue ""
	Sep 17 17:09:29 addons-163060 kubelet[2458]: I0917 17:09:29.250560    2458 reconciler_common.go:288] "Volume detached for volume \"kube-api-access-nsf2k\" (UniqueName: \"kubernetes.io/projected/6de636f4-5713-4439-9d76-756777a66ef2-kube-api-access-nsf2k\") on node \"addons-163060\" DevicePath \"\""
	Sep 17 17:09:29 addons-163060 kubelet[2458]: I0917 17:09:29.250601    2458 reconciler_common.go:288] "Volume detached for volume \"kube-api-access-scps5\" (UniqueName: \"kubernetes.io/projected/87171e43-6b56-423a-ac20-6b46a3583197-kube-api-access-scps5\") on node \"addons-163060\" DevicePath \"\""
	Sep 17 17:09:29 addons-163060 kubelet[2458]: I0917 17:09:29.452348    2458 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ljhb9\" (UniqueName: \"kubernetes.io/projected/de43c7a6-1992-4444-969d-d41949e06cdb-kube-api-access-ljhb9\") pod \"de43c7a6-1992-4444-969d-d41949e06cdb\" (UID: \"de43c7a6-1992-4444-969d-d41949e06cdb\") "
	Sep 17 17:09:29 addons-163060 kubelet[2458]: I0917 17:09:29.454087    2458 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/de43c7a6-1992-4444-969d-d41949e06cdb-kube-api-access-ljhb9" (OuterVolumeSpecName: "kube-api-access-ljhb9") pod "de43c7a6-1992-4444-969d-d41949e06cdb" (UID: "de43c7a6-1992-4444-969d-d41949e06cdb"). InnerVolumeSpecName "kube-api-access-ljhb9". PluginName "kubernetes.io/projected", VolumeGidValue ""
	Sep 17 17:09:29 addons-163060 kubelet[2458]: I0917 17:09:29.552867    2458 reconciler_common.go:288] "Volume detached for volume \"kube-api-access-ljhb9\" (UniqueName: \"kubernetes.io/projected/de43c7a6-1992-4444-969d-d41949e06cdb-kube-api-access-ljhb9\") on node \"addons-163060\" DevicePath \"\""
	
	
	==> storage-provisioner [8a3331eedf32] <==
	I0917 16:56:55.870387       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I0917 16:56:55.960966       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I0917 16:56:55.961016       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I0917 16:56:56.049106       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I0917 16:56:56.049320       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_addons-163060_546f4f54-f306-4def-a157-3e9073616e90!
	I0917 16:56:56.052387       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"12e461c2-dc4f-4e57-8763-ba4b004ba039", APIVersion:"v1", ResourceVersion:"602", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' addons-163060_546f4f54-f306-4def-a157-3e9073616e90 became leader
	I0917 16:56:56.251069       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_addons-163060_546f4f54-f306-4def-a157-3e9073616e90!
	

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p addons-163060 -n addons-163060
helpers_test.go:261: (dbg) Run:  kubectl --context addons-163060 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:272: non-running pods: busybox
helpers_test.go:274: ======> post-mortem[TestAddons/parallel/Registry]: describe non-running pods <======
helpers_test.go:277: (dbg) Run:  kubectl --context addons-163060 describe pod busybox
helpers_test.go:282: (dbg) kubectl --context addons-163060 describe pod busybox:

                                                
                                                
-- stdout --
	Name:             busybox
	Namespace:        default
	Priority:         0
	Service Account:  default
	Node:             addons-163060/192.168.49.2
	Start Time:       Tue, 17 Sep 2024 17:00:15 +0000
	Labels:           integration-test=busybox
	Annotations:      <none>
	Status:           Pending
	IP:               10.244.0.28
	IPs:
	  IP:  10.244.0.28
	Containers:
	  busybox:
	    Container ID:  
	    Image:         gcr.io/k8s-minikube/busybox:1.28.4-glibc
	    Image ID:      
	    Port:          <none>
	    Host Port:     <none>
	    Command:
	      sleep
	      3600
	    State:          Waiting
	      Reason:       ImagePullBackOff
	    Ready:          False
	    Restart Count:  0
	    Environment:
	      GOOGLE_APPLICATION_CREDENTIALS:  /google-app-creds.json
	      PROJECT_ID:                      this_is_fake
	      GCP_PROJECT:                     this_is_fake
	      GCLOUD_PROJECT:                  this_is_fake
	      GOOGLE_CLOUD_PROJECT:            this_is_fake
	      CLOUDSDK_CORE_PROJECT:           this_is_fake
	    Mounts:
	      /google-app-creds.json from gcp-creds (ro)
	      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-qb4wj (ro)
	Conditions:
	  Type                        Status
	  PodReadyToStartContainers   True 
	  Initialized                 True 
	  Ready                       False 
	  ContainersReady             False 
	  PodScheduled                True 
	Volumes:
	  kube-api-access-qb4wj:
	    Type:                    Projected (a volume that contains injected data from multiple sources)
	    TokenExpirationSeconds:  3607
	    ConfigMapName:           kube-root-ca.crt
	    ConfigMapOptional:       <nil>
	    DownwardAPI:             true
	  gcp-creds:
	    Type:          HostPath (bare host directory volume)
	    Path:          /var/lib/minikube/google_application_credentials.json
	    HostPathType:  File
	QoS Class:         BestEffort
	Node-Selectors:    <none>
	Tolerations:       node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
	                   node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
	Events:
	  Type     Reason     Age                    From               Message
	  ----     ------     ----                   ----               -------
	  Normal   Scheduled  9m15s                  default-scheduler  Successfully assigned default/busybox to addons-163060
	  Normal   Pulling    7m47s (x4 over 9m14s)  kubelet            Pulling image "gcr.io/k8s-minikube/busybox:1.28.4-glibc"
	  Warning  Failed     7m47s (x4 over 9m14s)  kubelet            Failed to pull image "gcr.io/k8s-minikube/busybox:1.28.4-glibc": Error response from daemon: Head "https://gcr.io/v2/k8s-minikube/busybox/manifests/1.28.4-glibc": unauthorized: authentication failed
	  Warning  Failed     7m47s (x4 over 9m14s)  kubelet            Error: ErrImagePull
	  Warning  Failed     7m31s (x6 over 9m14s)  kubelet            Error: ImagePullBackOff
	  Normal   BackOff    4m9s (x20 over 9m14s)  kubelet            Back-off pulling image "gcr.io/k8s-minikube/busybox:1.28.4-glibc"

                                                
                                                
-- /stdout --
helpers_test.go:285: <<< TestAddons/parallel/Registry FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
--- FAIL: TestAddons/parallel/Registry (72.63s)

                                                
                                    

Test pass (322/343)

Order passed test Duration
3 TestDownloadOnly/v1.20.0/json-events 18.21
4 TestDownloadOnly/v1.20.0/preload-exists 0
8 TestDownloadOnly/v1.20.0/LogsDuration 0.06
9 TestDownloadOnly/v1.20.0/DeleteAll 0.2
10 TestDownloadOnly/v1.20.0/DeleteAlwaysSucceeds 0.12
12 TestDownloadOnly/v1.31.1/json-events 11.24
13 TestDownloadOnly/v1.31.1/preload-exists 0
17 TestDownloadOnly/v1.31.1/LogsDuration 0.06
18 TestDownloadOnly/v1.31.1/DeleteAll 0.19
19 TestDownloadOnly/v1.31.1/DeleteAlwaysSucceeds 0.12
20 TestDownloadOnlyKic 0.96
21 TestBinaryMirror 0.72
22 TestOffline 79.2
25 TestAddons/PreSetup/EnablingAddonOnNonExistingCluster 0.05
26 TestAddons/PreSetup/DisablingAddonOnNonExistingCluster 0.05
27 TestAddons/Setup 209.64
29 TestAddons/serial/Volcano 40.53
31 TestAddons/serial/GCPAuth/Namespaces 0.12
34 TestAddons/parallel/Ingress 21.18
35 TestAddons/parallel/InspektorGadget 10.65
36 TestAddons/parallel/MetricsServer 5.73
37 TestAddons/parallel/HelmTiller 10.41
39 TestAddons/parallel/CSI 61.1
40 TestAddons/parallel/Headlamp 17.22
41 TestAddons/parallel/CloudSpanner 5.44
42 TestAddons/parallel/LocalPath 54.36
43 TestAddons/parallel/NvidiaDevicePlugin 5.41
44 TestAddons/parallel/Yakd 10.83
45 TestAddons/StoppedEnableDisable 5.87
46 TestCertOptions 31.64
47 TestCertExpiration 229.74
48 TestDockerFlags 29.84
49 TestForceSystemdFlag 35.23
50 TestForceSystemdEnv 23.67
52 TestKVMDriverInstallOrUpdate 5.58
56 TestErrorSpam/setup 20.91
57 TestErrorSpam/start 0.55
58 TestErrorSpam/status 0.84
59 TestErrorSpam/pause 1.16
60 TestErrorSpam/unpause 1.36
61 TestErrorSpam/stop 10.78
64 TestFunctional/serial/CopySyncFile 0
65 TestFunctional/serial/StartWithProxy 30.56
66 TestFunctional/serial/AuditLog 0
67 TestFunctional/serial/SoftStart 27.92
68 TestFunctional/serial/KubeContext 0.05
69 TestFunctional/serial/KubectlGetPods 0.07
72 TestFunctional/serial/CacheCmd/cache/add_remote 2.4
73 TestFunctional/serial/CacheCmd/cache/add_local 1.44
74 TestFunctional/serial/CacheCmd/cache/CacheDelete 0.04
75 TestFunctional/serial/CacheCmd/cache/list 0.04
76 TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node 0.26
77 TestFunctional/serial/CacheCmd/cache/cache_reload 1.25
78 TestFunctional/serial/CacheCmd/cache/delete 0.09
79 TestFunctional/serial/MinikubeKubectlCmd 0.1
80 TestFunctional/serial/MinikubeKubectlCmdDirectly 0.1
81 TestFunctional/serial/ExtraConfig 38.44
82 TestFunctional/serial/ComponentHealth 0.07
83 TestFunctional/serial/LogsCmd 0.92
84 TestFunctional/serial/LogsFileCmd 0.94
85 TestFunctional/serial/InvalidService 4.51
87 TestFunctional/parallel/ConfigCmd 0.34
88 TestFunctional/parallel/DashboardCmd 11.57
89 TestFunctional/parallel/DryRun 0.44
90 TestFunctional/parallel/InternationalLanguage 0.24
91 TestFunctional/parallel/StatusCmd 1.15
95 TestFunctional/parallel/ServiceCmdConnect 8.77
96 TestFunctional/parallel/AddonsCmd 0.12
97 TestFunctional/parallel/PersistentVolumeClaim 36.48
99 TestFunctional/parallel/SSHCmd 0.51
100 TestFunctional/parallel/CpCmd 1.92
101 TestFunctional/parallel/MySQL 23.52
102 TestFunctional/parallel/FileSync 0.31
103 TestFunctional/parallel/CertSync 1.87
107 TestFunctional/parallel/NodeLabels 0.07
109 TestFunctional/parallel/NonActiveRuntimeDisabled 0.25
111 TestFunctional/parallel/License 0.73
112 TestFunctional/parallel/ServiceCmd/DeployApp 10.19
113 TestFunctional/parallel/ProfileCmd/profile_not_create 0.39
114 TestFunctional/parallel/ProfileCmd/profile_list 0.5
115 TestFunctional/parallel/MountCmd/any-port 9.36
116 TestFunctional/parallel/ProfileCmd/profile_json_output 0.41
117 TestFunctional/parallel/DockerEnv/bash 0.87
118 TestFunctional/parallel/UpdateContextCmd/no_changes 0.14
119 TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster 0.13
120 TestFunctional/parallel/UpdateContextCmd/no_clusters 0.12
121 TestFunctional/parallel/MountCmd/specific-port 2.07
122 TestFunctional/parallel/ServiceCmd/List 0.54
123 TestFunctional/parallel/ServiceCmd/JSONOutput 0.53
124 TestFunctional/parallel/ServiceCmd/HTTPS 0.4
125 TestFunctional/parallel/ServiceCmd/Format 0.34
126 TestFunctional/parallel/ServiceCmd/URL 0.41
127 TestFunctional/parallel/MountCmd/VerifyCleanup 0.95
129 TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel 0.48
130 TestFunctional/parallel/TunnelCmd/serial/StartTunnel 0
132 TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup 15.25
133 TestFunctional/parallel/Version/short 0.04
134 TestFunctional/parallel/Version/components 0.43
135 TestFunctional/parallel/ImageCommands/ImageListShort 0.2
136 TestFunctional/parallel/ImageCommands/ImageListTable 0.22
137 TestFunctional/parallel/ImageCommands/ImageListJson 0.29
138 TestFunctional/parallel/ImageCommands/ImageListYaml 0.2
139 TestFunctional/parallel/ImageCommands/ImageBuild 4.38
140 TestFunctional/parallel/ImageCommands/Setup 1.95
141 TestFunctional/parallel/ImageCommands/ImageLoadDaemon 1.05
142 TestFunctional/parallel/ImageCommands/ImageReloadDaemon 0.83
143 TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon 1.76
144 TestFunctional/parallel/ImageCommands/ImageSaveToFile 0.34
145 TestFunctional/parallel/ImageCommands/ImageRemove 0.39
146 TestFunctional/parallel/ImageCommands/ImageLoadFromFile 0.54
147 TestFunctional/parallel/ImageCommands/ImageSaveDaemon 0.33
148 TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP 0.07
149 TestFunctional/parallel/TunnelCmd/serial/AccessDirect 0
153 TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel 0.11
154 TestFunctional/delete_echo-server_images 0.04
155 TestFunctional/delete_my-image_image 0.01
156 TestFunctional/delete_minikube_cached_images 0.01
160 TestMultiControlPlane/serial/StartCluster 98.79
161 TestMultiControlPlane/serial/DeployApp 6.07
162 TestMultiControlPlane/serial/PingHostFromPods 1
163 TestMultiControlPlane/serial/AddWorkerNode 22.92
164 TestMultiControlPlane/serial/NodeLabels 0.06
165 TestMultiControlPlane/serial/HAppyAfterClusterStart 0.65
166 TestMultiControlPlane/serial/CopyFile 15.57
167 TestMultiControlPlane/serial/StopSecondaryNode 11.34
168 TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop 0.47
169 TestMultiControlPlane/serial/RestartSecondaryNode 35.62
170 TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart 16.19
171 TestMultiControlPlane/serial/RestartClusterKeepsNodes 241.15
172 TestMultiControlPlane/serial/DeleteSecondaryNode 9.26
173 TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete 0.46
174 TestMultiControlPlane/serial/StopCluster 32.38
175 TestMultiControlPlane/serial/RestartCluster 79.66
176 TestMultiControlPlane/serial/DegradedAfterClusterRestart 0.45
177 TestMultiControlPlane/serial/AddSecondaryNode 37.91
178 TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd 0.64
181 TestImageBuild/serial/Setup 21.79
182 TestImageBuild/serial/NormalBuild 2.57
183 TestImageBuild/serial/BuildWithBuildArg 0.96
184 TestImageBuild/serial/BuildWithDockerIgnore 0.78
185 TestImageBuild/serial/BuildWithSpecifiedDockerfile 0.74
189 TestJSONOutput/start/Command 36.45
190 TestJSONOutput/start/Audit 0
192 TestJSONOutput/start/parallel/DistinctCurrentSteps 0
193 TestJSONOutput/start/parallel/IncreasingCurrentSteps 0
195 TestJSONOutput/pause/Command 0.52
196 TestJSONOutput/pause/Audit 0
198 TestJSONOutput/pause/parallel/DistinctCurrentSteps 0
199 TestJSONOutput/pause/parallel/IncreasingCurrentSteps 0
201 TestJSONOutput/unpause/Command 0.4
202 TestJSONOutput/unpause/Audit 0
204 TestJSONOutput/unpause/parallel/DistinctCurrentSteps 0
205 TestJSONOutput/unpause/parallel/IncreasingCurrentSteps 0
207 TestJSONOutput/stop/Command 5.75
208 TestJSONOutput/stop/Audit 0
210 TestJSONOutput/stop/parallel/DistinctCurrentSteps 0
211 TestJSONOutput/stop/parallel/IncreasingCurrentSteps 0
212 TestErrorJSONOutput 0.19
214 TestKicCustomNetwork/create_custom_network 23.7
215 TestKicCustomNetwork/use_default_bridge_network 25.55
216 TestKicExistingNetwork 24.69
217 TestKicCustomSubnet 23.33
218 TestKicStaticIP 22.96
219 TestMainNoArgs 0.04
220 TestMinikubeProfile 49.35
223 TestMountStart/serial/StartWithMountFirst 7.16
224 TestMountStart/serial/VerifyMountFirst 0.24
225 TestMountStart/serial/StartWithMountSecond 7.18
226 TestMountStart/serial/VerifyMountSecond 0.24
227 TestMountStart/serial/DeleteFirst 1.43
228 TestMountStart/serial/VerifyMountPostDelete 0.23
229 TestMountStart/serial/Stop 1.17
230 TestMountStart/serial/RestartStopped 8.36
231 TestMountStart/serial/VerifyMountPostStop 0.24
234 TestMultiNode/serial/FreshStart2Nodes 57.64
235 TestMultiNode/serial/DeployApp2Nodes 41.03
236 TestMultiNode/serial/PingHostFrom2Pods 0.71
237 TestMultiNode/serial/AddNode 14.8
238 TestMultiNode/serial/MultiNodeLabels 0.07
239 TestMultiNode/serial/ProfileList 0.44
240 TestMultiNode/serial/CopyFile 8.83
241 TestMultiNode/serial/StopNode 2.08
242 TestMultiNode/serial/StartAfterStop 9.67
243 TestMultiNode/serial/RestartKeepsNodes 93.35
244 TestMultiNode/serial/DeleteNode 5.15
245 TestMultiNode/serial/StopMultiNode 21.25
246 TestMultiNode/serial/RestartMultiNode 48.83
247 TestMultiNode/serial/ValidateNameConflict 23.65
252 TestPreload 104.87
254 TestScheduledStopUnix 94.38
255 TestSkaffold 103.16
257 TestInsufficientStorage 12.55
258 TestRunningBinaryUpgrade 81.17
260 TestKubernetesUpgrade 333.27
261 TestMissingContainerUpgrade 181.96
263 TestNoKubernetes/serial/StartNoK8sWithVersion 0.08
264 TestNoKubernetes/serial/StartWithK8s 32.22
265 TestNoKubernetes/serial/StartWithStopK8s 17.29
266 TestNoKubernetes/serial/Start 6.76
267 TestNoKubernetes/serial/VerifyK8sNotRunning 0.26
268 TestNoKubernetes/serial/ProfileList 6.31
269 TestNoKubernetes/serial/Stop 1.23
270 TestNoKubernetes/serial/StartNoArgs 9.73
282 TestNoKubernetes/serial/VerifyK8sNotRunningSecond 0.27
283 TestStoppedBinaryUpgrade/Setup 2.4
284 TestStoppedBinaryUpgrade/Upgrade 175.46
285 TestStoppedBinaryUpgrade/MinikubeLogs 1.35
294 TestPause/serial/Start 39.39
295 TestNetworkPlugins/group/auto/Start 82.58
296 TestPause/serial/SecondStartNoReconfiguration 31.91
297 TestPause/serial/Pause 0.5
298 TestPause/serial/VerifyStatus 0.29
299 TestPause/serial/Unpause 0.47
300 TestPause/serial/PauseAgain 0.62
301 TestPause/serial/DeletePaused 2.2
302 TestPause/serial/VerifyDeletedResources 14.81
303 TestNetworkPlugins/group/kindnet/Start 56.53
304 TestNetworkPlugins/group/auto/KubeletFlags 0.28
305 TestNetworkPlugins/group/auto/NetCatPod 8.24
306 TestNetworkPlugins/group/calico/Start 61.37
307 TestNetworkPlugins/group/auto/DNS 0.14
308 TestNetworkPlugins/group/auto/Localhost 0.12
309 TestNetworkPlugins/group/auto/HairPin 0.1
310 TestNetworkPlugins/group/custom-flannel/Start 46.69
311 TestNetworkPlugins/group/kindnet/ControllerPod 6.01
312 TestNetworkPlugins/group/kindnet/KubeletFlags 0.33
313 TestNetworkPlugins/group/kindnet/NetCatPod 10.27
314 TestNetworkPlugins/group/kindnet/DNS 0.14
315 TestNetworkPlugins/group/kindnet/Localhost 0.12
316 TestNetworkPlugins/group/kindnet/HairPin 0.12
317 TestNetworkPlugins/group/calico/ControllerPod 6.01
318 TestNetworkPlugins/group/calico/KubeletFlags 0.35
319 TestNetworkPlugins/group/calico/NetCatPod 10.24
320 TestNetworkPlugins/group/custom-flannel/KubeletFlags 0.32
321 TestNetworkPlugins/group/custom-flannel/NetCatPod 9.23
322 TestNetworkPlugins/group/false/Start 36.12
323 TestNetworkPlugins/group/calico/DNS 0.14
324 TestNetworkPlugins/group/calico/Localhost 0.13
325 TestNetworkPlugins/group/calico/HairPin 0.13
326 TestNetworkPlugins/group/custom-flannel/DNS 0.18
327 TestNetworkPlugins/group/custom-flannel/Localhost 0.14
328 TestNetworkPlugins/group/custom-flannel/HairPin 0.13
329 TestNetworkPlugins/group/enable-default-cni/Start 40.54
330 TestNetworkPlugins/group/flannel/Start 46.19
331 TestNetworkPlugins/group/bridge/Start 64.67
332 TestNetworkPlugins/group/false/KubeletFlags 0.31
333 TestNetworkPlugins/group/false/NetCatPod 10.22
334 TestNetworkPlugins/group/false/DNS 26.3
335 TestNetworkPlugins/group/enable-default-cni/KubeletFlags 0.31
336 TestNetworkPlugins/group/enable-default-cni/NetCatPod 9.21
337 TestNetworkPlugins/group/enable-default-cni/DNS 0.15
338 TestNetworkPlugins/group/enable-default-cni/Localhost 0.11
339 TestNetworkPlugins/group/enable-default-cni/HairPin 0.13
340 TestNetworkPlugins/group/flannel/ControllerPod 6.01
341 TestNetworkPlugins/group/false/Localhost 0.11
342 TestNetworkPlugins/group/false/HairPin 0.12
343 TestNetworkPlugins/group/kubenet/Start 37.2
344 TestNetworkPlugins/group/flannel/KubeletFlags 0.3
345 TestNetworkPlugins/group/flannel/NetCatPod 12.18
346 TestNetworkPlugins/group/flannel/DNS 0.14
347 TestNetworkPlugins/group/flannel/Localhost 0.12
348 TestNetworkPlugins/group/flannel/HairPin 0.13
350 TestStartStop/group/old-k8s-version/serial/FirstStart 135.76
351 TestNetworkPlugins/group/bridge/KubeletFlags 0.37
352 TestNetworkPlugins/group/bridge/NetCatPod 11.24
353 TestNetworkPlugins/group/bridge/DNS 0.24
354 TestNetworkPlugins/group/bridge/Localhost 0.15
355 TestNetworkPlugins/group/bridge/HairPin 0.17
357 TestStartStop/group/no-preload/serial/FirstStart 73.94
358 TestNetworkPlugins/group/kubenet/KubeletFlags 0.38
359 TestNetworkPlugins/group/kubenet/NetCatPod 9.36
360 TestNetworkPlugins/group/kubenet/DNS 0.21
361 TestNetworkPlugins/group/kubenet/Localhost 0.13
362 TestNetworkPlugins/group/kubenet/HairPin 0.13
364 TestStartStop/group/embed-certs/serial/FirstStart 36.98
366 TestStartStop/group/default-k8s-diff-port/serial/FirstStart 40.91
367 TestStartStop/group/embed-certs/serial/DeployApp 10.27
368 TestStartStop/group/embed-certs/serial/EnableAddonWhileActive 0.93
369 TestStartStop/group/embed-certs/serial/Stop 10.68
370 TestStartStop/group/default-k8s-diff-port/serial/DeployApp 9.26
371 TestStartStop/group/no-preload/serial/DeployApp 9.29
372 TestStartStop/group/embed-certs/serial/EnableAddonAfterStop 0.23
373 TestStartStop/group/embed-certs/serial/SecondStart 305.72
374 TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive 1.02
375 TestStartStop/group/no-preload/serial/EnableAddonWhileActive 1.28
376 TestStartStop/group/default-k8s-diff-port/serial/Stop 10.79
377 TestStartStop/group/no-preload/serial/Stop 10.78
378 TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop 0.23
379 TestStartStop/group/default-k8s-diff-port/serial/SecondStart 262.91
380 TestStartStop/group/no-preload/serial/EnableAddonAfterStop 0.3
381 TestStartStop/group/no-preload/serial/SecondStart 263.15
382 TestStartStop/group/old-k8s-version/serial/DeployApp 9.47
383 TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive 0.78
384 TestStartStop/group/old-k8s-version/serial/Stop 10.78
385 TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop 0.2
386 TestStartStop/group/old-k8s-version/serial/SecondStart 130.97
387 TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop 6.01
388 TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop 5.08
389 TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages 0.21
390 TestStartStop/group/old-k8s-version/serial/Pause 2.33
392 TestStartStop/group/newest-cni/serial/FirstStart 31.25
393 TestStartStop/group/newest-cni/serial/DeployApp 0
394 TestStartStop/group/newest-cni/serial/EnableAddonWhileActive 0.86
395 TestStartStop/group/newest-cni/serial/Stop 10.79
396 TestStartStop/group/newest-cni/serial/EnableAddonAfterStop 0.21
397 TestStartStop/group/newest-cni/serial/SecondStart 14.69
398 TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop 0
399 TestStartStop/group/newest-cni/serial/AddonExistsAfterStop 0
400 TestStartStop/group/newest-cni/serial/VerifyKubernetesImages 0.21
401 TestStartStop/group/newest-cni/serial/Pause 2.55
402 TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop 6.01
403 TestStartStop/group/no-preload/serial/UserAppExistsAfterStop 6.01
404 TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop 5.07
405 TestStartStop/group/no-preload/serial/AddonExistsAfterStop 5.1
406 TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages 0.21
407 TestStartStop/group/default-k8s-diff-port/serial/Pause 2.55
408 TestStartStop/group/no-preload/serial/VerifyKubernetesImages 0.25
409 TestStartStop/group/no-preload/serial/Pause 2.62
410 TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop 6
411 TestStartStop/group/embed-certs/serial/AddonExistsAfterStop 5.07
412 TestStartStop/group/embed-certs/serial/VerifyKubernetesImages 0.21
413 TestStartStop/group/embed-certs/serial/Pause 2.33
TestDownloadOnly/v1.20.0/json-events (18.21s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.20.0/json-events
aaa_download_only_test.go:81: (dbg) Run:  out/minikube-linux-amd64 start -o=json --download-only -p download-only-535365 --force --alsologtostderr --kubernetes-version=v1.20.0 --container-runtime=docker --driver=docker  --container-runtime=docker
aaa_download_only_test.go:81: (dbg) Done: out/minikube-linux-amd64 start -o=json --download-only -p download-only-535365 --force --alsologtostderr --kubernetes-version=v1.20.0 --container-runtime=docker --driver=docker  --container-runtime=docker: (18.212600258s)
--- PASS: TestDownloadOnly/v1.20.0/json-events (18.21s)

                                                
                                    
TestDownloadOnly/v1.20.0/preload-exists (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.20.0/preload-exists
--- PASS: TestDownloadOnly/v1.20.0/preload-exists (0.00s)

                                                
                                    
TestDownloadOnly/v1.20.0/LogsDuration (0.06s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.20.0/LogsDuration
aaa_download_only_test.go:184: (dbg) Run:  out/minikube-linux-amd64 logs -p download-only-535365
aaa_download_only_test.go:184: (dbg) Non-zero exit: out/minikube-linux-amd64 logs -p download-only-535365: exit status 85 (55.757732ms)

                                                
                                                
-- stdout --
	
	==> Audit <==
	|---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	| Command |              Args              |       Profile        |  User   | Version |     Start Time      | End Time |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	| start   | -o=json --download-only        | download-only-535365 | jenkins | v1.34.0 | 17 Sep 24 16:55 UTC |          |
	|         | -p download-only-535365        |                      |         |         |                     |          |
	|         | --force --alsologtostderr      |                      |         |         |                     |          |
	|         | --kubernetes-version=v1.20.0   |                      |         |         |                     |          |
	|         | --container-runtime=docker     |                      |         |         |                     |          |
	|         | --driver=docker                |                      |         |         |                     |          |
	|         | --container-runtime=docker     |                      |         |         |                     |          |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	
	
	==> Last Start <==
	Log file created at: 2024/09/17 16:55:33
	Running on machine: ubuntu-20-agent-10
	Binary: Built with gc go1.23.0 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0917 16:55:33.264486   18790 out.go:345] Setting OutFile to fd 1 ...
	I0917 16:55:33.264755   18790 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0917 16:55:33.264765   18790 out.go:358] Setting ErrFile to fd 2...
	I0917 16:55:33.264772   18790 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0917 16:55:33.264963   18790 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19662-12004/.minikube/bin
	W0917 16:55:33.265127   18790 root.go:314] Error reading config file at /home/jenkins/minikube-integration/19662-12004/.minikube/config/config.json: open /home/jenkins/minikube-integration/19662-12004/.minikube/config/config.json: no such file or directory
	I0917 16:55:33.265697   18790 out.go:352] Setting JSON to true
	I0917 16:55:33.266564   18790 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-10","uptime":2274,"bootTime":1726589859,"procs":172,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1069-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0917 16:55:33.266625   18790 start.go:139] virtualization: kvm guest
	I0917 16:55:33.268806   18790 out.go:97] [download-only-535365] minikube v1.34.0 on Ubuntu 20.04 (kvm/amd64)
	W0917 16:55:33.268906   18790 preload.go:293] Failed to list preload files: open /home/jenkins/minikube-integration/19662-12004/.minikube/cache/preloaded-tarball: no such file or directory
	I0917 16:55:33.268940   18790 notify.go:220] Checking for updates...
	I0917 16:55:33.269995   18790 out.go:169] MINIKUBE_LOCATION=19662
	I0917 16:55:33.271311   18790 out.go:169] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0917 16:55:33.272662   18790 out.go:169] KUBECONFIG=/home/jenkins/minikube-integration/19662-12004/kubeconfig
	I0917 16:55:33.273925   18790 out.go:169] MINIKUBE_HOME=/home/jenkins/minikube-integration/19662-12004/.minikube
	I0917 16:55:33.275222   18790 out.go:169] MINIKUBE_BIN=out/minikube-linux-amd64
	W0917 16:55:33.277447   18790 out.go:321] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I0917 16:55:33.277680   18790 driver.go:394] Setting default libvirt URI to qemu:///system
	I0917 16:55:33.298249   18790 docker.go:123] docker version: linux-27.2.1:Docker Engine - Community
	I0917 16:55:33.298320   18790 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0917 16:55:33.664349   18790 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:30 OomKillDisable:true NGoroutines:53 SystemTime:2024-09-17 16:55:33.655486105 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1069-gcp OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:x86
_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33647943680 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-10 Labels:[] ExperimentalBuild:false ServerVersion:27.2.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:7f7fdf5fed64eb6a7caf99b3e12efcf9d60e311c Expected:7f7fdf5fed64eb6a7caf99b3e12efcf9d60e311c} RuncCommit:{ID:v1.1.14-0-g2c9f560 Expected:v1.1.14-0-g2c9f560} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerEr
rors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.16.2] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.29.2] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I0917 16:55:33.664457   18790 docker.go:318] overlay module found
	I0917 16:55:33.666145   18790 out.go:97] Using the docker driver based on user configuration
	I0917 16:55:33.666168   18790 start.go:297] selected driver: docker
	I0917 16:55:33.666178   18790 start.go:901] validating driver "docker" against <nil>
	I0917 16:55:33.666272   18790 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0917 16:55:33.712445   18790 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:30 OomKillDisable:true NGoroutines:53 SystemTime:2024-09-17 16:55:33.704260821 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1069-gcp OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:x86
_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33647943680 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-10 Labels:[] ExperimentalBuild:false ServerVersion:27.2.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:7f7fdf5fed64eb6a7caf99b3e12efcf9d60e311c Expected:7f7fdf5fed64eb6a7caf99b3e12efcf9d60e311c} RuncCommit:{ID:v1.1.14-0-g2c9f560 Expected:v1.1.14-0-g2c9f560} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerEr
rors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.16.2] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.29.2] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I0917 16:55:33.712638   18790 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0917 16:55:33.713152   18790 start_flags.go:393] Using suggested 8000MB memory alloc based on sys=32089MB, container=32089MB
	I0917 16:55:33.713317   18790 start_flags.go:929] Wait components to verify : map[apiserver:true system_pods:true]
	I0917 16:55:33.715388   18790 out.go:169] Using Docker driver with root privileges
	I0917 16:55:33.716739   18790 cni.go:84] Creating CNI manager for ""
	I0917 16:55:33.716806   18790 cni.go:162] CNI unnecessary in this configuration, recommending no CNI
	I0917 16:55:33.716887   18790 start.go:340] cluster config:
	{Name:download-only-535365 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726589491-19662@sha256:6370b9fec173944088c2d87d44b01819c0ec611a83d9e2f38d36352dff8121a4 Memory:8000 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:download-only-535365 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local Co
ntainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0917 16:55:33.718309   18790 out.go:97] Starting "download-only-535365" primary control-plane node in "download-only-535365" cluster
	I0917 16:55:33.718328   18790 cache.go:121] Beginning downloading kic base image for docker with docker
	I0917 16:55:33.719758   18790 out.go:97] Pulling base image v0.0.45-1726589491-19662 ...
	I0917 16:55:33.719784   18790 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime docker
	I0917 16:55:33.719891   18790 image.go:79] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726589491-19662@sha256:6370b9fec173944088c2d87d44b01819c0ec611a83d9e2f38d36352dff8121a4 in local docker daemon
	I0917 16:55:33.735503   18790 cache.go:149] Downloading gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726589491-19662@sha256:6370b9fec173944088c2d87d44b01819c0ec611a83d9e2f38d36352dff8121a4 to local cache
	I0917 16:55:33.735700   18790 image.go:63] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726589491-19662@sha256:6370b9fec173944088c2d87d44b01819c0ec611a83d9e2f38d36352dff8121a4 in local cache directory
	I0917 16:55:33.735816   18790 image.go:148] Writing gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726589491-19662@sha256:6370b9fec173944088c2d87d44b01819c0ec611a83d9e2f38d36352dff8121a4 to local cache
	I0917 16:55:33.834698   18790 preload.go:118] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.20.0/preloaded-images-k8s-v18-v1.20.0-docker-overlay2-amd64.tar.lz4
	I0917 16:55:33.834726   18790 cache.go:56] Caching tarball of preloaded images
	I0917 16:55:33.834867   18790 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime docker
	I0917 16:55:33.836688   18790 out.go:97] Downloading Kubernetes v1.20.0 preload ...
	I0917 16:55:33.836700   18790 preload.go:236] getting checksum for preloaded-images-k8s-v18-v1.20.0-docker-overlay2-amd64.tar.lz4 ...
	I0917 16:55:33.950080   18790 download.go:107] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.20.0/preloaded-images-k8s-v18-v1.20.0-docker-overlay2-amd64.tar.lz4?checksum=md5:9a82241e9b8b4ad2b5cca73108f2c7a3 -> /home/jenkins/minikube-integration/19662-12004/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-docker-overlay2-amd64.tar.lz4
	I0917 16:55:45.884934   18790 preload.go:247] saving checksum for preloaded-images-k8s-v18-v1.20.0-docker-overlay2-amd64.tar.lz4 ...
	I0917 16:55:45.885035   18790 preload.go:254] verifying checksum of /home/jenkins/minikube-integration/19662-12004/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-docker-overlay2-amd64.tar.lz4 ...
	I0917 16:55:46.693730   18790 cache.go:59] Finished verifying existence of preloaded tar for v1.20.0 on docker
	I0917 16:55:46.694086   18790 profile.go:143] Saving config to /home/jenkins/minikube-integration/19662-12004/.minikube/profiles/download-only-535365/config.json ...
	I0917 16:55:46.694124   18790 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19662-12004/.minikube/profiles/download-only-535365/config.json: {Name:mk0369323bb503695b2e5f4a49bf4c8e96bdd95b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0917 16:55:46.694322   18790 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime docker
	I0917 16:55:46.694531   18790 download.go:107] Downloading: https://dl.k8s.io/release/v1.20.0/bin/linux/amd64/kubectl?checksum=file:https://dl.k8s.io/release/v1.20.0/bin/linux/amd64/kubectl.sha256 -> /home/jenkins/minikube-integration/19662-12004/.minikube/cache/linux/amd64/v1.20.0/kubectl
	
	
	* The control-plane node download-only-535365 host does not exist
	  To start a cluster, run: "minikube start -p download-only-535365"

-- /stdout --
aaa_download_only_test.go:185: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.20.0/LogsDuration (0.06s)
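Note: the download.go:107 line in the log above fetches the preload tarball with a "?checksum=md5:<hash>" query string, and preload.go:247/254 then save and verify that checksum on disk before the tarball is cached. A minimal Go sketch of such a verification step, assuming a plain MD5-over-file comparison; the verifyMD5 helper, the standalone main, and the relative file path are illustrative, not minikube's actual code (only the hash is copied from the logged URL):

package main

import (
	"crypto/md5"
	"encoding/hex"
	"fmt"
	"io"
	"os"
)

// verifyMD5 hashes the file at path and compares it to the expected hex digest.
func verifyMD5(path, wantHex string) error {
	f, err := os.Open(path)
	if err != nil {
		return err
	}
	defer f.Close()

	h := md5.New()
	if _, err := io.Copy(h, f); err != nil {
		return err
	}
	got := hex.EncodeToString(h.Sum(nil))
	if got != wantHex {
		return fmt.Errorf("checksum mismatch: got %s, want %s", got, wantHex)
	}
	return nil
}

func main() {
	// Hash taken from the "checksum=md5:..." query in the logged URL; the path is hypothetical.
	err := verifyMD5("preloaded-images-k8s-v18-v1.20.0-docker-overlay2-amd64.tar.lz4",
		"9a82241e9b8b4ad2b5cca73108f2c7a3")
	fmt.Println(err)
}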

                                                
                                    
TestDownloadOnly/v1.20.0/DeleteAll (0.2s)

=== RUN   TestDownloadOnly/v1.20.0/DeleteAll
aaa_download_only_test.go:197: (dbg) Run:  out/minikube-linux-amd64 delete --all
--- PASS: TestDownloadOnly/v1.20.0/DeleteAll (0.20s)

TestDownloadOnly/v1.20.0/DeleteAlwaysSucceeds (0.12s)

=== RUN   TestDownloadOnly/v1.20.0/DeleteAlwaysSucceeds
aaa_download_only_test.go:208: (dbg) Run:  out/minikube-linux-amd64 delete -p download-only-535365
--- PASS: TestDownloadOnly/v1.20.0/DeleteAlwaysSucceeds (0.12s)

TestDownloadOnly/v1.31.1/json-events (11.24s)

=== RUN   TestDownloadOnly/v1.31.1/json-events
aaa_download_only_test.go:81: (dbg) Run:  out/minikube-linux-amd64 start -o=json --download-only -p download-only-265366 --force --alsologtostderr --kubernetes-version=v1.31.1 --container-runtime=docker --driver=docker  --container-runtime=docker
aaa_download_only_test.go:81: (dbg) Done: out/minikube-linux-amd64 start -o=json --download-only -p download-only-265366 --force --alsologtostderr --kubernetes-version=v1.31.1 --container-runtime=docker --driver=docker  --container-runtime=docker: (11.243373699s)
--- PASS: TestDownloadOnly/v1.31.1/json-events (11.24s)

TestDownloadOnly/v1.31.1/preload-exists (0s)

=== RUN   TestDownloadOnly/v1.31.1/preload-exists
--- PASS: TestDownloadOnly/v1.31.1/preload-exists (0.00s)

TestDownloadOnly/v1.31.1/LogsDuration (0.06s)

=== RUN   TestDownloadOnly/v1.31.1/LogsDuration
aaa_download_only_test.go:184: (dbg) Run:  out/minikube-linux-amd64 logs -p download-only-265366
aaa_download_only_test.go:184: (dbg) Non-zero exit: out/minikube-linux-amd64 logs -p download-only-265366: exit status 85 (55.853959ms)

-- stdout --
	
	==> Audit <==
	|---------|--------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| Command |              Args              |       Profile        |  User   | Version |     Start Time      |      End Time       |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| start   | -o=json --download-only        | download-only-535365 | jenkins | v1.34.0 | 17 Sep 24 16:55 UTC |                     |
	|         | -p download-only-535365        |                      |         |         |                     |                     |
	|         | --force --alsologtostderr      |                      |         |         |                     |                     |
	|         | --kubernetes-version=v1.20.0   |                      |         |         |                     |                     |
	|         | --container-runtime=docker     |                      |         |         |                     |                     |
	|         | --driver=docker                |                      |         |         |                     |                     |
	|         | --container-runtime=docker     |                      |         |         |                     |                     |
	| delete  | --all                          | minikube             | jenkins | v1.34.0 | 17 Sep 24 16:55 UTC | 17 Sep 24 16:55 UTC |
	| delete  | -p download-only-535365        | download-only-535365 | jenkins | v1.34.0 | 17 Sep 24 16:55 UTC | 17 Sep 24 16:55 UTC |
	| start   | -o=json --download-only        | download-only-265366 | jenkins | v1.34.0 | 17 Sep 24 16:55 UTC |                     |
	|         | -p download-only-265366        |                      |         |         |                     |                     |
	|         | --force --alsologtostderr      |                      |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.1   |                      |         |         |                     |                     |
	|         | --container-runtime=docker     |                      |         |         |                     |                     |
	|         | --driver=docker                |                      |         |         |                     |                     |
	|         | --container-runtime=docker     |                      |         |         |                     |                     |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/09/17 16:55:51
	Running on machine: ubuntu-20-agent-10
	Binary: Built with gc go1.23.0 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0917 16:55:51.853859   19185 out.go:345] Setting OutFile to fd 1 ...
	I0917 16:55:51.853977   19185 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0917 16:55:51.853985   19185 out.go:358] Setting ErrFile to fd 2...
	I0917 16:55:51.853990   19185 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0917 16:55:51.854155   19185 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19662-12004/.minikube/bin
	I0917 16:55:51.854693   19185 out.go:352] Setting JSON to true
	I0917 16:55:51.855534   19185 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-10","uptime":2293,"bootTime":1726589859,"procs":170,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1069-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0917 16:55:51.855631   19185 start.go:139] virtualization: kvm guest
	I0917 16:55:51.857765   19185 out.go:97] [download-only-265366] minikube v1.34.0 on Ubuntu 20.04 (kvm/amd64)
	I0917 16:55:51.857900   19185 notify.go:220] Checking for updates...
	I0917 16:55:51.859196   19185 out.go:169] MINIKUBE_LOCATION=19662
	I0917 16:55:51.860544   19185 out.go:169] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0917 16:55:51.861914   19185 out.go:169] KUBECONFIG=/home/jenkins/minikube-integration/19662-12004/kubeconfig
	I0917 16:55:51.863176   19185 out.go:169] MINIKUBE_HOME=/home/jenkins/minikube-integration/19662-12004/.minikube
	I0917 16:55:51.864552   19185 out.go:169] MINIKUBE_BIN=out/minikube-linux-amd64
	W0917 16:55:51.867245   19185 out.go:321] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I0917 16:55:51.867533   19185 driver.go:394] Setting default libvirt URI to qemu:///system
	I0917 16:55:51.888936   19185 docker.go:123] docker version: linux-27.2.1:Docker Engine - Community
	I0917 16:55:51.889053   19185 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0917 16:55:51.934858   19185 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:30 OomKillDisable:true NGoroutines:45 SystemTime:2024-09-17 16:55:51.926268923 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1069-gcp OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:x86
_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33647943680 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-10 Labels:[] ExperimentalBuild:false ServerVersion:27.2.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:7f7fdf5fed64eb6a7caf99b3e12efcf9d60e311c Expected:7f7fdf5fed64eb6a7caf99b3e12efcf9d60e311c} RuncCommit:{ID:v1.1.14-0-g2c9f560 Expected:v1.1.14-0-g2c9f560} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerEr
rors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.16.2] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.29.2] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I0917 16:55:51.934983   19185 docker.go:318] overlay module found
	I0917 16:55:51.936819   19185 out.go:97] Using the docker driver based on user configuration
	I0917 16:55:51.936839   19185 start.go:297] selected driver: docker
	I0917 16:55:51.936845   19185 start.go:901] validating driver "docker" against <nil>
	I0917 16:55:51.936919   19185 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0917 16:55:51.982326   19185 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:30 OomKillDisable:true NGoroutines:45 SystemTime:2024-09-17 16:55:51.974255295 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1069-gcp OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:x86
_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33647943680 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-10 Labels:[] ExperimentalBuild:false ServerVersion:27.2.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:7f7fdf5fed64eb6a7caf99b3e12efcf9d60e311c Expected:7f7fdf5fed64eb6a7caf99b3e12efcf9d60e311c} RuncCommit:{ID:v1.1.14-0-g2c9f560 Expected:v1.1.14-0-g2c9f560} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerEr
rors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.16.2] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.29.2] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I0917 16:55:51.982528   19185 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0917 16:55:51.983252   19185 start_flags.go:393] Using suggested 8000MB memory alloc based on sys=32089MB, container=32089MB
	I0917 16:55:51.983448   19185 start_flags.go:929] Wait components to verify : map[apiserver:true system_pods:true]
	I0917 16:55:51.985107   19185 out.go:169] Using Docker driver with root privileges
	I0917 16:55:51.986162   19185 cni.go:84] Creating CNI manager for ""
	I0917 16:55:51.986232   19185 cni.go:158] "docker" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0917 16:55:51.986248   19185 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0917 16:55:51.986320   19185 start.go:340] cluster config:
	{Name:download-only-265366 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726589491-19662@sha256:6370b9fec173944088c2d87d44b01819c0ec611a83d9e2f38d36352dff8121a4 Memory:8000 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:download-only-265366 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local Co
ntainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0917 16:55:51.987654   19185 out.go:97] Starting "download-only-265366" primary control-plane node in "download-only-265366" cluster
	I0917 16:55:51.987669   19185 cache.go:121] Beginning downloading kic base image for docker with docker
	I0917 16:55:51.988714   19185 out.go:97] Pulling base image v0.0.45-1726589491-19662 ...
	I0917 16:55:51.988731   19185 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime docker
	I0917 16:55:51.988827   19185 image.go:79] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726589491-19662@sha256:6370b9fec173944088c2d87d44b01819c0ec611a83d9e2f38d36352dff8121a4 in local docker daemon
	I0917 16:55:52.004310   19185 cache.go:149] Downloading gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726589491-19662@sha256:6370b9fec173944088c2d87d44b01819c0ec611a83d9e2f38d36352dff8121a4 to local cache
	I0917 16:55:52.004426   19185 image.go:63] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726589491-19662@sha256:6370b9fec173944088c2d87d44b01819c0ec611a83d9e2f38d36352dff8121a4 in local cache directory
	I0917 16:55:52.004455   19185 image.go:66] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726589491-19662@sha256:6370b9fec173944088c2d87d44b01819c0ec611a83d9e2f38d36352dff8121a4 in local cache directory, skipping pull
	I0917 16:55:52.004465   19185 image.go:135] gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726589491-19662@sha256:6370b9fec173944088c2d87d44b01819c0ec611a83d9e2f38d36352dff8121a4 exists in cache, skipping pull
	I0917 16:55:52.004476   19185 cache.go:152] successfully saved gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726589491-19662@sha256:6370b9fec173944088c2d87d44b01819c0ec611a83d9e2f38d36352dff8121a4 as a tarball
	I0917 16:55:52.187572   19185 preload.go:118] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.31.1/preloaded-images-k8s-v18-v1.31.1-docker-overlay2-amd64.tar.lz4
	I0917 16:55:52.187604   19185 cache.go:56] Caching tarball of preloaded images
	I0917 16:55:52.187787   19185 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime docker
	I0917 16:55:52.189700   19185 out.go:97] Downloading Kubernetes v1.31.1 preload ...
	I0917 16:55:52.189717   19185 preload.go:236] getting checksum for preloaded-images-k8s-v18-v1.31.1-docker-overlay2-amd64.tar.lz4 ...
	I0917 16:55:52.310735   19185 download.go:107] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.31.1/preloaded-images-k8s-v18-v1.31.1-docker-overlay2-amd64.tar.lz4?checksum=md5:42e9a173dd5f0c45ed1a890dd06aec5a -> /home/jenkins/minikube-integration/19662-12004/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-docker-overlay2-amd64.tar.lz4
	I0917 16:56:01.315022   19185 preload.go:247] saving checksum for preloaded-images-k8s-v18-v1.31.1-docker-overlay2-amd64.tar.lz4 ...
	I0917 16:56:01.315118   19185 preload.go:254] verifying checksum of /home/jenkins/minikube-integration/19662-12004/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-docker-overlay2-amd64.tar.lz4 ...
	
	
	* The control-plane node download-only-265366 host does not exist
	  To start a cluster, run: "minikube start -p download-only-265366"

-- /stdout --
aaa_download_only_test.go:185: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.31.1/LogsDuration (0.06s)
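Both LogsDuration subtests pass even though "minikube logs" exits 85: for a download-only profile the control-plane host is never created (see the "host does not exist" tip in the stdout above), so the test treats that exit code as expected. A minimal Go sketch of reading a wrapped command's exit code; the command line is taken from the log, but the surrounding code is illustrative, not aaa_download_only_test.go itself:

package main

import (
	"errors"
	"fmt"
	"os/exec"
)

func main() {
	// Same invocation as in the log; the relative binary path assumes the repo layout above.
	cmd := exec.Command("out/minikube-linux-amd64", "logs", "-p", "download-only-265366")
	err := cmd.Run()

	var exitErr *exec.ExitError
	if errors.As(err, &exitErr) {
		// Exit status 85 is expected here: the profile's host was never started.
		fmt.Println("exit code:", exitErr.ExitCode())
	}
}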

                                                
                                    
TestDownloadOnly/v1.31.1/DeleteAll (0.19s)

=== RUN   TestDownloadOnly/v1.31.1/DeleteAll
aaa_download_only_test.go:197: (dbg) Run:  out/minikube-linux-amd64 delete --all
--- PASS: TestDownloadOnly/v1.31.1/DeleteAll (0.19s)

TestDownloadOnly/v1.31.1/DeleteAlwaysSucceeds (0.12s)

=== RUN   TestDownloadOnly/v1.31.1/DeleteAlwaysSucceeds
aaa_download_only_test.go:208: (dbg) Run:  out/minikube-linux-amd64 delete -p download-only-265366
--- PASS: TestDownloadOnly/v1.31.1/DeleteAlwaysSucceeds (0.12s)

TestDownloadOnlyKic (0.96s)

=== RUN   TestDownloadOnlyKic
aaa_download_only_test.go:232: (dbg) Run:  out/minikube-linux-amd64 start --download-only -p download-docker-967799 --alsologtostderr --driver=docker  --container-runtime=docker
helpers_test.go:175: Cleaning up "download-docker-967799" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p download-docker-967799
--- PASS: TestDownloadOnlyKic (0.96s)

TestBinaryMirror (0.72s)

=== RUN   TestBinaryMirror
aaa_download_only_test.go:314: (dbg) Run:  out/minikube-linux-amd64 start --download-only -p binary-mirror-225195 --alsologtostderr --binary-mirror http://127.0.0.1:45015 --driver=docker  --container-runtime=docker
helpers_test.go:175: Cleaning up "binary-mirror-225195" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p binary-mirror-225195
--- PASS: TestBinaryMirror (0.72s)

TestOffline (79.2s)

=== RUN   TestOffline
=== PAUSE TestOffline

=== CONT  TestOffline
aab_offline_test.go:55: (dbg) Run:  out/minikube-linux-amd64 start -p offline-docker-542235 --alsologtostderr -v=1 --memory=2048 --wait=true --driver=docker  --container-runtime=docker
aab_offline_test.go:55: (dbg) Done: out/minikube-linux-amd64 start -p offline-docker-542235 --alsologtostderr -v=1 --memory=2048 --wait=true --driver=docker  --container-runtime=docker: (1m17.065453149s)
helpers_test.go:175: Cleaning up "offline-docker-542235" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p offline-docker-542235
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p offline-docker-542235: (2.130382835s)
--- PASS: TestOffline (79.20s)

TestAddons/PreSetup/EnablingAddonOnNonExistingCluster (0.05s)

=== RUN   TestAddons/PreSetup/EnablingAddonOnNonExistingCluster
=== PAUSE TestAddons/PreSetup/EnablingAddonOnNonExistingCluster

=== CONT  TestAddons/PreSetup/EnablingAddonOnNonExistingCluster
addons_test.go:1037: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p addons-163060
addons_test.go:1037: (dbg) Non-zero exit: out/minikube-linux-amd64 addons enable dashboard -p addons-163060: exit status 85 (48.390256ms)

-- stdout --
	* Profile "addons-163060" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p addons-163060"

-- /stdout --
--- PASS: TestAddons/PreSetup/EnablingAddonOnNonExistingCluster (0.05s)

TestAddons/PreSetup/DisablingAddonOnNonExistingCluster (0.05s)

=== RUN   TestAddons/PreSetup/DisablingAddonOnNonExistingCluster
=== PAUSE TestAddons/PreSetup/DisablingAddonOnNonExistingCluster

=== CONT  TestAddons/PreSetup/DisablingAddonOnNonExistingCluster
addons_test.go:1048: (dbg) Run:  out/minikube-linux-amd64 addons disable dashboard -p addons-163060
addons_test.go:1048: (dbg) Non-zero exit: out/minikube-linux-amd64 addons disable dashboard -p addons-163060: exit status 85 (49.319298ms)

-- stdout --
	* Profile "addons-163060" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p addons-163060"

-- /stdout --
--- PASS: TestAddons/PreSetup/DisablingAddonOnNonExistingCluster (0.05s)

TestAddons/Setup (209.64s)

=== RUN   TestAddons/Setup
addons_test.go:110: (dbg) Run:  out/minikube-linux-amd64 start -p addons-163060 --wait=true --memory=4000 --alsologtostderr --addons=registry --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=storage-provisioner-rancher --addons=nvidia-device-plugin --addons=yakd --addons=volcano --driver=docker  --container-runtime=docker --addons=ingress --addons=ingress-dns --addons=helm-tiller
addons_test.go:110: (dbg) Done: out/minikube-linux-amd64 start -p addons-163060 --wait=true --memory=4000 --alsologtostderr --addons=registry --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=storage-provisioner-rancher --addons=nvidia-device-plugin --addons=yakd --addons=volcano --driver=docker  --container-runtime=docker --addons=ingress --addons=ingress-dns --addons=helm-tiller: (3m29.63591691s)
--- PASS: TestAddons/Setup (209.64s)

TestAddons/serial/Volcano (40.53s)

=== RUN   TestAddons/serial/Volcano
addons_test.go:897: volcano-scheduler stabilized in 12.4344ms
addons_test.go:905: volcano-admission stabilized in 12.480425ms
addons_test.go:913: volcano-controller stabilized in 12.53447ms
addons_test.go:919: (dbg) TestAddons/serial/Volcano: waiting 6m0s for pods matching "app=volcano-scheduler" in namespace "volcano-system" ...
helpers_test.go:344: "volcano-scheduler-576bc46687-qkrzj" [bcb774aa-d3b9-4948-8be3-090ad6fa07e2] Running
addons_test.go:919: (dbg) TestAddons/serial/Volcano: app=volcano-scheduler healthy within 6.003823551s
addons_test.go:923: (dbg) TestAddons/serial/Volcano: waiting 6m0s for pods matching "app=volcano-admission" in namespace "volcano-system" ...
helpers_test.go:344: "volcano-admission-77d7d48b68-lhh87" [32fcf3c4-9d99-4a78-8e7e-fe71d41e1396] Running
addons_test.go:923: (dbg) TestAddons/serial/Volcano: app=volcano-admission healthy within 5.003164404s
addons_test.go:927: (dbg) TestAddons/serial/Volcano: waiting 6m0s for pods matching "app=volcano-controller" in namespace "volcano-system" ...
helpers_test.go:344: "volcano-controllers-56675bb4d5-g4z86" [4a7eb008-5233-46a1-a986-f5b082650b74] Running
addons_test.go:927: (dbg) TestAddons/serial/Volcano: app=volcano-controller healthy within 5.003193102s
addons_test.go:932: (dbg) Run:  kubectl --context addons-163060 delete -n volcano-system job volcano-admission-init
addons_test.go:938: (dbg) Run:  kubectl --context addons-163060 create -f testdata/vcjob.yaml
addons_test.go:946: (dbg) Run:  kubectl --context addons-163060 get vcjob -n my-volcano
addons_test.go:964: (dbg) TestAddons/serial/Volcano: waiting 3m0s for pods matching "volcano.sh/job-name=test-job" in namespace "my-volcano" ...
helpers_test.go:344: "test-job-nginx-0" [60a4bc38-24f6-467b-bcf2-0b8260ec35a7] Pending
helpers_test.go:344: "test-job-nginx-0" [60a4bc38-24f6-467b-bcf2-0b8260ec35a7] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
helpers_test.go:344: "test-job-nginx-0" [60a4bc38-24f6-467b-bcf2-0b8260ec35a7] Running
addons_test.go:964: (dbg) TestAddons/serial/Volcano: volcano.sh/job-name=test-job healthy within 14.003727312s
addons_test.go:968: (dbg) Run:  out/minikube-linux-amd64 -p addons-163060 addons disable volcano --alsologtostderr -v=1
addons_test.go:968: (dbg) Done: out/minikube-linux-amd64 -p addons-163060 addons disable volcano --alsologtostderr -v=1: (10.187422384s)
--- PASS: TestAddons/serial/Volcano (40.53s)

TestAddons/serial/GCPAuth/Namespaces (0.12s)

=== RUN   TestAddons/serial/GCPAuth/Namespaces
addons_test.go:656: (dbg) Run:  kubectl --context addons-163060 create ns new-namespace
addons_test.go:670: (dbg) Run:  kubectl --context addons-163060 get secret gcp-auth -n new-namespace
--- PASS: TestAddons/serial/GCPAuth/Namespaces (0.12s)

TestAddons/parallel/Ingress (21.18s)

=== RUN   TestAddons/parallel/Ingress
=== PAUSE TestAddons/parallel/Ingress

=== CONT  TestAddons/parallel/Ingress
addons_test.go:209: (dbg) Run:  kubectl --context addons-163060 wait --for=condition=ready --namespace=ingress-nginx pod --selector=app.kubernetes.io/component=controller --timeout=90s
addons_test.go:234: (dbg) Run:  kubectl --context addons-163060 replace --force -f testdata/nginx-ingress-v1.yaml
addons_test.go:247: (dbg) Run:  kubectl --context addons-163060 replace --force -f testdata/nginx-pod-svc.yaml
addons_test.go:252: (dbg) TestAddons/parallel/Ingress: waiting 8m0s for pods matching "run=nginx" in namespace "default" ...
helpers_test.go:344: "nginx" [0053c87b-c593-4f1d-bba3-3d73f5ae3004] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
helpers_test.go:344: "nginx" [0053c87b-c593-4f1d-bba3-3d73f5ae3004] Running
addons_test.go:252: (dbg) TestAddons/parallel/Ingress: run=nginx healthy within 11.003324452s
addons_test.go:264: (dbg) Run:  out/minikube-linux-amd64 -p addons-163060 ssh "curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'"
addons_test.go:288: (dbg) Run:  kubectl --context addons-163060 replace --force -f testdata/ingress-dns-example-v1.yaml
addons_test.go:293: (dbg) Run:  out/minikube-linux-amd64 -p addons-163060 ip
addons_test.go:299: (dbg) Run:  nslookup hello-john.test 192.168.49.2
addons_test.go:308: (dbg) Run:  out/minikube-linux-amd64 -p addons-163060 addons disable ingress-dns --alsologtostderr -v=1
addons_test.go:308: (dbg) Done: out/minikube-linux-amd64 -p addons-163060 addons disable ingress-dns --alsologtostderr -v=1: (1.332559093s)
addons_test.go:313: (dbg) Run:  out/minikube-linux-amd64 -p addons-163060 addons disable ingress --alsologtostderr -v=1
addons_test.go:313: (dbg) Done: out/minikube-linux-amd64 -p addons-163060 addons disable ingress --alsologtostderr -v=1: (7.689181418s)
--- PASS: TestAddons/parallel/Ingress (21.18s)
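The ingress check above verifies routing by sending a request to the node while overriding the Host header (curl -s http://127.0.0.1/ -H 'Host: nginx.example.com' over "minikube ssh"). For reference, the same request expressed in Go; the URL and host are the values from the log, and this is a sketch rather than the test's own code:

package main

import (
	"fmt"
	"io"
	"net/http"
)

func main() {
	req, err := http.NewRequest("GET", "http://127.0.0.1/", nil)
	if err != nil {
		panic(err)
	}
	// In net/http the Host header is set via req.Host, not req.Header.
	req.Host = "nginx.example.com"

	resp, err := http.DefaultClient.Do(req)
	if err != nil {
		panic(err)
	}
	defer resp.Body.Close()
	body, _ := io.ReadAll(resp.Body)
	fmt.Println(resp.Status, len(body), "bytes")
}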

                                                
                                    
TestAddons/parallel/InspektorGadget (10.65s)

=== RUN   TestAddons/parallel/InspektorGadget
=== PAUSE TestAddons/parallel/InspektorGadget

=== CONT  TestAddons/parallel/InspektorGadget
addons_test.go:848: (dbg) TestAddons/parallel/InspektorGadget: waiting 8m0s for pods matching "k8s-app=gadget" in namespace "gadget" ...
helpers_test.go:344: "gadget-q6mtg" [6950fe67-3f5b-4318-9475-387de03fa42b] Running / Ready:ContainersNotReady (containers with unready status: [gadget]) / ContainersReady:ContainersNotReady (containers with unready status: [gadget])
addons_test.go:848: (dbg) TestAddons/parallel/InspektorGadget: k8s-app=gadget healthy within 5.003798963s
addons_test.go:851: (dbg) Run:  out/minikube-linux-amd64 addons disable inspektor-gadget -p addons-163060
addons_test.go:851: (dbg) Done: out/minikube-linux-amd64 addons disable inspektor-gadget -p addons-163060: (5.649465578s)
--- PASS: TestAddons/parallel/InspektorGadget (10.65s)

TestAddons/parallel/MetricsServer (5.73s)

=== RUN   TestAddons/parallel/MetricsServer
=== PAUSE TestAddons/parallel/MetricsServer

=== CONT  TestAddons/parallel/MetricsServer
addons_test.go:409: metrics-server stabilized in 1.989743ms
addons_test.go:411: (dbg) TestAddons/parallel/MetricsServer: waiting 6m0s for pods matching "k8s-app=metrics-server" in namespace "kube-system" ...
helpers_test.go:344: "metrics-server-84c5f94fbc-2f2f2" [03a25efb-5c8d-4637-b228-6bb67ccb601f] Running
addons_test.go:411: (dbg) TestAddons/parallel/MetricsServer: k8s-app=metrics-server healthy within 5.00242876s
addons_test.go:417: (dbg) Run:  kubectl --context addons-163060 top pods -n kube-system
addons_test.go:434: (dbg) Run:  out/minikube-linux-amd64 -p addons-163060 addons disable metrics-server --alsologtostderr -v=1
--- PASS: TestAddons/parallel/MetricsServer (5.73s)

TestAddons/parallel/HelmTiller (10.41s)

=== RUN   TestAddons/parallel/HelmTiller
=== PAUSE TestAddons/parallel/HelmTiller

=== CONT  TestAddons/parallel/HelmTiller
addons_test.go:458: tiller-deploy stabilized in 2.024093ms
addons_test.go:460: (dbg) TestAddons/parallel/HelmTiller: waiting 6m0s for pods matching "app=helm" in namespace "kube-system" ...
helpers_test.go:344: "tiller-deploy-b48cc5f79-qd92x" [6de636f4-5713-4439-9d76-756777a66ef2] Running
addons_test.go:460: (dbg) TestAddons/parallel/HelmTiller: app=helm healthy within 5.003317665s
addons_test.go:475: (dbg) Run:  kubectl --context addons-163060 run --rm helm-test --restart=Never --image=docker.io/alpine/helm:2.16.3 -it --namespace=kube-system -- version
addons_test.go:475: (dbg) Done: kubectl --context addons-163060 run --rm helm-test --restart=Never --image=docker.io/alpine/helm:2.16.3 -it --namespace=kube-system -- version: (4.83071224s)
addons_test.go:492: (dbg) Run:  out/minikube-linux-amd64 -p addons-163060 addons disable helm-tiller --alsologtostderr -v=1
2024/09/17 17:09:28 [DEBUG] GET http://192.168.49.2:5000
--- PASS: TestAddons/parallel/HelmTiller (10.41s)

TestAddons/parallel/CSI (61.1s)

=== RUN   TestAddons/parallel/CSI
=== PAUSE TestAddons/parallel/CSI

=== CONT  TestAddons/parallel/CSI
addons_test.go:567: csi-hostpath-driver pods stabilized in 4.14823ms
addons_test.go:570: (dbg) Run:  kubectl --context addons-163060 create -f testdata/csi-hostpath-driver/pvc.yaml
addons_test.go:575: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pvc "hpvc" in namespace "default" ...
helpers_test.go:394: (dbg) Run:  kubectl --context addons-163060 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-163060 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-163060 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-163060 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-163060 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-163060 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-163060 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-163060 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-163060 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-163060 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-163060 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-163060 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-163060 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-163060 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-163060 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-163060 get pvc hpvc -o jsonpath={.status.phase} -n default
addons_test.go:580: (dbg) Run:  kubectl --context addons-163060 create -f testdata/csi-hostpath-driver/pv-pod.yaml
addons_test.go:585: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pods matching "app=task-pv-pod" in namespace "default" ...
helpers_test.go:344: "task-pv-pod" [5d08c16a-07bd-4d97-8fff-ad8bdf82d3cd] Pending
helpers_test.go:344: "task-pv-pod" [5d08c16a-07bd-4d97-8fff-ad8bdf82d3cd] Pending / Ready:ContainersNotReady (containers with unready status: [task-pv-container]) / ContainersReady:ContainersNotReady (containers with unready status: [task-pv-container])
helpers_test.go:344: "task-pv-pod" [5d08c16a-07bd-4d97-8fff-ad8bdf82d3cd] Running
addons_test.go:585: (dbg) TestAddons/parallel/CSI: app=task-pv-pod healthy within 8.00286975s
addons_test.go:590: (dbg) Run:  kubectl --context addons-163060 create -f testdata/csi-hostpath-driver/snapshot.yaml
addons_test.go:595: (dbg) TestAddons/parallel/CSI: waiting 6m0s for volume snapshot "new-snapshot-demo" in namespace "default" ...
helpers_test.go:419: (dbg) Run:  kubectl --context addons-163060 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
helpers_test.go:419: (dbg) Run:  kubectl --context addons-163060 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
addons_test.go:600: (dbg) Run:  kubectl --context addons-163060 delete pod task-pv-pod
addons_test.go:606: (dbg) Run:  kubectl --context addons-163060 delete pvc hpvc
addons_test.go:612: (dbg) Run:  kubectl --context addons-163060 create -f testdata/csi-hostpath-driver/pvc-restore.yaml
addons_test.go:617: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pvc "hpvc-restore" in namespace "default" ...
helpers_test.go:394: (dbg) Run:  kubectl --context addons-163060 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-163060 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-163060 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-163060 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-163060 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-163060 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-163060 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-163060 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-163060 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-163060 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-163060 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-163060 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-163060 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-163060 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-163060 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-163060 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-163060 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-163060 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-163060 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-163060 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
addons_test.go:622: (dbg) Run:  kubectl --context addons-163060 create -f testdata/csi-hostpath-driver/pv-pod-restore.yaml
addons_test.go:627: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pods matching "app=task-pv-pod-restore" in namespace "default" ...
helpers_test.go:344: "task-pv-pod-restore" [5a7bebcb-252e-4216-a3ff-252d25290d0c] Pending
helpers_test.go:344: "task-pv-pod-restore" [5a7bebcb-252e-4216-a3ff-252d25290d0c] Pending / Ready:ContainersNotReady (containers with unready status: [task-pv-container]) / ContainersReady:ContainersNotReady (containers with unready status: [task-pv-container])
helpers_test.go:344: "task-pv-pod-restore" [5a7bebcb-252e-4216-a3ff-252d25290d0c] Running
addons_test.go:627: (dbg) TestAddons/parallel/CSI: app=task-pv-pod-restore healthy within 8.003598796s
addons_test.go:632: (dbg) Run:  kubectl --context addons-163060 delete pod task-pv-pod-restore
addons_test.go:636: (dbg) Run:  kubectl --context addons-163060 delete pvc hpvc-restore
addons_test.go:640: (dbg) Run:  kubectl --context addons-163060 delete volumesnapshot new-snapshot-demo
addons_test.go:644: (dbg) Run:  out/minikube-linux-amd64 -p addons-163060 addons disable csi-hostpath-driver --alsologtostderr -v=1
addons_test.go:644: (dbg) Done: out/minikube-linux-amd64 -p addons-163060 addons disable csi-hostpath-driver --alsologtostderr -v=1: (6.436899475s)
addons_test.go:648: (dbg) Run:  out/minikube-linux-amd64 -p addons-163060 addons disable volumesnapshots --alsologtostderr -v=1
--- PASS: TestAddons/parallel/CSI (61.10s)
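
The run of identical `kubectl get pvc` lines above is the helper's polling loop: it re-reads the claim's `.status.phase` until the restored PVC reports Bound. A minimal standalone sketch of the same wait, assuming the `addons-163060` context and the `hpvc-restore` claim from this run:

# Poll the PVC phase until it reports Bound, giving up after ~2 minutes.
for i in $(seq 1 60); do
  phase=$(kubectl --context addons-163060 get pvc hpvc-restore -n default -o jsonpath='{.status.phase}')
  [ "$phase" = "Bound" ] && break
  sleep 2
done
echo "hpvc-restore phase: $phase"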

TestAddons/parallel/Headlamp (17.22s)

=== RUN   TestAddons/parallel/Headlamp
=== PAUSE TestAddons/parallel/Headlamp

=== CONT  TestAddons/parallel/Headlamp
addons_test.go:830: (dbg) Run:  out/minikube-linux-amd64 addons enable headlamp -p addons-163060 --alsologtostderr -v=1
addons_test.go:835: (dbg) TestAddons/parallel/Headlamp: waiting 8m0s for pods matching "app.kubernetes.io/name=headlamp" in namespace "headlamp" ...
helpers_test.go:344: "headlamp-7b5c95b59d-95g8w" [68e7fd20-a9c6-4ef3-b42a-f02d8a29fefd] Pending
helpers_test.go:344: "headlamp-7b5c95b59d-95g8w" [68e7fd20-a9c6-4ef3-b42a-f02d8a29fefd] Pending / Ready:ContainersNotReady (containers with unready status: [headlamp]) / ContainersReady:ContainersNotReady (containers with unready status: [headlamp])
helpers_test.go:344: "headlamp-7b5c95b59d-95g8w" [68e7fd20-a9c6-4ef3-b42a-f02d8a29fefd] Running
addons_test.go:835: (dbg) TestAddons/parallel/Headlamp: app.kubernetes.io/name=headlamp healthy within 11.004211251s
addons_test.go:839: (dbg) Run:  out/minikube-linux-amd64 -p addons-163060 addons disable headlamp --alsologtostderr -v=1
addons_test.go:839: (dbg) Done: out/minikube-linux-amd64 -p addons-163060 addons disable headlamp --alsologtostderr -v=1: (5.55302193s)
--- PASS: TestAddons/parallel/Headlamp (17.22s)
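
The addon tests in this group all follow the same enable-then-wait shape. Reproduced by hand against this run's profile, it would look roughly like this (a sketch, not the test's exact code):

# Enable the addon, then block until its pod reports Ready.
out/minikube-linux-amd64 addons enable headlamp -p addons-163060 --alsologtostderr -v=1
kubectl --context addons-163060 -n headlamp wait pod \
  -l app.kubernetes.io/name=headlamp --for=condition=Ready --timeout=8m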

TestAddons/parallel/CloudSpanner (5.44s)

=== RUN   TestAddons/parallel/CloudSpanner
=== PAUSE TestAddons/parallel/CloudSpanner

=== CONT  TestAddons/parallel/CloudSpanner
addons_test.go:867: (dbg) TestAddons/parallel/CloudSpanner: waiting 6m0s for pods matching "app=cloud-spanner-emulator" in namespace "default" ...
helpers_test.go:344: "cloud-spanner-emulator-769b77f747-hqpsm" [7b2a7c29-9e23-43a8-86f9-9bbd9a9c6f7c] Running
addons_test.go:867: (dbg) TestAddons/parallel/CloudSpanner: app=cloud-spanner-emulator healthy within 5.002827008s
addons_test.go:870: (dbg) Run:  out/minikube-linux-amd64 addons disable cloud-spanner -p addons-163060
--- PASS: TestAddons/parallel/CloudSpanner (5.44s)

TestAddons/parallel/LocalPath (54.36s)

=== RUN   TestAddons/parallel/LocalPath
=== PAUSE TestAddons/parallel/LocalPath

=== CONT  TestAddons/parallel/LocalPath
addons_test.go:982: (dbg) Run:  kubectl --context addons-163060 apply -f testdata/storage-provisioner-rancher/pvc.yaml
addons_test.go:988: (dbg) Run:  kubectl --context addons-163060 apply -f testdata/storage-provisioner-rancher/pod.yaml
addons_test.go:992: (dbg) TestAddons/parallel/LocalPath: waiting 5m0s for pvc "test-pvc" in namespace "default" ...
helpers_test.go:394: (dbg) Run:  kubectl --context addons-163060 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-163060 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-163060 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-163060 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-163060 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-163060 get pvc test-pvc -o jsonpath={.status.phase} -n default
addons_test.go:995: (dbg) TestAddons/parallel/LocalPath: waiting 3m0s for pods matching "run=test-local-path" in namespace "default" ...
helpers_test.go:344: "test-local-path" [f397b47c-294f-4fda-af0c-8071da7b56c7] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:344: "test-local-path" [f397b47c-294f-4fda-af0c-8071da7b56c7] Pending / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
helpers_test.go:344: "test-local-path" [f397b47c-294f-4fda-af0c-8071da7b56c7] Succeeded / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
addons_test.go:995: (dbg) TestAddons/parallel/LocalPath: run=test-local-path healthy within 5.003152041s
addons_test.go:1000: (dbg) Run:  kubectl --context addons-163060 get pvc test-pvc -o=json
addons_test.go:1009: (dbg) Run:  out/minikube-linux-amd64 -p addons-163060 ssh "cat /opt/local-path-provisioner/pvc-6b40e24e-ff27-49e1-a0af-4a3320a2542e_default_test-pvc/file1"
addons_test.go:1021: (dbg) Run:  kubectl --context addons-163060 delete pod test-local-path
addons_test.go:1025: (dbg) Run:  kubectl --context addons-163060 delete pvc test-pvc
addons_test.go:1029: (dbg) Run:  out/minikube-linux-amd64 -p addons-163060 addons disable storage-provisioner-rancher --alsologtostderr -v=1
addons_test.go:1029: (dbg) Done: out/minikube-linux-amd64 -p addons-163060 addons disable storage-provisioner-rancher --alsologtostderr -v=1: (43.340603902s)
--- PASS: TestAddons/parallel/LocalPath (54.36s)
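
The `ssh "cat ..."` step is the actual assertion here: it checks that the local-path provisioner persisted the pod's write onto the node's filesystem. A sketch of that check; note the pvc-... directory name is specific to this run and changes every time:

# local-path stores each claim under /opt/local-path-provisioner/<pv-name>_<namespace>_<claim-name>.
out/minikube-linux-amd64 -p addons-163060 ssh \
  "cat /opt/local-path-provisioner/pvc-6b40e24e-ff27-49e1-a0af-4a3320a2542e_default_test-pvc/file1"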

TestAddons/parallel/NvidiaDevicePlugin (5.41s)

=== RUN   TestAddons/parallel/NvidiaDevicePlugin
=== PAUSE TestAddons/parallel/NvidiaDevicePlugin

=== CONT  TestAddons/parallel/NvidiaDevicePlugin
addons_test.go:1061: (dbg) TestAddons/parallel/NvidiaDevicePlugin: waiting 6m0s for pods matching "name=nvidia-device-plugin-ds" in namespace "kube-system" ...
helpers_test.go:344: "nvidia-device-plugin-daemonset-fvg2d" [69980d79-6040-46a6-92e4-f154f528e261] Running
addons_test.go:1061: (dbg) TestAddons/parallel/NvidiaDevicePlugin: name=nvidia-device-plugin-ds healthy within 5.003955248s
addons_test.go:1064: (dbg) Run:  out/minikube-linux-amd64 addons disable nvidia-device-plugin -p addons-163060
--- PASS: TestAddons/parallel/NvidiaDevicePlugin (5.41s)

TestAddons/parallel/Yakd (10.83s)

=== RUN   TestAddons/parallel/Yakd
=== PAUSE TestAddons/parallel/Yakd

=== CONT  TestAddons/parallel/Yakd
addons_test.go:1072: (dbg) TestAddons/parallel/Yakd: waiting 2m0s for pods matching "app.kubernetes.io/name=yakd-dashboard" in namespace "yakd-dashboard" ...
helpers_test.go:344: "yakd-dashboard-67d98fc6b-lnxh5" [2b270b7a-7c43-41fc-9f94-f5b231caac39] Running
addons_test.go:1072: (dbg) TestAddons/parallel/Yakd: app.kubernetes.io/name=yakd-dashboard healthy within 5.032382128s
addons_test.go:1076: (dbg) Run:  out/minikube-linux-amd64 -p addons-163060 addons disable yakd --alsologtostderr -v=1
addons_test.go:1076: (dbg) Done: out/minikube-linux-amd64 -p addons-163060 addons disable yakd --alsologtostderr -v=1: (5.795080829s)
--- PASS: TestAddons/parallel/Yakd (10.83s)

TestAddons/StoppedEnableDisable (5.87s)

=== RUN   TestAddons/StoppedEnableDisable
addons_test.go:174: (dbg) Run:  out/minikube-linux-amd64 stop -p addons-163060
addons_test.go:174: (dbg) Done: out/minikube-linux-amd64 stop -p addons-163060: (5.639034008s)
addons_test.go:178: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p addons-163060
addons_test.go:182: (dbg) Run:  out/minikube-linux-amd64 addons disable dashboard -p addons-163060
addons_test.go:187: (dbg) Run:  out/minikube-linux-amd64 addons disable gvisor -p addons-163060
--- PASS: TestAddons/StoppedEnableDisable (5.87s)
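
This test verifies that addons can be toggled against a stopped cluster (the setting is recorded in the profile and applied on the next start). The equivalent manual sequence, as a sketch:

out/minikube-linux-amd64 stop -p addons-163060
out/minikube-linux-amd64 addons enable dashboard -p addons-163060
out/minikube-linux-amd64 addons disable dashboard -p addons-163060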

TestCertOptions (31.64s)

=== RUN   TestCertOptions
=== PAUSE TestCertOptions

=== CONT  TestCertOptions
cert_options_test.go:49: (dbg) Run:  out/minikube-linux-amd64 start -p cert-options-611495 --memory=2048 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=docker  --container-runtime=docker
cert_options_test.go:49: (dbg) Done: out/minikube-linux-amd64 start -p cert-options-611495 --memory=2048 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=docker  --container-runtime=docker: (28.938015058s)
cert_options_test.go:60: (dbg) Run:  out/minikube-linux-amd64 -p cert-options-611495 ssh "openssl x509 -text -noout -in /var/lib/minikube/certs/apiserver.crt"
cert_options_test.go:88: (dbg) Run:  kubectl --context cert-options-611495 config view
cert_options_test.go:100: (dbg) Run:  out/minikube-linux-amd64 ssh -p cert-options-611495 -- "sudo cat /etc/kubernetes/admin.conf"
helpers_test.go:175: Cleaning up "cert-options-611495" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p cert-options-611495
E0917 17:39:35.031861   18778 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19662-12004/.minikube/profiles/addons-163060/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p cert-options-611495: (2.041593788s)
--- PASS: TestCertOptions (31.64s)
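
The openssl step is how the test confirms that the extra --apiserver-ips/--apiserver-names values ended up in the serving certificate's SANs and that the custom port took effect. A sketch of the same inspection:

# Print the apiserver cert and look for the requested SANs (192.168.15.15, www.google.com, ...).
out/minikube-linux-amd64 -p cert-options-611495 ssh \
  "openssl x509 -text -noout -in /var/lib/minikube/certs/apiserver.crt" \
  | grep -A1 "Subject Alternative Name"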

TestCertExpiration (229.74s)

=== RUN   TestCertExpiration
=== PAUSE TestCertExpiration

=== CONT  TestCertExpiration
cert_options_test.go:123: (dbg) Run:  out/minikube-linux-amd64 start -p cert-expiration-959431 --memory=2048 --cert-expiration=3m --driver=docker  --container-runtime=docker
cert_options_test.go:123: (dbg) Done: out/minikube-linux-amd64 start -p cert-expiration-959431 --memory=2048 --cert-expiration=3m --driver=docker  --container-runtime=docker: (25.256706712s)
cert_options_test.go:131: (dbg) Run:  out/minikube-linux-amd64 start -p cert-expiration-959431 --memory=2048 --cert-expiration=8760h --driver=docker  --container-runtime=docker
cert_options_test.go:131: (dbg) Done: out/minikube-linux-amd64 start -p cert-expiration-959431 --memory=2048 --cert-expiration=8760h --driver=docker  --container-runtime=docker: (20.48387291s)
helpers_test.go:175: Cleaning up "cert-expiration-959431" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p cert-expiration-959431
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p cert-expiration-959431: (3.994134466s)
--- PASS: TestCertExpiration (229.74s)
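
The two start invocations bracket an expiry window: the first issues three-minute certificates, the test waits for them to lapse, and the second start (with --cert-expiration=8760h) must succeed by re-issuing them. Condensed as a sketch:

out/minikube-linux-amd64 start -p cert-expiration-959431 --memory=2048 --cert-expiration=3m --driver=docker --container-runtime=docker
sleep 180  # let the short-lived certificates expire
out/minikube-linux-amd64 start -p cert-expiration-959431 --memory=2048 --cert-expiration=8760h --driver=docker --container-runtime=docker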

TestDockerFlags (29.84s)

=== RUN   TestDockerFlags
=== PAUSE TestDockerFlags

=== CONT  TestDockerFlags
docker_test.go:51: (dbg) Run:  out/minikube-linux-amd64 start -p docker-flags-395926 --cache-images=false --memory=2048 --install-addons=false --wait=false --docker-env=FOO=BAR --docker-env=BAZ=BAT --docker-opt=debug --docker-opt=icc=true --alsologtostderr -v=5 --driver=docker  --container-runtime=docker
docker_test.go:51: (dbg) Done: out/minikube-linux-amd64 start -p docker-flags-395926 --cache-images=false --memory=2048 --install-addons=false --wait=false --docker-env=FOO=BAR --docker-env=BAZ=BAT --docker-opt=debug --docker-opt=icc=true --alsologtostderr -v=5 --driver=docker  --container-runtime=docker: (26.894591474s)
docker_test.go:56: (dbg) Run:  out/minikube-linux-amd64 -p docker-flags-395926 ssh "sudo systemctl show docker --property=Environment --no-pager"
docker_test.go:67: (dbg) Run:  out/minikube-linux-amd64 -p docker-flags-395926 ssh "sudo systemctl show docker --property=ExecStart --no-pager"
helpers_test.go:175: Cleaning up "docker-flags-395926" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p docker-flags-395926
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p docker-flags-395926: (2.157277947s)
--- PASS: TestDockerFlags (29.84s)
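
The two `systemctl show` probes assert that --docker-env landed in the daemon's Environment and --docker-opt in its ExecStart line. As a sketch:

out/minikube-linux-amd64 -p docker-flags-395926 ssh \
  "sudo systemctl show docker --property=Environment --no-pager" | grep FOO=BAR
out/minikube-linux-amd64 -p docker-flags-395926 ssh \
  "sudo systemctl show docker --property=ExecStart --no-pager" | grep -- "--icc=true"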

TestForceSystemdFlag (35.23s)

=== RUN   TestForceSystemdFlag
=== PAUSE TestForceSystemdFlag

=== CONT  TestForceSystemdFlag
docker_test.go:91: (dbg) Run:  out/minikube-linux-amd64 start -p force-systemd-flag-687459 --memory=2048 --force-systemd --alsologtostderr -v=5 --driver=docker  --container-runtime=docker
docker_test.go:91: (dbg) Done: out/minikube-linux-amd64 start -p force-systemd-flag-687459 --memory=2048 --force-systemd --alsologtostderr -v=5 --driver=docker  --container-runtime=docker: (32.350793123s)
docker_test.go:110: (dbg) Run:  out/minikube-linux-amd64 -p force-systemd-flag-687459 ssh "docker info --format {{.CgroupDriver}}"
helpers_test.go:175: Cleaning up "force-systemd-flag-687459" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p force-systemd-flag-687459
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p force-systemd-flag-687459: (2.417921676s)
--- PASS: TestForceSystemdFlag (35.23s)
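
Both force-systemd tests reduce to one assertion: with systemd forced, `docker info` inside the node must report the systemd cgroup driver. A sketch:

driver=$(out/minikube-linux-amd64 -p force-systemd-flag-687459 ssh "docker info --format {{.CgroupDriver}}")
[ "$driver" = "systemd" ] && echo "cgroup driver is systemd" || echo "unexpected cgroup driver: $driver"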

TestForceSystemdEnv (23.67s)

=== RUN   TestForceSystemdEnv
=== PAUSE TestForceSystemdEnv

=== CONT  TestForceSystemdEnv
docker_test.go:155: (dbg) Run:  out/minikube-linux-amd64 start -p force-systemd-env-494723 --memory=2048 --alsologtostderr -v=5 --driver=docker  --container-runtime=docker
docker_test.go:155: (dbg) Done: out/minikube-linux-amd64 start -p force-systemd-env-494723 --memory=2048 --alsologtostderr -v=5 --driver=docker  --container-runtime=docker: (21.301745721s)
docker_test.go:110: (dbg) Run:  out/minikube-linux-amd64 -p force-systemd-env-494723 ssh "docker info --format {{.CgroupDriver}}"
helpers_test.go:175: Cleaning up "force-systemd-env-494723" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p force-systemd-env-494723
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p force-systemd-env-494723: (2.072868308s)
--- PASS: TestForceSystemdEnv (23.67s)

TestKVMDriverInstallOrUpdate (5.58s)

=== RUN   TestKVMDriverInstallOrUpdate
=== PAUSE TestKVMDriverInstallOrUpdate

=== CONT  TestKVMDriverInstallOrUpdate
--- PASS: TestKVMDriverInstallOrUpdate (5.58s)

TestErrorSpam/setup (20.91s)

=== RUN   TestErrorSpam/setup
error_spam_test.go:81: (dbg) Run:  out/minikube-linux-amd64 start -p nospam-995964 -n=1 --memory=2250 --wait=false --log_dir=/tmp/nospam-995964 --driver=docker  --container-runtime=docker
error_spam_test.go:81: (dbg) Done: out/minikube-linux-amd64 start -p nospam-995964 -n=1 --memory=2250 --wait=false --log_dir=/tmp/nospam-995964 --driver=docker  --container-runtime=docker: (20.912618415s)
--- PASS: TestErrorSpam/setup (20.91s)

TestErrorSpam/start (0.55s)

=== RUN   TestErrorSpam/start
error_spam_test.go:216: Cleaning up 1 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-995964 --log_dir /tmp/nospam-995964 start --dry-run
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-995964 --log_dir /tmp/nospam-995964 start --dry-run
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-amd64 -p nospam-995964 --log_dir /tmp/nospam-995964 start --dry-run
--- PASS: TestErrorSpam/start (0.55s)

TestErrorSpam/status (0.84s)

=== RUN   TestErrorSpam/status
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-995964 --log_dir /tmp/nospam-995964 status
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-995964 --log_dir /tmp/nospam-995964 status
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-amd64 -p nospam-995964 --log_dir /tmp/nospam-995964 status
--- PASS: TestErrorSpam/status (0.84s)

TestErrorSpam/pause (1.16s)

=== RUN   TestErrorSpam/pause
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-995964 --log_dir /tmp/nospam-995964 pause
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-995964 --log_dir /tmp/nospam-995964 pause
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-amd64 -p nospam-995964 --log_dir /tmp/nospam-995964 pause
--- PASS: TestErrorSpam/pause (1.16s)

TestErrorSpam/unpause (1.36s)

=== RUN   TestErrorSpam/unpause
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-995964 --log_dir /tmp/nospam-995964 unpause
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-995964 --log_dir /tmp/nospam-995964 unpause
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-amd64 -p nospam-995964 --log_dir /tmp/nospam-995964 unpause
--- PASS: TestErrorSpam/unpause (1.36s)

TestErrorSpam/stop (10.78s)

=== RUN   TestErrorSpam/stop
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-995964 --log_dir /tmp/nospam-995964 stop
error_spam_test.go:159: (dbg) Done: out/minikube-linux-amd64 -p nospam-995964 --log_dir /tmp/nospam-995964 stop: (10.612795476s)
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-995964 --log_dir /tmp/nospam-995964 stop
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-amd64 -p nospam-995964 --log_dir /tmp/nospam-995964 stop
--- PASS: TestErrorSpam/stop (10.78s)
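
Each TestErrorSpam subtest runs the same subcommand repeatedly against a shared --log_dir and then scans the produced logfiles for unexpected warnings or errors (the real test filters against an allowlist of known-benign lines). A rough sketch of one round:

out/minikube-linux-amd64 -p nospam-995964 --log_dir /tmp/nospam-995964 stop
grep -riE "error|warning" /tmp/nospam-995964 && echo "unexpected spam found" || echo "log dir is clean"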

TestFunctional/serial/CopySyncFile (0s)

=== RUN   TestFunctional/serial/CopySyncFile
functional_test.go:1855: local sync path: /home/jenkins/minikube-integration/19662-12004/.minikube/files/etc/test/nested/copy/18778/hosts
--- PASS: TestFunctional/serial/CopySyncFile (0.00s)

TestFunctional/serial/StartWithProxy (30.56s)

=== RUN   TestFunctional/serial/StartWithProxy
functional_test.go:2234: (dbg) Run:  out/minikube-linux-amd64 start -p functional-772451 --memory=4000 --apiserver-port=8441 --wait=all --driver=docker  --container-runtime=docker
functional_test.go:2234: (dbg) Done: out/minikube-linux-amd64 start -p functional-772451 --memory=4000 --apiserver-port=8441 --wait=all --driver=docker  --container-runtime=docker: (30.560079026s)
--- PASS: TestFunctional/serial/StartWithProxy (30.56s)

TestFunctional/serial/AuditLog (0s)

=== RUN   TestFunctional/serial/AuditLog
--- PASS: TestFunctional/serial/AuditLog (0.00s)

TestFunctional/serial/SoftStart (27.92s)

=== RUN   TestFunctional/serial/SoftStart
functional_test.go:659: (dbg) Run:  out/minikube-linux-amd64 start -p functional-772451 --alsologtostderr -v=8
functional_test.go:659: (dbg) Done: out/minikube-linux-amd64 start -p functional-772451 --alsologtostderr -v=8: (27.918671724s)
functional_test.go:663: soft start took 27.919452348s for "functional-772451" cluster.
--- PASS: TestFunctional/serial/SoftStart (27.92s)

TestFunctional/serial/KubeContext (0.05s)

=== RUN   TestFunctional/serial/KubeContext
functional_test.go:681: (dbg) Run:  kubectl config current-context
--- PASS: TestFunctional/serial/KubeContext (0.05s)

TestFunctional/serial/KubectlGetPods (0.07s)

=== RUN   TestFunctional/serial/KubectlGetPods
functional_test.go:696: (dbg) Run:  kubectl --context functional-772451 get po -A
--- PASS: TestFunctional/serial/KubectlGetPods (0.07s)

TestFunctional/serial/CacheCmd/cache/add_remote (2.4s)

=== RUN   TestFunctional/serial/CacheCmd/cache/add_remote
functional_test.go:1049: (dbg) Run:  out/minikube-linux-amd64 -p functional-772451 cache add registry.k8s.io/pause:3.1
functional_test.go:1049: (dbg) Run:  out/minikube-linux-amd64 -p functional-772451 cache add registry.k8s.io/pause:3.3
functional_test.go:1049: (dbg) Run:  out/minikube-linux-amd64 -p functional-772451 cache add registry.k8s.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/add_remote (2.40s)
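
`cache add` pulls the image on the host, saves it under minikube's cache directory, and loads it into the node's container runtime, so later clusters can use it offline. A sketch:

out/minikube-linux-amd64 -p functional-772451 cache add registry.k8s.io/pause:3.1
out/minikube-linux-amd64 cache list   # cached images are tracked host-side, not per-profile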

TestFunctional/serial/CacheCmd/cache/add_local (1.44s)

=== RUN   TestFunctional/serial/CacheCmd/cache/add_local
functional_test.go:1077: (dbg) Run:  docker build -t minikube-local-cache-test:functional-772451 /tmp/TestFunctionalserialCacheCmdcacheadd_local3106974197/001
functional_test.go:1089: (dbg) Run:  out/minikube-linux-amd64 -p functional-772451 cache add minikube-local-cache-test:functional-772451
functional_test.go:1089: (dbg) Done: out/minikube-linux-amd64 -p functional-772451 cache add minikube-local-cache-test:functional-772451: (1.125751492s)
functional_test.go:1094: (dbg) Run:  out/minikube-linux-amd64 -p functional-772451 cache delete minikube-local-cache-test:functional-772451
functional_test.go:1083: (dbg) Run:  docker rmi minikube-local-cache-test:functional-772451
--- PASS: TestFunctional/serial/CacheCmd/cache/add_local (1.44s)

TestFunctional/serial/CacheCmd/cache/CacheDelete (0.04s)

=== RUN   TestFunctional/serial/CacheCmd/cache/CacheDelete
functional_test.go:1102: (dbg) Run:  out/minikube-linux-amd64 cache delete registry.k8s.io/pause:3.3
--- PASS: TestFunctional/serial/CacheCmd/cache/CacheDelete (0.04s)

TestFunctional/serial/CacheCmd/cache/list (0.04s)

=== RUN   TestFunctional/serial/CacheCmd/cache/list
functional_test.go:1110: (dbg) Run:  out/minikube-linux-amd64 cache list
--- PASS: TestFunctional/serial/CacheCmd/cache/list (0.04s)

TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node (0.26s)

=== RUN   TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node
functional_test.go:1124: (dbg) Run:  out/minikube-linux-amd64 -p functional-772451 ssh sudo crictl images
--- PASS: TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node (0.26s)

TestFunctional/serial/CacheCmd/cache/cache_reload (1.25s)

=== RUN   TestFunctional/serial/CacheCmd/cache/cache_reload
functional_test.go:1147: (dbg) Run:  out/minikube-linux-amd64 -p functional-772451 ssh sudo docker rmi registry.k8s.io/pause:latest
functional_test.go:1153: (dbg) Run:  out/minikube-linux-amd64 -p functional-772451 ssh sudo crictl inspecti registry.k8s.io/pause:latest
functional_test.go:1153: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-772451 ssh sudo crictl inspecti registry.k8s.io/pause:latest: exit status 1 (253.364537ms)
-- stdout --
	FATA[0000] no such image "registry.k8s.io/pause:latest" present 
-- /stdout --
** stderr ** 
	ssh: Process exited with status 1
** /stderr **
functional_test.go:1158: (dbg) Run:  out/minikube-linux-amd64 -p functional-772451 cache reload
functional_test.go:1163: (dbg) Run:  out/minikube-linux-amd64 -p functional-772451 ssh sudo crictl inspecti registry.k8s.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/cache_reload (1.25s)
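
The non-zero `crictl inspecti` above is the point of the test: the image is deleted inside the node, confirmed missing, and then `cache reload` restores it from the host-side cache. As a sketch:

out/minikube-linux-amd64 -p functional-772451 ssh sudo docker rmi registry.k8s.io/pause:latest
out/minikube-linux-amd64 -p functional-772451 cache reload
out/minikube-linux-amd64 -p functional-772451 ssh sudo crictl inspecti registry.k8s.io/pause:latest  # succeeds again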

TestFunctional/serial/CacheCmd/cache/delete (0.09s)

=== RUN   TestFunctional/serial/CacheCmd/cache/delete
functional_test.go:1172: (dbg) Run:  out/minikube-linux-amd64 cache delete registry.k8s.io/pause:3.1
functional_test.go:1172: (dbg) Run:  out/minikube-linux-amd64 cache delete registry.k8s.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/delete (0.09s)

TestFunctional/serial/MinikubeKubectlCmd (0.1s)

=== RUN   TestFunctional/serial/MinikubeKubectlCmd
functional_test.go:716: (dbg) Run:  out/minikube-linux-amd64 -p functional-772451 kubectl -- --context functional-772451 get pods
--- PASS: TestFunctional/serial/MinikubeKubectlCmd (0.10s)

TestFunctional/serial/MinikubeKubectlCmdDirectly (0.1s)

=== RUN   TestFunctional/serial/MinikubeKubectlCmdDirectly
functional_test.go:741: (dbg) Run:  out/kubectl --context functional-772451 get pods
--- PASS: TestFunctional/serial/MinikubeKubectlCmdDirectly (0.10s)

TestFunctional/serial/ExtraConfig (38.44s)

=== RUN   TestFunctional/serial/ExtraConfig
functional_test.go:757: (dbg) Run:  out/minikube-linux-amd64 start -p functional-772451 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all
functional_test.go:757: (dbg) Done: out/minikube-linux-amd64 start -p functional-772451 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all: (38.443084352s)
functional_test.go:761: restart took 38.443210095s for "functional-772451" cluster.
--- PASS: TestFunctional/serial/ExtraConfig (38.44s)
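
`--extra-config` is forwarded to the named component as component.key=value; here it enables the NamespaceAutoProvision admission plugin across a restart. The same invocation, split for readability:

out/minikube-linux-amd64 start -p functional-772451 \
  --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision \
  --wait=all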

TestFunctional/serial/ComponentHealth (0.07s)

=== RUN   TestFunctional/serial/ComponentHealth
functional_test.go:810: (dbg) Run:  kubectl --context functional-772451 get po -l tier=control-plane -n kube-system -o=json
functional_test.go:825: etcd phase: Running
functional_test.go:835: etcd status: Ready
functional_test.go:825: kube-apiserver phase: Running
functional_test.go:835: kube-apiserver status: Ready
functional_test.go:825: kube-controller-manager phase: Running
functional_test.go:835: kube-controller-manager status: Ready
functional_test.go:825: kube-scheduler phase: Running
functional_test.go:835: kube-scheduler status: Ready
--- PASS: TestFunctional/serial/ComponentHealth (0.07s)
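
The phase/status pairs above come from walking the control-plane pods' conditions. A sketch of an equivalent one-liner (assumes the kubeadm-standard component label on these pods):

kubectl --context functional-772451 get po -l tier=control-plane -n kube-system \
  -o jsonpath='{range .items[*]}{.metadata.labels.component}{": "}{.status.phase}{"\n"}{end}'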

TestFunctional/serial/LogsCmd (0.92s)

=== RUN   TestFunctional/serial/LogsCmd
functional_test.go:1236: (dbg) Run:  out/minikube-linux-amd64 -p functional-772451 logs
--- PASS: TestFunctional/serial/LogsCmd (0.92s)

TestFunctional/serial/LogsFileCmd (0.94s)

=== RUN   TestFunctional/serial/LogsFileCmd
functional_test.go:1250: (dbg) Run:  out/minikube-linux-amd64 -p functional-772451 logs --file /tmp/TestFunctionalserialLogsFileCmd4200193397/001/logs.txt
--- PASS: TestFunctional/serial/LogsFileCmd (0.94s)

TestFunctional/serial/InvalidService (4.51s)

=== RUN   TestFunctional/serial/InvalidService
functional_test.go:2321: (dbg) Run:  kubectl --context functional-772451 apply -f testdata/invalidsvc.yaml
functional_test.go:2335: (dbg) Run:  out/minikube-linux-amd64 service invalid-svc -p functional-772451
functional_test.go:2335: (dbg) Non-zero exit: out/minikube-linux-amd64 service invalid-svc -p functional-772451: exit status 115 (317.546182ms)
-- stdout --
	|-----------|-------------|-------------|---------------------------|
	| NAMESPACE |    NAME     | TARGET PORT |            URL            |
	|-----------|-------------|-------------|---------------------------|
	| default   | invalid-svc |          80 | http://192.168.49.2:32251 |
	|-----------|-------------|-------------|---------------------------|
	
	
-- /stdout --
** stderr ** 
	X Exiting due to SVC_UNREACHABLE: service not available: no running pod for service invalid-svc found
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_service_96b204199e3191fa1740d4430b018a3c8028d52d_0.log                 │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
** /stderr **
functional_test.go:2327: (dbg) Run:  kubectl --context functional-772451 delete -f testdata/invalidsvc.yaml
functional_test.go:2327: (dbg) Done: kubectl --context functional-772451 delete -f testdata/invalidsvc.yaml: (1.019550402s)
--- PASS: TestFunctional/serial/InvalidService (4.51s)
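
Exit status 115 (SVC_UNREACHABLE) is the expected outcome: the Service object exists, so the URL table is printed, but it selects no running pod. A sketch of the scenario:

kubectl --context functional-772451 apply -f testdata/invalidsvc.yaml
out/minikube-linux-amd64 service invalid-svc -p functional-772451   # exits 115: no running pod for service
kubectl --context functional-772451 delete -f testdata/invalidsvc.yaml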

TestFunctional/parallel/ConfigCmd (0.34s)

=== RUN   TestFunctional/parallel/ConfigCmd
=== PAUSE TestFunctional/parallel/ConfigCmd

=== CONT  TestFunctional/parallel/ConfigCmd
functional_test.go:1199: (dbg) Run:  out/minikube-linux-amd64 -p functional-772451 config unset cpus
functional_test.go:1199: (dbg) Run:  out/minikube-linux-amd64 -p functional-772451 config get cpus
functional_test.go:1199: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-772451 config get cpus: exit status 14 (71.633404ms)
** stderr ** 
	Error: specified key could not be found in config
** /stderr **
functional_test.go:1199: (dbg) Run:  out/minikube-linux-amd64 -p functional-772451 config set cpus 2
functional_test.go:1199: (dbg) Run:  out/minikube-linux-amd64 -p functional-772451 config get cpus
functional_test.go:1199: (dbg) Run:  out/minikube-linux-amd64 -p functional-772451 config unset cpus
functional_test.go:1199: (dbg) Run:  out/minikube-linux-amd64 -p functional-772451 config get cpus
functional_test.go:1199: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-772451 config get cpus: exit status 14 (46.098159ms)
** stderr ** 
	Error: specified key could not be found in config
** /stderr **
--- PASS: TestFunctional/parallel/ConfigCmd (0.34s)
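
Exit status 14 is the "key not found in config" code seen in the stderr above; the test cycles unset/get/set/get/unset/get to exercise both outcomes. A sketch:

out/minikube-linux-amd64 -p functional-772451 config set cpus 2
out/minikube-linux-amd64 -p functional-772451 config get cpus     # prints 2
out/minikube-linux-amd64 -p functional-772451 config unset cpus
out/minikube-linux-amd64 -p functional-772451 config get cpus     # exit 14: key not found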

TestFunctional/parallel/DashboardCmd (11.57s)

=== RUN   TestFunctional/parallel/DashboardCmd
=== PAUSE TestFunctional/parallel/DashboardCmd

=== CONT  TestFunctional/parallel/DashboardCmd
functional_test.go:905: (dbg) daemon: [out/minikube-linux-amd64 dashboard --url --port 36195 -p functional-772451 --alsologtostderr -v=1]
functional_test.go:910: (dbg) stopping [out/minikube-linux-amd64 dashboard --url --port 36195 -p functional-772451 --alsologtostderr -v=1] ...
helpers_test.go:508: unable to kill pid 69953: os: process already finished
--- PASS: TestFunctional/parallel/DashboardCmd (11.57s)

TestFunctional/parallel/DryRun (0.44s)

=== RUN   TestFunctional/parallel/DryRun
=== PAUSE TestFunctional/parallel/DryRun

=== CONT  TestFunctional/parallel/DryRun
functional_test.go:974: (dbg) Run:  out/minikube-linux-amd64 start -p functional-772451 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=docker
functional_test.go:974: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p functional-772451 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=docker: exit status 23 (164.100086ms)
-- stdout --
	* [functional-772451] minikube v1.34.0 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=19662
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/19662-12004/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/19662-12004/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the docker driver based on existing profile
	
	
-- /stdout --
** stderr ** 
	I0917 17:12:07.701723   68632 out.go:345] Setting OutFile to fd 1 ...
	I0917 17:12:07.701978   68632 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0917 17:12:07.701989   68632 out.go:358] Setting ErrFile to fd 2...
	I0917 17:12:07.701996   68632 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0917 17:12:07.702351   68632 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19662-12004/.minikube/bin
	I0917 17:12:07.702946   68632 out.go:352] Setting JSON to false
	I0917 17:12:07.704111   68632 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-10","uptime":3269,"bootTime":1726589859,"procs":358,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1069-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0917 17:12:07.704207   68632 start.go:139] virtualization: kvm guest
	I0917 17:12:07.706083   68632 out.go:177] * [functional-772451] minikube v1.34.0 on Ubuntu 20.04 (kvm/amd64)
	I0917 17:12:07.707552   68632 out.go:177]   - MINIKUBE_LOCATION=19662
	I0917 17:12:07.707639   68632 notify.go:220] Checking for updates...
	I0917 17:12:07.710090   68632 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0917 17:12:07.711436   68632 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19662-12004/kubeconfig
	I0917 17:12:07.712717   68632 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19662-12004/.minikube
	I0917 17:12:07.713961   68632 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0917 17:12:07.715147   68632 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0917 17:12:07.716736   68632 config.go:182] Loaded profile config "functional-772451": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I0917 17:12:07.717183   68632 driver.go:394] Setting default libvirt URI to qemu:///system
	I0917 17:12:07.742758   68632 docker.go:123] docker version: linux-27.2.1:Docker Engine - Community
	I0917 17:12:07.742834   68632 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0917 17:12:07.802157   68632 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:2 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:33 OomKillDisable:true NGoroutines:53 SystemTime:2024-09-17 17:12:07.790564443 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1069-gcp OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:x86
_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33647943680 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-10 Labels:[] ExperimentalBuild:false ServerVersion:27.2.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:7f7fdf5fed64eb6a7caf99b3e12efcf9d60e311c Expected:7f7fdf5fed64eb6a7caf99b3e12efcf9d60e311c} RuncCommit:{ID:v1.1.14-0-g2c9f560 Expected:v1.1.14-0-g2c9f560} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerEr
rors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.16.2] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.29.2] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I0917 17:12:07.802253   68632 docker.go:318] overlay module found
	I0917 17:12:07.804603   68632 out.go:177] * Using the docker driver based on existing profile
	I0917 17:12:07.806251   68632 start.go:297] selected driver: docker
	I0917 17:12:07.806269   68632 start.go:901] validating driver "docker" against &{Name:functional-772451 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726589491-19662@sha256:6370b9fec173944088c2d87d44b01819c0ec611a83d9e2f38d36352dff8121a4 Memory:4000 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:functional-772451 Namespace:default APIServerHAVIP: APIServerNa
me:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker Mou
ntIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0917 17:12:07.806369   68632 start.go:912] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0917 17:12:07.808706   68632 out.go:201] 
	W0917 17:12:07.810515   68632 out.go:270] X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	I0917 17:12:07.811872   68632 out.go:201] 
** /stderr **
functional_test.go:991: (dbg) Run:  out/minikube-linux-amd64 start -p functional-772451 --dry-run --alsologtostderr -v=1 --driver=docker  --container-runtime=docker
--- PASS: TestFunctional/parallel/DryRun (0.44s)
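
`--dry-run` runs the full validation path without creating or mutating anything, which is why the undersized request fails fast with RSRC_INSUFFICIENT_REQ_MEMORY (exit 23) while a sane dry run succeeds. As a sketch:

out/minikube-linux-amd64 start -p functional-772451 --dry-run --memory 250MB --driver=docker --container-runtime=docker   # exit 23
out/minikube-linux-amd64 start -p functional-772451 --dry-run --driver=docker --container-runtime=docker                  # validates cleanly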

TestFunctional/parallel/InternationalLanguage (0.24s)

=== RUN   TestFunctional/parallel/InternationalLanguage
=== PAUSE TestFunctional/parallel/InternationalLanguage

=== CONT  TestFunctional/parallel/InternationalLanguage
functional_test.go:1020: (dbg) Run:  out/minikube-linux-amd64 start -p functional-772451 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=docker
functional_test.go:1020: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p functional-772451 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=docker: exit status 23 (236.324599ms)
-- stdout --
	* [functional-772451] minikube v1.34.0 sur Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=19662
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/19662-12004/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/19662-12004/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Utilisation du pilote docker basé sur le profil existant
	
	
-- /stdout --
** stderr ** 
	I0917 17:12:07.496606   68427 out.go:345] Setting OutFile to fd 1 ...
	I0917 17:12:07.496776   68427 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0917 17:12:07.496790   68427 out.go:358] Setting ErrFile to fd 2...
	I0917 17:12:07.496803   68427 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0917 17:12:07.497301   68427 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19662-12004/.minikube/bin
	I0917 17:12:07.498076   68427 out.go:352] Setting JSON to false
	I0917 17:12:07.499707   68427 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-10","uptime":3268,"bootTime":1726589859,"procs":358,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1069-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0917 17:12:07.499836   68427 start.go:139] virtualization: kvm guest
	I0917 17:12:07.502900   68427 out.go:177] * [functional-772451] minikube v1.34.0 sur Ubuntu 20.04 (kvm/amd64)
	I0917 17:12:07.504458   68427 out.go:177]   - MINIKUBE_LOCATION=19662
	I0917 17:12:07.504458   68427 notify.go:220] Checking for updates...
	I0917 17:12:07.507681   68427 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0917 17:12:07.509257   68427 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19662-12004/kubeconfig
	I0917 17:12:07.516377   68427 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19662-12004/.minikube
	I0917 17:12:07.518029   68427 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0917 17:12:07.519647   68427 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0917 17:12:07.522118   68427 config.go:182] Loaded profile config "functional-772451": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I0917 17:12:07.523370   68427 driver.go:394] Setting default libvirt URI to qemu:///system
	I0917 17:12:07.560971   68427 docker.go:123] docker version: linux-27.2.1:Docker Engine - Community
	I0917 17:12:07.561071   68427 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0917 17:12:07.637869   68427 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:2 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:33 OomKillDisable:true NGoroutines:53 SystemTime:2024-09-17 17:12:07.623774671 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1069-gcp OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:x86
_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33647943680 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-10 Labels:[] ExperimentalBuild:false ServerVersion:27.2.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:7f7fdf5fed64eb6a7caf99b3e12efcf9d60e311c Expected:7f7fdf5fed64eb6a7caf99b3e12efcf9d60e311c} RuncCommit:{ID:v1.1.14-0-g2c9f560 Expected:v1.1.14-0-g2c9f560} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerEr
rors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.16.2] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.29.2] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I0917 17:12:07.638003   68427 docker.go:318] overlay module found
	I0917 17:12:07.641052   68427 out.go:177] * Utilisation du pilote docker basé sur le profil existant
	I0917 17:12:07.642635   68427 start.go:297] selected driver: docker
	I0917 17:12:07.642664   68427 start.go:901] validating driver "docker" against &{Name:functional-772451 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726589491-19662@sha256:6370b9fec173944088c2d87d44b01819c0ec611a83d9e2f38d36352dff8121a4 Memory:4000 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:functional-772451 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0917 17:12:07.642786   68427 start.go:912] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0917 17:12:07.645371   68427 out.go:201] 
	W0917 17:12:07.646662   68427 out.go:270] X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	I0917 17:12:07.648008   68427 out.go:201] 
** /stderr **
--- PASS: TestFunctional/parallel/InternationalLanguage (0.24s)

TestFunctional/parallel/StatusCmd (1.15s)

=== RUN   TestFunctional/parallel/StatusCmd
=== PAUSE TestFunctional/parallel/StatusCmd
=== CONT  TestFunctional/parallel/StatusCmd
functional_test.go:854: (dbg) Run:  out/minikube-linux-amd64 -p functional-772451 status
functional_test.go:860: (dbg) Run:  out/minikube-linux-amd64 -p functional-772451 status -f host:{{.Host}},kublet:{{.Kubelet}},apiserver:{{.APIServer}},kubeconfig:{{.Kubeconfig}}
functional_test.go:872: (dbg) Run:  out/minikube-linux-amd64 -p functional-772451 status -o json
--- PASS: TestFunctional/parallel/StatusCmd (1.15s)
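
The -f argument above is a plain Go text/template rendered against minikube's status struct (the "kublet" spelling is verbatim from the test's format string). A minimal sketch of the mechanism; the Status type here is a hypothetical stand-in mirroring the field names the template references, not minikube's real type:

package main

import (
	"os"
	"text/template"
)

// Status is a stand-in with the fields the -f template dereferences.
type Status struct {
	Host, Kubelet, APIServer, Kubeconfig string
}

func main() {
	// Same template string the test passes via -f ("kublet" kept verbatim).
	tmpl := template.Must(template.New("status").Parse(
		"host:{{.Host}},kublet:{{.Kubelet}},apiserver:{{.APIServer}},kubeconfig:{{.Kubeconfig}}\n"))
	st := Status{Host: "Running", Kubelet: "Running", APIServer: "Running", Kubeconfig: "Configured"}
	if err := tmpl.Execute(os.Stdout, st); err != nil {
		panic(err)
	}
}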

TestFunctional/parallel/ServiceCmdConnect (8.77s)

=== RUN   TestFunctional/parallel/ServiceCmdConnect
=== PAUSE TestFunctional/parallel/ServiceCmdConnect
=== CONT  TestFunctional/parallel/ServiceCmdConnect
functional_test.go:1629: (dbg) Run:  kubectl --context functional-772451 create deployment hello-node-connect --image=registry.k8s.io/echoserver:1.8
functional_test.go:1635: (dbg) Run:  kubectl --context functional-772451 expose deployment hello-node-connect --type=NodePort --port=8080
2024/09/17 17:12:19 [DEBUG] GET http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/
functional_test.go:1640: (dbg) TestFunctional/parallel/ServiceCmdConnect: waiting 10m0s for pods matching "app=hello-node-connect" in namespace "default" ...
helpers_test.go:344: "hello-node-connect-67bdd5bbb4-l62gr" [d16a2176-fd2a-4834-872b-0f51ddbd41a3] Pending / Ready:ContainersNotReady (containers with unready status: [echoserver]) / ContainersReady:ContainersNotReady (containers with unready status: [echoserver])
helpers_test.go:344: "hello-node-connect-67bdd5bbb4-l62gr" [d16a2176-fd2a-4834-872b-0f51ddbd41a3] Running
functional_test.go:1640: (dbg) TestFunctional/parallel/ServiceCmdConnect: app=hello-node-connect healthy within 8.003465068s
functional_test.go:1649: (dbg) Run:  out/minikube-linux-amd64 -p functional-772451 service hello-node-connect --url
functional_test.go:1655: found endpoint for hello-node-connect: http://192.168.49.2:31926
functional_test.go:1675: http://192.168.49.2:31926: success! body:

Hostname: hello-node-connect-67bdd5bbb4-l62gr

Pod Information:
	-no pod information available-

Server values:
	server_version=nginx: 1.13.3 - lua: 10008

Request Information:
	client_address=10.244.0.1
	method=GET
	real path=/
	query=
	request_version=1.1
	request_uri=http://192.168.49.2:8080/

Request Headers:
	accept-encoding=gzip
	host=192.168.49.2:31926
	user-agent=Go-http-client/1.1

Request Body:
	-no body in request-

--- PASS: TestFunctional/parallel/ServiceCmdConnect (8.77s)
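
The check above amounts to an HTTP GET against the NodePort URL printed by `minikube service ... --url`. A minimal Go sketch of that probe; the URL is the one from this run and will differ per cluster:

package main

import (
	"fmt"
	"io"
	"net/http"
)

func main() {
	// NodePort endpoint reported by `service hello-node-connect --url` above.
	resp, err := http.Get("http://192.168.49.2:31926/")
	if err != nil {
		panic(err)
	}
	defer resp.Body.Close()
	body, err := io.ReadAll(resp.Body)
	if err != nil {
		panic(err)
	}
	fmt.Printf("status=%s\n%s", resp.Status, body)
}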

TestFunctional/parallel/AddonsCmd (0.12s)

=== RUN   TestFunctional/parallel/AddonsCmd
=== PAUSE TestFunctional/parallel/AddonsCmd
=== CONT  TestFunctional/parallel/AddonsCmd
functional_test.go:1690: (dbg) Run:  out/minikube-linux-amd64 -p functional-772451 addons list
functional_test.go:1702: (dbg) Run:  out/minikube-linux-amd64 -p functional-772451 addons list -o json
--- PASS: TestFunctional/parallel/AddonsCmd (0.12s)

TestFunctional/parallel/PersistentVolumeClaim (36.48s)

=== RUN   TestFunctional/parallel/PersistentVolumeClaim
=== PAUSE TestFunctional/parallel/PersistentVolumeClaim
=== CONT  TestFunctional/parallel/PersistentVolumeClaim
functional_test_pvc_test.go:44: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 4m0s for pods matching "integration-test=storage-provisioner" in namespace "kube-system" ...
helpers_test.go:344: "storage-provisioner" [e02852dd-ff90-4689-bbd1-d29eefd29967] Running
functional_test_pvc_test.go:44: (dbg) TestFunctional/parallel/PersistentVolumeClaim: integration-test=storage-provisioner healthy within 5.024182643s
functional_test_pvc_test.go:49: (dbg) Run:  kubectl --context functional-772451 get storageclass -o=json
functional_test_pvc_test.go:69: (dbg) Run:  kubectl --context functional-772451 apply -f testdata/storage-provisioner/pvc.yaml
functional_test_pvc_test.go:76: (dbg) Run:  kubectl --context functional-772451 get pvc myclaim -o=json
functional_test_pvc_test.go:125: (dbg) Run:  kubectl --context functional-772451 apply -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 3m0s for pods matching "test=storage-provisioner" in namespace "default" ...
helpers_test.go:344: "sp-pod" [e699be91-74e7-46a4-9454-6f8e36941ad5] Pending
helpers_test.go:344: "sp-pod" [e699be91-74e7-46a4-9454-6f8e36941ad5] Pending / Ready:ContainersNotReady (containers with unready status: [myfrontend]) / ContainersReady:ContainersNotReady (containers with unready status: [myfrontend])
helpers_test.go:344: "sp-pod" [e699be91-74e7-46a4-9454-6f8e36941ad5] Running
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: test=storage-provisioner healthy within 15.003782529s
functional_test_pvc_test.go:100: (dbg) Run:  kubectl --context functional-772451 exec sp-pod -- touch /tmp/mount/foo
functional_test_pvc_test.go:106: (dbg) Run:  kubectl --context functional-772451 delete -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:106: (dbg) Done: kubectl --context functional-772451 delete -f testdata/storage-provisioner/pod.yaml: (1.566501212s)
functional_test_pvc_test.go:125: (dbg) Run:  kubectl --context functional-772451 apply -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 3m0s for pods matching "test=storage-provisioner" in namespace "default" ...
helpers_test.go:344: "sp-pod" [aa7ba55a-c115-4dac-8f6a-e98087d6655a] Pending
helpers_test.go:344: "sp-pod" [aa7ba55a-c115-4dac-8f6a-e98087d6655a] Pending / Ready:ContainersNotReady (containers with unready status: [myfrontend]) / ContainersReady:ContainersNotReady (containers with unready status: [myfrontend])
helpers_test.go:344: "sp-pod" [aa7ba55a-c115-4dac-8f6a-e98087d6655a] Running
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: test=storage-provisioner healthy within 14.003296873s
functional_test_pvc_test.go:114: (dbg) Run:  kubectl --context functional-772451 exec sp-pod -- ls /tmp/mount
--- PASS: TestFunctional/parallel/PersistentVolumeClaim (36.48s)
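
The sequence above is the whole persistence argument: a file written through the first sp-pod must still exist in a second pod mounting the same PVC. A hedged sketch of those steps driven through kubectl from Go (context and names taken from this run; error handling reduced to printing):

package main

import (
	"fmt"
	"os/exec"
)

// run shells out to kubectl against this run's context and echoes the result.
func run(args ...string) {
	cmd := exec.Command("kubectl", append([]string{"--context", "functional-772451"}, args...)...)
	out, err := cmd.CombinedOutput()
	fmt.Printf("$ kubectl %v\n%s", args, out)
	if err != nil {
		fmt.Println("error:", err)
	}
}

func main() {
	run("exec", "sp-pod", "--", "touch", "/tmp/mount/foo") // write via pod 1
	run("delete", "-f", "testdata/storage-provisioner/pod.yaml")
	run("apply", "-f", "testdata/storage-provisioner/pod.yaml")
	// (the test waits for the recreated sp-pod to be Running before this check)
	run("exec", "sp-pod", "--", "ls", "/tmp/mount") // foo must still be there
}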

TestFunctional/parallel/SSHCmd (0.51s)

=== RUN   TestFunctional/parallel/SSHCmd
=== PAUSE TestFunctional/parallel/SSHCmd
=== CONT  TestFunctional/parallel/SSHCmd
functional_test.go:1725: (dbg) Run:  out/minikube-linux-amd64 -p functional-772451 ssh "echo hello"
functional_test.go:1742: (dbg) Run:  out/minikube-linux-amd64 -p functional-772451 ssh "cat /etc/hostname"
--- PASS: TestFunctional/parallel/SSHCmd (0.51s)

TestFunctional/parallel/CpCmd (1.92s)

=== RUN   TestFunctional/parallel/CpCmd
=== PAUSE TestFunctional/parallel/CpCmd
=== CONT  TestFunctional/parallel/CpCmd
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p functional-772451 cp testdata/cp-test.txt /home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p functional-772451 ssh -n functional-772451 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p functional-772451 cp functional-772451:/home/docker/cp-test.txt /tmp/TestFunctionalparallelCpCmd1891448504/001/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p functional-772451 ssh -n functional-772451 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p functional-772451 cp testdata/cp-test.txt /tmp/does/not/exist/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p functional-772451 ssh -n functional-772451 "sudo cat /tmp/does/not/exist/cp-test.txt"
--- PASS: TestFunctional/parallel/CpCmd (1.92s)

TestFunctional/parallel/MySQL (23.52s)

=== RUN   TestFunctional/parallel/MySQL
=== PAUSE TestFunctional/parallel/MySQL
=== CONT  TestFunctional/parallel/MySQL
functional_test.go:1793: (dbg) Run:  kubectl --context functional-772451 replace --force -f testdata/mysql.yaml
functional_test.go:1799: (dbg) TestFunctional/parallel/MySQL: waiting 10m0s for pods matching "app=mysql" in namespace "default" ...
helpers_test.go:344: "mysql-6cdb49bbb-4x9rf" [88651bd6-cc23-44c1-9beb-db02e11b10a4] Pending / Ready:ContainersNotReady (containers with unready status: [mysql]) / ContainersReady:ContainersNotReady (containers with unready status: [mysql])
helpers_test.go:344: "mysql-6cdb49bbb-4x9rf" [88651bd6-cc23-44c1-9beb-db02e11b10a4] Running
functional_test.go:1799: (dbg) TestFunctional/parallel/MySQL: app=mysql healthy within 20.004170486s
functional_test.go:1807: (dbg) Run:  kubectl --context functional-772451 exec mysql-6cdb49bbb-4x9rf -- mysql -ppassword -e "show databases;"
functional_test.go:1807: (dbg) Non-zero exit: kubectl --context functional-772451 exec mysql-6cdb49bbb-4x9rf -- mysql -ppassword -e "show databases;": exit status 1 (127.151902ms)
** stderr ** 
	mysql: [Warning] Using a password on the command line interface can be insecure.
	ERROR 1045 (28000): Access denied for user 'root'@'localhost' (using password: YES)
	command terminated with exit code 1
** /stderr **
functional_test.go:1807: (dbg) Run:  kubectl --context functional-772451 exec mysql-6cdb49bbb-4x9rf -- mysql -ppassword -e "show databases;"
functional_test.go:1807: (dbg) Non-zero exit: kubectl --context functional-772451 exec mysql-6cdb49bbb-4x9rf -- mysql -ppassword -e "show databases;": exit status 1 (109.967887ms)
** stderr ** 
	mysql: [Warning] Using a password on the command line interface can be insecure.
	ERROR 2002 (HY000): Can't connect to local MySQL server through socket '/var/run/mysqld/mysqld.sock' (2)
	command terminated with exit code 1
** /stderr **
functional_test.go:1807: (dbg) Run:  kubectl --context functional-772451 exec mysql-6cdb49bbb-4x9rf -- mysql -ppassword -e "show databases;"
--- PASS: TestFunctional/parallel/MySQL (23.52s)
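
The two non-zero exits above are the normal startup window: ERROR 1045 and ERROR 2002 appear while mysqld is still initializing inside the pod, so the probe is simply retried until "show databases;" succeeds. A minimal retry sketch (not the test's own code; pod name and context are from this run):

package main

import (
	"fmt"
	"os/exec"
	"time"
)

func main() {
	for attempt := 1; attempt <= 10; attempt++ {
		out, err := exec.Command("kubectl", "--context", "functional-772451",
			"exec", "mysql-6cdb49bbb-4x9rf", "--",
			"mysql", "-ppassword", "-e", "show databases;").CombinedOutput()
		if err == nil {
			fmt.Printf("ready after %d attempt(s):\n%s", attempt, out)
			return
		}
		// ERROR 1045 / ERROR 2002 land here while mysqld is still starting.
		fmt.Printf("attempt %d: %v\n", attempt, err)
		time.Sleep(2 * time.Second)
	}
	fmt.Println("gave up: mysql never became ready")
}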

TestFunctional/parallel/FileSync (0.31s)

=== RUN   TestFunctional/parallel/FileSync
=== PAUSE TestFunctional/parallel/FileSync
=== CONT  TestFunctional/parallel/FileSync
functional_test.go:1929: Checking for existence of /etc/test/nested/copy/18778/hosts within VM
functional_test.go:1931: (dbg) Run:  out/minikube-linux-amd64 -p functional-772451 ssh "sudo cat /etc/test/nested/copy/18778/hosts"
functional_test.go:1936: file sync test content: Test file for checking file sync process
--- PASS: TestFunctional/parallel/FileSync (0.31s)

TestFunctional/parallel/CertSync (1.87s)

=== RUN   TestFunctional/parallel/CertSync
=== PAUSE TestFunctional/parallel/CertSync
=== CONT  TestFunctional/parallel/CertSync
functional_test.go:1972: Checking for existence of /etc/ssl/certs/18778.pem within VM
functional_test.go:1973: (dbg) Run:  out/minikube-linux-amd64 -p functional-772451 ssh "sudo cat /etc/ssl/certs/18778.pem"
functional_test.go:1972: Checking for existence of /usr/share/ca-certificates/18778.pem within VM
functional_test.go:1973: (dbg) Run:  out/minikube-linux-amd64 -p functional-772451 ssh "sudo cat /usr/share/ca-certificates/18778.pem"
functional_test.go:1972: Checking for existence of /etc/ssl/certs/51391683.0 within VM
functional_test.go:1973: (dbg) Run:  out/minikube-linux-amd64 -p functional-772451 ssh "sudo cat /etc/ssl/certs/51391683.0"
functional_test.go:1999: Checking for existence of /etc/ssl/certs/187782.pem within VM
functional_test.go:2000: (dbg) Run:  out/minikube-linux-amd64 -p functional-772451 ssh "sudo cat /etc/ssl/certs/187782.pem"
functional_test.go:1999: Checking for existence of /usr/share/ca-certificates/187782.pem within VM
functional_test.go:2000: (dbg) Run:  out/minikube-linux-amd64 -p functional-772451 ssh "sudo cat /usr/share/ca-certificates/187782.pem"
functional_test.go:1999: Checking for existence of /etc/ssl/certs/3ec20f2e.0 within VM
functional_test.go:2000: (dbg) Run:  out/minikube-linux-amd64 -p functional-772451 ssh "sudo cat /etc/ssl/certs/3ec20f2e.0"
--- PASS: TestFunctional/parallel/CertSync (1.87s)

TestFunctional/parallel/NodeLabels (0.07s)

=== RUN   TestFunctional/parallel/NodeLabels
=== PAUSE TestFunctional/parallel/NodeLabels
=== CONT  TestFunctional/parallel/NodeLabels
functional_test.go:219: (dbg) Run:  kubectl --context functional-772451 get nodes --output=go-template "--template='{{range $k, $v := (index .items 0).metadata.labels}}{{$k}} {{end}}'"
--- PASS: TestFunctional/parallel/NodeLabels (0.07s)
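
The --template argument is again a Go text/template, here ranging over the first node's label map. The same range clause run over a plain map shows what the test's output looks like; the labels below are a hypothetical stand-in for (index .items 0).metadata.labels, and a real node carries the full kubernetes.io set:

package main

import (
	"os"
	"text/template"
)

func main() {
	labels := map[string]string{
		"kubernetes.io/hostname": "functional-772451",
		"kubernetes.io/os":       "linux",
	}
	// Map keys of string type are iterated in sorted order by text/template.
	t := template.Must(template.New("labels").Parse(
		"'{{range $k, $v := .}}{{$k}} {{end}}'"))
	if err := t.Execute(os.Stdout, labels); err != nil {
		panic(err)
	}
}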

TestFunctional/parallel/NonActiveRuntimeDisabled (0.25s)

=== RUN   TestFunctional/parallel/NonActiveRuntimeDisabled
=== PAUSE TestFunctional/parallel/NonActiveRuntimeDisabled
=== CONT  TestFunctional/parallel/NonActiveRuntimeDisabled
functional_test.go:2027: (dbg) Run:  out/minikube-linux-amd64 -p functional-772451 ssh "sudo systemctl is-active crio"
functional_test.go:2027: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-772451 ssh "sudo systemctl is-active crio": exit status 1 (250.484511ms)
-- stdout --
	inactive

-- /stdout --
** stderr ** 
	ssh: Process exited with status 3

** /stderr **
--- PASS: TestFunctional/parallel/NonActiveRuntimeDisabled (0.25s)
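
The non-zero exit above is the expected result: `systemctl is-active` prints "inactive" and exits non-zero (status 3 here) for a stopped unit, so with docker as the active runtime, crio must report exactly this. A small sketch of the check's logic, assuming standard systemd exit-code behavior:

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

func main() {
	out, err := exec.Command("systemctl", "is-active", "crio").CombinedOutput()
	state := strings.TrimSpace(string(out))
	// A stopped unit prints "inactive" and exits non-zero.
	if err != nil && state == "inactive" {
		fmt.Println("ok: crio is disabled, as expected with the docker runtime")
		return
	}
	fmt.Printf("unexpected: state=%q err=%v\n", state, err)
}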

TestFunctional/parallel/License (0.73s)

=== RUN   TestFunctional/parallel/License
=== PAUSE TestFunctional/parallel/License
=== CONT  TestFunctional/parallel/License
functional_test.go:2288: (dbg) Run:  out/minikube-linux-amd64 license
--- PASS: TestFunctional/parallel/License (0.73s)

TestFunctional/parallel/ServiceCmd/DeployApp (10.19s)

=== RUN   TestFunctional/parallel/ServiceCmd/DeployApp
functional_test.go:1439: (dbg) Run:  kubectl --context functional-772451 create deployment hello-node --image=registry.k8s.io/echoserver:1.8
functional_test.go:1445: (dbg) Run:  kubectl --context functional-772451 expose deployment hello-node --type=NodePort --port=8080
functional_test.go:1450: (dbg) TestFunctional/parallel/ServiceCmd/DeployApp: waiting 10m0s for pods matching "app=hello-node" in namespace "default" ...
helpers_test.go:344: "hello-node-6b9f76b5c7-ztvtv" [db432674-e98b-4fd4-9091-afcdc98ab802] Pending / Ready:ContainersNotReady (containers with unready status: [echoserver]) / ContainersReady:ContainersNotReady (containers with unready status: [echoserver])
helpers_test.go:344: "hello-node-6b9f76b5c7-ztvtv" [db432674-e98b-4fd4-9091-afcdc98ab802] Running
functional_test.go:1450: (dbg) TestFunctional/parallel/ServiceCmd/DeployApp: app=hello-node healthy within 10.004278002s
--- PASS: TestFunctional/parallel/ServiceCmd/DeployApp (10.19s)

TestFunctional/parallel/ProfileCmd/profile_not_create (0.39s)

=== RUN   TestFunctional/parallel/ProfileCmd/profile_not_create
functional_test.go:1270: (dbg) Run:  out/minikube-linux-amd64 profile lis
functional_test.go:1275: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestFunctional/parallel/ProfileCmd/profile_not_create (0.39s)

TestFunctional/parallel/ProfileCmd/profile_list (0.5s)

=== RUN   TestFunctional/parallel/ProfileCmd/profile_list
functional_test.go:1310: (dbg) Run:  out/minikube-linux-amd64 profile list
functional_test.go:1315: Took "436.643181ms" to run "out/minikube-linux-amd64 profile list"
functional_test.go:1324: (dbg) Run:  out/minikube-linux-amd64 profile list -l
functional_test.go:1329: Took "58.712299ms" to run "out/minikube-linux-amd64 profile list -l"
--- PASS: TestFunctional/parallel/ProfileCmd/profile_list (0.50s)

TestFunctional/parallel/MountCmd/any-port (9.36s)

=== RUN   TestFunctional/parallel/MountCmd/any-port
functional_test_mount_test.go:73: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-772451 /tmp/TestFunctionalparallelMountCmdany-port1774930597/001:/mount-9p --alsologtostderr -v=1]
functional_test_mount_test.go:107: wrote "test-1726593126232406237" to /tmp/TestFunctionalparallelMountCmdany-port1774930597/001/created-by-test
functional_test_mount_test.go:107: wrote "test-1726593126232406237" to /tmp/TestFunctionalparallelMountCmdany-port1774930597/001/created-by-test-removed-by-pod
functional_test_mount_test.go:107: wrote "test-1726593126232406237" to /tmp/TestFunctionalparallelMountCmdany-port1774930597/001/test-1726593126232406237
functional_test_mount_test.go:115: (dbg) Run:  out/minikube-linux-amd64 -p functional-772451 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:115: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-772451 ssh "findmnt -T /mount-9p | grep 9p": exit status 1 (427.331932ms)

** stderr ** 
	ssh: Process exited with status 1

** /stderr **
functional_test_mount_test.go:115: (dbg) Run:  out/minikube-linux-amd64 -p functional-772451 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:129: (dbg) Run:  out/minikube-linux-amd64 -p functional-772451 ssh -- ls -la /mount-9p
functional_test_mount_test.go:133: guest mount directory contents
total 2
-rw-r--r-- 1 docker docker 24 Sep 17 17:12 created-by-test
-rw-r--r-- 1 docker docker 24 Sep 17 17:12 created-by-test-removed-by-pod
-rw-r--r-- 1 docker docker 24 Sep 17 17:12 test-1726593126232406237
functional_test_mount_test.go:137: (dbg) Run:  out/minikube-linux-amd64 -p functional-772451 ssh cat /mount-9p/test-1726593126232406237
functional_test_mount_test.go:148: (dbg) Run:  kubectl --context functional-772451 replace --force -f testdata/busybox-mount-test.yaml
functional_test_mount_test.go:153: (dbg) TestFunctional/parallel/MountCmd/any-port: waiting 4m0s for pods matching "integration-test=busybox-mount" in namespace "default" ...
helpers_test.go:344: "busybox-mount" [f3e88cb1-f360-4b0d-8064-56e886aa09d5] Pending
helpers_test.go:344: "busybox-mount" [f3e88cb1-f360-4b0d-8064-56e886aa09d5] Pending / Ready:ContainersNotReady (containers with unready status: [mount-munger]) / ContainersReady:ContainersNotReady (containers with unready status: [mount-munger])
helpers_test.go:344: "busybox-mount" [f3e88cb1-f360-4b0d-8064-56e886aa09d5] Pending / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
helpers_test.go:344: "busybox-mount" [f3e88cb1-f360-4b0d-8064-56e886aa09d5] Succeeded / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
functional_test_mount_test.go:153: (dbg) TestFunctional/parallel/MountCmd/any-port: integration-test=busybox-mount healthy within 6.003203581s
functional_test_mount_test.go:169: (dbg) Run:  kubectl --context functional-772451 logs busybox-mount
functional_test_mount_test.go:181: (dbg) Run:  out/minikube-linux-amd64 -p functional-772451 ssh stat /mount-9p/created-by-test
functional_test_mount_test.go:181: (dbg) Run:  out/minikube-linux-amd64 -p functional-772451 ssh stat /mount-9p/created-by-pod
functional_test_mount_test.go:90: (dbg) Run:  out/minikube-linux-amd64 -p functional-772451 ssh "sudo umount -f /mount-9p"
functional_test_mount_test.go:94: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-772451 /tmp/TestFunctionalparallelMountCmdany-port1774930597/001:/mount-9p --alsologtostderr -v=1] ...
--- PASS: TestFunctional/parallel/MountCmd/any-port (9.36s)

TestFunctional/parallel/ProfileCmd/profile_json_output (0.41s)

=== RUN   TestFunctional/parallel/ProfileCmd/profile_json_output
functional_test.go:1361: (dbg) Run:  out/minikube-linux-amd64 profile list -o json
functional_test.go:1366: Took "356.130641ms" to run "out/minikube-linux-amd64 profile list -o json"
functional_test.go:1374: (dbg) Run:  out/minikube-linux-amd64 profile list -o json --light
functional_test.go:1379: Took "56.43765ms" to run "out/minikube-linux-amd64 profile list -o json --light"
--- PASS: TestFunctional/parallel/ProfileCmd/profile_json_output (0.41s)

TestFunctional/parallel/DockerEnv/bash (0.87s)

=== RUN   TestFunctional/parallel/DockerEnv/bash
functional_test.go:499: (dbg) Run:  /bin/bash -c "eval $(out/minikube-linux-amd64 -p functional-772451 docker-env) && out/minikube-linux-amd64 status -p functional-772451"
functional_test.go:522: (dbg) Run:  /bin/bash -c "eval $(out/minikube-linux-amd64 -p functional-772451 docker-env) && docker images"
--- PASS: TestFunctional/parallel/DockerEnv/bash (0.87s)

TestFunctional/parallel/UpdateContextCmd/no_changes (0.14s)

=== RUN   TestFunctional/parallel/UpdateContextCmd/no_changes
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_changes
=== CONT  TestFunctional/parallel/UpdateContextCmd/no_changes
functional_test.go:2119: (dbg) Run:  out/minikube-linux-amd64 -p functional-772451 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_changes (0.14s)

TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster (0.13s)

=== RUN   TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
=== CONT  TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
functional_test.go:2119: (dbg) Run:  out/minikube-linux-amd64 -p functional-772451 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster (0.13s)

TestFunctional/parallel/UpdateContextCmd/no_clusters (0.12s)

=== RUN   TestFunctional/parallel/UpdateContextCmd/no_clusters
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_clusters
=== CONT  TestFunctional/parallel/UpdateContextCmd/no_clusters
functional_test.go:2119: (dbg) Run:  out/minikube-linux-amd64 -p functional-772451 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_clusters (0.12s)

TestFunctional/parallel/MountCmd/specific-port (2.07s)

=== RUN   TestFunctional/parallel/MountCmd/specific-port
functional_test_mount_test.go:213: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-772451 /tmp/TestFunctionalparallelMountCmdspecific-port3516607187/001:/mount-9p --alsologtostderr -v=1 --port 46464]
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-linux-amd64 -p functional-772451 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:243: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-772451 ssh "findmnt -T /mount-9p | grep 9p": exit status 1 (303.933671ms)

** stderr ** 
	ssh: Process exited with status 1

** /stderr **
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-linux-amd64 -p functional-772451 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:257: (dbg) Run:  out/minikube-linux-amd64 -p functional-772451 ssh -- ls -la /mount-9p
functional_test_mount_test.go:261: guest mount directory contents
total 0
functional_test_mount_test.go:263: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-772451 /tmp/TestFunctionalparallelMountCmdspecific-port3516607187/001:/mount-9p --alsologtostderr -v=1 --port 46464] ...
functional_test_mount_test.go:264: reading mount text
functional_test_mount_test.go:278: done reading mount text
functional_test_mount_test.go:230: (dbg) Run:  out/minikube-linux-amd64 -p functional-772451 ssh "sudo umount -f /mount-9p"
functional_test_mount_test.go:230: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-772451 ssh "sudo umount -f /mount-9p": exit status 1 (291.866185ms)

-- stdout --
	umount: /mount-9p: not mounted.

-- /stdout --
** stderr ** 
	ssh: Process exited with status 32

** /stderr **
functional_test_mount_test.go:232: "out/minikube-linux-amd64 -p functional-772451 ssh \"sudo umount -f /mount-9p\"": exit status 1
functional_test_mount_test.go:234: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-772451 /tmp/TestFunctionalparallelMountCmdspecific-port3516607187/001:/mount-9p --alsologtostderr -v=1 --port 46464] ...
--- PASS: TestFunctional/parallel/MountCmd/specific-port (2.07s)

TestFunctional/parallel/ServiceCmd/List (0.54s)

=== RUN   TestFunctional/parallel/ServiceCmd/List
functional_test.go:1459: (dbg) Run:  out/minikube-linux-amd64 -p functional-772451 service list
--- PASS: TestFunctional/parallel/ServiceCmd/List (0.54s)

TestFunctional/parallel/ServiceCmd/JSONOutput (0.53s)

=== RUN   TestFunctional/parallel/ServiceCmd/JSONOutput
functional_test.go:1489: (dbg) Run:  out/minikube-linux-amd64 -p functional-772451 service list -o json
functional_test.go:1494: Took "533.397172ms" to run "out/minikube-linux-amd64 -p functional-772451 service list -o json"
--- PASS: TestFunctional/parallel/ServiceCmd/JSONOutput (0.53s)

TestFunctional/parallel/ServiceCmd/HTTPS (0.4s)

=== RUN   TestFunctional/parallel/ServiceCmd/HTTPS
functional_test.go:1509: (dbg) Run:  out/minikube-linux-amd64 -p functional-772451 service --namespace=default --https --url hello-node
functional_test.go:1522: found endpoint: https://192.168.49.2:30303
--- PASS: TestFunctional/parallel/ServiceCmd/HTTPS (0.40s)

TestFunctional/parallel/ServiceCmd/Format (0.34s)

=== RUN   TestFunctional/parallel/ServiceCmd/Format
functional_test.go:1540: (dbg) Run:  out/minikube-linux-amd64 -p functional-772451 service hello-node --url --format={{.IP}}
--- PASS: TestFunctional/parallel/ServiceCmd/Format (0.34s)

TestFunctional/parallel/ServiceCmd/URL (0.41s)

=== RUN   TestFunctional/parallel/ServiceCmd/URL
functional_test.go:1559: (dbg) Run:  out/minikube-linux-amd64 -p functional-772451 service hello-node --url
functional_test.go:1565: found endpoint for hello-node: http://192.168.49.2:30303
--- PASS: TestFunctional/parallel/ServiceCmd/URL (0.41s)

TestFunctional/parallel/MountCmd/VerifyCleanup (0.95s)

=== RUN   TestFunctional/parallel/MountCmd/VerifyCleanup
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-772451 /tmp/TestFunctionalparallelMountCmdVerifyCleanup836209524/001:/mount1 --alsologtostderr -v=1]
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-772451 /tmp/TestFunctionalparallelMountCmdVerifyCleanup836209524/001:/mount2 --alsologtostderr -v=1]
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-772451 /tmp/TestFunctionalparallelMountCmdVerifyCleanup836209524/001:/mount3 --alsologtostderr -v=1]
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-amd64 -p functional-772451 ssh "findmnt -T" /mount1
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-amd64 -p functional-772451 ssh "findmnt -T" /mount2
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-amd64 -p functional-772451 ssh "findmnt -T" /mount3
functional_test_mount_test.go:370: (dbg) Run:  out/minikube-linux-amd64 mount -p functional-772451 --kill=true
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-772451 /tmp/TestFunctionalparallelMountCmdVerifyCleanup836209524/001:/mount1 --alsologtostderr -v=1] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-772451 /tmp/TestFunctionalparallelMountCmdVerifyCleanup836209524/001:/mount2 --alsologtostderr -v=1] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-772451 /tmp/TestFunctionalparallelMountCmdVerifyCleanup836209524/001:/mount3 --alsologtostderr -v=1] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
--- PASS: TestFunctional/parallel/MountCmd/VerifyCleanup (0.95s)

TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel (0.48s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel
functional_test_tunnel_test.go:154: (dbg) daemon: [out/minikube-linux-amd64 -p functional-772451 tunnel --alsologtostderr]
functional_test_tunnel_test.go:154: (dbg) daemon: [out/minikube-linux-amd64 -p functional-772451 tunnel --alsologtostderr]
functional_test_tunnel_test.go:194: (dbg) stopping [out/minikube-linux-amd64 -p functional-772451 tunnel --alsologtostderr] ...
helpers_test.go:508: unable to kill pid 73187: os: process already finished
helpers_test.go:508: unable to kill pid 72968: os: process already finished
functional_test_tunnel_test.go:194: (dbg) stopping [out/minikube-linux-amd64 -p functional-772451 tunnel --alsologtostderr] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
--- PASS: TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel (0.48s)

TestFunctional/parallel/TunnelCmd/serial/StartTunnel (0s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/StartTunnel
functional_test_tunnel_test.go:129: (dbg) daemon: [out/minikube-linux-amd64 -p functional-772451 tunnel --alsologtostderr]
--- PASS: TestFunctional/parallel/TunnelCmd/serial/StartTunnel (0.00s)

TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup (15.25s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup
functional_test_tunnel_test.go:212: (dbg) Run:  kubectl --context functional-772451 apply -f testdata/testsvc.yaml
functional_test_tunnel_test.go:216: (dbg) TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup: waiting 4m0s for pods matching "run=nginx-svc" in namespace "default" ...
helpers_test.go:344: "nginx-svc" [a6eeb0b1-fc0b-4eec-8577-ab3a87168ad8] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
helpers_test.go:344: "nginx-svc" [a6eeb0b1-fc0b-4eec-8577-ab3a87168ad8] Running
functional_test_tunnel_test.go:216: (dbg) TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup: run=nginx-svc healthy within 15.037506265s
--- PASS: TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup (15.25s)

TestFunctional/parallel/Version/short (0.04s)

=== RUN   TestFunctional/parallel/Version/short
=== PAUSE TestFunctional/parallel/Version/short
=== CONT  TestFunctional/parallel/Version/short
functional_test.go:2256: (dbg) Run:  out/minikube-linux-amd64 -p functional-772451 version --short
--- PASS: TestFunctional/parallel/Version/short (0.04s)

TestFunctional/parallel/Version/components (0.43s)

=== RUN   TestFunctional/parallel/Version/components
=== PAUSE TestFunctional/parallel/Version/components
=== CONT  TestFunctional/parallel/Version/components
functional_test.go:2270: (dbg) Run:  out/minikube-linux-amd64 -p functional-772451 version -o=json --components
--- PASS: TestFunctional/parallel/Version/components (0.43s)

TestFunctional/parallel/ImageCommands/ImageListShort (0.2s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageListShort
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListShort
=== CONT  TestFunctional/parallel/ImageCommands/ImageListShort
functional_test.go:261: (dbg) Run:  out/minikube-linux-amd64 -p functional-772451 image ls --format short --alsologtostderr
functional_test.go:266: (dbg) Stdout: out/minikube-linux-amd64 -p functional-772451 image ls --format short --alsologtostderr:
registry.k8s.io/pause:latest
registry.k8s.io/pause:3.3
registry.k8s.io/pause:3.10
registry.k8s.io/pause:3.1
registry.k8s.io/kube-scheduler:v1.31.1
registry.k8s.io/kube-proxy:v1.31.1
registry.k8s.io/kube-controller-manager:v1.31.1
registry.k8s.io/kube-apiserver:v1.31.1
registry.k8s.io/etcd:3.5.15-0
registry.k8s.io/echoserver:1.8
registry.k8s.io/coredns/coredns:v1.11.3
gcr.io/k8s-minikube/storage-provisioner:v5
gcr.io/k8s-minikube/busybox:1.28.4-glibc
docker.io/library/nginx:latest
docker.io/library/nginx:alpine
docker.io/library/minikube-local-cache-test:functional-772451
docker.io/kubernetesui/metrics-scraper:<none>
docker.io/kubernetesui/dashboard:<none>
docker.io/kicbase/echo-server:functional-772451
functional_test.go:269: (dbg) Stderr: out/minikube-linux-amd64 -p functional-772451 image ls --format short --alsologtostderr:
I0917 17:12:29.041734   75542 out.go:345] Setting OutFile to fd 1 ...
I0917 17:12:29.041850   75542 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0917 17:12:29.041860   75542 out.go:358] Setting ErrFile to fd 2...
I0917 17:12:29.041864   75542 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0917 17:12:29.042038   75542 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19662-12004/.minikube/bin
I0917 17:12:29.042595   75542 config.go:182] Loaded profile config "functional-772451": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.31.1
I0917 17:12:29.042692   75542 config.go:182] Loaded profile config "functional-772451": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.31.1
I0917 17:12:29.043066   75542 cli_runner.go:164] Run: docker container inspect functional-772451 --format={{.State.Status}}
I0917 17:12:29.060310   75542 ssh_runner.go:195] Run: systemctl --version
I0917 17:12:29.060353   75542 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-772451
I0917 17:12:29.076516   75542 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32778 SSHKeyPath:/home/jenkins/minikube-integration/19662-12004/.minikube/machines/functional-772451/id_rsa Username:docker}
I0917 17:12:29.171204   75542 ssh_runner.go:195] Run: docker images --no-trunc --format "{{json .}}"
--- PASS: TestFunctional/parallel/ImageCommands/ImageListShort (0.20s)
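
As the Stderr above shows, `image ls` ultimately shells into the node and runs `docker images --no-trunc --format "{{json .}}"`, which emits one JSON object per line. A minimal sketch of consuming that stream; the struct fields follow docker's documented formatting placeholders:

package main

import (
	"bufio"
	"bytes"
	"encoding/json"
	"fmt"
	"os/exec"
)

// image mirrors the fields emitted by `docker images --format "{{json .}}"`.
type image struct {
	Repository string
	Tag        string
	ID         string
	Size       string
}

func main() {
	out, err := exec.Command("docker", "images", "--no-trunc", "--format", "{{json .}}").Output()
	if err != nil {
		panic(err)
	}
	sc := bufio.NewScanner(bytes.NewReader(out))
	for sc.Scan() { // one JSON object per line
		var im image
		if err := json.Unmarshal(sc.Bytes(), &im); err != nil {
			continue
		}
		fmt.Printf("%s:%s\t%s\t%s\n", im.Repository, im.Tag, im.ID, im.Size)
	}
}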

TestFunctional/parallel/ImageCommands/ImageListTable (0.22s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageListTable
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListTable
=== CONT  TestFunctional/parallel/ImageCommands/ImageListTable
functional_test.go:261: (dbg) Run:  out/minikube-linux-amd64 -p functional-772451 image ls --format table --alsologtostderr
functional_test.go:266: (dbg) Stdout: out/minikube-linux-amd64 -p functional-772451 image ls --format table --alsologtostderr:
|---------------------------------------------|-------------------|---------------|--------|
|                    Image                    |        Tag        |   Image ID    |  Size  |
|---------------------------------------------|-------------------|---------------|--------|
| docker.io/library/nginx                     | alpine            | c7b4f26a7d93f | 43.2MB |
| registry.k8s.io/etcd                        | 3.5.15-0          | 2e96e5913fc06 | 148MB  |
| docker.io/kubernetesui/dashboard            | <none>            | 07655ddf2eebe | 246MB  |
| registry.k8s.io/pause                       | 3.3               | 0184c1613d929 | 683kB  |
| gcr.io/k8s-minikube/busybox                 | 1.28.4-glibc      | 56cc512116c8f | 4.4MB  |
| registry.k8s.io/kube-scheduler              | v1.31.1           | 9aa1fad941575 | 67.4MB |
| registry.k8s.io/kube-controller-manager     | v1.31.1           | 175ffd71cce3d | 88.4MB |
| registry.k8s.io/kube-proxy                  | v1.31.1           | 60c005f310ff3 | 91.5MB |
| registry.k8s.io/pause                       | 3.1               | da86e6ba6ca19 | 742kB  |
| gcr.io/k8s-minikube/storage-provisioner     | v5                | 6e38f40d628db | 31.5MB |
| registry.k8s.io/coredns/coredns             | v1.11.3           | c69fa2e9cbf5f | 61.8MB |
| registry.k8s.io/pause                       | 3.10              | 873ed75102791 | 736kB  |
| docker.io/kicbase/echo-server               | functional-772451 | 9056ab77afb8e | 4.94MB |
| registry.k8s.io/echoserver                  | 1.8               | 82e4c8a736a4f | 95.4MB |
| registry.k8s.io/pause                       | latest            | 350b164e7ae1d | 240kB  |
| localhost/my-image                          | functional-772451 | 35dbe28da6da3 | 1.24MB |
| docker.io/library/minikube-local-cache-test | functional-772451 | d2874f28b707e | 30B    |
| registry.k8s.io/kube-apiserver              | v1.31.1           | 6bab7719df100 | 94.2MB |
| docker.io/library/nginx                     | latest            | 39286ab8a5e14 | 188MB  |
| docker.io/kubernetesui/metrics-scraper      | <none>            | 115053965e86b | 43.8MB |
|---------------------------------------------|-------------------|---------------|--------|
functional_test.go:269: (dbg) Stderr: out/minikube-linux-amd64 -p functional-772451 image ls --format table --alsologtostderr:
I0917 17:12:34.105178   76292 out.go:345] Setting OutFile to fd 1 ...
I0917 17:12:34.105295   76292 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0917 17:12:34.105304   76292 out.go:358] Setting ErrFile to fd 2...
I0917 17:12:34.105308   76292 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0917 17:12:34.105471   76292 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19662-12004/.minikube/bin
I0917 17:12:34.106036   76292 config.go:182] Loaded profile config "functional-772451": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.31.1
I0917 17:12:34.106137   76292 config.go:182] Loaded profile config "functional-772451": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.31.1
I0917 17:12:34.106515   76292 cli_runner.go:164] Run: docker container inspect functional-772451 --format={{.State.Status}}
I0917 17:12:34.123910   76292 ssh_runner.go:195] Run: systemctl --version
I0917 17:12:34.123966   76292 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-772451
I0917 17:12:34.142122   76292 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32778 SSHKeyPath:/home/jenkins/minikube-integration/19662-12004/.minikube/machines/functional-772451/id_rsa Username:docker}
I0917 17:12:34.247908   76292 ssh_runner.go:195] Run: docker images --no-trunc --format "{{json .}}"
--- PASS: TestFunctional/parallel/ImageCommands/ImageListTable (0.22s)

TestFunctional/parallel/ImageCommands/ImageListJson (0.29s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageListJson
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListJson
=== CONT  TestFunctional/parallel/ImageCommands/ImageListJson
functional_test.go:261: (dbg) Run:  out/minikube-linux-amd64 -p functional-772451 image ls --format json --alsologtostderr
functional_test.go:266: (dbg) Stdout: out/minikube-linux-amd64 -p functional-772451 image ls --format json --alsologtostderr:
[{"id":"35dbe28da6da3b1ac14440ca7f7b073e8561987f55a39ab5967db0ab30fb21ad","repoDigests":[],"repoTags":["localhost/my-image:functional-772451"],"size":"1240000"},{"id":"60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561","repoDigests":[],"repoTags":["registry.k8s.io/kube-proxy:v1.31.1"],"size":"91500000"},{"id":"82e4c8a736a4fcf22b5ef9f6a4ff6207064c7187d7694bf97bd561605a538410","repoDigests":[],"repoTags":["registry.k8s.io/echoserver:1.8"],"size":"95400000"},{"id":"6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562","repoDigests":[],"repoTags":["gcr.io/k8s-minikube/storage-provisioner:v5"],"size":"31500000"},{"id":"56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c","repoDigests":[],"repoTags":["gcr.io/k8s-minikube/busybox:1.28.4-glibc"],"size":"4400000"},{"id":"6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee","repoDigests":[],"repoTags":["registry.k8s.io/kube-apiserver:v1.31.1"],"size":"94200000"},{"id":"c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6","repoDigests":[],"repoTags":["registry.k8s.io/coredns/coredns:v1.11.3"],"size":"61800000"},{"id":"9056ab77afb8e18e04303f11000a9d31b3f16b74c59475b899ae1b342d328d30","repoDigests":[],"repoTags":["docker.io/kicbase/echo-server:functional-772451"],"size":"4940000"},{"id":"115053965e86b2df4d78af78d7951b8644839d20a03820c6df59a261103315f7","repoDigests":[],"repoTags":["docker.io/kubernetesui/metrics-scraper:\u003cnone\u003e"],"size":"43800000"},{"id":"0184c1613d92931126feb4c548e5da11015513b9e4c104e7305ee8b53b50a9da","repoDigests":[],"repoTags":["registry.k8s.io/pause:3.3"],"size":"683000"},{"id":"da86e6ba6ca197bf6bc5e9d900febd906b133eaa4750e6bed647b0fbe50ed43e","repoDigests":[],"repoTags":["registry.k8s.io/pause:3.1"],"size":"742000"},{"id":"d2874f28b707e5e436be7ac5eed03380009c693d958c5c164f833e6a0c5d5b3b","repoDigests":[],"repoTags":["docker.io/library/minikube-local-cache-test:functional-772451"],"size":"30"},{"id":"c7b4f26a7d93f4f1f276c51adb03ef0df54a82de89f254a9aec5c18bf0e45ee9","repoDigests":[],"repoTags":["docker.io/library/nginx:alpine"],"size":"43200000"},{"id":"39286ab8a5e14aeaf5fdd6e2fac76e0c8d31a0c07224f0ee5e6be502f12e93f3","repoDigests":[],"repoTags":["docker.io/library/nginx:latest"],"size":"188000000"},{"id":"873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136","repoDigests":[],"repoTags":["registry.k8s.io/pause:3.10"],"size":"736000"},{"id":"350b164e7ae1dcddeffadd65c76226c9b6dc5553f5179153fb0e36b78f2a5e06","repoDigests":[],"repoTags":["registry.k8s.io/pause:latest"],"size":"240000"},{"id":"9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b","repoDigests":[],"repoTags":["registry.k8s.io/kube-scheduler:v1.31.1"],"size":"67400000"},{"id":"175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1","repoDigests":[],"repoTags":["registry.k8s.io/kube-controller-manager:v1.31.1"],"size":"88400000"},{"id":"2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4","repoDigests":[],"repoTags":["registry.k8s.io/etcd:3.5.15-0"],"size":"148000000"},{"id":"07655ddf2eebe5d250f7a72c25f638b27126805d61779741b4e62e69ba080558","repoDigests":[],"repoTags":["docker.io/kubernetesui/dashboard:\u003cnone\u003e"],"size":"246000000"}]
functional_test.go:269: (dbg) Stderr: out/minikube-linux-amd64 -p functional-772451 image ls --format json --alsologtostderr:
I0917 17:12:33.828080   76148 out.go:345] Setting OutFile to fd 1 ...
I0917 17:12:33.828368   76148 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0917 17:12:33.828380   76148 out.go:358] Setting ErrFile to fd 2...
I0917 17:12:33.828387   76148 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0917 17:12:33.828689   76148 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19662-12004/.minikube/bin
I0917 17:12:33.829551   76148 config.go:182] Loaded profile config "functional-772451": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.31.1
I0917 17:12:33.829700   76148 config.go:182] Loaded profile config "functional-772451": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.31.1
I0917 17:12:33.830257   76148 cli_runner.go:164] Run: docker container inspect functional-772451 --format={{.State.Status}}
I0917 17:12:33.861942   76148 ssh_runner.go:195] Run: systemctl --version
I0917 17:12:33.861999   76148 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-772451
I0917 17:12:33.882002   76148 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32778 SSHKeyPath:/home/jenkins/minikube-integration/19662-12004/.minikube/machines/functional-772451/id_rsa Username:docker}
I0917 17:12:33.996001   76148 ssh_runner.go:195] Run: docker images --no-trunc --format "{{json .}}"
--- PASS: TestFunctional/parallel/ImageCommands/ImageListJson (0.29s)
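For context, the JSON above is produced by shelling into the node and running `docker images --no-trunc --format "{{json .}}"`, which prints one JSON object per image, one per line. A minimal Go sketch of consuming such a stream (an illustration, not minikube's actual decoder; the struct names only a few of docker's template fields):

package main

import (
	"bufio"
	"encoding/json"
	"fmt"
	"os"
)

// Subset of the fields `docker images --format "{{json .}}"` emits per line.
type dockerImage struct {
	ID         string `json:"ID"`
	Repository string `json:"Repository"`
	Tag        string `json:"Tag"`
	Size       string `json:"Size"`
}

func main() {
	sc := bufio.NewScanner(os.Stdin) // e.g. pipe the docker images output in
	for sc.Scan() {
		var img dockerImage
		if err := json.Unmarshal(sc.Bytes(), &img); err != nil {
			fmt.Fprintf(os.Stderr, "skipping unparsable line: %v\n", err)
			continue
		}
		fmt.Printf("%s -> %s:%s (%s)\n", img.ID, img.Repository, img.Tag, img.Size)
	}
}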

TestFunctional/parallel/ImageCommands/ImageListYaml (0.2s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageListYaml
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListYaml
=== CONT  TestFunctional/parallel/ImageCommands/ImageListYaml
functional_test.go:261: (dbg) Run:  out/minikube-linux-amd64 -p functional-772451 image ls --format yaml --alsologtostderr
functional_test.go:266: (dbg) Stdout: out/minikube-linux-amd64 -p functional-772451 image ls --format yaml --alsologtostderr:
- id: 6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee
repoDigests: []
repoTags:
- registry.k8s.io/kube-apiserver:v1.31.1
size: "94200000"
- id: 175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1
repoDigests: []
repoTags:
- registry.k8s.io/kube-controller-manager:v1.31.1
size: "88400000"
- id: 60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561
repoDigests: []
repoTags:
- registry.k8s.io/kube-proxy:v1.31.1
size: "91500000"
- id: c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6
repoDigests: []
repoTags:
- registry.k8s.io/coredns/coredns:v1.11.3
size: "61800000"
- id: 56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c
repoDigests: []
repoTags:
- gcr.io/k8s-minikube/busybox:1.28.4-glibc
size: "4400000"
- id: d2874f28b707e5e436be7ac5eed03380009c693d958c5c164f833e6a0c5d5b3b
repoDigests: []
repoTags:
- docker.io/library/minikube-local-cache-test:functional-772451
size: "30"
- id: 2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4
repoDigests: []
repoTags:
- registry.k8s.io/etcd:3.5.15-0
size: "148000000"
- id: 9056ab77afb8e18e04303f11000a9d31b3f16b74c59475b899ae1b342d328d30
repoDigests: []
repoTags:
- docker.io/kicbase/echo-server:functional-772451
size: "4940000"
- id: 82e4c8a736a4fcf22b5ef9f6a4ff6207064c7187d7694bf97bd561605a538410
repoDigests: []
repoTags:
- registry.k8s.io/echoserver:1.8
size: "95400000"
- id: 39286ab8a5e14aeaf5fdd6e2fac76e0c8d31a0c07224f0ee5e6be502f12e93f3
repoDigests: []
repoTags:
- docker.io/library/nginx:latest
size: "188000000"
- id: 07655ddf2eebe5d250f7a72c25f638b27126805d61779741b4e62e69ba080558
repoDigests: []
repoTags:
- docker.io/kubernetesui/dashboard:<none>
size: "246000000"
- id: da86e6ba6ca197bf6bc5e9d900febd906b133eaa4750e6bed647b0fbe50ed43e
repoDigests: []
repoTags:
- registry.k8s.io/pause:3.1
size: "742000"
- id: 350b164e7ae1dcddeffadd65c76226c9b6dc5553f5179153fb0e36b78f2a5e06
repoDigests: []
repoTags:
- registry.k8s.io/pause:latest
size: "240000"
- id: 9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b
repoDigests: []
repoTags:
- registry.k8s.io/kube-scheduler:v1.31.1
size: "67400000"
- id: 873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136
repoDigests: []
repoTags:
- registry.k8s.io/pause:3.10
size: "736000"
- id: 115053965e86b2df4d78af78d7951b8644839d20a03820c6df59a261103315f7
repoDigests: []
repoTags:
- docker.io/kubernetesui/metrics-scraper:<none>
size: "43800000"
- id: 6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562
repoDigests: []
repoTags:
- gcr.io/k8s-minikube/storage-provisioner:v5
size: "31500000"
- id: 0184c1613d92931126feb4c548e5da11015513b9e4c104e7305ee8b53b50a9da
repoDigests: []
repoTags:
- registry.k8s.io/pause:3.3
size: "683000"
- id: c7b4f26a7d93f4f1f276c51adb03ef0df54a82de89f254a9aec5c18bf0e45ee9
repoDigests: []
repoTags:
- docker.io/library/nginx:alpine
size: "43200000"
functional_test.go:269: (dbg) Stderr: out/minikube-linux-amd64 -p functional-772451 image ls --format yaml --alsologtostderr:
I0917 17:12:29.245203   75595 out.go:345] Setting OutFile to fd 1 ...
I0917 17:12:29.245351   75595 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0917 17:12:29.245363   75595 out.go:358] Setting ErrFile to fd 2...
I0917 17:12:29.245367   75595 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0917 17:12:29.245645   75595 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19662-12004/.minikube/bin
I0917 17:12:29.246305   75595 config.go:182] Loaded profile config "functional-772451": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.31.1
I0917 17:12:29.246418   75595 config.go:182] Loaded profile config "functional-772451": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.31.1
I0917 17:12:29.246762   75595 cli_runner.go:164] Run: docker container inspect functional-772451 --format={{.State.Status}}
I0917 17:12:29.265015   75595 ssh_runner.go:195] Run: systemctl --version
I0917 17:12:29.265062   75595 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-772451
I0917 17:12:29.282739   75595 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32778 SSHKeyPath:/home/jenkins/minikube-integration/19662-12004/.minikube/machines/functional-772451/id_rsa Username:docker}
I0917 17:12:29.375307   75595 ssh_runner.go:195] Run: docker images --no-trunc --format "{{json .}}"
--- PASS: TestFunctional/parallel/ImageCommands/ImageListYaml (0.20s)
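The same inventory rendered as YAML is just a flat list of entries with id, repoDigests, repoTags, and size keys. A matching Go type for loading it, assuming the third-party gopkg.in/yaml.v3 package and a hypothetical images.yaml file holding the output (minikube's own types may differ):

package main

import (
	"fmt"
	"os"

	"gopkg.in/yaml.v3"
)

// Mirrors the keys visible in the `image ls --format yaml` output above.
type imageEntry struct {
	ID          string   `yaml:"id"`
	RepoDigests []string `yaml:"repoDigests"`
	RepoTags    []string `yaml:"repoTags"`
	Size        string   `yaml:"size"`
}

func main() {
	data, err := os.ReadFile("images.yaml") // hypothetical dump of the output above
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	var imgs []imageEntry
	if err := yaml.Unmarshal(data, &imgs); err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	for _, img := range imgs {
		fmt.Println(img.RepoTags, img.Size)
	}
}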

TestFunctional/parallel/ImageCommands/ImageBuild (4.38s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageBuild
=== PAUSE TestFunctional/parallel/ImageCommands/ImageBuild
=== CONT  TestFunctional/parallel/ImageCommands/ImageBuild
functional_test.go:308: (dbg) Run:  out/minikube-linux-amd64 -p functional-772451 ssh pgrep buildkitd
functional_test.go:308: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-772451 ssh pgrep buildkitd: exit status 1 (233.173119ms)
** stderr ** 
	ssh: Process exited with status 1
** /stderr **
functional_test.go:315: (dbg) Run:  out/minikube-linux-amd64 -p functional-772451 image build -t localhost/my-image:functional-772451 testdata/build --alsologtostderr
functional_test.go:315: (dbg) Done: out/minikube-linux-amd64 -p functional-772451 image build -t localhost/my-image:functional-772451 testdata/build --alsologtostderr: (3.927843533s)
functional_test.go:323: (dbg) Stderr: out/minikube-linux-amd64 -p functional-772451 image build -t localhost/my-image:functional-772451 testdata/build --alsologtostderr:
I0917 17:12:29.679139   75771 out.go:345] Setting OutFile to fd 1 ...
I0917 17:12:29.679281   75771 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0917 17:12:29.679290   75771 out.go:358] Setting ErrFile to fd 2...
I0917 17:12:29.679294   75771 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0917 17:12:29.679470   75771 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19662-12004/.minikube/bin
I0917 17:12:29.680044   75771 config.go:182] Loaded profile config "functional-772451": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.31.1
I0917 17:12:29.680593   75771 config.go:182] Loaded profile config "functional-772451": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.31.1
I0917 17:12:29.680968   75771 cli_runner.go:164] Run: docker container inspect functional-772451 --format={{.State.Status}}
I0917 17:12:29.698776   75771 ssh_runner.go:195] Run: systemctl --version
I0917 17:12:29.698826   75771 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-772451
I0917 17:12:29.716408   75771 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32778 SSHKeyPath:/home/jenkins/minikube-integration/19662-12004/.minikube/machines/functional-772451/id_rsa Username:docker}
I0917 17:12:29.807065   75771 build_images.go:161] Building image from path: /tmp/build.4223049269.tar
I0917 17:12:29.807123   75771 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/build
I0917 17:12:29.815248   75771 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/build/build.4223049269.tar
I0917 17:12:29.818055   75771 ssh_runner.go:352] existence check for /var/lib/minikube/build/build.4223049269.tar: stat -c "%s %y" /var/lib/minikube/build/build.4223049269.tar: Process exited with status 1
stdout:
stderr:
stat: cannot statx '/var/lib/minikube/build/build.4223049269.tar': No such file or directory
I0917 17:12:29.818080   75771 ssh_runner.go:362] scp /tmp/build.4223049269.tar --> /var/lib/minikube/build/build.4223049269.tar (3072 bytes)
I0917 17:12:29.840004   75771 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/build/build.4223049269
I0917 17:12:29.847593   75771 ssh_runner.go:195] Run: sudo tar -C /var/lib/minikube/build/build.4223049269 -xf /var/lib/minikube/build/build.4223049269.tar
I0917 17:12:29.856316   75771 docker.go:360] Building image: /var/lib/minikube/build/build.4223049269
I0917 17:12:29.856382   75771 ssh_runner.go:195] Run: docker build -t localhost/my-image:functional-772451 /var/lib/minikube/build/build.4223049269
#0 building with "default" instance using docker driver
#1 [internal] load build definition from Dockerfile
#1 transferring dockerfile: 97B done
#1 DONE 0.0s
#2 [internal] load metadata for gcr.io/k8s-minikube/busybox:latest
#2 DONE 1.9s
#3 [internal] load .dockerignore
#3 transferring context: 2B done
#3 DONE 0.0s
#4 [internal] load build context
#4 transferring context: 62B done
#4 DONE 0.4s
#5 [1/3] FROM gcr.io/k8s-minikube/busybox:latest@sha256:ca5ae90100d50772da31f3b5016209e25ad61972404e2ccd83d44f10dee7e79b
#5 resolve gcr.io/k8s-minikube/busybox:latest@sha256:ca5ae90100d50772da31f3b5016209e25ad61972404e2ccd83d44f10dee7e79b 0.1s done
#5 sha256:ca5ae90100d50772da31f3b5016209e25ad61972404e2ccd83d44f10dee7e79b 770B / 770B done
#5 sha256:62ffc2ed7554e4c6d360bce40bbcf196573dd27c4ce080641a2c59867e732dee 527B / 527B done
#5 sha256:beae173ccac6ad749f76713cf4440fe3d21d1043fe616dfbe30775815d1d0f6a 1.46kB / 1.46kB done
#5 sha256:5cc84ad355aaa64f46ea9c7bbcc319a9d808ab15088a27209c9e70ef86e5a2aa 0B / 772.79kB 0.1s
#5 sha256:5cc84ad355aaa64f46ea9c7bbcc319a9d808ab15088a27209c9e70ef86e5a2aa 772.79kB / 772.79kB 0.7s done
#5 extracting sha256:5cc84ad355aaa64f46ea9c7bbcc319a9d808ab15088a27209c9e70ef86e5a2aa 0.0s done
#5 DONE 1.2s
#6 [2/3] RUN true
#6 DONE 0.3s
#7 [3/3] ADD content.txt /
#7 DONE 0.0s
#8 exporting to image
#8 exporting layers 0.0s done
#8 writing image sha256:35dbe28da6da3b1ac14440ca7f7b073e8561987f55a39ab5967db0ab30fb21ad done
#8 naming to localhost/my-image:functional-772451 done
#8 DONE 0.0s
I0917 17:12:33.539607   75771 ssh_runner.go:235] Completed: docker build -t localhost/my-image:functional-772451 /var/lib/minikube/build/build.4223049269: (3.683199353s)
I0917 17:12:33.539679   75771 ssh_runner.go:195] Run: sudo rm -rf /var/lib/minikube/build/build.4223049269
I0917 17:12:33.548932   75771 ssh_runner.go:195] Run: sudo rm -f /var/lib/minikube/build/build.4223049269.tar
I0917 17:12:33.556904   75771 build_images.go:217] Built localhost/my-image:functional-772451 from /tmp/build.4223049269.tar
I0917 17:12:33.556932   75771 build_images.go:133] succeeded building to: functional-772451
I0917 17:12:33.556938   75771 build_images.go:134] failed building to: 
functional_test.go:451: (dbg) Run:  out/minikube-linux-amd64 -p functional-772451 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageBuild (4.38s)
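The build log above traces the flow minikube uses: the testdata/build context is tarred on the host, copied to /var/lib/minikube/build on the node, unpacked, and built with docker build. A loose sketch of the node-side steps driven through `minikube ssh` (runOnNode is an invented helper for this sketch, the tarball is assumed to already be on the node, and the real suite uses its own ssh_runner instead):

package main

import (
	"fmt"
	"os/exec"
)

// runOnNode executes a command inside the minikube node via `minikube ssh`.
func runOnNode(profile string, args ...string) error {
	ssh := append([]string{"-p", profile, "ssh", "--"}, args...)
	cmd := exec.Command("out/minikube-linux-amd64", ssh...)
	out, err := cmd.CombinedOutput()
	fmt.Print(string(out))
	return err
}

func main() {
	const profile = "functional-772451"
	const dir = "/var/lib/minikube/build/build.4223049269" // staging dir from the log
	for _, step := range [][]string{
		{"sudo", "mkdir", "-p", dir},
		{"sudo", "tar", "-C", dir, "-xf", dir + ".tar"},
		{"docker", "build", "-t", "localhost/my-image:" + profile, dir},
	} {
		if err := runOnNode(profile, step...); err != nil {
			fmt.Println("step failed:", err)
			return
		}
	}
}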

TestFunctional/parallel/ImageCommands/Setup (1.95s)

=== RUN   TestFunctional/parallel/ImageCommands/Setup
functional_test.go:342: (dbg) Run:  docker pull kicbase/echo-server:1.0
functional_test.go:342: (dbg) Done: docker pull kicbase/echo-server:1.0: (1.93554106s)
functional_test.go:347: (dbg) Run:  docker tag kicbase/echo-server:1.0 kicbase/echo-server:functional-772451
--- PASS: TestFunctional/parallel/ImageCommands/Setup (1.95s)

TestFunctional/parallel/ImageCommands/ImageLoadDaemon (1.05s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageLoadDaemon
functional_test.go:355: (dbg) Run:  out/minikube-linux-amd64 -p functional-772451 image load --daemon kicbase/echo-server:functional-772451 --alsologtostderr
functional_test.go:451: (dbg) Run:  out/minikube-linux-amd64 -p functional-772451 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageLoadDaemon (1.05s)

TestFunctional/parallel/ImageCommands/ImageReloadDaemon (0.83s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageReloadDaemon
functional_test.go:365: (dbg) Run:  out/minikube-linux-amd64 -p functional-772451 image load --daemon kicbase/echo-server:functional-772451 --alsologtostderr
functional_test.go:451: (dbg) Run:  out/minikube-linux-amd64 -p functional-772451 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageReloadDaemon (0.83s)

TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon (1.76s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon
functional_test.go:235: (dbg) Run:  docker pull kicbase/echo-server:latest
functional_test.go:240: (dbg) Run:  docker tag kicbase/echo-server:latest kicbase/echo-server:functional-772451
functional_test.go:245: (dbg) Run:  out/minikube-linux-amd64 -p functional-772451 image load --daemon kicbase/echo-server:functional-772451 --alsologtostderr
functional_test.go:451: (dbg) Run:  out/minikube-linux-amd64 -p functional-772451 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon (1.76s)

TestFunctional/parallel/ImageCommands/ImageSaveToFile (0.34s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageSaveToFile
functional_test.go:380: (dbg) Run:  out/minikube-linux-amd64 -p functional-772451 image save kicbase/echo-server:functional-772451 /home/jenkins/workspace/Docker_Linux_integration/echo-server-save.tar --alsologtostderr
--- PASS: TestFunctional/parallel/ImageCommands/ImageSaveToFile (0.34s)

TestFunctional/parallel/ImageCommands/ImageRemove (0.39s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageRemove
functional_test.go:392: (dbg) Run:  out/minikube-linux-amd64 -p functional-772451 image rm kicbase/echo-server:functional-772451 --alsologtostderr
functional_test.go:451: (dbg) Run:  out/minikube-linux-amd64 -p functional-772451 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageRemove (0.39s)

TestFunctional/parallel/ImageCommands/ImageLoadFromFile (0.54s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageLoadFromFile
functional_test.go:409: (dbg) Run:  out/minikube-linux-amd64 -p functional-772451 image load /home/jenkins/workspace/Docker_Linux_integration/echo-server-save.tar --alsologtostderr
functional_test.go:451: (dbg) Run:  out/minikube-linux-amd64 -p functional-772451 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageLoadFromFile (0.54s)
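Taken together, ImageSaveToFile, ImageRemove, and ImageLoadFromFile exercise a save/remove/load round trip. The same sequence, scripted against the CLI subcommands the tests invoke (a sketch; the /tmp tarball path is arbitrary rather than the path the suite uses):

package main

import (
	"fmt"
	"os/exec"
)

func run(args ...string) {
	out, err := exec.Command("out/minikube-linux-amd64", args...).CombinedOutput()
	if err != nil {
		fmt.Printf("%v failed: %v\n%s", args, err, out)
	}
}

func main() {
	const p = "functional-772451"
	const tar = "/tmp/echo-server-save.tar" // any writable path works
	run("-p", p, "image", "save", "kicbase/echo-server:"+p, tar) // write archive
	run("-p", p, "image", "rm", "kicbase/echo-server:"+p)        // drop the tag
	run("-p", p, "image", "load", tar)                           // restore from archive
	run("-p", p, "image", "ls")                                  // confirm it is back
}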

TestFunctional/parallel/ImageCommands/ImageSaveDaemon (0.33s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageSaveDaemon
functional_test.go:419: (dbg) Run:  docker rmi kicbase/echo-server:functional-772451
functional_test.go:424: (dbg) Run:  out/minikube-linux-amd64 -p functional-772451 image save --daemon kicbase/echo-server:functional-772451 --alsologtostderr
functional_test.go:432: (dbg) Run:  docker image inspect kicbase/echo-server:functional-772451
--- PASS: TestFunctional/parallel/ImageCommands/ImageSaveDaemon (0.33s)

TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP (0.07s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP
functional_test_tunnel_test.go:234: (dbg) Run:  kubectl --context functional-772451 get svc nginx-svc -o jsonpath={.status.loadBalancer.ingress[0].ip}
--- PASS: TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP (0.07s)

TestFunctional/parallel/TunnelCmd/serial/AccessDirect (0s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/AccessDirect
functional_test_tunnel_test.go:299: tunnel at http://10.100.38.114 is working!
--- PASS: TestFunctional/parallel/TunnelCmd/serial/AccessDirect (0.00s)
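The tunnel checks above resolve the service's LoadBalancer ingress IP with a kubectl jsonpath query and then hit it over plain HTTP. A compact Go sketch of that probe (a rough equivalent using kubectl --context directly, rather than the minikube kubectl wrapper the suite goes through):

package main

import (
	"fmt"
	"net/http"
	"os/exec"
	"strings"
	"time"
)

func main() {
	// Same jsonpath query the test uses to find the LoadBalancer ingress IP.
	out, err := exec.Command("kubectl", "--context", "functional-772451",
		"get", "svc", "nginx-svc", "-o",
		"jsonpath={.status.loadBalancer.ingress[0].ip}").Output()
	if err != nil {
		fmt.Println("kubectl failed:", err)
		return
	}
	ip := strings.TrimSpace(string(out))
	client := &http.Client{Timeout: 5 * time.Second}
	resp, err := client.Get("http://" + ip)
	if err != nil {
		fmt.Println("tunnel not reachable:", err)
		return
	}
	defer resp.Body.Close()
	fmt.Println("tunnel status:", resp.Status)
}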

TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel (0.11s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel
functional_test_tunnel_test.go:434: (dbg) stopping [out/minikube-linux-amd64 -p functional-772451 tunnel --alsologtostderr] ...
--- PASS: TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel (0.11s)

TestFunctional/delete_echo-server_images (0.04s)

=== RUN   TestFunctional/delete_echo-server_images
functional_test.go:190: (dbg) Run:  docker rmi -f kicbase/echo-server:1.0
functional_test.go:190: (dbg) Run:  docker rmi -f kicbase/echo-server:functional-772451
--- PASS: TestFunctional/delete_echo-server_images (0.04s)

TestFunctional/delete_my-image_image (0.01s)

=== RUN   TestFunctional/delete_my-image_image
functional_test.go:198: (dbg) Run:  docker rmi -f localhost/my-image:functional-772451
--- PASS: TestFunctional/delete_my-image_image (0.01s)

TestFunctional/delete_minikube_cached_images (0.01s)

=== RUN   TestFunctional/delete_minikube_cached_images
functional_test.go:206: (dbg) Run:  docker rmi -f minikube-local-cache-test:functional-772451
--- PASS: TestFunctional/delete_minikube_cached_images (0.01s)

TestMultiControlPlane/serial/StartCluster (98.79s)

=== RUN   TestMultiControlPlane/serial/StartCluster
ha_test.go:101: (dbg) Run:  out/minikube-linux-amd64 start -p ha-093989 --wait=true --memory=2200 --ha -v=7 --alsologtostderr --driver=docker  --container-runtime=docker
ha_test.go:101: (dbg) Done: out/minikube-linux-amd64 start -p ha-093989 --wait=true --memory=2200 --ha -v=7 --alsologtostderr --driver=docker  --container-runtime=docker: (1m38.109542445s)
ha_test.go:107: (dbg) Run:  out/minikube-linux-amd64 -p ha-093989 status -v=7 --alsologtostderr
--- PASS: TestMultiControlPlane/serial/StartCluster (98.79s)

TestMultiControlPlane/serial/DeployApp (6.07s)

=== RUN   TestMultiControlPlane/serial/DeployApp
ha_test.go:128: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-093989 -- apply -f ./testdata/ha/ha-pod-dns-test.yaml
ha_test.go:133: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-093989 -- rollout status deployment/busybox
E0917 17:14:35.032330   18778 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19662-12004/.minikube/profiles/addons-163060/client.crt: no such file or directory" logger="UnhandledError"
E0917 17:14:35.039209   18778 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19662-12004/.minikube/profiles/addons-163060/client.crt: no such file or directory" logger="UnhandledError"
E0917 17:14:35.050632   18778 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19662-12004/.minikube/profiles/addons-163060/client.crt: no such file or directory" logger="UnhandledError"
E0917 17:14:35.072003   18778 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19662-12004/.minikube/profiles/addons-163060/client.crt: no such file or directory" logger="UnhandledError"
E0917 17:14:35.113378   18778 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19662-12004/.minikube/profiles/addons-163060/client.crt: no such file or directory" logger="UnhandledError"
E0917 17:14:35.194841   18778 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19662-12004/.minikube/profiles/addons-163060/client.crt: no such file or directory" logger="UnhandledError"
E0917 17:14:35.356553   18778 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19662-12004/.minikube/profiles/addons-163060/client.crt: no such file or directory" logger="UnhandledError"
E0917 17:14:35.678473   18778 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19662-12004/.minikube/profiles/addons-163060/client.crt: no such file or directory" logger="UnhandledError"
E0917 17:14:36.320206   18778 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19662-12004/.minikube/profiles/addons-163060/client.crt: no such file or directory" logger="UnhandledError"
ha_test.go:133: (dbg) Done: out/minikube-linux-amd64 kubectl -p ha-093989 -- rollout status deployment/busybox: (4.267530374s)
ha_test.go:140: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-093989 -- get pods -o jsonpath='{.items[*].status.podIP}'
ha_test.go:163: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-093989 -- get pods -o jsonpath='{.items[*].metadata.name}'
ha_test.go:171: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-093989 -- exec busybox-7dff88458-c484w -- nslookup kubernetes.io
ha_test.go:171: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-093989 -- exec busybox-7dff88458-k4fx7 -- nslookup kubernetes.io
ha_test.go:171: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-093989 -- exec busybox-7dff88458-qgj6j -- nslookup kubernetes.io
ha_test.go:181: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-093989 -- exec busybox-7dff88458-c484w -- nslookup kubernetes.default
ha_test.go:181: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-093989 -- exec busybox-7dff88458-k4fx7 -- nslookup kubernetes.default
ha_test.go:181: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-093989 -- exec busybox-7dff88458-qgj6j -- nslookup kubernetes.default
E0917 17:14:37.601506   18778 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19662-12004/.minikube/profiles/addons-163060/client.crt: no such file or directory" logger="UnhandledError"
ha_test.go:189: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-093989 -- exec busybox-7dff88458-c484w -- nslookup kubernetes.default.svc.cluster.local
ha_test.go:189: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-093989 -- exec busybox-7dff88458-k4fx7 -- nslookup kubernetes.default.svc.cluster.local
ha_test.go:189: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-093989 -- exec busybox-7dff88458-qgj6j -- nslookup kubernetes.default.svc.cluster.local
--- PASS: TestMultiControlPlane/serial/DeployApp (6.07s)
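The DNS checks above fan out over each busybox pod and three lookup targets. The equivalent loop, sketched with plain kubectl exec (pod names copied from the log; the suite routes the same commands through the minikube kubectl wrapper instead):

package main

import (
	"fmt"
	"os/exec"
)

func main() {
	pods := []string{"busybox-7dff88458-c484w", "busybox-7dff88458-k4fx7", "busybox-7dff88458-qgj6j"}
	names := []string{"kubernetes.io", "kubernetes.default", "kubernetes.default.svc.cluster.local"}
	for _, pod := range pods {
		for _, name := range names {
			out, err := exec.Command("kubectl", "--context", "ha-093989",
				"exec", pod, "--", "nslookup", name).CombinedOutput()
			fmt.Printf("%s / %s: err=%v\n%s", pod, name, err, out)
		}
	}
}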

TestMultiControlPlane/serial/PingHostFromPods (1s)

=== RUN   TestMultiControlPlane/serial/PingHostFromPods
ha_test.go:199: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-093989 -- get pods -o jsonpath='{.items[*].metadata.name}'
ha_test.go:207: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-093989 -- exec busybox-7dff88458-c484w -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
ha_test.go:218: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-093989 -- exec busybox-7dff88458-c484w -- sh -c "ping -c 1 192.168.49.1"
ha_test.go:207: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-093989 -- exec busybox-7dff88458-k4fx7 -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
ha_test.go:218: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-093989 -- exec busybox-7dff88458-k4fx7 -- sh -c "ping -c 1 192.168.49.1"
ha_test.go:207: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-093989 -- exec busybox-7dff88458-qgj6j -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
ha_test.go:218: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-093989 -- exec busybox-7dff88458-qgj6j -- sh -c "ping -c 1 192.168.49.1"
--- PASS: TestMultiControlPlane/serial/PingHostFromPods (1.00s)
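The shell pipeline in these steps, `nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3`, simply takes line 5 of the nslookup output and prints its third space-separated field (the host gateway address, 192.168.49.1 here). A pure-Go rendering of that filter for illustration, reading the nslookup output from stdin:

package main

import (
	"bufio"
	"fmt"
	"os"
	"strings"
)

func main() {
	sc := bufio.NewScanner(os.Stdin) // pipe `nslookup host.minikube.internal` in
	for line := 1; sc.Scan(); line++ {
		if line != 5 {
			continue
		}
		// cut -d' ' -f3: split on single spaces, empty fields included.
		fields := strings.Split(sc.Text(), " ")
		if len(fields) >= 3 {
			fmt.Println(fields[2])
		}
		return
	}
}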

TestMultiControlPlane/serial/AddWorkerNode (22.92s)

=== RUN   TestMultiControlPlane/serial/AddWorkerNode
ha_test.go:228: (dbg) Run:  out/minikube-linux-amd64 node add -p ha-093989 -v=7 --alsologtostderr
E0917 17:14:40.163502   18778 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19662-12004/.minikube/profiles/addons-163060/client.crt: no such file or directory" logger="UnhandledError"
E0917 17:14:45.285774   18778 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19662-12004/.minikube/profiles/addons-163060/client.crt: no such file or directory" logger="UnhandledError"
E0917 17:14:55.527112   18778 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19662-12004/.minikube/profiles/addons-163060/client.crt: no such file or directory" logger="UnhandledError"
ha_test.go:228: (dbg) Done: out/minikube-linux-amd64 node add -p ha-093989 -v=7 --alsologtostderr: (22.088754818s)
ha_test.go:234: (dbg) Run:  out/minikube-linux-amd64 -p ha-093989 status -v=7 --alsologtostderr
--- PASS: TestMultiControlPlane/serial/AddWorkerNode (22.92s)

TestMultiControlPlane/serial/NodeLabels (0.06s)

=== RUN   TestMultiControlPlane/serial/NodeLabels
ha_test.go:255: (dbg) Run:  kubectl --context ha-093989 get nodes -o "jsonpath=[{range .items[*]}{.metadata.labels},{end}]"
--- PASS: TestMultiControlPlane/serial/NodeLabels (0.06s)

TestMultiControlPlane/serial/HAppyAfterClusterStart (0.65s)

=== RUN   TestMultiControlPlane/serial/HAppyAfterClusterStart
ha_test.go:281: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiControlPlane/serial/HAppyAfterClusterStart (0.65s)

TestMultiControlPlane/serial/CopyFile (15.57s)

=== RUN   TestMultiControlPlane/serial/CopyFile
ha_test.go:326: (dbg) Run:  out/minikube-linux-amd64 -p ha-093989 status --output json -v=7 --alsologtostderr
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-093989 cp testdata/cp-test.txt ha-093989:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-093989 ssh -n ha-093989 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-093989 cp ha-093989:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile2987391957/001/cp-test_ha-093989.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-093989 ssh -n ha-093989 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-093989 cp ha-093989:/home/docker/cp-test.txt ha-093989-m02:/home/docker/cp-test_ha-093989_ha-093989-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-093989 ssh -n ha-093989 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-093989 ssh -n ha-093989-m02 "sudo cat /home/docker/cp-test_ha-093989_ha-093989-m02.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-093989 cp ha-093989:/home/docker/cp-test.txt ha-093989-m03:/home/docker/cp-test_ha-093989_ha-093989-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-093989 ssh -n ha-093989 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-093989 ssh -n ha-093989-m03 "sudo cat /home/docker/cp-test_ha-093989_ha-093989-m03.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-093989 cp ha-093989:/home/docker/cp-test.txt ha-093989-m04:/home/docker/cp-test_ha-093989_ha-093989-m04.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-093989 ssh -n ha-093989 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-093989 ssh -n ha-093989-m04 "sudo cat /home/docker/cp-test_ha-093989_ha-093989-m04.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-093989 cp testdata/cp-test.txt ha-093989-m02:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-093989 ssh -n ha-093989-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-093989 cp ha-093989-m02:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile2987391957/001/cp-test_ha-093989-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-093989 ssh -n ha-093989-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-093989 cp ha-093989-m02:/home/docker/cp-test.txt ha-093989:/home/docker/cp-test_ha-093989-m02_ha-093989.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-093989 ssh -n ha-093989-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-093989 ssh -n ha-093989 "sudo cat /home/docker/cp-test_ha-093989-m02_ha-093989.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-093989 cp ha-093989-m02:/home/docker/cp-test.txt ha-093989-m03:/home/docker/cp-test_ha-093989-m02_ha-093989-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-093989 ssh -n ha-093989-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-093989 ssh -n ha-093989-m03 "sudo cat /home/docker/cp-test_ha-093989-m02_ha-093989-m03.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-093989 cp ha-093989-m02:/home/docker/cp-test.txt ha-093989-m04:/home/docker/cp-test_ha-093989-m02_ha-093989-m04.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-093989 ssh -n ha-093989-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-093989 ssh -n ha-093989-m04 "sudo cat /home/docker/cp-test_ha-093989-m02_ha-093989-m04.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-093989 cp testdata/cp-test.txt ha-093989-m03:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-093989 ssh -n ha-093989-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-093989 cp ha-093989-m03:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile2987391957/001/cp-test_ha-093989-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-093989 ssh -n ha-093989-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-093989 cp ha-093989-m03:/home/docker/cp-test.txt ha-093989:/home/docker/cp-test_ha-093989-m03_ha-093989.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-093989 ssh -n ha-093989-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-093989 ssh -n ha-093989 "sudo cat /home/docker/cp-test_ha-093989-m03_ha-093989.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-093989 cp ha-093989-m03:/home/docker/cp-test.txt ha-093989-m02:/home/docker/cp-test_ha-093989-m03_ha-093989-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-093989 ssh -n ha-093989-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-093989 ssh -n ha-093989-m02 "sudo cat /home/docker/cp-test_ha-093989-m03_ha-093989-m02.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-093989 cp ha-093989-m03:/home/docker/cp-test.txt ha-093989-m04:/home/docker/cp-test_ha-093989-m03_ha-093989-m04.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-093989 ssh -n ha-093989-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-093989 ssh -n ha-093989-m04 "sudo cat /home/docker/cp-test_ha-093989-m03_ha-093989-m04.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-093989 cp testdata/cp-test.txt ha-093989-m04:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-093989 ssh -n ha-093989-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-093989 cp ha-093989-m04:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile2987391957/001/cp-test_ha-093989-m04.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-093989 ssh -n ha-093989-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-093989 cp ha-093989-m04:/home/docker/cp-test.txt ha-093989:/home/docker/cp-test_ha-093989-m04_ha-093989.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-093989 ssh -n ha-093989-m04 "sudo cat /home/docker/cp-test.txt"
E0917 17:15:16.009044   18778 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19662-12004/.minikube/profiles/addons-163060/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-093989 ssh -n ha-093989 "sudo cat /home/docker/cp-test_ha-093989-m04_ha-093989.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-093989 cp ha-093989-m04:/home/docker/cp-test.txt ha-093989-m02:/home/docker/cp-test_ha-093989-m04_ha-093989-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-093989 ssh -n ha-093989-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-093989 ssh -n ha-093989-m02 "sudo cat /home/docker/cp-test_ha-093989-m04_ha-093989-m02.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-093989 cp ha-093989-m04:/home/docker/cp-test.txt ha-093989-m03:/home/docker/cp-test_ha-093989-m04_ha-093989-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-093989 ssh -n ha-093989-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-093989 ssh -n ha-093989-m03 "sudo cat /home/docker/cp-test_ha-093989-m04_ha-093989-m03.txt"
--- PASS: TestMultiControlPlane/serial/CopyFile (15.57s)
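Each copy above is verified the same way: `minikube cp` moves the file, then `minikube ssh -n <node> "sudo cat ..."` reads it back for comparison. One leg of that matrix, sketched end to end (profile, node, and paths copied from the log; error handling trimmed to the essentials):

package main

import (
	"bytes"
	"fmt"
	"os"
	"os/exec"
)

func main() {
	const mk = "out/minikube-linux-amd64"
	local, err := os.ReadFile("testdata/cp-test.txt")
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	// Copy host -> node, then read the file back through `minikube ssh -n`.
	if err := exec.Command(mk, "-p", "ha-093989", "cp",
		"testdata/cp-test.txt", "ha-093989-m02:/home/docker/cp-test.txt").Run(); err != nil {
		fmt.Fprintln(os.Stderr, "cp failed:", err)
		os.Exit(1)
	}
	remote, err := exec.Command(mk, "-p", "ha-093989", "ssh", "-n", "ha-093989-m02",
		"sudo cat /home/docker/cp-test.txt").Output()
	if err != nil {
		fmt.Fprintln(os.Stderr, "ssh failed:", err)
		os.Exit(1)
	}
	fmt.Println("contents match:", bytes.Equal(local, remote))
}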

TestMultiControlPlane/serial/StopSecondaryNode (11.34s)

=== RUN   TestMultiControlPlane/serial/StopSecondaryNode
ha_test.go:363: (dbg) Run:  out/minikube-linux-amd64 -p ha-093989 node stop m02 -v=7 --alsologtostderr
ha_test.go:363: (dbg) Done: out/minikube-linux-amd64 -p ha-093989 node stop m02 -v=7 --alsologtostderr: (10.681673907s)
ha_test.go:369: (dbg) Run:  out/minikube-linux-amd64 -p ha-093989 status -v=7 --alsologtostderr
ha_test.go:369: (dbg) Non-zero exit: out/minikube-linux-amd64 -p ha-093989 status -v=7 --alsologtostderr: exit status 7 (654.64836ms)
-- stdout --
	ha-093989
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-093989-m02
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	ha-093989-m03
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-093989-m04
	type: Worker
	host: Running
	kubelet: Running
	
-- /stdout --
** stderr ** 
	I0917 17:15:29.027599  103670 out.go:345] Setting OutFile to fd 1 ...
	I0917 17:15:29.027813  103670 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0917 17:15:29.027823  103670 out.go:358] Setting ErrFile to fd 2...
	I0917 17:15:29.027828  103670 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0917 17:15:29.027990  103670 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19662-12004/.minikube/bin
	I0917 17:15:29.028158  103670 out.go:352] Setting JSON to false
	I0917 17:15:29.028187  103670 mustload.go:65] Loading cluster: ha-093989
	I0917 17:15:29.028238  103670 notify.go:220] Checking for updates...
	I0917 17:15:29.028752  103670 config.go:182] Loaded profile config "ha-093989": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I0917 17:15:29.028773  103670 status.go:255] checking status of ha-093989 ...
	I0917 17:15:29.029279  103670 cli_runner.go:164] Run: docker container inspect ha-093989 --format={{.State.Status}}
	I0917 17:15:29.046160  103670 status.go:330] ha-093989 host status = "Running" (err=<nil>)
	I0917 17:15:29.046178  103670 host.go:66] Checking if "ha-093989" exists ...
	I0917 17:15:29.046401  103670 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-093989
	I0917 17:15:29.062161  103670 host.go:66] Checking if "ha-093989" exists ...
	I0917 17:15:29.062424  103670 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0917 17:15:29.062462  103670 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-093989
	I0917 17:15:29.079708  103670 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32783 SSHKeyPath:/home/jenkins/minikube-integration/19662-12004/.minikube/machines/ha-093989/id_rsa Username:docker}
	I0917 17:15:29.176124  103670 ssh_runner.go:195] Run: systemctl --version
	I0917 17:15:29.180143  103670 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0917 17:15:29.190392  103670 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0917 17:15:29.241307  103670 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:4 ContainersRunning:3 ContainersPaused:0 ContainersStopped:1 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:49 OomKillDisable:true NGoroutines:72 SystemTime:2024-09-17 17:15:29.231423316 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1069-gcp OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33647943680 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-10 Labels:[] ExperimentalBuild:false ServerVersion:27.2.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:7f7fdf5fed64eb6a7caf99b3e12efcf9d60e311c Expected:7f7fdf5fed64eb6a7caf99b3e12efcf9d60e311c} RuncCommit:{ID:v1.1.14-0-g2c9f560 Expected:v1.1.14-0-g2c9f560} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.16.2] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.29.2] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I0917 17:15:29.241829  103670 kubeconfig.go:125] found "ha-093989" server: "https://192.168.49.254:8443"
	I0917 17:15:29.241857  103670 api_server.go:166] Checking apiserver status ...
	I0917 17:15:29.241889  103670 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0917 17:15:29.253256  103670 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/2396/cgroup
	I0917 17:15:29.261940  103670 api_server.go:182] apiserver freezer: "7:freezer:/docker/9a21f9eada738f3486f4665d71126e68627d4d39936e1907efa340bae228c339/kubepods/burstable/pod6ab393a6ef215b6ad59ae7a26390eaba/373699f936ad7549f76990e819e350e475a28b667bd5dad0d46be4f742b1312d"
	I0917 17:15:29.262038  103670 ssh_runner.go:195] Run: sudo cat /sys/fs/cgroup/freezer/docker/9a21f9eada738f3486f4665d71126e68627d4d39936e1907efa340bae228c339/kubepods/burstable/pod6ab393a6ef215b6ad59ae7a26390eaba/373699f936ad7549f76990e819e350e475a28b667bd5dad0d46be4f742b1312d/freezer.state
	I0917 17:15:29.269747  103670 api_server.go:204] freezer state: "THAWED"
	I0917 17:15:29.269777  103670 api_server.go:253] Checking apiserver healthz at https://192.168.49.254:8443/healthz ...
	I0917 17:15:29.274913  103670 api_server.go:279] https://192.168.49.254:8443/healthz returned 200:
	ok
	I0917 17:15:29.274944  103670 status.go:422] ha-093989 apiserver status = Running (err=<nil>)
	I0917 17:15:29.274959  103670 status.go:257] ha-093989 status: &{Name:ha-093989 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0917 17:15:29.275012  103670 status.go:255] checking status of ha-093989-m02 ...
	I0917 17:15:29.275368  103670 cli_runner.go:164] Run: docker container inspect ha-093989-m02 --format={{.State.Status}}
	I0917 17:15:29.293162  103670 status.go:330] ha-093989-m02 host status = "Stopped" (err=<nil>)
	I0917 17:15:29.293201  103670 status.go:343] host is not running, skipping remaining checks
	I0917 17:15:29.293210  103670 status.go:257] ha-093989-m02 status: &{Name:ha-093989-m02 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0917 17:15:29.293235  103670 status.go:255] checking status of ha-093989-m03 ...
	I0917 17:15:29.293507  103670 cli_runner.go:164] Run: docker container inspect ha-093989-m03 --format={{.State.Status}}
	I0917 17:15:29.310393  103670 status.go:330] ha-093989-m03 host status = "Running" (err=<nil>)
	I0917 17:15:29.310415  103670 host.go:66] Checking if "ha-093989-m03" exists ...
	I0917 17:15:29.310652  103670 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-093989-m03
	I0917 17:15:29.327811  103670 host.go:66] Checking if "ha-093989-m03" exists ...
	I0917 17:15:29.328041  103670 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0917 17:15:29.328077  103670 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-093989-m03
	I0917 17:15:29.345111  103670 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32793 SSHKeyPath:/home/jenkins/minikube-integration/19662-12004/.minikube/machines/ha-093989-m03/id_rsa Username:docker}
	I0917 17:15:29.439817  103670 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0917 17:15:29.450306  103670 kubeconfig.go:125] found "ha-093989" server: "https://192.168.49.254:8443"
	I0917 17:15:29.450331  103670 api_server.go:166] Checking apiserver status ...
	I0917 17:15:29.450362  103670 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0917 17:15:29.461243  103670 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/2237/cgroup
	I0917 17:15:29.470103  103670 api_server.go:182] apiserver freezer: "7:freezer:/docker/b2991c4269ca49a2e6e67f641b4088d4df485bccd6294eaaf418211a58912ace/kubepods/burstable/pod7c636eb2e42b0c6daf13a054b5acd6d4/ae682478c9d64fd7aaaf5137b1776da56e09e29ce38410b1b67343579fee7dc6"
	I0917 17:15:29.470179  103670 ssh_runner.go:195] Run: sudo cat /sys/fs/cgroup/freezer/docker/b2991c4269ca49a2e6e67f641b4088d4df485bccd6294eaaf418211a58912ace/kubepods/burstable/pod7c636eb2e42b0c6daf13a054b5acd6d4/ae682478c9d64fd7aaaf5137b1776da56e09e29ce38410b1b67343579fee7dc6/freezer.state
	I0917 17:15:29.477696  103670 api_server.go:204] freezer state: "THAWED"
	I0917 17:15:29.477721  103670 api_server.go:253] Checking apiserver healthz at https://192.168.49.254:8443/healthz ...
	I0917 17:15:29.481268  103670 api_server.go:279] https://192.168.49.254:8443/healthz returned 200:
	ok
	I0917 17:15:29.481287  103670 status.go:422] ha-093989-m03 apiserver status = Running (err=<nil>)
	I0917 17:15:29.481296  103670 status.go:257] ha-093989-m03 status: &{Name:ha-093989-m03 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0917 17:15:29.481309  103670 status.go:255] checking status of ha-093989-m04 ...
	I0917 17:15:29.481523  103670 cli_runner.go:164] Run: docker container inspect ha-093989-m04 --format={{.State.Status}}
	I0917 17:15:29.498020  103670 status.go:330] ha-093989-m04 host status = "Running" (err=<nil>)
	I0917 17:15:29.498043  103670 host.go:66] Checking if "ha-093989-m04" exists ...
	I0917 17:15:29.498313  103670 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-093989-m04
	I0917 17:15:29.517949  103670 host.go:66] Checking if "ha-093989-m04" exists ...
	I0917 17:15:29.518189  103670 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0917 17:15:29.518222  103670 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-093989-m04
	I0917 17:15:29.535198  103670 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32798 SSHKeyPath:/home/jenkins/minikube-integration/19662-12004/.minikube/machines/ha-093989-m04/id_rsa Username:docker}
	I0917 17:15:29.628667  103670 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0917 17:15:29.638805  103670 status.go:257] ha-093989-m04 status: &{Name:ha-093989-m04 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}
** /stderr **
--- PASS: TestMultiControlPlane/serial/StopSecondaryNode (11.34s)
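The status probe in the stderr above inspects the container state, sshes in to check the kubelet service, locates the kube-apiserver process's freezer cgroup to confirm it is THAWED, and finally queries the apiserver's /healthz endpoint. A minimal Go sketch of that last step only (endpoint and port taken from the log; skipping TLS verification is a stand-in for minikube's real client-certificate handling):

package main

import (
	"crypto/tls"
	"fmt"
	"io"
	"net/http"
	"time"
)

func main() {
	client := &http.Client{
		Timeout: 5 * time.Second,
		// Placeholder for minikube's real certificate handling.
		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
	}
	resp, err := client.Get("https://192.168.49.254:8443/healthz")
	if err != nil {
		fmt.Println("apiserver unreachable:", err)
		return
	}
	defer resp.Body.Close()
	body, _ := io.ReadAll(resp.Body)
	// The log treats HTTP 200 with body "ok" as a healthy apiserver.
	fmt.Printf("healthz: %d %s\n", resp.StatusCode, body)
}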

TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop (0.47s)

=== RUN   TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop
ha_test.go:390: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop (0.47s)

TestMultiControlPlane/serial/RestartSecondaryNode (35.62s)

=== RUN   TestMultiControlPlane/serial/RestartSecondaryNode
ha_test.go:420: (dbg) Run:  out/minikube-linux-amd64 -p ha-093989 node start m02 -v=7 --alsologtostderr
E0917 17:15:56.970850   18778 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19662-12004/.minikube/profiles/addons-163060/client.crt: no such file or directory" logger="UnhandledError"
ha_test.go:420: (dbg) Done: out/minikube-linux-amd64 -p ha-093989 node start m02 -v=7 --alsologtostderr: (34.64631325s)
ha_test.go:428: (dbg) Run:  out/minikube-linux-amd64 -p ha-093989 status -v=7 --alsologtostderr
ha_test.go:448: (dbg) Run:  kubectl get nodes
--- PASS: TestMultiControlPlane/serial/RestartSecondaryNode (35.62s)

TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart (16.19s)

=== RUN   TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart
ha_test.go:281: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
ha_test.go:281: (dbg) Done: out/minikube-linux-amd64 profile list --output json: (16.188462178s)
--- PASS: TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart (16.19s)

TestMultiControlPlane/serial/RestartClusterKeepsNodes (241.15s)

=== RUN   TestMultiControlPlane/serial/RestartClusterKeepsNodes
ha_test.go:456: (dbg) Run:  out/minikube-linux-amd64 node list -p ha-093989 -v=7 --alsologtostderr
ha_test.go:462: (dbg) Run:  out/minikube-linux-amd64 stop -p ha-093989 -v=7 --alsologtostderr
ha_test.go:462: (dbg) Done: out/minikube-linux-amd64 stop -p ha-093989 -v=7 --alsologtostderr: (33.507148183s)
ha_test.go:467: (dbg) Run:  out/minikube-linux-amd64 start -p ha-093989 --wait=true -v=7 --alsologtostderr
E0917 17:17:05.683060   18778 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19662-12004/.minikube/profiles/functional-772451/client.crt: no such file or directory" logger="UnhandledError"
E0917 17:17:05.689409   18778 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19662-12004/.minikube/profiles/functional-772451/client.crt: no such file or directory" logger="UnhandledError"
E0917 17:17:05.700748   18778 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19662-12004/.minikube/profiles/functional-772451/client.crt: no such file or directory" logger="UnhandledError"
E0917 17:17:05.722095   18778 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19662-12004/.minikube/profiles/functional-772451/client.crt: no such file or directory" logger="UnhandledError"
E0917 17:17:05.763489   18778 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19662-12004/.minikube/profiles/functional-772451/client.crt: no such file or directory" logger="UnhandledError"
E0917 17:17:05.844921   18778 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19662-12004/.minikube/profiles/functional-772451/client.crt: no such file or directory" logger="UnhandledError"
E0917 17:17:06.006434   18778 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19662-12004/.minikube/profiles/functional-772451/client.crt: no such file or directory" logger="UnhandledError"
E0917 17:17:06.328129   18778 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19662-12004/.minikube/profiles/functional-772451/client.crt: no such file or directory" logger="UnhandledError"
E0917 17:17:06.970256   18778 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19662-12004/.minikube/profiles/functional-772451/client.crt: no such file or directory" logger="UnhandledError"
E0917 17:17:08.252818   18778 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19662-12004/.minikube/profiles/functional-772451/client.crt: no such file or directory" logger="UnhandledError"
E0917 17:17:10.814182   18778 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19662-12004/.minikube/profiles/functional-772451/client.crt: no such file or directory" logger="UnhandledError"
E0917 17:17:15.936203   18778 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19662-12004/.minikube/profiles/functional-772451/client.crt: no such file or directory" logger="UnhandledError"
E0917 17:17:18.892413   18778 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19662-12004/.minikube/profiles/addons-163060/client.crt: no such file or directory" logger="UnhandledError"
E0917 17:17:26.177715   18778 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19662-12004/.minikube/profiles/functional-772451/client.crt: no such file or directory" logger="UnhandledError"
E0917 17:17:46.659726   18778 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19662-12004/.minikube/profiles/functional-772451/client.crt: no such file or directory" logger="UnhandledError"
E0917 17:18:27.621259   18778 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19662-12004/.minikube/profiles/functional-772451/client.crt: no such file or directory" logger="UnhandledError"
E0917 17:19:35.032501   18778 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19662-12004/.minikube/profiles/addons-163060/client.crt: no such file or directory" logger="UnhandledError"
E0917 17:19:49.543584   18778 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19662-12004/.minikube/profiles/functional-772451/client.crt: no such file or directory" logger="UnhandledError"
E0917 17:20:02.734569   18778 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19662-12004/.minikube/profiles/addons-163060/client.crt: no such file or directory" logger="UnhandledError"
ha_test.go:467: (dbg) Done: out/minikube-linux-amd64 start -p ha-093989 --wait=true -v=7 --alsologtostderr: (3m27.538375588s)
ha_test.go:472: (dbg) Run:  out/minikube-linux-amd64 node list -p ha-093989
--- PASS: TestMultiControlPlane/serial/RestartClusterKeepsNodes (241.15s)

                                                
                                    
TestMultiControlPlane/serial/DeleteSecondaryNode (9.26s)

=== RUN   TestMultiControlPlane/serial/DeleteSecondaryNode
ha_test.go:487: (dbg) Run:  out/minikube-linux-amd64 -p ha-093989 node delete m03 -v=7 --alsologtostderr
ha_test.go:487: (dbg) Done: out/minikube-linux-amd64 -p ha-093989 node delete m03 -v=7 --alsologtostderr: (8.518226925s)
ha_test.go:493: (dbg) Run:  out/minikube-linux-amd64 -p ha-093989 status -v=7 --alsologtostderr
ha_test.go:511: (dbg) Run:  kubectl get nodes
ha_test.go:519: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiControlPlane/serial/DeleteSecondaryNode (9.26s)
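
The go-template in the kubectl invocation above iterates every node in the NodeList and prints one line per node containing only the Ready condition's status. As a minimal sketch of what kubectl evaluates, the hypothetical Go program below runs the same template over stub map data (lowercase keys, because kubectl applies the template to the JSON-decoded object rather than to typed structs):

	package main

	import (
		"os"
		"text/template"
	)

	func main() {
		// The same template string the test passes via -o go-template.
		const tpl = `{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}`

		// Stub shaped like a two-node NodeList after JSON decoding.
		node := map[string]any{
			"status": map[string]any{
				"conditions": []any{
					map[string]any{"type": "Ready", "status": "True"},
				},
			},
		}
		list := map[string]any{"items": []any{node, node}}

		t := template.Must(template.New("ready").Parse(tpl))
		if err := t.Execute(os.Stdout, list); err != nil { // prints " True" twice
			panic(err)
		}
	}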

                                                
                                    
TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete (0.46s)

=== RUN   TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete
ha_test.go:390: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete (0.46s)

                                                
                                    
TestMultiControlPlane/serial/StopCluster (32.38s)

=== RUN   TestMultiControlPlane/serial/StopCluster
ha_test.go:531: (dbg) Run:  out/minikube-linux-amd64 -p ha-093989 stop -v=7 --alsologtostderr
ha_test.go:531: (dbg) Done: out/minikube-linux-amd64 -p ha-093989 stop -v=7 --alsologtostderr: (32.281388849s)
ha_test.go:537: (dbg) Run:  out/minikube-linux-amd64 -p ha-093989 status -v=7 --alsologtostderr
ha_test.go:537: (dbg) Non-zero exit: out/minikube-linux-amd64 -p ha-093989 status -v=7 --alsologtostderr: exit status 7 (95.374057ms)

-- stdout --
	ha-093989
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	ha-093989-m02
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	ha-093989-m04
	type: Worker
	host: Stopped
	kubelet: Stopped
	

-- /stdout --
** stderr ** 
	I0917 17:21:05.117770  134973 out.go:345] Setting OutFile to fd 1 ...
	I0917 17:21:05.117909  134973 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0917 17:21:05.117919  134973 out.go:358] Setting ErrFile to fd 2...
	I0917 17:21:05.117926  134973 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0917 17:21:05.118121  134973 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19662-12004/.minikube/bin
	I0917 17:21:05.118299  134973 out.go:352] Setting JSON to false
	I0917 17:21:05.118339  134973 mustload.go:65] Loading cluster: ha-093989
	I0917 17:21:05.118391  134973 notify.go:220] Checking for updates...
	I0917 17:21:05.118770  134973 config.go:182] Loaded profile config "ha-093989": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I0917 17:21:05.118787  134973 status.go:255] checking status of ha-093989 ...
	I0917 17:21:05.119278  134973 cli_runner.go:164] Run: docker container inspect ha-093989 --format={{.State.Status}}
	I0917 17:21:05.136690  134973 status.go:330] ha-093989 host status = "Stopped" (err=<nil>)
	I0917 17:21:05.136717  134973 status.go:343] host is not running, skipping remaining checks
	I0917 17:21:05.136726  134973 status.go:257] ha-093989 status: &{Name:ha-093989 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0917 17:21:05.136769  134973 status.go:255] checking status of ha-093989-m02 ...
	I0917 17:21:05.137113  134973 cli_runner.go:164] Run: docker container inspect ha-093989-m02 --format={{.State.Status}}
	I0917 17:21:05.154651  134973 status.go:330] ha-093989-m02 host status = "Stopped" (err=<nil>)
	I0917 17:21:05.154683  134973 status.go:343] host is not running, skipping remaining checks
	I0917 17:21:05.154693  134973 status.go:257] ha-093989-m02 status: &{Name:ha-093989-m02 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0917 17:21:05.154716  134973 status.go:255] checking status of ha-093989-m04 ...
	I0917 17:21:05.154944  134973 cli_runner.go:164] Run: docker container inspect ha-093989-m04 --format={{.State.Status}}
	I0917 17:21:05.172671  134973 status.go:330] ha-093989-m04 host status = "Stopped" (err=<nil>)
	I0917 17:21:05.172691  134973 status.go:343] host is not running, skipping remaining checks
	I0917 17:21:05.172698  134973 status.go:257] ha-093989-m04 status: &{Name:ha-093989-m04 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
--- PASS: TestMultiControlPlane/serial/StopCluster (32.38s)
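
The exit status 7 above is expected rather than a failure: with every node stopped, minikube status signals cluster state through its exit code, and the test asserts on the printed statuses instead. As a rough, hypothetical sketch of how such an exit code could be composed from per-component bits (the names and values below are assumptions for illustration, not copied from minikube's cmd/minikube/cmd/status.go), 7 would correspond to host, kubelet, and apiserver all being down:

	package main

	import "fmt"

	// Hypothetical bit flags; the real ones live in minikube's status command.
	const (
		hostDown      = 1 << 0 // 1
		kubeletDown   = 1 << 1 // 2
		apiserverDown = 1 << 2 // 4
	)

	// exitCode ORs one bit per stopped component.
	func exitCode(host, kubelet, apiserver bool) int {
		code := 0
		if host {
			code |= hostDown
		}
		if kubelet {
			code |= kubeletDown
		}
		if apiserver {
			code |= apiserverDown
		}
		return code
	}

	func main() {
		fmt.Println(exitCode(true, true, true)) // 7, matching the status run above
	}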

                                                
                                    
TestMultiControlPlane/serial/RestartCluster (79.66s)

=== RUN   TestMultiControlPlane/serial/RestartCluster
ha_test.go:560: (dbg) Run:  out/minikube-linux-amd64 start -p ha-093989 --wait=true -v=7 --alsologtostderr --driver=docker  --container-runtime=docker
E0917 17:22:05.682668   18778 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19662-12004/.minikube/profiles/functional-772451/client.crt: no such file or directory" logger="UnhandledError"
ha_test.go:560: (dbg) Done: out/minikube-linux-amd64 start -p ha-093989 --wait=true -v=7 --alsologtostderr --driver=docker  --container-runtime=docker: (1m18.917606762s)
ha_test.go:566: (dbg) Run:  out/minikube-linux-amd64 -p ha-093989 status -v=7 --alsologtostderr
ha_test.go:584: (dbg) Run:  kubectl get nodes
ha_test.go:592: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiControlPlane/serial/RestartCluster (79.66s)

                                                
                                    
TestMultiControlPlane/serial/DegradedAfterClusterRestart (0.45s)

=== RUN   TestMultiControlPlane/serial/DegradedAfterClusterRestart
ha_test.go:390: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiControlPlane/serial/DegradedAfterClusterRestart (0.45s)

                                                
                                    
TestMultiControlPlane/serial/AddSecondaryNode (37.91s)

=== RUN   TestMultiControlPlane/serial/AddSecondaryNode
ha_test.go:605: (dbg) Run:  out/minikube-linux-amd64 node add -p ha-093989 --control-plane -v=7 --alsologtostderr
E0917 17:22:33.385288   18778 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19662-12004/.minikube/profiles/functional-772451/client.crt: no such file or directory" logger="UnhandledError"
ha_test.go:605: (dbg) Done: out/minikube-linux-amd64 node add -p ha-093989 --control-plane -v=7 --alsologtostderr: (37.078980212s)
ha_test.go:611: (dbg) Run:  out/minikube-linux-amd64 -p ha-093989 status -v=7 --alsologtostderr
--- PASS: TestMultiControlPlane/serial/AddSecondaryNode (37.91s)

                                                
                                    
TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd (0.64s)

=== RUN   TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd
ha_test.go:281: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd (0.64s)

                                                
                                    
TestImageBuild/serial/Setup (21.79s)

=== RUN   TestImageBuild/serial/Setup
image_test.go:69: (dbg) Run:  out/minikube-linux-amd64 start -p image-508233 --driver=docker  --container-runtime=docker
image_test.go:69: (dbg) Done: out/minikube-linux-amd64 start -p image-508233 --driver=docker  --container-runtime=docker: (21.794736054s)
--- PASS: TestImageBuild/serial/Setup (21.79s)

                                                
                                    
TestImageBuild/serial/NormalBuild (2.57s)

=== RUN   TestImageBuild/serial/NormalBuild
image_test.go:78: (dbg) Run:  out/minikube-linux-amd64 image build -t aaa:latest ./testdata/image-build/test-normal -p image-508233
image_test.go:78: (dbg) Done: out/minikube-linux-amd64 image build -t aaa:latest ./testdata/image-build/test-normal -p image-508233: (2.573536726s)
--- PASS: TestImageBuild/serial/NormalBuild (2.57s)

                                                
                                    
TestImageBuild/serial/BuildWithBuildArg (0.96s)

=== RUN   TestImageBuild/serial/BuildWithBuildArg
image_test.go:99: (dbg) Run:  out/minikube-linux-amd64 image build -t aaa:latest --build-opt=build-arg=ENV_A=test_env_str --build-opt=no-cache ./testdata/image-build/test-arg -p image-508233
--- PASS: TestImageBuild/serial/BuildWithBuildArg (0.96s)

                                                
                                    
TestImageBuild/serial/BuildWithDockerIgnore (0.78s)

=== RUN   TestImageBuild/serial/BuildWithDockerIgnore
image_test.go:133: (dbg) Run:  out/minikube-linux-amd64 image build -t aaa:latest ./testdata/image-build/test-normal --build-opt=no-cache -p image-508233
--- PASS: TestImageBuild/serial/BuildWithDockerIgnore (0.78s)

                                                
                                    
TestImageBuild/serial/BuildWithSpecifiedDockerfile (0.74s)

=== RUN   TestImageBuild/serial/BuildWithSpecifiedDockerfile
image_test.go:88: (dbg) Run:  out/minikube-linux-amd64 image build -t aaa:latest -f inner/Dockerfile ./testdata/image-build/test-f -p image-508233
--- PASS: TestImageBuild/serial/BuildWithSpecifiedDockerfile (0.74s)

                                                
                                    
TestJSONOutput/start/Command (36.45s)

=== RUN   TestJSONOutput/start/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-amd64 start -p json-output-558276 --output=json --user=testUser --memory=2200 --wait=true --driver=docker  --container-runtime=docker
json_output_test.go:63: (dbg) Done: out/minikube-linux-amd64 start -p json-output-558276 --output=json --user=testUser --memory=2200 --wait=true --driver=docker  --container-runtime=docker: (36.451295218s)
--- PASS: TestJSONOutput/start/Command (36.45s)

                                                
                                    
TestJSONOutput/start/Audit (0s)

=== RUN   TestJSONOutput/start/Audit
--- PASS: TestJSONOutput/start/Audit (0.00s)

                                                
                                    
TestJSONOutput/start/parallel/DistinctCurrentSteps (0s)

=== RUN   TestJSONOutput/start/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/start/parallel/DistinctCurrentSteps

=== CONT  TestJSONOutput/start/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/start/parallel/DistinctCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/start/parallel/IncreasingCurrentSteps (0s)

=== RUN   TestJSONOutput/start/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/start/parallel/IncreasingCurrentSteps

=== CONT  TestJSONOutput/start/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/start/parallel/IncreasingCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/pause/Command (0.52s)

=== RUN   TestJSONOutput/pause/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-amd64 pause -p json-output-558276 --output=json --user=testUser
--- PASS: TestJSONOutput/pause/Command (0.52s)

                                                
                                    
TestJSONOutput/pause/Audit (0s)

=== RUN   TestJSONOutput/pause/Audit
--- PASS: TestJSONOutput/pause/Audit (0.00s)

                                                
                                    
TestJSONOutput/pause/parallel/DistinctCurrentSteps (0s)

=== RUN   TestJSONOutput/pause/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/pause/parallel/DistinctCurrentSteps

=== CONT  TestJSONOutput/pause/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/pause/parallel/DistinctCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/pause/parallel/IncreasingCurrentSteps (0s)

=== RUN   TestJSONOutput/pause/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/pause/parallel/IncreasingCurrentSteps

=== CONT  TestJSONOutput/pause/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/pause/parallel/IncreasingCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/unpause/Command (0.4s)

=== RUN   TestJSONOutput/unpause/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-amd64 unpause -p json-output-558276 --output=json --user=testUser
--- PASS: TestJSONOutput/unpause/Command (0.40s)

                                                
                                    
TestJSONOutput/unpause/Audit (0s)

=== RUN   TestJSONOutput/unpause/Audit
--- PASS: TestJSONOutput/unpause/Audit (0.00s)

                                                
                                    
TestJSONOutput/unpause/parallel/DistinctCurrentSteps (0s)

=== RUN   TestJSONOutput/unpause/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/unpause/parallel/DistinctCurrentSteps

=== CONT  TestJSONOutput/unpause/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/unpause/parallel/DistinctCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/unpause/parallel/IncreasingCurrentSteps (0s)

=== RUN   TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/unpause/parallel/IncreasingCurrentSteps

=== CONT  TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/unpause/parallel/IncreasingCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/stop/Command (5.75s)

=== RUN   TestJSONOutput/stop/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-amd64 stop -p json-output-558276 --output=json --user=testUser
json_output_test.go:63: (dbg) Done: out/minikube-linux-amd64 stop -p json-output-558276 --output=json --user=testUser: (5.747142467s)
--- PASS: TestJSONOutput/stop/Command (5.75s)

                                                
                                    
TestJSONOutput/stop/Audit (0s)

=== RUN   TestJSONOutput/stop/Audit
--- PASS: TestJSONOutput/stop/Audit (0.00s)

                                                
                                    
TestJSONOutput/stop/parallel/DistinctCurrentSteps (0s)

=== RUN   TestJSONOutput/stop/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/stop/parallel/DistinctCurrentSteps

=== CONT  TestJSONOutput/stop/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/stop/parallel/DistinctCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/stop/parallel/IncreasingCurrentSteps (0s)

=== RUN   TestJSONOutput/stop/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/stop/parallel/IncreasingCurrentSteps

=== CONT  TestJSONOutput/stop/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/stop/parallel/IncreasingCurrentSteps (0.00s)

                                                
                                    
TestErrorJSONOutput (0.19s)

=== RUN   TestErrorJSONOutput
json_output_test.go:160: (dbg) Run:  out/minikube-linux-amd64 start -p json-output-error-806665 --memory=2200 --output=json --wait=true --driver=fail
json_output_test.go:160: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p json-output-error-806665 --memory=2200 --output=json --wait=true --driver=fail: exit status 56 (58.099061ms)

-- stdout --
	{"specversion":"1.0","id":"c0e5d0da-caca-4ddc-b970-36856f462b22","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"0","message":"[json-output-error-806665] minikube v1.34.0 on Ubuntu 20.04 (kvm/amd64)","name":"Initial Minikube Setup","totalsteps":"19"}}
	{"specversion":"1.0","id":"2d89fb3e-aa78-4dcd-a110-63a8df56f670","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_LOCATION=19662"}}
	{"specversion":"1.0","id":"370504f7-0366-4e33-99f7-4397c8839e3f","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true"}}
	{"specversion":"1.0","id":"3b8bb1fb-f420-47e0-8e01-5765cc8cde86","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"KUBECONFIG=/home/jenkins/minikube-integration/19662-12004/kubeconfig"}}
	{"specversion":"1.0","id":"dd53940c-04f8-4ebf-9092-93c1db2a755b","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_HOME=/home/jenkins/minikube-integration/19662-12004/.minikube"}}
	{"specversion":"1.0","id":"e4a61633-95e8-48e3-8690-2f1f230689b7","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_BIN=out/minikube-linux-amd64"}}
	{"specversion":"1.0","id":"d4879603-898e-4d51-ae65-a7504495c553","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_FORCE_SYSTEMD="}}
	{"specversion":"1.0","id":"d63c7782-e724-46f4-a223-e056c26edc62","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"advice":"","exitcode":"56","issues":"","message":"The driver 'fail' is not supported on linux/amd64","name":"DRV_UNSUPPORTED_OS","url":""}}

-- /stdout --
helpers_test.go:175: Cleaning up "json-output-error-806665" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p json-output-error-806665
--- PASS: TestErrorJSONOutput (0.19s)
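
Every stdout line above is a self-contained CloudEvents-style JSON object, which is what makes --output=json machine-consumable: a caller can decode one line at a time and branch on the event type (here io.k8s.sigs.minikube.error with exitcode 56). A minimal decoder sketch follows; the struct shape is inferred from the log lines above, not taken from minikube's own types:

	package main

	import (
		"encoding/json"
		"fmt"
	)

	// event mirrors only the fields visible in the log lines; hypothetical type.
	type event struct {
		SpecVersion string            `json:"specversion"`
		ID          string            `json:"id"`
		Source      string            `json:"source"`
		Type        string            `json:"type"`
		Data        map[string]string `json:"data"`
	}

	func main() {
		line := `{"specversion":"1.0","id":"x","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","data":{"exitcode":"56","message":"example"}}`
		var ev event
		if err := json.Unmarshal([]byte(line), &ev); err != nil {
			panic(err)
		}
		fmt.Println(ev.Type, ev.Data["exitcode"]) // io.k8s.sigs.minikube.error 56
	}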

                                                
                                    
TestKicCustomNetwork/create_custom_network (23.7s)

=== RUN   TestKicCustomNetwork/create_custom_network
kic_custom_network_test.go:57: (dbg) Run:  out/minikube-linux-amd64 start -p docker-network-479915 --network=
E0917 17:24:35.031990   18778 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19662-12004/.minikube/profiles/addons-163060/client.crt: no such file or directory" logger="UnhandledError"
kic_custom_network_test.go:57: (dbg) Done: out/minikube-linux-amd64 start -p docker-network-479915 --network=: (21.627545343s)
kic_custom_network_test.go:150: (dbg) Run:  docker network ls --format {{.Name}}
helpers_test.go:175: Cleaning up "docker-network-479915" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p docker-network-479915
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p docker-network-479915: (2.050608671s)
--- PASS: TestKicCustomNetwork/create_custom_network (23.70s)

                                                
                                    
TestKicCustomNetwork/use_default_bridge_network (25.55s)

=== RUN   TestKicCustomNetwork/use_default_bridge_network
kic_custom_network_test.go:57: (dbg) Run:  out/minikube-linux-amd64 start -p docker-network-053583 --network=bridge
kic_custom_network_test.go:57: (dbg) Done: out/minikube-linux-amd64 start -p docker-network-053583 --network=bridge: (23.631659424s)
kic_custom_network_test.go:150: (dbg) Run:  docker network ls --format {{.Name}}
helpers_test.go:175: Cleaning up "docker-network-053583" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p docker-network-053583
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p docker-network-053583: (1.900242442s)
--- PASS: TestKicCustomNetwork/use_default_bridge_network (25.55s)

                                                
                                    
TestKicExistingNetwork (24.69s)

=== RUN   TestKicExistingNetwork
kic_custom_network_test.go:150: (dbg) Run:  docker network ls --format {{.Name}}
kic_custom_network_test.go:93: (dbg) Run:  out/minikube-linux-amd64 start -p existing-network-261547 --network=existing-network
kic_custom_network_test.go:93: (dbg) Done: out/minikube-linux-amd64 start -p existing-network-261547 --network=existing-network: (22.735587916s)
helpers_test.go:175: Cleaning up "existing-network-261547" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p existing-network-261547
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p existing-network-261547: (1.811560262s)
--- PASS: TestKicExistingNetwork (24.69s)

                                                
                                    
TestKicCustomSubnet (23.33s)

=== RUN   TestKicCustomSubnet
kic_custom_network_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p custom-subnet-772181 --subnet=192.168.60.0/24
kic_custom_network_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p custom-subnet-772181 --subnet=192.168.60.0/24: (21.324648384s)
kic_custom_network_test.go:161: (dbg) Run:  docker network inspect custom-subnet-772181 --format "{{(index .IPAM.Config 0).Subnet}}"
helpers_test.go:175: Cleaning up "custom-subnet-772181" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p custom-subnet-772181
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p custom-subnet-772181: (1.984876882s)
--- PASS: TestKicCustomSubnet (23.33s)
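
The inspect format string above, {{(index .IPAM.Config 0).Subnet}}, indexes into the network's first IPAM configuration block to verify that the requested --subnet took effect. The sketch below demonstrates the same template mechanics against stub data instead of a live docker call; the types are hypothetical stand-ins covering only the fields the format string touches:

	package main

	import (
		"os"
		"text/template"
	)

	// Minimal stand-ins for the fields the format string reads.
	type ipamConfig struct{ Subnet string }
	type network struct {
		IPAM struct{ Config []ipamConfig }
	}

	func main() {
		var n network
		n.IPAM.Config = []ipamConfig{{Subnet: "192.168.60.0/24"}}

		t := template.Must(template.New("subnet").Parse(`{{(index .IPAM.Config 0).Subnet}}`))
		if err := t.Execute(os.Stdout, n); err != nil { // prints 192.168.60.0/24
			panic(err)
		}
	}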

                                                
                                    
TestKicStaticIP (22.96s)

=== RUN   TestKicStaticIP
kic_custom_network_test.go:132: (dbg) Run:  out/minikube-linux-amd64 start -p static-ip-751048 --static-ip=192.168.200.200
kic_custom_network_test.go:132: (dbg) Done: out/minikube-linux-amd64 start -p static-ip-751048 --static-ip=192.168.200.200: (20.887986536s)
kic_custom_network_test.go:138: (dbg) Run:  out/minikube-linux-amd64 -p static-ip-751048 ip
helpers_test.go:175: Cleaning up "static-ip-751048" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p static-ip-751048
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p static-ip-751048: (1.95010181s)
--- PASS: TestKicStaticIP (22.96s)

                                                
                                    
TestMainNoArgs (0.04s)

=== RUN   TestMainNoArgs
main_test.go:68: (dbg) Run:  out/minikube-linux-amd64
--- PASS: TestMainNoArgs (0.04s)

                                                
                                    
TestMinikubeProfile (49.35s)

=== RUN   TestMinikubeProfile
minikube_profile_test.go:44: (dbg) Run:  out/minikube-linux-amd64 start -p first-428842 --driver=docker  --container-runtime=docker
minikube_profile_test.go:44: (dbg) Done: out/minikube-linux-amd64 start -p first-428842 --driver=docker  --container-runtime=docker: (20.488566838s)
minikube_profile_test.go:44: (dbg) Run:  out/minikube-linux-amd64 start -p second-444232 --driver=docker  --container-runtime=docker
E0917 17:27:05.683032   18778 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19662-12004/.minikube/profiles/functional-772451/client.crt: no such file or directory" logger="UnhandledError"
minikube_profile_test.go:44: (dbg) Done: out/minikube-linux-amd64 start -p second-444232 --driver=docker  --container-runtime=docker: (23.778439365s)
minikube_profile_test.go:51: (dbg) Run:  out/minikube-linux-amd64 profile first-428842
minikube_profile_test.go:55: (dbg) Run:  out/minikube-linux-amd64 profile list -ojson
minikube_profile_test.go:51: (dbg) Run:  out/minikube-linux-amd64 profile second-444232
minikube_profile_test.go:55: (dbg) Run:  out/minikube-linux-amd64 profile list -ojson
helpers_test.go:175: Cleaning up "second-444232" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p second-444232
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p second-444232: (2.030865287s)
helpers_test.go:175: Cleaning up "first-428842" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p first-428842
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p first-428842: (2.00441473s)
--- PASS: TestMinikubeProfile (49.35s)

                                                
                                    
TestMountStart/serial/StartWithMountFirst (7.16s)

=== RUN   TestMountStart/serial/StartWithMountFirst
mount_start_test.go:98: (dbg) Run:  out/minikube-linux-amd64 start -p mount-start-1-505982 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=docker  --container-runtime=docker
mount_start_test.go:98: (dbg) Done: out/minikube-linux-amd64 start -p mount-start-1-505982 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=docker  --container-runtime=docker: (6.163636103s)
--- PASS: TestMountStart/serial/StartWithMountFirst (7.16s)

                                                
                                    
TestMountStart/serial/VerifyMountFirst (0.24s)

=== RUN   TestMountStart/serial/VerifyMountFirst
mount_start_test.go:114: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-1-505982 ssh -- ls /minikube-host
--- PASS: TestMountStart/serial/VerifyMountFirst (0.24s)

                                                
                                    
TestMountStart/serial/StartWithMountSecond (7.18s)

=== RUN   TestMountStart/serial/StartWithMountSecond
mount_start_test.go:98: (dbg) Run:  out/minikube-linux-amd64 start -p mount-start-2-517057 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46465 --mount-uid 0 --no-kubernetes --driver=docker  --container-runtime=docker
mount_start_test.go:98: (dbg) Done: out/minikube-linux-amd64 start -p mount-start-2-517057 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46465 --mount-uid 0 --no-kubernetes --driver=docker  --container-runtime=docker: (6.182649749s)
--- PASS: TestMountStart/serial/StartWithMountSecond (7.18s)

                                                
                                    
TestMountStart/serial/VerifyMountSecond (0.24s)

=== RUN   TestMountStart/serial/VerifyMountSecond
mount_start_test.go:114: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-517057 ssh -- ls /minikube-host
--- PASS: TestMountStart/serial/VerifyMountSecond (0.24s)

                                                
                                    
TestMountStart/serial/DeleteFirst (1.43s)

=== RUN   TestMountStart/serial/DeleteFirst
pause_test.go:132: (dbg) Run:  out/minikube-linux-amd64 delete -p mount-start-1-505982 --alsologtostderr -v=5
pause_test.go:132: (dbg) Done: out/minikube-linux-amd64 delete -p mount-start-1-505982 --alsologtostderr -v=5: (1.434772262s)
--- PASS: TestMountStart/serial/DeleteFirst (1.43s)

                                                
                                    
TestMountStart/serial/VerifyMountPostDelete (0.23s)

=== RUN   TestMountStart/serial/VerifyMountPostDelete
mount_start_test.go:114: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-517057 ssh -- ls /minikube-host
--- PASS: TestMountStart/serial/VerifyMountPostDelete (0.23s)

                                                
                                    
TestMountStart/serial/Stop (1.17s)

=== RUN   TestMountStart/serial/Stop
mount_start_test.go:155: (dbg) Run:  out/minikube-linux-amd64 stop -p mount-start-2-517057
mount_start_test.go:155: (dbg) Done: out/minikube-linux-amd64 stop -p mount-start-2-517057: (1.170169593s)
--- PASS: TestMountStart/serial/Stop (1.17s)

                                                
                                    
TestMountStart/serial/RestartStopped (8.36s)

=== RUN   TestMountStart/serial/RestartStopped
mount_start_test.go:166: (dbg) Run:  out/minikube-linux-amd64 start -p mount-start-2-517057
mount_start_test.go:166: (dbg) Done: out/minikube-linux-amd64 start -p mount-start-2-517057: (7.358062391s)
--- PASS: TestMountStart/serial/RestartStopped (8.36s)

                                                
                                    
TestMountStart/serial/VerifyMountPostStop (0.24s)

=== RUN   TestMountStart/serial/VerifyMountPostStop
mount_start_test.go:114: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-517057 ssh -- ls /minikube-host
--- PASS: TestMountStart/serial/VerifyMountPostStop (0.24s)

                                                
                                    
TestMultiNode/serial/FreshStart2Nodes (57.64s)

=== RUN   TestMultiNode/serial/FreshStart2Nodes
multinode_test.go:96: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-245754 --wait=true --memory=2200 --nodes=2 -v=8 --alsologtostderr --driver=docker  --container-runtime=docker
multinode_test.go:96: (dbg) Done: out/minikube-linux-amd64 start -p multinode-245754 --wait=true --memory=2200 --nodes=2 -v=8 --alsologtostderr --driver=docker  --container-runtime=docker: (57.202452471s)
multinode_test.go:102: (dbg) Run:  out/minikube-linux-amd64 -p multinode-245754 status --alsologtostderr
--- PASS: TestMultiNode/serial/FreshStart2Nodes (57.64s)

                                                
                                    
TestMultiNode/serial/DeployApp2Nodes (41.03s)

=== RUN   TestMultiNode/serial/DeployApp2Nodes
multinode_test.go:493: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-245754 -- apply -f ./testdata/multinodes/multinode-pod-dns-test.yaml
multinode_test.go:498: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-245754 -- rollout status deployment/busybox
multinode_test.go:498: (dbg) Done: out/minikube-linux-amd64 kubectl -p multinode-245754 -- rollout status deployment/busybox: (3.519692504s)
multinode_test.go:505: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-245754 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:514: expected 2 Pod IPs but got 1 (may be temporary), output: "\n-- stdout --\n\t'10.244.0.3'\n\n-- /stdout --"
multinode_test.go:505: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-245754 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:514: expected 2 Pod IPs but got 1 (may be temporary), output: "\n-- stdout --\n\t'10.244.0.3'\n\n-- /stdout --"
multinode_test.go:505: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-245754 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:514: expected 2 Pod IPs but got 1 (may be temporary), output: "\n-- stdout --\n\t'10.244.0.3'\n\n-- /stdout --"
multinode_test.go:505: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-245754 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:514: expected 2 Pod IPs but got 1 (may be temporary), output: "\n-- stdout --\n\t'10.244.0.3'\n\n-- /stdout --"
multinode_test.go:505: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-245754 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:514: expected 2 Pod IPs but got 1 (may be temporary), output: "\n-- stdout --\n\t'10.244.0.3'\n\n-- /stdout --"
multinode_test.go:505: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-245754 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:514: expected 2 Pod IPs but got 1 (may be temporary), output: "\n-- stdout --\n\t'10.244.0.3'\n\n-- /stdout --"
multinode_test.go:505: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-245754 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:514: expected 2 Pod IPs but got 1 (may be temporary), output: "\n-- stdout --\n\t'10.244.0.3'\n\n-- /stdout --"
multinode_test.go:505: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-245754 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:528: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-245754 -- get pods -o jsonpath='{.items[*].metadata.name}'
multinode_test.go:536: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-245754 -- exec busybox-7dff88458-lddbq -- nslookup kubernetes.io
multinode_test.go:536: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-245754 -- exec busybox-7dff88458-vk7t6 -- nslookup kubernetes.io
multinode_test.go:546: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-245754 -- exec busybox-7dff88458-lddbq -- nslookup kubernetes.default
multinode_test.go:546: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-245754 -- exec busybox-7dff88458-vk7t6 -- nslookup kubernetes.default
multinode_test.go:554: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-245754 -- exec busybox-7dff88458-lddbq -- nslookup kubernetes.default.svc.cluster.local
multinode_test.go:554: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-245754 -- exec busybox-7dff88458-vk7t6 -- nslookup kubernetes.default.svc.cluster.local
--- PASS: TestMultiNode/serial/DeployApp2Nodes (41.03s)
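
The run of "expected 2 Pod IPs but got 1 (may be temporary)" lines above is a single poll loop, not eight independent failures: the test re-queries pod IPs until the second busybox replica is scheduled on the other node, then moves on to the DNS checks. A hypothetical helper in the same spirit (the name, interval, and fake getIPs data are illustrative only; the real test shells out to kubectl):

	package main

	import (
		"context"
		"fmt"
		"time"
	)

	// waitForPodIPs polls getIPs until it returns at least `want` addresses
	// or the context expires.
	func waitForPodIPs(ctx context.Context, want int, getIPs func() ([]string, error)) ([]string, error) {
		tick := time.NewTicker(2 * time.Second)
		defer tick.Stop()
		for {
			if ips, err := getIPs(); err == nil && len(ips) >= want {
				return ips, nil
			}
			select {
			case <-ctx.Done():
				return nil, fmt.Errorf("timed out waiting for %d pod IPs: %w", want, ctx.Err())
			case <-tick.C:
			}
		}
	}

	func main() {
		ctx, cancel := context.WithTimeout(context.Background(), 10*time.Second)
		defer cancel()
		calls := 0
		ips, err := waitForPodIPs(ctx, 2, func() ([]string, error) {
			calls++
			if calls < 3 {
				return []string{"10.244.0.3"}, nil // second replica not ready yet
			}
			return []string{"10.244.0.3", "10.244.1.2"}, nil
		})
		fmt.Println(ips, err)
	}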

                                                
                                    
TestMultiNode/serial/PingHostFrom2Pods (0.71s)

=== RUN   TestMultiNode/serial/PingHostFrom2Pods
multinode_test.go:564: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-245754 -- get pods -o jsonpath='{.items[*].metadata.name}'
multinode_test.go:572: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-245754 -- exec busybox-7dff88458-lddbq -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
multinode_test.go:583: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-245754 -- exec busybox-7dff88458-lddbq -- sh -c "ping -c 1 192.168.67.1"
multinode_test.go:572: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-245754 -- exec busybox-7dff88458-vk7t6 -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
multinode_test.go:583: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-245754 -- exec busybox-7dff88458-vk7t6 -- sh -c "ping -c 1 192.168.67.1"
--- PASS: TestMultiNode/serial/PingHostFrom2Pods (0.71s)
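
The shell pipeline above, nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3, takes the fifth line of busybox's nslookup output and extracts its third space-separated field, i.e. the host gateway address that the follow-up ping targets. A small Go equivalent of that extraction; the sample output layout is an assumption about the pinned busybox image, so treat this as illustrative:

	package main

	import (
		"fmt"
		"strings"
	)

	// hostIP mimics `awk 'NR==5' | cut -d' ' -f3`: line 5, field 3 when
	// splitting on single spaces. Deliberately fragile; the test can rely
	// on one pinned busybox image producing a fixed layout.
	func hostIP(nslookupOut string) (string, bool) {
		lines := strings.Split(nslookupOut, "\n")
		if len(lines) < 5 {
			return "", false
		}
		fields := strings.Split(lines[4], " ")
		if len(fields) < 3 {
			return "", false
		}
		return fields[2], true
	}

	func main() {
		out := "Server:    10.96.0.10\nAddress 1: 10.96.0.10\n\nName:      host.minikube.internal\nAddress 1: 192.168.67.1 host.minikube.internal\n"
		ip, ok := hostIP(out)
		fmt.Println(ip, ok) // 192.168.67.1 true
	}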

                                                
                                    
TestMultiNode/serial/AddNode (14.8s)

=== RUN   TestMultiNode/serial/AddNode
multinode_test.go:121: (dbg) Run:  out/minikube-linux-amd64 node add -p multinode-245754 -v 3 --alsologtostderr
E0917 17:29:35.032526   18778 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19662-12004/.minikube/profiles/addons-163060/client.crt: no such file or directory" logger="UnhandledError"
multinode_test.go:121: (dbg) Done: out/minikube-linux-amd64 node add -p multinode-245754 -v 3 --alsologtostderr: (14.118391424s)
multinode_test.go:127: (dbg) Run:  out/minikube-linux-amd64 -p multinode-245754 status --alsologtostderr
--- PASS: TestMultiNode/serial/AddNode (14.80s)

                                                
                                    
TestMultiNode/serial/MultiNodeLabels (0.07s)

=== RUN   TestMultiNode/serial/MultiNodeLabels
multinode_test.go:221: (dbg) Run:  kubectl --context multinode-245754 get nodes -o "jsonpath=[{range .items[*]}{.metadata.labels},{end}]"
--- PASS: TestMultiNode/serial/MultiNodeLabels (0.07s)

                                                
                                    
TestMultiNode/serial/ProfileList (0.44s)

=== RUN   TestMultiNode/serial/ProfileList
multinode_test.go:143: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiNode/serial/ProfileList (0.44s)

                                                
                                    
TestMultiNode/serial/CopyFile (8.83s)

=== RUN   TestMultiNode/serial/CopyFile
multinode_test.go:184: (dbg) Run:  out/minikube-linux-amd64 -p multinode-245754 status --output json --alsologtostderr
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-245754 cp testdata/cp-test.txt multinode-245754:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-245754 ssh -n multinode-245754 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-245754 cp multinode-245754:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile2540570779/001/cp-test_multinode-245754.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-245754 ssh -n multinode-245754 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-245754 cp multinode-245754:/home/docker/cp-test.txt multinode-245754-m02:/home/docker/cp-test_multinode-245754_multinode-245754-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-245754 ssh -n multinode-245754 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-245754 ssh -n multinode-245754-m02 "sudo cat /home/docker/cp-test_multinode-245754_multinode-245754-m02.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-245754 cp multinode-245754:/home/docker/cp-test.txt multinode-245754-m03:/home/docker/cp-test_multinode-245754_multinode-245754-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-245754 ssh -n multinode-245754 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-245754 ssh -n multinode-245754-m03 "sudo cat /home/docker/cp-test_multinode-245754_multinode-245754-m03.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-245754 cp testdata/cp-test.txt multinode-245754-m02:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-245754 ssh -n multinode-245754-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-245754 cp multinode-245754-m02:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile2540570779/001/cp-test_multinode-245754-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-245754 ssh -n multinode-245754-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-245754 cp multinode-245754-m02:/home/docker/cp-test.txt multinode-245754:/home/docker/cp-test_multinode-245754-m02_multinode-245754.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-245754 ssh -n multinode-245754-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-245754 ssh -n multinode-245754 "sudo cat /home/docker/cp-test_multinode-245754-m02_multinode-245754.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-245754 cp multinode-245754-m02:/home/docker/cp-test.txt multinode-245754-m03:/home/docker/cp-test_multinode-245754-m02_multinode-245754-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-245754 ssh -n multinode-245754-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-245754 ssh -n multinode-245754-m03 "sudo cat /home/docker/cp-test_multinode-245754-m02_multinode-245754-m03.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-245754 cp testdata/cp-test.txt multinode-245754-m03:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-245754 ssh -n multinode-245754-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-245754 cp multinode-245754-m03:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile2540570779/001/cp-test_multinode-245754-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-245754 ssh -n multinode-245754-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-245754 cp multinode-245754-m03:/home/docker/cp-test.txt multinode-245754:/home/docker/cp-test_multinode-245754-m03_multinode-245754.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-245754 ssh -n multinode-245754-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-245754 ssh -n multinode-245754 "sudo cat /home/docker/cp-test_multinode-245754-m03_multinode-245754.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-245754 cp multinode-245754-m03:/home/docker/cp-test.txt multinode-245754-m02:/home/docker/cp-test_multinode-245754-m03_multinode-245754-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-245754 ssh -n multinode-245754-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-245754 ssh -n multinode-245754-m02 "sudo cat /home/docker/cp-test_multinode-245754-m03_multinode-245754-m02.txt"
--- PASS: TestMultiNode/serial/CopyFile (8.83s)

                                                
                                    
TestMultiNode/serial/StopNode (2.08s)

=== RUN   TestMultiNode/serial/StopNode
multinode_test.go:248: (dbg) Run:  out/minikube-linux-amd64 -p multinode-245754 node stop m03
multinode_test.go:248: (dbg) Done: out/minikube-linux-amd64 -p multinode-245754 node stop m03: (1.17224922s)
multinode_test.go:254: (dbg) Run:  out/minikube-linux-amd64 -p multinode-245754 status
multinode_test.go:254: (dbg) Non-zero exit: out/minikube-linux-amd64 -p multinode-245754 status: exit status 7 (446.574366ms)

-- stdout --
	multinode-245754
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	multinode-245754-m02
	type: Worker
	host: Running
	kubelet: Running
	
	multinode-245754-m03
	type: Worker
	host: Stopped
	kubelet: Stopped
	

-- /stdout --
multinode_test.go:261: (dbg) Run:  out/minikube-linux-amd64 -p multinode-245754 status --alsologtostderr
multinode_test.go:261: (dbg) Non-zero exit: out/minikube-linux-amd64 -p multinode-245754 status --alsologtostderr: exit status 7 (459.831056ms)

-- stdout --
	multinode-245754
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	multinode-245754-m02
	type: Worker
	host: Running
	kubelet: Running
	
	multinode-245754-m03
	type: Worker
	host: Stopped
	kubelet: Stopped
	

-- /stdout --
** stderr ** 
	I0917 17:29:47.612834  221456 out.go:345] Setting OutFile to fd 1 ...
	I0917 17:29:47.612931  221456 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0917 17:29:47.612938  221456 out.go:358] Setting ErrFile to fd 2...
	I0917 17:29:47.612942  221456 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0917 17:29:47.613119  221456 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19662-12004/.minikube/bin
	I0917 17:29:47.613283  221456 out.go:352] Setting JSON to false
	I0917 17:29:47.613311  221456 mustload.go:65] Loading cluster: multinode-245754
	I0917 17:29:47.613347  221456 notify.go:220] Checking for updates...
	I0917 17:29:47.613708  221456 config.go:182] Loaded profile config "multinode-245754": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I0917 17:29:47.613722  221456 status.go:255] checking status of multinode-245754 ...
	I0917 17:29:47.614089  221456 cli_runner.go:164] Run: docker container inspect multinode-245754 --format={{.State.Status}}
	I0917 17:29:47.636708  221456 status.go:330] multinode-245754 host status = "Running" (err=<nil>)
	I0917 17:29:47.636751  221456 host.go:66] Checking if "multinode-245754" exists ...
	I0917 17:29:47.637028  221456 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-245754
	I0917 17:29:47.654178  221456 host.go:66] Checking if "multinode-245754" exists ...
	I0917 17:29:47.654446  221456 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0917 17:29:47.654492  221456 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-245754
	I0917 17:29:47.671354  221456 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32908 SSHKeyPath:/home/jenkins/minikube-integration/19662-12004/.minikube/machines/multinode-245754/id_rsa Username:docker}
	I0917 17:29:47.763806  221456 ssh_runner.go:195] Run: systemctl --version
	I0917 17:29:47.767529  221456 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0917 17:29:47.777417  221456 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0917 17:29:47.823369  221456 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:3 ContainersRunning:2 ContainersPaused:0 ContainersStopped:1 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:41 OomKillDisable:true NGoroutines:62 SystemTime:2024-09-17 17:29:47.814555796 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1069-gcp OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33647943680 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-10 Labels:[] ExperimentalBuild:false ServerVersion:27.2.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:7f7fdf5fed64eb6a7caf99b3e12efcf9d60e311c Expected:7f7fdf5fed64eb6a7caf99b3e12efcf9d60e311c} RuncCommit:{ID:v1.1.14-0-g2c9f560 Expected:v1.1.14-0-g2c9f560} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.16.2] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.29.2] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I0917 17:29:47.824010  221456 kubeconfig.go:125] found "multinode-245754" server: "https://192.168.67.2:8443"
	I0917 17:29:47.824051  221456 api_server.go:166] Checking apiserver status ...
	I0917 17:29:47.824098  221456 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0917 17:29:47.834531  221456 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/2346/cgroup
	I0917 17:29:47.843057  221456 api_server.go:182] apiserver freezer: "7:freezer:/docker/c51bb5c243b35dade62c8988c884a15d6d7b26851e4cd859de1c9792f05a4085/kubepods/burstable/pod0c8418a5107e336e92bf9a940fae2f02/240154c06a23408a5409cef56cae83db12fd3d3b2301c67af6dbff15b343420a"
	I0917 17:29:47.843112  221456 ssh_runner.go:195] Run: sudo cat /sys/fs/cgroup/freezer/docker/c51bb5c243b35dade62c8988c884a15d6d7b26851e4cd859de1c9792f05a4085/kubepods/burstable/pod0c8418a5107e336e92bf9a940fae2f02/240154c06a23408a5409cef56cae83db12fd3d3b2301c67af6dbff15b343420a/freezer.state
	I0917 17:29:47.850435  221456 api_server.go:204] freezer state: "THAWED"
	I0917 17:29:47.850463  221456 api_server.go:253] Checking apiserver healthz at https://192.168.67.2:8443/healthz ...
	I0917 17:29:47.854899  221456 api_server.go:279] https://192.168.67.2:8443/healthz returned 200:
	ok
	I0917 17:29:47.854923  221456 status.go:422] multinode-245754 apiserver status = Running (err=<nil>)
	I0917 17:29:47.854937  221456 status.go:257] multinode-245754 status: &{Name:multinode-245754 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0917 17:29:47.854956  221456 status.go:255] checking status of multinode-245754-m02 ...
	I0917 17:29:47.855285  221456 cli_runner.go:164] Run: docker container inspect multinode-245754-m02 --format={{.State.Status}}
	I0917 17:29:47.872388  221456 status.go:330] multinode-245754-m02 host status = "Running" (err=<nil>)
	I0917 17:29:47.872413  221456 host.go:66] Checking if "multinode-245754-m02" exists ...
	I0917 17:29:47.872712  221456 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-245754-m02
	I0917 17:29:47.890832  221456 host.go:66] Checking if "multinode-245754-m02" exists ...
	I0917 17:29:47.891177  221456 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0917 17:29:47.891228  221456 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-245754-m02
	I0917 17:29:47.909147  221456 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32913 SSHKeyPath:/home/jenkins/minikube-integration/19662-12004/.minikube/machines/multinode-245754-m02/id_rsa Username:docker}
	I0917 17:29:48.003864  221456 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0917 17:29:48.014142  221456 status.go:257] multinode-245754-m02 status: &{Name:multinode-245754-m02 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}
	I0917 17:29:48.014183  221456 status.go:255] checking status of multinode-245754-m03 ...
	I0917 17:29:48.014438  221456 cli_runner.go:164] Run: docker container inspect multinode-245754-m03 --format={{.State.Status}}
	I0917 17:29:48.031643  221456 status.go:330] multinode-245754-m03 host status = "Stopped" (err=<nil>)
	I0917 17:29:48.031665  221456 status.go:343] host is not running, skipping remaining checks
	I0917 17:29:48.031677  221456 status.go:257] multinode-245754-m03 status: &{Name:multinode-245754-m03 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
--- PASS: TestMultiNode/serial/StopNode (2.08s)
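Note: the status checks above query per-node state; a minimal sketch of the same calls, reusing the profile name from this run (the `--format` go-template is an option the status command accepts, used elsewhere in this report):

	# full per-node breakdown, as in the stdout block above
	out/minikube-linux-amd64 status -p multinode-245754
	# or pull a single field with a go-template
	out/minikube-linux-amd64 status -p multinode-245754 --format='{{.Host}}'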

                                                
                                    
TestMultiNode/serial/StartAfterStop (9.67s)

                                                
                                                
=== RUN   TestMultiNode/serial/StartAfterStop
multinode_test.go:282: (dbg) Run:  out/minikube-linux-amd64 -p multinode-245754 node start m03 -v=7 --alsologtostderr
multinode_test.go:282: (dbg) Done: out/minikube-linux-amd64 -p multinode-245754 node start m03 -v=7 --alsologtostderr: (9.028052431s)
multinode_test.go:290: (dbg) Run:  out/minikube-linux-amd64 -p multinode-245754 status -v=7 --alsologtostderr
multinode_test.go:306: (dbg) Run:  kubectl get nodes
--- PASS: TestMultiNode/serial/StartAfterStop (9.67s)

                                                
                                    
TestMultiNode/serial/RestartKeepsNodes (93.35s)

                                                
                                                
=== RUN   TestMultiNode/serial/RestartKeepsNodes
multinode_test.go:314: (dbg) Run:  out/minikube-linux-amd64 node list -p multinode-245754
multinode_test.go:321: (dbg) Run:  out/minikube-linux-amd64 stop -p multinode-245754
multinode_test.go:321: (dbg) Done: out/minikube-linux-amd64 stop -p multinode-245754: (22.411757723s)
multinode_test.go:326: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-245754 --wait=true -v=8 --alsologtostderr
E0917 17:30:58.096596   18778 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19662-12004/.minikube/profiles/addons-163060/client.crt: no such file or directory" logger="UnhandledError"
multinode_test.go:326: (dbg) Done: out/minikube-linux-amd64 start -p multinode-245754 --wait=true -v=8 --alsologtostderr: (1m10.84712237s)
multinode_test.go:331: (dbg) Run:  out/minikube-linux-amd64 node list -p multinode-245754
--- PASS: TestMultiNode/serial/RestartKeepsNodes (93.35s)
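Note: condensing the restart sequence above into a sketch (same profile and flags as the run; the point being verified is that the node list survives a full stop/start):

	out/minikube-linux-amd64 node list -p multinode-245754
	out/minikube-linux-amd64 stop -p multinode-245754
	out/minikube-linux-amd64 start -p multinode-245754 --wait=true
	out/minikube-linux-amd64 node list -p multinode-245754   # same nodes as before the stop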

                                                
                                    
TestMultiNode/serial/DeleteNode (5.15s)

                                                
                                                
=== RUN   TestMultiNode/serial/DeleteNode
multinode_test.go:416: (dbg) Run:  out/minikube-linux-amd64 -p multinode-245754 node delete m03
multinode_test.go:416: (dbg) Done: out/minikube-linux-amd64 -p multinode-245754 node delete m03: (4.597786385s)
multinode_test.go:422: (dbg) Run:  out/minikube-linux-amd64 -p multinode-245754 status --alsologtostderr
multinode_test.go:436: (dbg) Run:  kubectl get nodes
multinode_test.go:444: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiNode/serial/DeleteNode (5.15s)

                                                
                                    
TestMultiNode/serial/StopMultiNode (21.25s)

                                                
                                                
=== RUN   TestMultiNode/serial/StopMultiNode
multinode_test.go:345: (dbg) Run:  out/minikube-linux-amd64 -p multinode-245754 stop
multinode_test.go:345: (dbg) Done: out/minikube-linux-amd64 -p multinode-245754 stop: (21.095600305s)
multinode_test.go:351: (dbg) Run:  out/minikube-linux-amd64 -p multinode-245754 status
multinode_test.go:351: (dbg) Non-zero exit: out/minikube-linux-amd64 -p multinode-245754 status: exit status 7 (76.859312ms)

                                                
                                                
-- stdout --
	multinode-245754
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	multinode-245754-m02
	type: Worker
	host: Stopped
	kubelet: Stopped
	

                                                
                                                
-- /stdout --
multinode_test.go:358: (dbg) Run:  out/minikube-linux-amd64 -p multinode-245754 status --alsologtostderr
multinode_test.go:358: (dbg) Non-zero exit: out/minikube-linux-amd64 -p multinode-245754 status --alsologtostderr: exit status 7 (76.405323ms)

                                                
                                                
-- stdout --
	multinode-245754
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	multinode-245754-m02
	type: Worker
	host: Stopped
	kubelet: Stopped
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0917 17:31:57.417999  236707 out.go:345] Setting OutFile to fd 1 ...
	I0917 17:31:57.418260  236707 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0917 17:31:57.418269  236707 out.go:358] Setting ErrFile to fd 2...
	I0917 17:31:57.418276  236707 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0917 17:31:57.418458  236707 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19662-12004/.minikube/bin
	I0917 17:31:57.418641  236707 out.go:352] Setting JSON to false
	I0917 17:31:57.418673  236707 mustload.go:65] Loading cluster: multinode-245754
	I0917 17:31:57.418764  236707 notify.go:220] Checking for updates...
	I0917 17:31:57.419164  236707 config.go:182] Loaded profile config "multinode-245754": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I0917 17:31:57.419182  236707 status.go:255] checking status of multinode-245754 ...
	I0917 17:31:57.419601  236707 cli_runner.go:164] Run: docker container inspect multinode-245754 --format={{.State.Status}}
	I0917 17:31:57.436684  236707 status.go:330] multinode-245754 host status = "Stopped" (err=<nil>)
	I0917 17:31:57.436703  236707 status.go:343] host is not running, skipping remaining checks
	I0917 17:31:57.436710  236707 status.go:257] multinode-245754 status: &{Name:multinode-245754 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0917 17:31:57.436735  236707 status.go:255] checking status of multinode-245754-m02 ...
	I0917 17:31:57.436962  236707 cli_runner.go:164] Run: docker container inspect multinode-245754-m02 --format={{.State.Status}}
	I0917 17:31:57.453474  236707 status.go:330] multinode-245754-m02 host status = "Stopped" (err=<nil>)
	I0917 17:31:57.453494  236707 status.go:343] host is not running, skipping remaining checks
	I0917 17:31:57.453500  236707 status.go:257] multinode-245754-m02 status: &{Name:multinode-245754-m02 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
--- PASS: TestMultiNode/serial/StopMultiNode (21.25s)
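Note: as both status calls above show, `status` deliberately exits 7 once the hosts are stopped, and the harness treats that as expected ("may be ok" elsewhere in this report). A sketch of checking the code from a shell:

	out/minikube-linux-amd64 -p multinode-245754 status
	echo "status exit code: $?"   # 7 when all hosts are stopped, per the run above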

                                                
                                    
TestMultiNode/serial/RestartMultiNode (48.83s)

                                                
                                                
=== RUN   TestMultiNode/serial/RestartMultiNode
multinode_test.go:376: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-245754 --wait=true -v=8 --alsologtostderr --driver=docker  --container-runtime=docker
E0917 17:32:05.683115   18778 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19662-12004/.minikube/profiles/functional-772451/client.crt: no such file or directory" logger="UnhandledError"
multinode_test.go:376: (dbg) Done: out/minikube-linux-amd64 start -p multinode-245754 --wait=true -v=8 --alsologtostderr --driver=docker  --container-runtime=docker: (48.265381684s)
multinode_test.go:382: (dbg) Run:  out/minikube-linux-amd64 -p multinode-245754 status --alsologtostderr
multinode_test.go:396: (dbg) Run:  kubectl get nodes
multinode_test.go:404: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiNode/serial/RestartMultiNode (48.83s)

                                                
                                    
TestMultiNode/serial/ValidateNameConflict (23.65s)

                                                
                                                
=== RUN   TestMultiNode/serial/ValidateNameConflict
multinode_test.go:455: (dbg) Run:  out/minikube-linux-amd64 node list -p multinode-245754
multinode_test.go:464: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-245754-m02 --driver=docker  --container-runtime=docker
multinode_test.go:464: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p multinode-245754-m02 --driver=docker  --container-runtime=docker: exit status 14 (62.900586ms)

                                                
                                                
-- stdout --
	* [multinode-245754-m02] minikube v1.34.0 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=19662
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/19662-12004/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/19662-12004/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	! Profile name 'multinode-245754-m02' is duplicated with machine name 'multinode-245754-m02' in profile 'multinode-245754'
	X Exiting due to MK_USAGE: Profile name should be unique

                                                
                                                
** /stderr **
multinode_test.go:472: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-245754-m03 --driver=docker  --container-runtime=docker
multinode_test.go:472: (dbg) Done: out/minikube-linux-amd64 start -p multinode-245754-m03 --driver=docker  --container-runtime=docker: (21.354797174s)
multinode_test.go:479: (dbg) Run:  out/minikube-linux-amd64 node add -p multinode-245754
multinode_test.go:479: (dbg) Non-zero exit: out/minikube-linux-amd64 node add -p multinode-245754: exit status 80 (263.22095ms)

                                                
                                                
-- stdout --
	* Adding node m03 to cluster multinode-245754 as [worker]
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to GUEST_NODE_ADD: failed to add node: Node multinode-245754-m03 already exists in multinode-245754-m03 profile
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_node_040ea7097fd6ed71e65be9a474587f81f0ccd21d_0.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
multinode_test.go:484: (dbg) Run:  out/minikube-linux-amd64 delete -p multinode-245754-m03
multinode_test.go:484: (dbg) Done: out/minikube-linux-amd64 delete -p multinode-245754-m03: (1.927335177s)
--- PASS: TestMultiNode/serial/ValidateNameConflict (23.65s)

                                                
                                    
TestPreload (104.87s)

                                                
                                                
=== RUN   TestPreload
preload_test.go:44: (dbg) Run:  out/minikube-linux-amd64 start -p test-preload-387145 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=docker --kubernetes-version=v1.24.4
E0917 17:33:28.746879   18778 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19662-12004/.minikube/profiles/functional-772451/client.crt: no such file or directory" logger="UnhandledError"
preload_test.go:44: (dbg) Done: out/minikube-linux-amd64 start -p test-preload-387145 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=docker --kubernetes-version=v1.24.4: (56.578950858s)
preload_test.go:52: (dbg) Run:  out/minikube-linux-amd64 -p test-preload-387145 image pull gcr.io/k8s-minikube/busybox
preload_test.go:52: (dbg) Done: out/minikube-linux-amd64 -p test-preload-387145 image pull gcr.io/k8s-minikube/busybox: (2.118364426s)
preload_test.go:58: (dbg) Run:  out/minikube-linux-amd64 stop -p test-preload-387145
preload_test.go:58: (dbg) Done: out/minikube-linux-amd64 stop -p test-preload-387145: (10.643106254s)
preload_test.go:66: (dbg) Run:  out/minikube-linux-amd64 start -p test-preload-387145 --memory=2200 --alsologtostderr -v=1 --wait=true --driver=docker  --container-runtime=docker
E0917 17:34:35.032698   18778 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19662-12004/.minikube/profiles/addons-163060/client.crt: no such file or directory" logger="UnhandledError"
preload_test.go:66: (dbg) Done: out/minikube-linux-amd64 start -p test-preload-387145 --memory=2200 --alsologtostderr -v=1 --wait=true --driver=docker  --container-runtime=docker: (33.170433591s)
preload_test.go:71: (dbg) Run:  out/minikube-linux-amd64 -p test-preload-387145 image list
helpers_test.go:175: Cleaning up "test-preload-387145" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p test-preload-387145
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p test-preload-387145: (2.155779075s)
--- PASS: TestPreload (104.87s)
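Note: the preload check above in sketch form: create the cluster without a preload tarball, pull an image, restart, and confirm the image is still present (flags and profile name taken from the run):

	out/minikube-linux-amd64 start -p test-preload-387145 --memory=2200 --preload=false --driver=docker --container-runtime=docker --kubernetes-version=v1.24.4
	out/minikube-linux-amd64 -p test-preload-387145 image pull gcr.io/k8s-minikube/busybox
	out/minikube-linux-amd64 stop -p test-preload-387145
	out/minikube-linux-amd64 start -p test-preload-387145 --memory=2200 --wait=true --driver=docker --container-runtime=docker
	out/minikube-linux-amd64 -p test-preload-387145 image list   # busybox should still be listed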

                                                
                                    
TestScheduledStopUnix (94.38s)

                                                
                                                
=== RUN   TestScheduledStopUnix
scheduled_stop_test.go:128: (dbg) Run:  out/minikube-linux-amd64 start -p scheduled-stop-484529 --memory=2048 --driver=docker  --container-runtime=docker
scheduled_stop_test.go:128: (dbg) Done: out/minikube-linux-amd64 start -p scheduled-stop-484529 --memory=2048 --driver=docker  --container-runtime=docker: (21.498681627s)
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-amd64 stop -p scheduled-stop-484529 --schedule 5m
scheduled_stop_test.go:191: (dbg) Run:  out/minikube-linux-amd64 status --format={{.TimeToStop}} -p scheduled-stop-484529 -n scheduled-stop-484529
scheduled_stop_test.go:169: signal error was:  <nil>
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-amd64 stop -p scheduled-stop-484529 --schedule 15s
scheduled_stop_test.go:169: signal error was:  os: process already finished
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-amd64 stop -p scheduled-stop-484529 --cancel-scheduled
scheduled_stop_test.go:176: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p scheduled-stop-484529 -n scheduled-stop-484529
scheduled_stop_test.go:205: (dbg) Run:  out/minikube-linux-amd64 status -p scheduled-stop-484529
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-amd64 stop -p scheduled-stop-484529 --schedule 15s
scheduled_stop_test.go:169: signal error was:  os: process already finished
scheduled_stop_test.go:205: (dbg) Run:  out/minikube-linux-amd64 status -p scheduled-stop-484529
scheduled_stop_test.go:205: (dbg) Non-zero exit: out/minikube-linux-amd64 status -p scheduled-stop-484529: exit status 7 (61.742623ms)

                                                
                                                
-- stdout --
	scheduled-stop-484529
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

                                                
                                                
-- /stdout --
scheduled_stop_test.go:176: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p scheduled-stop-484529 -n scheduled-stop-484529
scheduled_stop_test.go:176: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p scheduled-stop-484529 -n scheduled-stop-484529: exit status 7 (60.537421ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
scheduled_stop_test.go:176: status error: exit status 7 (may be ok)
helpers_test.go:175: Cleaning up "scheduled-stop-484529" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p scheduled-stop-484529
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p scheduled-stop-484529: (1.600274933s)
--- PASS: TestScheduledStopUnix (94.38s)
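Note: the scheduled-stop flow exercised above, as a standalone sketch (the profile, flags, and 5m/15s values are the test's own):

	out/minikube-linux-amd64 stop -p scheduled-stop-484529 --schedule 5m       # arm a stop 5 minutes out
	out/minikube-linux-amd64 status --format={{.TimeToStop}} -p scheduled-stop-484529
	out/minikube-linux-amd64 stop -p scheduled-stop-484529 --cancel-scheduled  # disarm it
	out/minikube-linux-amd64 stop -p scheduled-stop-484529 --schedule 15s      # re-arm; the host stops shortly after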

                                                
                                    
TestSkaffold (103.16s)

                                                
                                                
=== RUN   TestSkaffold
skaffold_test.go:59: (dbg) Run:  /tmp/skaffold.exe831227550 version
skaffold_test.go:63: skaffold version: v2.13.2
skaffold_test.go:66: (dbg) Run:  out/minikube-linux-amd64 start -p skaffold-388525 --memory=2600 --driver=docker  --container-runtime=docker
skaffold_test.go:66: (dbg) Done: out/minikube-linux-amd64 start -p skaffold-388525 --memory=2600 --driver=docker  --container-runtime=docker: (21.750126196s)
skaffold_test.go:86: copying out/minikube-linux-amd64 to /home/jenkins/workspace/Docker_Linux_integration/out/minikube
skaffold_test.go:105: (dbg) Run:  /tmp/skaffold.exe831227550 run --minikube-profile skaffold-388525 --kube-context skaffold-388525 --status-check=true --port-forward=false --interactive=false
E0917 17:37:05.682392   18778 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19662-12004/.minikube/profiles/functional-772451/client.crt: no such file or directory" logger="UnhandledError"
skaffold_test.go:105: (dbg) Done: /tmp/skaffold.exe831227550 run --minikube-profile skaffold-388525 --kube-context skaffold-388525 --status-check=true --port-forward=false --interactive=false: (1m4.447539537s)
skaffold_test.go:111: (dbg) TestSkaffold: waiting 1m0s for pods matching "app=leeroy-app" in namespace "default" ...
helpers_test.go:344: "leeroy-app-5cb69bf57c-rh8tf" [69c7616d-aefd-4528-bf96-3d0e23b78f15] Running
skaffold_test.go:111: (dbg) TestSkaffold: app=leeroy-app healthy within 6.003811357s
skaffold_test.go:114: (dbg) TestSkaffold: waiting 1m0s for pods matching "app=leeroy-web" in namespace "default" ...
helpers_test.go:344: "leeroy-web-6b47595d97-t5998" [ea258790-c210-4860-a40c-da8522df02c2] Running
skaffold_test.go:114: (dbg) TestSkaffold: app=leeroy-web healthy within 5.002980562s
helpers_test.go:175: Cleaning up "skaffold-388525" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p skaffold-388525
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p skaffold-388525: (2.699339313s)
--- PASS: TestSkaffold (103.16s)
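Note: the skaffold invocation above pins both the minikube profile and the kube-context so the deploy lands in the intended cluster. A sketch of the same call (substitute your own skaffold binary for the temp path used by this run):

	skaffold run --minikube-profile skaffold-388525 --kube-context skaffold-388525 --status-check=true --port-forward=false --interactive=false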

                                                
                                    
TestInsufficientStorage (12.55s)

                                                
                                                
=== RUN   TestInsufficientStorage
status_test.go:50: (dbg) Run:  out/minikube-linux-amd64 start -p insufficient-storage-229002 --memory=2048 --output=json --wait=true --driver=docker  --container-runtime=docker
status_test.go:50: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p insufficient-storage-229002 --memory=2048 --output=json --wait=true --driver=docker  --container-runtime=docker: exit status 26 (10.431237836s)

                                                
                                                
-- stdout --
	{"specversion":"1.0","id":"63cf8e00-0993-4a56-967b-bd0ad1d5725a","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"0","message":"[insufficient-storage-229002] minikube v1.34.0 on Ubuntu 20.04 (kvm/amd64)","name":"Initial Minikube Setup","totalsteps":"19"}}
	{"specversion":"1.0","id":"d9e3b66b-224e-4ff2-94d5-5fea8df06976","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_LOCATION=19662"}}
	{"specversion":"1.0","id":"13dd5517-20a4-4d61-a1ef-365e503686a5","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true"}}
	{"specversion":"1.0","id":"12207c0d-c8a3-4002-9d68-27a659ecbea3","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"KUBECONFIG=/home/jenkins/minikube-integration/19662-12004/kubeconfig"}}
	{"specversion":"1.0","id":"b98dcd2b-3c3b-4c2b-b527-c6c07c7a26e6","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_HOME=/home/jenkins/minikube-integration/19662-12004/.minikube"}}
	{"specversion":"1.0","id":"f4e3deeb-255b-412d-8822-dd2ac3d93f95","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_BIN=out/minikube-linux-amd64"}}
	{"specversion":"1.0","id":"dbdeb706-e2af-4eee-908e-7d2d170661dd","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_FORCE_SYSTEMD="}}
	{"specversion":"1.0","id":"87bbd4e3-4e08-4db7-9472-a8d70cdaf178","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_TEST_STORAGE_CAPACITY=100"}}
	{"specversion":"1.0","id":"d5a4d5d9-5bf2-4d10-a43f-f20174b0fe8d","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_TEST_AVAILABLE_STORAGE=19"}}
	{"specversion":"1.0","id":"36fa73f4-070c-4ac9-a780-d6fa0a445e1e","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"1","message":"Using the docker driver based on user configuration","name":"Selecting Driver","totalsteps":"19"}}
	{"specversion":"1.0","id":"24a14754-d1ed-4920-9cb1-34dcf00dd01d","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"Using Docker driver with root privileges"}}
	{"specversion":"1.0","id":"567664e5-03e3-409c-9d77-249e1a81828d","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"3","message":"Starting \"insufficient-storage-229002\" primary control-plane node in \"insufficient-storage-229002\" cluster","name":"Starting Node","totalsteps":"19"}}
	{"specversion":"1.0","id":"b10af996-b421-43d9-9d65-668945bdda81","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"5","message":"Pulling base image v0.0.45-1726589491-19662 ...","name":"Pulling Base Image","totalsteps":"19"}}
	{"specversion":"1.0","id":"61db926a-15cc-47d3-ab3d-7442727f3ed2","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"8","message":"Creating docker container (CPUs=2, Memory=2048MB) ...","name":"Creating Container","totalsteps":"19"}}
	{"specversion":"1.0","id":"41dd016c-74bd-44ec-bc10-aba71ae49d09","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"advice":"Try one or more of the following to free up space on the device:\n\t\n\t\t\t1. Run \"docker system prune\" to remove unused Docker data (optionally with \"-a\")\n\t\t\t2. Increase the storage allocated to Docker for Desktop by clicking on:\n\t\t\t\tDocker icon \u003e Preferences \u003e Resources \u003e Disk Image Size\n\t\t\t3. Run \"minikube ssh -- docker system prune\" if using the Docker container runtime","exitcode":"26","issues":"https://github.com/kubernetes/minikube/issues/9024","message":"Docker is out of disk space! (/var is at 100% of capacity). You can pass '--force' to skip this check.","name":"RSRC_DOCKER_STORAGE","url":""}}

                                                
                                                
-- /stdout --
status_test.go:76: (dbg) Run:  out/minikube-linux-amd64 status -p insufficient-storage-229002 --output=json --layout=cluster
status_test.go:76: (dbg) Non-zero exit: out/minikube-linux-amd64 status -p insufficient-storage-229002 --output=json --layout=cluster: exit status 7 (250.750616ms)

                                                
                                                
-- stdout --
	{"Name":"insufficient-storage-229002","StatusCode":507,"StatusName":"InsufficientStorage","StatusDetail":"/var is almost out of disk space","Step":"Creating Container","StepDetail":"Creating docker container (CPUs=2, Memory=2048MB) ...","BinaryVersion":"v1.34.0","Components":{"kubeconfig":{"Name":"kubeconfig","StatusCode":500,"StatusName":"Error"}},"Nodes":[{"Name":"insufficient-storage-229002","StatusCode":507,"StatusName":"InsufficientStorage","Components":{"apiserver":{"Name":"apiserver","StatusCode":405,"StatusName":"Stopped"},"kubelet":{"Name":"kubelet","StatusCode":405,"StatusName":"Stopped"}}}]}

                                                
                                                
-- /stdout --
** stderr ** 
	E0917 17:38:26.748377  276555 status.go:417] kubeconfig endpoint: get endpoint: "insufficient-storage-229002" does not appear in /home/jenkins/minikube-integration/19662-12004/kubeconfig

                                                
                                                
** /stderr **
status_test.go:76: (dbg) Run:  out/minikube-linux-amd64 status -p insufficient-storage-229002 --output=json --layout=cluster
status_test.go:76: (dbg) Non-zero exit: out/minikube-linux-amd64 status -p insufficient-storage-229002 --output=json --layout=cluster: exit status 7 (251.845487ms)

                                                
                                                
-- stdout --
	{"Name":"insufficient-storage-229002","StatusCode":507,"StatusName":"InsufficientStorage","StatusDetail":"/var is almost out of disk space","BinaryVersion":"v1.34.0","Components":{"kubeconfig":{"Name":"kubeconfig","StatusCode":500,"StatusName":"Error"}},"Nodes":[{"Name":"insufficient-storage-229002","StatusCode":507,"StatusName":"InsufficientStorage","Components":{"apiserver":{"Name":"apiserver","StatusCode":405,"StatusName":"Stopped"},"kubelet":{"Name":"kubelet","StatusCode":405,"StatusName":"Stopped"}}}]}

                                                
                                                
-- /stdout --
** stderr ** 
	E0917 17:38:26.999971  276655 status.go:417] kubeconfig endpoint: get endpoint: "insufficient-storage-229002" does not appear in /home/jenkins/minikube-integration/19662-12004/kubeconfig
	E0917 17:38:27.009922  276655 status.go:560] unable to read event log: stat: stat /home/jenkins/minikube-integration/19662-12004/.minikube/profiles/insufficient-storage-229002/events.json: no such file or directory

                                                
                                                
** /stderr **
helpers_test.go:175: Cleaning up "insufficient-storage-229002" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p insufficient-storage-229002
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p insufficient-storage-229002: (1.611284702s)
--- PASS: TestInsufficientStorage (12.55s)
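Note: the RSRC_DOCKER_STORAGE advice embedded in the JSON above, pulled out as runnable commands (these mirror the error text; add -a to prune more aggressively):

	docker system prune                                    # free unused Docker data on the host
	out/minikube-linux-amd64 ssh -- docker system prune    # same, inside the node (Docker container runtime)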

                                                
                                    
TestRunningBinaryUpgrade (81.17s)

                                                
                                                
=== RUN   TestRunningBinaryUpgrade
=== PAUSE TestRunningBinaryUpgrade

                                                
                                                

                                                
                                                
=== CONT  TestRunningBinaryUpgrade
version_upgrade_test.go:120: (dbg) Run:  /tmp/minikube-v1.26.0.4250744454 start -p running-upgrade-469471 --memory=2200 --vm-driver=docker  --container-runtime=docker
version_upgrade_test.go:120: (dbg) Done: /tmp/minikube-v1.26.0.4250744454 start -p running-upgrade-469471 --memory=2200 --vm-driver=docker  --container-runtime=docker: (31.711320378s)
version_upgrade_test.go:130: (dbg) Run:  out/minikube-linux-amd64 start -p running-upgrade-469471 --memory=2200 --alsologtostderr -v=1 --driver=docker  --container-runtime=docker
E0917 17:43:22.854068   18778 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19662-12004/.minikube/profiles/skaffold-388525/client.crt: no such file or directory" logger="UnhandledError"
version_upgrade_test.go:130: (dbg) Done: out/minikube-linux-amd64 start -p running-upgrade-469471 --memory=2200 --alsologtostderr -v=1 --driver=docker  --container-runtime=docker: (44.626574416s)
helpers_test.go:175: Cleaning up "running-upgrade-469471" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p running-upgrade-469471
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p running-upgrade-469471: (2.059365095s)
--- PASS: TestRunningBinaryUpgrade (81.17s)

                                                
                                    
TestKubernetesUpgrade (333.27s)

                                                
                                                
=== RUN   TestKubernetesUpgrade
=== PAUSE TestKubernetesUpgrade

                                                
                                                

                                                
                                                
=== CONT  TestKubernetesUpgrade
version_upgrade_test.go:222: (dbg) Run:  out/minikube-linux-amd64 start -p kubernetes-upgrade-480047 --memory=2200 --kubernetes-version=v1.20.0 --alsologtostderr -v=1 --driver=docker  --container-runtime=docker
version_upgrade_test.go:222: (dbg) Done: out/minikube-linux-amd64 start -p kubernetes-upgrade-480047 --memory=2200 --kubernetes-version=v1.20.0 --alsologtostderr -v=1 --driver=docker  --container-runtime=docker: (35.963785157s)
version_upgrade_test.go:227: (dbg) Run:  out/minikube-linux-amd64 stop -p kubernetes-upgrade-480047
version_upgrade_test.go:227: (dbg) Done: out/minikube-linux-amd64 stop -p kubernetes-upgrade-480047: (6.001746795s)
version_upgrade_test.go:232: (dbg) Run:  out/minikube-linux-amd64 -p kubernetes-upgrade-480047 status --format={{.Host}}
version_upgrade_test.go:232: (dbg) Non-zero exit: out/minikube-linux-amd64 -p kubernetes-upgrade-480047 status --format={{.Host}}: exit status 7 (85.291684ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
version_upgrade_test.go:234: status error: exit status 7 (may be ok)
version_upgrade_test.go:243: (dbg) Run:  out/minikube-linux-amd64 start -p kubernetes-upgrade-480047 --memory=2200 --kubernetes-version=v1.31.1 --alsologtostderr -v=1 --driver=docker  --container-runtime=docker
version_upgrade_test.go:243: (dbg) Done: out/minikube-linux-amd64 start -p kubernetes-upgrade-480047 --memory=2200 --kubernetes-version=v1.31.1 --alsologtostderr -v=1 --driver=docker  --container-runtime=docker: (4m29.16072749s)
version_upgrade_test.go:248: (dbg) Run:  kubectl --context kubernetes-upgrade-480047 version --output=json
version_upgrade_test.go:267: Attempting to downgrade Kubernetes (should fail)
version_upgrade_test.go:269: (dbg) Run:  out/minikube-linux-amd64 start -p kubernetes-upgrade-480047 --memory=2200 --kubernetes-version=v1.20.0 --driver=docker  --container-runtime=docker
version_upgrade_test.go:269: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p kubernetes-upgrade-480047 --memory=2200 --kubernetes-version=v1.20.0 --driver=docker  --container-runtime=docker: exit status 106 (70.143287ms)

                                                
                                                
-- stdout --
	* [kubernetes-upgrade-480047] minikube v1.34.0 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=19662
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/19662-12004/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/19662-12004/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to K8S_DOWNGRADE_UNSUPPORTED: Unable to safely downgrade existing Kubernetes v1.31.1 cluster to v1.20.0
	* Suggestion: 
	
	    1) Recreate the cluster with Kubernetes 1.20.0, by running:
	    
	    minikube delete -p kubernetes-upgrade-480047
	    minikube start -p kubernetes-upgrade-480047 --kubernetes-version=v1.20.0
	    
	    2) Create a second cluster with Kubernetes 1.20.0, by running:
	    
	    minikube start -p kubernetes-upgrade-4800472 --kubernetes-version=v1.20.0
	    
	    3) Use the existing cluster at version Kubernetes 1.31.1, by running:
	    
	    minikube start -p kubernetes-upgrade-480047 --kubernetes-version=v1.31.1
	    

                                                
                                                
** /stderr **
version_upgrade_test.go:273: Attempting restart after unsuccessful downgrade
version_upgrade_test.go:275: (dbg) Run:  out/minikube-linux-amd64 start -p kubernetes-upgrade-480047 --memory=2200 --kubernetes-version=v1.31.1 --alsologtostderr -v=1 --driver=docker  --container-runtime=docker
version_upgrade_test.go:275: (dbg) Done: out/minikube-linux-amd64 start -p kubernetes-upgrade-480047 --memory=2200 --kubernetes-version=v1.31.1 --alsologtostderr -v=1 --driver=docker  --container-runtime=docker: (19.604739276s)
helpers_test.go:175: Cleaning up "kubernetes-upgrade-480047" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p kubernetes-upgrade-480047
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p kubernetes-upgrade-480047: (2.299867467s)
--- PASS: TestKubernetesUpgrade (333.27s)
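Note: condensing the upgrade/downgrade rules exercised above (versions and profile from the run): an upgrade restarts in place, while a downgrade is refused with exit 106 and requires recreating the cluster:

	out/minikube-linux-amd64 start -p kubernetes-upgrade-480047 --kubernetes-version=v1.31.1 --driver=docker --container-runtime=docker   # upgrade: allowed
	out/minikube-linux-amd64 start -p kubernetes-upgrade-480047 --kubernetes-version=v1.20.0 --driver=docker --container-runtime=docker   # downgrade: exit 106
	# to actually go back, follow the suggestion printed above:
	#   minikube delete -p kubernetes-upgrade-480047 && minikube start -p kubernetes-upgrade-480047 --kubernetes-version=v1.20.0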

                                                
                                    
TestMissingContainerUpgrade (181.96s)

                                                
                                                
=== RUN   TestMissingContainerUpgrade
=== PAUSE TestMissingContainerUpgrade

                                                
                                                

                                                
                                                
=== CONT  TestMissingContainerUpgrade
version_upgrade_test.go:309: (dbg) Run:  /tmp/minikube-v1.26.0.358755374 start -p missing-upgrade-627958 --memory=2200 --driver=docker  --container-runtime=docker
version_upgrade_test.go:309: (dbg) Done: /tmp/minikube-v1.26.0.358755374 start -p missing-upgrade-627958 --memory=2200 --driver=docker  --container-runtime=docker: (1m57.219553886s)
version_upgrade_test.go:318: (dbg) Run:  docker stop missing-upgrade-627958
version_upgrade_test.go:318: (dbg) Done: docker stop missing-upgrade-627958: (10.3651928s)
version_upgrade_test.go:323: (dbg) Run:  docker rm missing-upgrade-627958
version_upgrade_test.go:329: (dbg) Run:  out/minikube-linux-amd64 start -p missing-upgrade-627958 --memory=2200 --alsologtostderr -v=1 --driver=docker  --container-runtime=docker
version_upgrade_test.go:329: (dbg) Done: out/minikube-linux-amd64 start -p missing-upgrade-627958 --memory=2200 --alsologtostderr -v=1 --driver=docker  --container-runtime=docker: (49.372054021s)
helpers_test.go:175: Cleaning up "missing-upgrade-627958" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p missing-upgrade-627958
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p missing-upgrade-627958: (2.319974099s)
--- PASS: TestMissingContainerUpgrade (181.96s)

                                                
                                    
TestNoKubernetes/serial/StartNoK8sWithVersion (0.08s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/StartNoK8sWithVersion
no_kubernetes_test.go:83: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-670773 --no-kubernetes --kubernetes-version=1.20 --driver=docker  --container-runtime=docker
no_kubernetes_test.go:83: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p NoKubernetes-670773 --no-kubernetes --kubernetes-version=1.20 --driver=docker  --container-runtime=docker: exit status 14 (84.062916ms)

                                                
                                                
-- stdout --
	* [NoKubernetes-670773] minikube v1.34.0 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=19662
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/19662-12004/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/19662-12004/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to MK_USAGE: cannot specify --kubernetes-version with --no-kubernetes,
	to unset a global config run:
	
	$ minikube config unset kubernetes-version

                                                
                                                
** /stderr **
--- PASS: TestNoKubernetes/serial/StartNoK8sWithVersion (0.08s)
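Note: the MK_USAGE error above guards against mixing --no-kubernetes with a pinned version; a sketch of the fix it suggests, followed by a retry:

	out/minikube-linux-amd64 config unset kubernetes-version
	out/minikube-linux-amd64 start -p NoKubernetes-670773 --no-kubernetes --driver=docker --container-runtime=docker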

                                                
                                    
TestNoKubernetes/serial/StartWithK8s (32.22s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/StartWithK8s
no_kubernetes_test.go:95: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-670773 --driver=docker  --container-runtime=docker
no_kubernetes_test.go:95: (dbg) Done: out/minikube-linux-amd64 start -p NoKubernetes-670773 --driver=docker  --container-runtime=docker: (31.840090457s)
no_kubernetes_test.go:200: (dbg) Run:  out/minikube-linux-amd64 -p NoKubernetes-670773 status -o json
--- PASS: TestNoKubernetes/serial/StartWithK8s (32.22s)

                                                
                                    
TestNoKubernetes/serial/StartWithStopK8s (17.29s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/StartWithStopK8s
no_kubernetes_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-670773 --no-kubernetes --driver=docker  --container-runtime=docker
no_kubernetes_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p NoKubernetes-670773 --no-kubernetes --driver=docker  --container-runtime=docker: (15.242891469s)
no_kubernetes_test.go:200: (dbg) Run:  out/minikube-linux-amd64 -p NoKubernetes-670773 status -o json
no_kubernetes_test.go:200: (dbg) Non-zero exit: out/minikube-linux-amd64 -p NoKubernetes-670773 status -o json: exit status 2 (320.342778ms)

                                                
                                                
-- stdout --
	{"Name":"NoKubernetes-670773","Host":"Running","Kubelet":"Stopped","APIServer":"Stopped","Kubeconfig":"Configured","Worker":false}

                                                
                                                
-- /stdout --
no_kubernetes_test.go:124: (dbg) Run:  out/minikube-linux-amd64 delete -p NoKubernetes-670773
no_kubernetes_test.go:124: (dbg) Done: out/minikube-linux-amd64 delete -p NoKubernetes-670773: (1.725991698s)
--- PASS: TestNoKubernetes/serial/StartWithStopK8s (17.29s)

                                                
                                    
TestNoKubernetes/serial/Start (6.76s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/Start
no_kubernetes_test.go:136: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-670773 --no-kubernetes --driver=docker  --container-runtime=docker
no_kubernetes_test.go:136: (dbg) Done: out/minikube-linux-amd64 start -p NoKubernetes-670773 --no-kubernetes --driver=docker  --container-runtime=docker: (6.754972937s)
--- PASS: TestNoKubernetes/serial/Start (6.76s)

                                                
                                    
TestNoKubernetes/serial/VerifyK8sNotRunning (0.26s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/VerifyK8sNotRunning
no_kubernetes_test.go:147: (dbg) Run:  out/minikube-linux-amd64 ssh -p NoKubernetes-670773 "sudo systemctl is-active --quiet service kubelet"
no_kubernetes_test.go:147: (dbg) Non-zero exit: out/minikube-linux-amd64 ssh -p NoKubernetes-670773 "sudo systemctl is-active --quiet service kubelet": exit status 1 (260.719265ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 3

                                                
                                                
** /stderr **
--- PASS: TestNoKubernetes/serial/VerifyK8sNotRunning (0.26s)
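Note: the check above leans on systemd exit codes: `systemctl is-active` returns 3 for an inactive unit, the remote shell exits with that status (the "Process exited with status 3" in stderr), and `minikube ssh` surfaces a non-zero exit (1 in this run). Sketch:

	out/minikube-linux-amd64 ssh -p NoKubernetes-670773 "sudo systemctl is-active --quiet service kubelet" || echo "kubelet not running"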

                                                
                                    
TestNoKubernetes/serial/ProfileList (6.31s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/ProfileList
no_kubernetes_test.go:169: (dbg) Run:  out/minikube-linux-amd64 profile list
no_kubernetes_test.go:169: (dbg) Done: out/minikube-linux-amd64 profile list: (5.460594535s)
no_kubernetes_test.go:179: (dbg) Run:  out/minikube-linux-amd64 profile list --output=json
--- PASS: TestNoKubernetes/serial/ProfileList (6.31s)

                                                
                                    
TestNoKubernetes/serial/Stop (1.23s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/Stop
no_kubernetes_test.go:158: (dbg) Run:  out/minikube-linux-amd64 stop -p NoKubernetes-670773
no_kubernetes_test.go:158: (dbg) Done: out/minikube-linux-amd64 stop -p NoKubernetes-670773: (1.229739478s)
--- PASS: TestNoKubernetes/serial/Stop (1.23s)

                                                
                                    
TestNoKubernetes/serial/StartNoArgs (9.73s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/StartNoArgs
no_kubernetes_test.go:191: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-670773 --driver=docker  --container-runtime=docker
no_kubernetes_test.go:191: (dbg) Done: out/minikube-linux-amd64 start -p NoKubernetes-670773 --driver=docker  --container-runtime=docker: (9.725235588s)
--- PASS: TestNoKubernetes/serial/StartNoArgs (9.73s)

                                                
                                    
TestNoKubernetes/serial/VerifyK8sNotRunningSecond (0.27s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/VerifyK8sNotRunningSecond
no_kubernetes_test.go:147: (dbg) Run:  out/minikube-linux-amd64 ssh -p NoKubernetes-670773 "sudo systemctl is-active --quiet service kubelet"
no_kubernetes_test.go:147: (dbg) Non-zero exit: out/minikube-linux-amd64 ssh -p NoKubernetes-670773 "sudo systemctl is-active --quiet service kubelet": exit status 1 (272.566391ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 3

                                                
                                                
** /stderr **
--- PASS: TestNoKubernetes/serial/VerifyK8sNotRunningSecond (0.27s)

                                                
                                    
TestStoppedBinaryUpgrade/Setup (2.4s)

                                                
                                                
=== RUN   TestStoppedBinaryUpgrade/Setup
--- PASS: TestStoppedBinaryUpgrade/Setup (2.40s)

                                                
                                    
TestStoppedBinaryUpgrade/Upgrade (175.46s)

                                                
                                                
=== RUN   TestStoppedBinaryUpgrade/Upgrade
version_upgrade_test.go:183: (dbg) Run:  /tmp/minikube-v1.26.0.985611966 start -p stopped-upgrade-893129 --memory=2200 --vm-driver=docker  --container-runtime=docker
version_upgrade_test.go:183: (dbg) Done: /tmp/minikube-v1.26.0.985611966 start -p stopped-upgrade-893129 --memory=2200 --vm-driver=docker  --container-runtime=docker: (2m14.956551231s)
version_upgrade_test.go:192: (dbg) Run:  /tmp/minikube-v1.26.0.985611966 -p stopped-upgrade-893129 stop
E0917 17:42:05.682315   18778 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19662-12004/.minikube/profiles/functional-772451/client.crt: no such file or directory" logger="UnhandledError"
version_upgrade_test.go:192: (dbg) Done: /tmp/minikube-v1.26.0.985611966 -p stopped-upgrade-893129 stop: (10.805033182s)
version_upgrade_test.go:198: (dbg) Run:  out/minikube-linux-amd64 start -p stopped-upgrade-893129 --memory=2200 --alsologtostderr -v=1 --driver=docker  --container-runtime=docker
version_upgrade_test.go:198: (dbg) Done: out/minikube-linux-amd64 start -p stopped-upgrade-893129 --memory=2200 --alsologtostderr -v=1 --driver=docker  --container-runtime=docker: (29.699631893s)
--- PASS: TestStoppedBinaryUpgrade/Upgrade (175.46s)
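Note: the binary-upgrade flow above in sketch form: an old release (the throwaway /tmp binary from this run) creates and stops the cluster, then the freshly built binary adopts the stopped profile:

	/tmp/minikube-v1.26.0.985611966 start -p stopped-upgrade-893129 --memory=2200 --vm-driver=docker --container-runtime=docker
	/tmp/minikube-v1.26.0.985611966 -p stopped-upgrade-893129 stop
	out/minikube-linux-amd64 start -p stopped-upgrade-893129 --memory=2200 --driver=docker --container-runtime=docker   # new binary takes over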

                                                
                                    
TestStoppedBinaryUpgrade/MinikubeLogs (1.35s)

                                                
                                                
=== RUN   TestStoppedBinaryUpgrade/MinikubeLogs
version_upgrade_test.go:206: (dbg) Run:  out/minikube-linux-amd64 logs -p stopped-upgrade-893129
version_upgrade_test.go:206: (dbg) Done: out/minikube-linux-amd64 logs -p stopped-upgrade-893129: (1.352967143s)
--- PASS: TestStoppedBinaryUpgrade/MinikubeLogs (1.35s)

                                                
                                    
TestPause/serial/Start (39.39s)

                                                
                                                
=== RUN   TestPause/serial/Start
pause_test.go:80: (dbg) Run:  out/minikube-linux-amd64 start -p pause-587264 --memory=2048 --install-addons=false --wait=all --driver=docker  --container-runtime=docker
pause_test.go:80: (dbg) Done: out/minikube-linux-amd64 start -p pause-587264 --memory=2048 --install-addons=false --wait=all --driver=docker  --container-runtime=docker: (39.389368567s)
--- PASS: TestPause/serial/Start (39.39s)

                                                
                                    
TestNetworkPlugins/group/auto/Start (82.58s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/auto/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p auto-194686 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --driver=docker  --container-runtime=docker
E0917 17:43:02.360276   18778 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19662-12004/.minikube/profiles/skaffold-388525/client.crt: no such file or directory" logger="UnhandledError"
E0917 17:43:02.366663   18778 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19662-12004/.minikube/profiles/skaffold-388525/client.crt: no such file or directory" logger="UnhandledError"
E0917 17:43:02.378038   18778 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19662-12004/.minikube/profiles/skaffold-388525/client.crt: no such file or directory" logger="UnhandledError"
E0917 17:43:02.399479   18778 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19662-12004/.minikube/profiles/skaffold-388525/client.crt: no such file or directory" logger="UnhandledError"
E0917 17:43:02.440925   18778 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19662-12004/.minikube/profiles/skaffold-388525/client.crt: no such file or directory" logger="UnhandledError"
E0917 17:43:02.522356   18778 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19662-12004/.minikube/profiles/skaffold-388525/client.crt: no such file or directory" logger="UnhandledError"
E0917 17:43:02.683957   18778 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19662-12004/.minikube/profiles/skaffold-388525/client.crt: no such file or directory" logger="UnhandledError"
E0917 17:43:03.005352   18778 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19662-12004/.minikube/profiles/skaffold-388525/client.crt: no such file or directory" logger="UnhandledError"
E0917 17:43:03.647333   18778 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19662-12004/.minikube/profiles/skaffold-388525/client.crt: no such file or directory" logger="UnhandledError"
E0917 17:43:04.929448   18778 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19662-12004/.minikube/profiles/skaffold-388525/client.crt: no such file or directory" logger="UnhandledError"
E0917 17:43:07.490912   18778 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19662-12004/.minikube/profiles/skaffold-388525/client.crt: no such file or directory" logger="UnhandledError"
E0917 17:43:12.612685   18778 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19662-12004/.minikube/profiles/skaffold-388525/client.crt: no such file or directory" logger="UnhandledError"
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p auto-194686 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --driver=docker  --container-runtime=docker: (1m22.57551808s)
--- PASS: TestNetworkPlugins/group/auto/Start (82.58s)

TestPause/serial/SecondStartNoReconfiguration (31.91s)

=== RUN   TestPause/serial/SecondStartNoReconfiguration
pause_test.go:92: (dbg) Run:  out/minikube-linux-amd64 start -p pause-587264 --alsologtostderr -v=1 --driver=docker  --container-runtime=docker
E0917 17:43:43.336385   18778 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19662-12004/.minikube/profiles/skaffold-388525/client.crt: no such file or directory" logger="UnhandledError"
pause_test.go:92: (dbg) Done: out/minikube-linux-amd64 start -p pause-587264 --alsologtostderr -v=1 --driver=docker  --container-runtime=docker: (31.896825491s)
--- PASS: TestPause/serial/SecondStartNoReconfiguration (31.91s)

TestPause/serial/Pause (0.5s)

=== RUN   TestPause/serial/Pause
pause_test.go:110: (dbg) Run:  out/minikube-linux-amd64 pause -p pause-587264 --alsologtostderr -v=5
--- PASS: TestPause/serial/Pause (0.50s)

TestPause/serial/VerifyStatus (0.29s)

=== RUN   TestPause/serial/VerifyStatus
status_test.go:76: (dbg) Run:  out/minikube-linux-amd64 status -p pause-587264 --output=json --layout=cluster
status_test.go:76: (dbg) Non-zero exit: out/minikube-linux-amd64 status -p pause-587264 --output=json --layout=cluster: exit status 2 (288.747068ms)

-- stdout --
	{"Name":"pause-587264","StatusCode":418,"StatusName":"Paused","Step":"Done","StepDetail":"* Paused 12 containers in: kube-system, kubernetes-dashboard, storage-gluster, istio-operator","BinaryVersion":"v1.34.0","Components":{"kubeconfig":{"Name":"kubeconfig","StatusCode":200,"StatusName":"OK"}},"Nodes":[{"Name":"pause-587264","StatusCode":200,"StatusName":"OK","Components":{"apiserver":{"Name":"apiserver","StatusCode":418,"StatusName":"Paused"},"kubelet":{"Name":"kubelet","StatusCode":405,"StatusName":"Stopped"}}}]}

-- /stdout --
--- PASS: TestPause/serial/VerifyStatus (0.29s)
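
For reference, the cluster-status JSON captured above can be decoded with a few Go structs. This is a minimal sketch whose field names are inferred from the payload in this report, not taken from minikube's source; it shows how a paused profile surfaces as StatusCode 418 ("Paused") on the apiserver while the kubelet reports 405 ("Stopped").

	// parsestatus.go — decode `minikube status --output=json --layout=cluster`.
	// Sketch only: struct fields are inferred from the JSON shown in this report.
	package main

	import (
		"encoding/json"
		"fmt"
		"os"
	)

	type Component struct {
		Name       string `json:"Name"`
		StatusCode int    `json:"StatusCode"`
		StatusName string `json:"StatusName"`
	}

	type Node struct {
		Name       string               `json:"Name"`
		StatusCode int                  `json:"StatusCode"`
		StatusName string               `json:"StatusName"`
		Components map[string]Component `json:"Components"`
	}

	type ClusterStatus struct {
		Name          string               `json:"Name"`
		StatusCode    int                  `json:"StatusCode"`
		StatusName    string               `json:"StatusName"`
		BinaryVersion string               `json:"BinaryVersion"`
		Components    map[string]Component `json:"Components"`
		Nodes         []Node               `json:"Nodes"`
	}

	func main() {
		var st ClusterStatus
		if err := json.NewDecoder(os.Stdin).Decode(&st); err != nil {
			fmt.Fprintln(os.Stderr, err)
			os.Exit(1)
		}
		fmt.Printf("%s: %d %s\n", st.Name, st.StatusCode, st.StatusName)
		for _, n := range st.Nodes {
			for name, c := range n.Components {
				fmt.Printf("%s/%s: %d %s\n", n.Name, name, c.StatusCode, c.StatusName)
			}
		}
	}

Piping the status output above into this program would print one line per node component (e.g. pause-587264/apiserver: 418 Paused); unknown fields such as Step and StepDetail are simply ignored by encoding/json.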

TestPause/serial/Unpause (0.47s)

=== RUN   TestPause/serial/Unpause
pause_test.go:121: (dbg) Run:  out/minikube-linux-amd64 unpause -p pause-587264 --alsologtostderr -v=5
--- PASS: TestPause/serial/Unpause (0.47s)

TestPause/serial/PauseAgain (0.62s)

=== RUN   TestPause/serial/PauseAgain
pause_test.go:110: (dbg) Run:  out/minikube-linux-amd64 pause -p pause-587264 --alsologtostderr -v=5
--- PASS: TestPause/serial/PauseAgain (0.62s)

TestPause/serial/DeletePaused (2.2s)

=== RUN   TestPause/serial/DeletePaused
pause_test.go:132: (dbg) Run:  out/minikube-linux-amd64 delete -p pause-587264 --alsologtostderr -v=5
pause_test.go:132: (dbg) Done: out/minikube-linux-amd64 delete -p pause-587264 --alsologtostderr -v=5: (2.196304125s)
--- PASS: TestPause/serial/DeletePaused (2.20s)

TestPause/serial/VerifyDeletedResources (14.81s)

=== RUN   TestPause/serial/VerifyDeletedResources
pause_test.go:142: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
pause_test.go:142: (dbg) Done: out/minikube-linux-amd64 profile list --output json: (14.755004108s)
pause_test.go:168: (dbg) Run:  docker ps -a
pause_test.go:173: (dbg) Run:  docker volume inspect pause-587264
pause_test.go:173: (dbg) Non-zero exit: docker volume inspect pause-587264: exit status 1 (18.061889ms)

-- stdout --
	[]

-- /stdout --
** stderr ** 
	Error response from daemon: get pause-587264: no such volume

** /stderr **
pause_test.go:178: (dbg) Run:  docker network ls
--- PASS: TestPause/serial/VerifyDeletedResources (14.81s)
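
The volume check above relies on `docker volume inspect` failing once the profile is deleted. A hedged Go sketch of the same check follows; the volumeGone helper is ours, not minikube's. It shells out to the real docker CLI and treats the "no such volume" error seen above as success.

	// volumegone.go — report whether a Docker volume has been cleaned up.
	// Sketch only; mirrors the `docker volume inspect pause-587264` check above.
	package main

	import (
		"fmt"
		"os/exec"
		"strings"
	)

	func volumeGone(name string) (bool, error) {
		out, err := exec.Command("docker", "volume", "inspect", name).CombinedOutput()
		if err == nil {
			return false, nil // inspect succeeded, so the volume still exists
		}
		if strings.Contains(string(out), "no such volume") {
			return true, nil // the expected failure after a successful delete
		}
		return false, fmt.Errorf("unexpected docker error: %v: %s", err, out)
	}

	func main() {
		gone, err := volumeGone("pause-587264")
		fmt.Println(gone, err)
	}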

TestNetworkPlugins/group/kindnet/Start (56.53s)

=== RUN   TestNetworkPlugins/group/kindnet/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p kindnet-194686 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=kindnet --driver=docker  --container-runtime=docker
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p kindnet-194686 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=kindnet --driver=docker  --container-runtime=docker: (56.527462342s)
--- PASS: TestNetworkPlugins/group/kindnet/Start (56.53s)

TestNetworkPlugins/group/auto/KubeletFlags (0.28s)

=== RUN   TestNetworkPlugins/group/auto/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p auto-194686 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/auto/KubeletFlags (0.28s)

TestNetworkPlugins/group/auto/NetCatPod (8.24s)

=== RUN   TestNetworkPlugins/group/auto/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context auto-194686 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/auto/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-6fc964789b-brvgg" [5cdc38f5-d05c-475f-b7f9-db7acb7f2a50] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-6fc964789b-brvgg" [5cdc38f5-d05c-475f-b7f9-db7acb7f2a50] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/auto/NetCatPod: app=netcat healthy within 8.004370493s
--- PASS: TestNetworkPlugins/group/auto/NetCatPod (8.24s)
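
Each NetCatPod step above waits for pods labelled app=netcat to reach Running before probing DNS, localhost, and hairpin traffic. A minimal client-go sketch of that wait, assuming a kubeconfig at the default path — an illustration of the pattern, not the helpers_test.go implementation:

	// waitnetcat.go — poll until all app=netcat pods in "default" are Running.
	// Sketch with client-go; not the test suite's own helper.
	package main

	import (
		"context"
		"fmt"
		"time"

		corev1 "k8s.io/api/core/v1"
		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/apimachinery/pkg/util/wait"
		"k8s.io/client-go/kubernetes"
		"k8s.io/client-go/tools/clientcmd"
	)

	func main() {
		cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
		if err != nil {
			panic(err)
		}
		cs, err := kubernetes.NewForConfig(cfg)
		if err != nil {
			panic(err)
		}
		err = wait.PollUntilContextTimeout(context.Background(), 2*time.Second, 15*time.Minute, true,
			func(ctx context.Context) (bool, error) {
				pods, err := cs.CoreV1().Pods("default").List(ctx, metav1.ListOptions{LabelSelector: "app=netcat"})
				if err != nil || len(pods.Items) == 0 {
					return false, nil // not found yet (or transient error): keep polling
				}
				for _, p := range pods.Items {
					if p.Status.Phase != corev1.PodRunning {
						return false, nil
					}
				}
				return true, nil
			})
		if err != nil {
			panic(err)
		}
		fmt.Println("app=netcat healthy")
	}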

TestNetworkPlugins/group/calico/Start (61.37s)

=== RUN   TestNetworkPlugins/group/calico/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p calico-194686 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=calico --driver=docker  --container-runtime=docker
E0917 17:44:24.297763   18778 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19662-12004/.minikube/profiles/skaffold-388525/client.crt: no such file or directory" logger="UnhandledError"
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p calico-194686 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=calico --driver=docker  --container-runtime=docker: (1m1.374679804s)
--- PASS: TestNetworkPlugins/group/calico/Start (61.37s)

TestNetworkPlugins/group/auto/DNS (0.14s)

=== RUN   TestNetworkPlugins/group/auto/DNS
net_test.go:175: (dbg) Run:  kubectl --context auto-194686 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/auto/DNS (0.14s)

TestNetworkPlugins/group/auto/Localhost (0.12s)

=== RUN   TestNetworkPlugins/group/auto/Localhost
net_test.go:194: (dbg) Run:  kubectl --context auto-194686 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/auto/Localhost (0.12s)

TestNetworkPlugins/group/auto/HairPin (0.1s)

=== RUN   TestNetworkPlugins/group/auto/HairPin
net_test.go:264: (dbg) Run:  kubectl --context auto-194686 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/auto/HairPin (0.10s)

TestNetworkPlugins/group/custom-flannel/Start (46.69s)

=== RUN   TestNetworkPlugins/group/custom-flannel/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p custom-flannel-194686 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=testdata/kube-flannel.yaml --driver=docker  --container-runtime=docker
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p custom-flannel-194686 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=testdata/kube-flannel.yaml --driver=docker  --container-runtime=docker: (46.68736694s)
--- PASS: TestNetworkPlugins/group/custom-flannel/Start (46.69s)

TestNetworkPlugins/group/kindnet/ControllerPod (6.01s)

=== RUN   TestNetworkPlugins/group/kindnet/ControllerPod
net_test.go:120: (dbg) TestNetworkPlugins/group/kindnet/ControllerPod: waiting 10m0s for pods matching "app=kindnet" in namespace "kube-system" ...
helpers_test.go:344: "kindnet-4ps9j" [e69e1fc3-643c-4a8b-8b02-894114718290] Running
net_test.go:120: (dbg) TestNetworkPlugins/group/kindnet/ControllerPod: app=kindnet healthy within 6.004589648s
--- PASS: TestNetworkPlugins/group/kindnet/ControllerPod (6.01s)

TestNetworkPlugins/group/kindnet/KubeletFlags (0.33s)

=== RUN   TestNetworkPlugins/group/kindnet/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p kindnet-194686 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/kindnet/KubeletFlags (0.33s)

TestNetworkPlugins/group/kindnet/NetCatPod (10.27s)

=== RUN   TestNetworkPlugins/group/kindnet/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context kindnet-194686 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/kindnet/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-6fc964789b-x9bzv" [eb155031-3345-4f0a-be26-23344eef6dac] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-6fc964789b-x9bzv" [eb155031-3345-4f0a-be26-23344eef6dac] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/kindnet/NetCatPod: app=netcat healthy within 10.002862918s
--- PASS: TestNetworkPlugins/group/kindnet/NetCatPod (10.27s)

TestNetworkPlugins/group/kindnet/DNS (0.14s)

=== RUN   TestNetworkPlugins/group/kindnet/DNS
net_test.go:175: (dbg) Run:  kubectl --context kindnet-194686 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/kindnet/DNS (0.14s)

TestNetworkPlugins/group/kindnet/Localhost (0.12s)

=== RUN   TestNetworkPlugins/group/kindnet/Localhost
net_test.go:194: (dbg) Run:  kubectl --context kindnet-194686 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/kindnet/Localhost (0.12s)

TestNetworkPlugins/group/kindnet/HairPin (0.12s)

=== RUN   TestNetworkPlugins/group/kindnet/HairPin
net_test.go:264: (dbg) Run:  kubectl --context kindnet-194686 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/kindnet/HairPin (0.12s)

TestNetworkPlugins/group/calico/ControllerPod (6.01s)

=== RUN   TestNetworkPlugins/group/calico/ControllerPod
net_test.go:120: (dbg) TestNetworkPlugins/group/calico/ControllerPod: waiting 10m0s for pods matching "k8s-app=calico-node" in namespace "kube-system" ...
helpers_test.go:344: "calico-node-wspjn" [54dde967-b81d-49fb-a277-d1be8efc37a3] Running
net_test.go:120: (dbg) TestNetworkPlugins/group/calico/ControllerPod: k8s-app=calico-node healthy within 6.005460661s
--- PASS: TestNetworkPlugins/group/calico/ControllerPod (6.01s)

TestNetworkPlugins/group/calico/KubeletFlags (0.35s)

=== RUN   TestNetworkPlugins/group/calico/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p calico-194686 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/calico/KubeletFlags (0.35s)

TestNetworkPlugins/group/calico/NetCatPod (10.24s)

=== RUN   TestNetworkPlugins/group/calico/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context calico-194686 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/calico/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-6fc964789b-m2vft" [0cfd03de-9917-4e47-a3e9-240d52e71165] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-6fc964789b-m2vft" [0cfd03de-9917-4e47-a3e9-240d52e71165] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/calico/NetCatPod: app=netcat healthy within 10.004534148s
--- PASS: TestNetworkPlugins/group/calico/NetCatPod (10.24s)

TestNetworkPlugins/group/custom-flannel/KubeletFlags (0.32s)

=== RUN   TestNetworkPlugins/group/custom-flannel/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p custom-flannel-194686 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/custom-flannel/KubeletFlags (0.32s)

TestNetworkPlugins/group/custom-flannel/NetCatPod (9.23s)

=== RUN   TestNetworkPlugins/group/custom-flannel/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context custom-flannel-194686 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/custom-flannel/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-6fc964789b-cws7x" [1cb48649-d4a7-4601-88fc-9861f80c82bc] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-6fc964789b-cws7x" [1cb48649-d4a7-4601-88fc-9861f80c82bc] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/custom-flannel/NetCatPod: app=netcat healthy within 9.004033697s
--- PASS: TestNetworkPlugins/group/custom-flannel/NetCatPod (9.23s)

TestNetworkPlugins/group/false/Start (36.12s)

=== RUN   TestNetworkPlugins/group/false/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p false-194686 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=false --driver=docker  --container-runtime=docker
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p false-194686 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=false --driver=docker  --container-runtime=docker: (36.115696704s)
--- PASS: TestNetworkPlugins/group/false/Start (36.12s)

TestNetworkPlugins/group/calico/DNS (0.14s)

=== RUN   TestNetworkPlugins/group/calico/DNS
net_test.go:175: (dbg) Run:  kubectl --context calico-194686 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/calico/DNS (0.14s)

TestNetworkPlugins/group/calico/Localhost (0.13s)

=== RUN   TestNetworkPlugins/group/calico/Localhost
net_test.go:194: (dbg) Run:  kubectl --context calico-194686 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/calico/Localhost (0.13s)

TestNetworkPlugins/group/calico/HairPin (0.13s)

=== RUN   TestNetworkPlugins/group/calico/HairPin
net_test.go:264: (dbg) Run:  kubectl --context calico-194686 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/calico/HairPin (0.13s)

TestNetworkPlugins/group/custom-flannel/DNS (0.18s)

=== RUN   TestNetworkPlugins/group/custom-flannel/DNS
net_test.go:175: (dbg) Run:  kubectl --context custom-flannel-194686 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/custom-flannel/DNS (0.18s)

TestNetworkPlugins/group/custom-flannel/Localhost (0.14s)

=== RUN   TestNetworkPlugins/group/custom-flannel/Localhost
net_test.go:194: (dbg) Run:  kubectl --context custom-flannel-194686 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/custom-flannel/Localhost (0.14s)

TestNetworkPlugins/group/custom-flannel/HairPin (0.13s)

=== RUN   TestNetworkPlugins/group/custom-flannel/HairPin
net_test.go:264: (dbg) Run:  kubectl --context custom-flannel-194686 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/custom-flannel/HairPin (0.13s)

TestNetworkPlugins/group/enable-default-cni/Start (40.54s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p enable-default-cni-194686 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --enable-default-cni=true --driver=docker  --container-runtime=docker
E0917 17:45:46.219888   18778 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19662-12004/.minikube/profiles/skaffold-388525/client.crt: no such file or directory" logger="UnhandledError"
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p enable-default-cni-194686 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --enable-default-cni=true --driver=docker  --container-runtime=docker: (40.537840045s)
--- PASS: TestNetworkPlugins/group/enable-default-cni/Start (40.54s)

TestNetworkPlugins/group/flannel/Start (46.19s)

=== RUN   TestNetworkPlugins/group/flannel/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p flannel-194686 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=flannel --driver=docker  --container-runtime=docker
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p flannel-194686 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=flannel --driver=docker  --container-runtime=docker: (46.185407953s)
--- PASS: TestNetworkPlugins/group/flannel/Start (46.19s)

TestNetworkPlugins/group/bridge/Start (64.67s)

=== RUN   TestNetworkPlugins/group/bridge/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p bridge-194686 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=bridge --driver=docker  --container-runtime=docker
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p bridge-194686 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=bridge --driver=docker  --container-runtime=docker: (1m4.672855392s)
--- PASS: TestNetworkPlugins/group/bridge/Start (64.67s)

TestNetworkPlugins/group/false/KubeletFlags (0.31s)

=== RUN   TestNetworkPlugins/group/false/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p false-194686 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/false/KubeletFlags (0.31s)

TestNetworkPlugins/group/false/NetCatPod (10.22s)

=== RUN   TestNetworkPlugins/group/false/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context false-194686 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/false/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-6fc964789b-nnmbv" [857fe94c-540a-42e7-9004-3d87f3d2d719] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-6fc964789b-nnmbv" [857fe94c-540a-42e7-9004-3d87f3d2d719] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/false/NetCatPod: app=netcat healthy within 10.003503242s
--- PASS: TestNetworkPlugins/group/false/NetCatPod (10.22s)

TestNetworkPlugins/group/false/DNS (26.3s)

=== RUN   TestNetworkPlugins/group/false/DNS
net_test.go:175: (dbg) Run:  kubectl --context false-194686 exec deployment/netcat -- nslookup kubernetes.default
net_test.go:175: (dbg) Non-zero exit: kubectl --context false-194686 exec deployment/netcat -- nslookup kubernetes.default: exit status 1 (15.172518469s)

-- stdout --
	;; connection timed out; no servers could be reached

-- /stdout --
** stderr ** 
	command terminated with exit code 1

** /stderr **
net_test.go:175: (dbg) Run:  kubectl --context false-194686 exec deployment/netcat -- nslookup kubernetes.default
net_test.go:175: (dbg) Done: kubectl --context false-194686 exec deployment/netcat -- nslookup kubernetes.default: (10.165557204s)
--- PASS: TestNetworkPlugins/group/false/DNS (26.30s)
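
Worth noting: the first nslookup in this step timed out ("no servers could be reached") and the test only passed on a retry about ten seconds later — with --cni=false there is no CNI plugin, so pod networking and DNS can lag behind pod start. A hedged sketch of that kind of retry loop around kubectl exec (our illustration; net_test.go does its own retrying):

	// dnsretry.go — retry an in-pod DNS lookup until it succeeds or times out.
	// Sketch only; mirrors the retried `kubectl exec ... nslookup` above.
	package main

	import (
		"fmt"
		"os/exec"
		"time"
	)

	func main() {
		deadline := time.Now().Add(2 * time.Minute)
		for {
			out, err := exec.Command("kubectl", "--context", "false-194686",
				"exec", "deployment/netcat", "--", "nslookup", "kubernetes.default").CombinedOutput()
			if err == nil {
				fmt.Printf("resolved:\n%s", out)
				return
			}
			if time.Now().After(deadline) {
				panic(fmt.Sprintf("DNS never resolved: %v: %s", err, out))
			}
			time.Sleep(5 * time.Second) // back off between attempts
		}
	}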

TestNetworkPlugins/group/enable-default-cni/KubeletFlags (0.31s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p enable-default-cni-194686 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/enable-default-cni/KubeletFlags (0.31s)

TestNetworkPlugins/group/enable-default-cni/NetCatPod (9.21s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context enable-default-cni-194686 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/enable-default-cni/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-6fc964789b-2cml4" [3adf6f71-1b5e-4372-b019-6daba1e61a35] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-6fc964789b-2cml4" [3adf6f71-1b5e-4372-b019-6daba1e61a35] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/enable-default-cni/NetCatPod: app=netcat healthy within 9.005066199s
--- PASS: TestNetworkPlugins/group/enable-default-cni/NetCatPod (9.21s)

TestNetworkPlugins/group/enable-default-cni/DNS (0.15s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/DNS
net_test.go:175: (dbg) Run:  kubectl --context enable-default-cni-194686 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/enable-default-cni/DNS (0.15s)

TestNetworkPlugins/group/enable-default-cni/Localhost (0.11s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/Localhost
net_test.go:194: (dbg) Run:  kubectl --context enable-default-cni-194686 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/enable-default-cni/Localhost (0.11s)

TestNetworkPlugins/group/enable-default-cni/HairPin (0.13s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/HairPin
net_test.go:264: (dbg) Run:  kubectl --context enable-default-cni-194686 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/enable-default-cni/HairPin (0.13s)

TestNetworkPlugins/group/flannel/ControllerPod (6.01s)

=== RUN   TestNetworkPlugins/group/flannel/ControllerPod
net_test.go:120: (dbg) TestNetworkPlugins/group/flannel/ControllerPod: waiting 10m0s for pods matching "app=flannel" in namespace "kube-flannel" ...
helpers_test.go:344: "kube-flannel-ds-4ql7l" [a77f9fda-4018-49be-a0db-6e38bfbe6830] Running
net_test.go:120: (dbg) TestNetworkPlugins/group/flannel/ControllerPod: app=flannel healthy within 6.005333923s
--- PASS: TestNetworkPlugins/group/flannel/ControllerPod (6.01s)

TestNetworkPlugins/group/false/Localhost (0.11s)

=== RUN   TestNetworkPlugins/group/false/Localhost
net_test.go:194: (dbg) Run:  kubectl --context false-194686 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/false/Localhost (0.11s)

TestNetworkPlugins/group/false/HairPin (0.12s)

=== RUN   TestNetworkPlugins/group/false/HairPin
net_test.go:264: (dbg) Run:  kubectl --context false-194686 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/false/HairPin (0.12s)

TestNetworkPlugins/group/kubenet/Start (37.2s)

=== RUN   TestNetworkPlugins/group/kubenet/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p kubenet-194686 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --network-plugin=kubenet --driver=docker  --container-runtime=docker
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p kubenet-194686 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --network-plugin=kubenet --driver=docker  --container-runtime=docker: (37.202365507s)
--- PASS: TestNetworkPlugins/group/kubenet/Start (37.20s)

TestNetworkPlugins/group/flannel/KubeletFlags (0.3s)

=== RUN   TestNetworkPlugins/group/flannel/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p flannel-194686 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/flannel/KubeletFlags (0.30s)

TestNetworkPlugins/group/flannel/NetCatPod (12.18s)

=== RUN   TestNetworkPlugins/group/flannel/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context flannel-194686 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/flannel/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-6fc964789b-r2sj4" [13155291-d424-4da2-8121-e7c4e7ab551c] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-6fc964789b-r2sj4" [13155291-d424-4da2-8121-e7c4e7ab551c] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/flannel/NetCatPod: app=netcat healthy within 12.003448719s
--- PASS: TestNetworkPlugins/group/flannel/NetCatPod (12.18s)

TestNetworkPlugins/group/flannel/DNS (0.14s)

=== RUN   TestNetworkPlugins/group/flannel/DNS
net_test.go:175: (dbg) Run:  kubectl --context flannel-194686 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/flannel/DNS (0.14s)

TestNetworkPlugins/group/flannel/Localhost (0.12s)

=== RUN   TestNetworkPlugins/group/flannel/Localhost
net_test.go:194: (dbg) Run:  kubectl --context flannel-194686 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/flannel/Localhost (0.12s)

TestNetworkPlugins/group/flannel/HairPin (0.13s)

=== RUN   TestNetworkPlugins/group/flannel/HairPin
net_test.go:264: (dbg) Run:  kubectl --context flannel-194686 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/flannel/HairPin (0.13s)

TestStartStop/group/old-k8s-version/serial/FirstStart (135.76s)

=== RUN   TestStartStop/group/old-k8s-version/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-linux-amd64 start -p old-k8s-version-484551 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=docker --kubernetes-version=v1.20.0
start_stop_delete_test.go:186: (dbg) Done: out/minikube-linux-amd64 start -p old-k8s-version-484551 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=docker --kubernetes-version=v1.20.0: (2m15.758776858s)
--- PASS: TestStartStop/group/old-k8s-version/serial/FirstStart (135.76s)

TestNetworkPlugins/group/bridge/KubeletFlags (0.37s)

=== RUN   TestNetworkPlugins/group/bridge/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p bridge-194686 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/bridge/KubeletFlags (0.37s)

TestNetworkPlugins/group/bridge/NetCatPod (11.24s)

=== RUN   TestNetworkPlugins/group/bridge/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context bridge-194686 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/bridge/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-6fc964789b-jvtzr" [b16bf8ea-8526-41fa-8a5f-faae1ce309dc] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-6fc964789b-jvtzr" [b16bf8ea-8526-41fa-8a5f-faae1ce309dc] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/bridge/NetCatPod: app=netcat healthy within 11.004775876s
--- PASS: TestNetworkPlugins/group/bridge/NetCatPod (11.24s)

TestNetworkPlugins/group/bridge/DNS (0.24s)

=== RUN   TestNetworkPlugins/group/bridge/DNS
net_test.go:175: (dbg) Run:  kubectl --context bridge-194686 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/bridge/DNS (0.24s)

TestNetworkPlugins/group/bridge/Localhost (0.15s)

=== RUN   TestNetworkPlugins/group/bridge/Localhost
net_test.go:194: (dbg) Run:  kubectl --context bridge-194686 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/bridge/Localhost (0.15s)

TestNetworkPlugins/group/bridge/HairPin (0.17s)

=== RUN   TestNetworkPlugins/group/bridge/HairPin
net_test.go:264: (dbg) Run:  kubectl --context bridge-194686 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/bridge/HairPin (0.17s)

TestStartStop/group/no-preload/serial/FirstStart (73.94s)

=== RUN   TestStartStop/group/no-preload/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-linux-amd64 start -p no-preload-492823 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=docker --kubernetes-version=v1.31.1
start_stop_delete_test.go:186: (dbg) Done: out/minikube-linux-amd64 start -p no-preload-492823 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=docker --kubernetes-version=v1.31.1: (1m13.935730832s)
--- PASS: TestStartStop/group/no-preload/serial/FirstStart (73.94s)

TestNetworkPlugins/group/kubenet/KubeletFlags (0.38s)

=== RUN   TestNetworkPlugins/group/kubenet/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p kubenet-194686 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/kubenet/KubeletFlags (0.38s)

TestNetworkPlugins/group/kubenet/NetCatPod (9.36s)

=== RUN   TestNetworkPlugins/group/kubenet/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context kubenet-194686 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/kubenet/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-6fc964789b-d7r74" [dcd5b508-70cc-46f0-a2a2-1bef4e9ea64e] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-6fc964789b-d7r74" [dcd5b508-70cc-46f0-a2a2-1bef4e9ea64e] Running
E0917 17:47:38.098564   18778 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19662-12004/.minikube/profiles/addons-163060/client.crt: no such file or directory" logger="UnhandledError"
net_test.go:163: (dbg) TestNetworkPlugins/group/kubenet/NetCatPod: app=netcat healthy within 9.004741103s
--- PASS: TestNetworkPlugins/group/kubenet/NetCatPod (9.36s)

TestNetworkPlugins/group/kubenet/DNS (0.21s)

=== RUN   TestNetworkPlugins/group/kubenet/DNS
net_test.go:175: (dbg) Run:  kubectl --context kubenet-194686 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/kubenet/DNS (0.21s)

TestNetworkPlugins/group/kubenet/Localhost (0.13s)

=== RUN   TestNetworkPlugins/group/kubenet/Localhost
net_test.go:194: (dbg) Run:  kubectl --context kubenet-194686 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/kubenet/Localhost (0.13s)

TestNetworkPlugins/group/kubenet/HairPin (0.13s)

=== RUN   TestNetworkPlugins/group/kubenet/HairPin
net_test.go:264: (dbg) Run:  kubectl --context kubenet-194686 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/kubenet/HairPin (0.13s)
E0917 17:53:16.086076   18778 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19662-12004/.minikube/profiles/custom-flannel-194686/client.crt: no such file or directory" logger="UnhandledError"

TestStartStop/group/embed-certs/serial/FirstStart (36.98s)

=== RUN   TestStartStop/group/embed-certs/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-linux-amd64 start -p embed-certs-966407 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=docker --kubernetes-version=v1.31.1
start_stop_delete_test.go:186: (dbg) Done: out/minikube-linux-amd64 start -p embed-certs-966407 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=docker --kubernetes-version=v1.31.1: (36.981409416s)
--- PASS: TestStartStop/group/embed-certs/serial/FirstStart (36.98s)

TestStartStop/group/default-k8s-diff-port/serial/FirstStart (40.91s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-linux-amd64 start -p default-k8s-diff-port-004975 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=docker --kubernetes-version=v1.31.1
E0917 17:48:02.360450   18778 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19662-12004/.minikube/profiles/skaffold-388525/client.crt: no such file or directory" logger="UnhandledError"
start_stop_delete_test.go:186: (dbg) Done: out/minikube-linux-amd64 start -p default-k8s-diff-port-004975 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=docker --kubernetes-version=v1.31.1: (40.911990977s)
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/FirstStart (40.91s)

TestStartStop/group/embed-certs/serial/DeployApp (10.27s)

=== RUN   TestStartStop/group/embed-certs/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context embed-certs-966407 create -f testdata/busybox.yaml
start_stop_delete_test.go:196: (dbg) TestStartStop/group/embed-certs/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:344: "busybox" [99c0aeb8-3bf2-403f-80d7-10fad9d38be6] Pending
helpers_test.go:344: "busybox" [99c0aeb8-3bf2-403f-80d7-10fad9d38be6] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:344: "busybox" [99c0aeb8-3bf2-403f-80d7-10fad9d38be6] Running
start_stop_delete_test.go:196: (dbg) TestStartStop/group/embed-certs/serial/DeployApp: integration-test=busybox healthy within 10.004770042s
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context embed-certs-966407 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/embed-certs/serial/DeployApp (10.27s)

TestStartStop/group/embed-certs/serial/EnableAddonWhileActive (0.93s)

=== RUN   TestStartStop/group/embed-certs/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p embed-certs-966407 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
E0917 17:48:30.061881   18778 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19662-12004/.minikube/profiles/skaffold-388525/client.crt: no such file or directory" logger="UnhandledError"
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context embed-certs-966407 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/embed-certs/serial/EnableAddonWhileActive (0.93s)

TestStartStop/group/embed-certs/serial/Stop (10.68s)

=== RUN   TestStartStop/group/embed-certs/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-linux-amd64 stop -p embed-certs-966407 --alsologtostderr -v=3
start_stop_delete_test.go:228: (dbg) Done: out/minikube-linux-amd64 stop -p embed-certs-966407 --alsologtostderr -v=3: (10.682125936s)
--- PASS: TestStartStop/group/embed-certs/serial/Stop (10.68s)

TestStartStop/group/default-k8s-diff-port/serial/DeployApp (9.26s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context default-k8s-diff-port-004975 create -f testdata/busybox.yaml
start_stop_delete_test.go:196: (dbg) TestStartStop/group/default-k8s-diff-port/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:344: "busybox" [6a70b206-b857-47dc-ae50-a3119020d344] Pending
helpers_test.go:344: "busybox" [6a70b206-b857-47dc-ae50-a3119020d344] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:344: "busybox" [6a70b206-b857-47dc-ae50-a3119020d344] Running
start_stop_delete_test.go:196: (dbg) TestStartStop/group/default-k8s-diff-port/serial/DeployApp: integration-test=busybox healthy within 9.00333352s
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context default-k8s-diff-port-004975 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/DeployApp (9.26s)

TestStartStop/group/no-preload/serial/DeployApp (9.29s)

=== RUN   TestStartStop/group/no-preload/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context no-preload-492823 create -f testdata/busybox.yaml
start_stop_delete_test.go:196: (dbg) TestStartStop/group/no-preload/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:344: "busybox" [9a158c8d-7b04-4d25-a0f5-ce5fd4c81826] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:344: "busybox" [9a158c8d-7b04-4d25-a0f5-ce5fd4c81826] Running
start_stop_delete_test.go:196: (dbg) TestStartStop/group/no-preload/serial/DeployApp: integration-test=busybox healthy within 9.004109699s
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context no-preload-492823 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/no-preload/serial/DeployApp (9.29s)

TestStartStop/group/embed-certs/serial/EnableAddonAfterStop (0.23s)

=== RUN   TestStartStop/group/embed-certs/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-966407 -n embed-certs-966407
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-966407 -n embed-certs-966407: exit status 7 (115.474612ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p embed-certs-966407 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/embed-certs/serial/EnableAddonAfterStop (0.23s)

TestStartStop/group/embed-certs/serial/SecondStart (305.72s)

=== RUN   TestStartStop/group/embed-certs/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-linux-amd64 start -p embed-certs-966407 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=docker --kubernetes-version=v1.31.1
start_stop_delete_test.go:256: (dbg) Done: out/minikube-linux-amd64 start -p embed-certs-966407 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=docker --kubernetes-version=v1.31.1: (5m5.430103228s)
start_stop_delete_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-966407 -n embed-certs-966407
--- PASS: TestStartStop/group/embed-certs/serial/SecondStart (305.72s)

TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive (1.02s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p default-k8s-diff-port-004975 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context default-k8s-diff-port-004975 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive (1.02s)

TestStartStop/group/no-preload/serial/EnableAddonWhileActive (1.28s)

=== RUN   TestStartStop/group/no-preload/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p no-preload-492823 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:205: (dbg) Done: out/minikube-linux-amd64 addons enable metrics-server -p no-preload-492823 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: (1.189339883s)
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context no-preload-492823 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/no-preload/serial/EnableAddonWhileActive (1.28s)

TestStartStop/group/default-k8s-diff-port/serial/Stop (10.79s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-linux-amd64 stop -p default-k8s-diff-port-004975 --alsologtostderr -v=3
start_stop_delete_test.go:228: (dbg) Done: out/minikube-linux-amd64 stop -p default-k8s-diff-port-004975 --alsologtostderr -v=3: (10.787629551s)
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/Stop (10.79s)

TestStartStop/group/no-preload/serial/Stop (10.78s)

=== RUN   TestStartStop/group/no-preload/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-linux-amd64 stop -p no-preload-492823 --alsologtostderr -v=3
start_stop_delete_test.go:228: (dbg) Done: out/minikube-linux-amd64 stop -p no-preload-492823 --alsologtostderr -v=3: (10.781898651s)
--- PASS: TestStartStop/group/no-preload/serial/Stop (10.78s)

TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop (0.23s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-004975 -n default-k8s-diff-port-004975
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-004975 -n default-k8s-diff-port-004975: exit status 7 (125.241677ms)
-- stdout --
	Stopped
-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p default-k8s-diff-port-004975 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop (0.23s)
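
Note: minikube status signals state through its exit code, which is why the harness logs "exit status 7 (may be ok)" instead of failing: the profile was stopped on purpose in the preceding Stop step. A minimal sketch of the same tolerance in shell (treating 7 as "profile stopped" is an assumption drawn from the Stopped output above):

    # Query host state; accept exit 7 since the profile was just stopped.
    out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-004975 -n default-k8s-diff-port-004975
    rc=$?
    if [ "$rc" -ne 0 ] && [ "$rc" -ne 7 ]; then
        echo "unexpected minikube status exit code: $rc" >&2
        exit "$rc"
    fi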

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/SecondStart (262.91s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-linux-amd64 start -p default-k8s-diff-port-004975 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=docker --kubernetes-version=v1.31.1
start_stop_delete_test.go:256: (dbg) Done: out/minikube-linux-amd64 start -p default-k8s-diff-port-004975 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=docker --kubernetes-version=v1.31.1: (4m22.622575817s)
start_stop_delete_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-004975 -n default-k8s-diff-port-004975
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/SecondStart (262.91s)

TestStartStop/group/no-preload/serial/EnableAddonAfterStop (0.3s)

=== RUN   TestStartStop/group/no-preload/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-492823 -n no-preload-492823
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-492823 -n no-preload-492823: exit status 7 (162.972191ms)
-- stdout --
	Stopped
-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p no-preload-492823 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/no-preload/serial/EnableAddonAfterStop (0.30s)

TestStartStop/group/no-preload/serial/SecondStart (263.15s)

=== RUN   TestStartStop/group/no-preload/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-linux-amd64 start -p no-preload-492823 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=docker --kubernetes-version=v1.31.1
E0917 17:49:16.674939   18778 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19662-12004/.minikube/profiles/auto-194686/client.crt: no such file or directory" logger="UnhandledError"
E0917 17:49:16.681343   18778 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19662-12004/.minikube/profiles/auto-194686/client.crt: no such file or directory" logger="UnhandledError"
E0917 17:49:16.692782   18778 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19662-12004/.minikube/profiles/auto-194686/client.crt: no such file or directory" logger="UnhandledError"
E0917 17:49:16.714155   18778 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19662-12004/.minikube/profiles/auto-194686/client.crt: no such file or directory" logger="UnhandledError"
E0917 17:49:16.755489   18778 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19662-12004/.minikube/profiles/auto-194686/client.crt: no such file or directory" logger="UnhandledError"
E0917 17:49:16.836852   18778 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19662-12004/.minikube/profiles/auto-194686/client.crt: no such file or directory" logger="UnhandledError"
E0917 17:49:16.998329   18778 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19662-12004/.minikube/profiles/auto-194686/client.crt: no such file or directory" logger="UnhandledError"
E0917 17:49:17.320238   18778 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19662-12004/.minikube/profiles/auto-194686/client.crt: no such file or directory" logger="UnhandledError"
E0917 17:49:17.962785   18778 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19662-12004/.minikube/profiles/auto-194686/client.crt: no such file or directory" logger="UnhandledError"
E0917 17:49:19.244303   18778 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19662-12004/.minikube/profiles/auto-194686/client.crt: no such file or directory" logger="UnhandledError"
E0917 17:49:21.805600   18778 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19662-12004/.minikube/profiles/auto-194686/client.crt: no such file or directory" logger="UnhandledError"
start_stop_delete_test.go:256: (dbg) Done: out/minikube-linux-amd64 start -p no-preload-492823 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=docker --kubernetes-version=v1.31.1: (4m22.857826045s)
start_stop_delete_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-492823 -n no-preload-492823
--- PASS: TestStartStop/group/no-preload/serial/SecondStart (263.15s)

TestStartStop/group/old-k8s-version/serial/DeployApp (9.47s)

=== RUN   TestStartStop/group/old-k8s-version/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context old-k8s-version-484551 create -f testdata/busybox.yaml
start_stop_delete_test.go:196: (dbg) TestStartStop/group/old-k8s-version/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:344: "busybox" [164e8699-e13d-4755-a41e-0fd1d3e949fb] Pending
helpers_test.go:344: "busybox" [164e8699-e13d-4755-a41e-0fd1d3e949fb] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
E0917 17:49:26.927704   18778 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19662-12004/.minikube/profiles/auto-194686/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:344: "busybox" [164e8699-e13d-4755-a41e-0fd1d3e949fb] Running
start_stop_delete_test.go:196: (dbg) TestStartStop/group/old-k8s-version/serial/DeployApp: integration-test=busybox healthy within 9.004294529s
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context old-k8s-version-484551 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/old-k8s-version/serial/DeployApp (9.47s)
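
Note: this DeployApp step is a create-wait-exec round trip: apply testdata/busybox.yaml, poll for a pod matching integration-test=busybox (8m budget), then run a command in it. Roughly the same flow can be reproduced with kubectl alone; kubectl wait below is a stand-in for the harness's label-based polling, not what the harness itself calls:

    kubectl --context old-k8s-version-484551 create -f testdata/busybox.yaml
    # Wait on the same label the harness polls, with the same 8m budget.
    kubectl --context old-k8s-version-484551 wait --for=condition=Ready pod -l integration-test=busybox --timeout=8m0s
    kubectl --context old-k8s-version-484551 exec busybox -- /bin/sh -c "ulimit -n"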

                                                
                                    
TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive (0.78s)

=== RUN   TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p old-k8s-version-484551 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context old-k8s-version-484551 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive (0.78s)

TestStartStop/group/old-k8s-version/serial/Stop (10.78s)

=== RUN   TestStartStop/group/old-k8s-version/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-linux-amd64 stop -p old-k8s-version-484551 --alsologtostderr -v=3
E0917 17:49:35.032624   18778 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19662-12004/.minikube/profiles/addons-163060/client.crt: no such file or directory" logger="UnhandledError"
E0917 17:49:37.170050   18778 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19662-12004/.minikube/profiles/auto-194686/client.crt: no such file or directory" logger="UnhandledError"
start_stop_delete_test.go:228: (dbg) Done: out/minikube-linux-amd64 stop -p old-k8s-version-484551 --alsologtostderr -v=3: (10.779332529s)
--- PASS: TestStartStop/group/old-k8s-version/serial/Stop (10.78s)

TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop (0.2s)

=== RUN   TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-484551 -n old-k8s-version-484551
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-484551 -n old-k8s-version-484551: exit status 7 (97.333849ms)
-- stdout --
	Stopped
-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p old-k8s-version-484551 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop (0.20s)

TestStartStop/group/old-k8s-version/serial/SecondStart (130.97s)

=== RUN   TestStartStop/group/old-k8s-version/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-linux-amd64 start -p old-k8s-version-484551 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=docker --kubernetes-version=v1.20.0
E0917 17:49:57.652273   18778 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19662-12004/.minikube/profiles/auto-194686/client.crt: no such file or directory" logger="UnhandledError"
E0917 17:50:03.458739   18778 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19662-12004/.minikube/profiles/kindnet-194686/client.crt: no such file or directory" logger="UnhandledError"
E0917 17:50:03.465134   18778 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19662-12004/.minikube/profiles/kindnet-194686/client.crt: no such file or directory" logger="UnhandledError"
E0917 17:50:03.476514   18778 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19662-12004/.minikube/profiles/kindnet-194686/client.crt: no such file or directory" logger="UnhandledError"
E0917 17:50:03.498084   18778 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19662-12004/.minikube/profiles/kindnet-194686/client.crt: no such file or directory" logger="UnhandledError"
E0917 17:50:03.539518   18778 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19662-12004/.minikube/profiles/kindnet-194686/client.crt: no such file or directory" logger="UnhandledError"
E0917 17:50:03.620923   18778 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19662-12004/.minikube/profiles/kindnet-194686/client.crt: no such file or directory" logger="UnhandledError"
E0917 17:50:03.782455   18778 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19662-12004/.minikube/profiles/kindnet-194686/client.crt: no such file or directory" logger="UnhandledError"
E0917 17:50:04.104201   18778 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19662-12004/.minikube/profiles/kindnet-194686/client.crt: no such file or directory" logger="UnhandledError"
E0917 17:50:04.745692   18778 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19662-12004/.minikube/profiles/kindnet-194686/client.crt: no such file or directory" logger="UnhandledError"
E0917 17:50:06.027930   18778 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19662-12004/.minikube/profiles/kindnet-194686/client.crt: no such file or directory" logger="UnhandledError"
E0917 17:50:08.590093   18778 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19662-12004/.minikube/profiles/kindnet-194686/client.crt: no such file or directory" logger="UnhandledError"
E0917 17:50:08.748637   18778 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19662-12004/.minikube/profiles/functional-772451/client.crt: no such file or directory" logger="UnhandledError"
E0917 17:50:13.711461   18778 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19662-12004/.minikube/profiles/kindnet-194686/client.crt: no such file or directory" logger="UnhandledError"
E0917 17:50:21.508968   18778 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19662-12004/.minikube/profiles/calico-194686/client.crt: no such file or directory" logger="UnhandledError"
E0917 17:50:21.515376   18778 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19662-12004/.minikube/profiles/calico-194686/client.crt: no such file or directory" logger="UnhandledError"
E0917 17:50:21.526728   18778 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19662-12004/.minikube/profiles/calico-194686/client.crt: no such file or directory" logger="UnhandledError"
E0917 17:50:21.548068   18778 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19662-12004/.minikube/profiles/calico-194686/client.crt: no such file or directory" logger="UnhandledError"
E0917 17:50:21.589472   18778 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19662-12004/.minikube/profiles/calico-194686/client.crt: no such file or directory" logger="UnhandledError"
E0917 17:50:21.670852   18778 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19662-12004/.minikube/profiles/calico-194686/client.crt: no such file or directory" logger="UnhandledError"
E0917 17:50:21.832342   18778 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19662-12004/.minikube/profiles/calico-194686/client.crt: no such file or directory" logger="UnhandledError"
E0917 17:50:22.154579   18778 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19662-12004/.minikube/profiles/calico-194686/client.crt: no such file or directory" logger="UnhandledError"
E0917 17:50:22.796292   18778 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19662-12004/.minikube/profiles/calico-194686/client.crt: no such file or directory" logger="UnhandledError"
E0917 17:50:23.953105   18778 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19662-12004/.minikube/profiles/kindnet-194686/client.crt: no such file or directory" logger="UnhandledError"
E0917 17:50:24.077557   18778 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19662-12004/.minikube/profiles/calico-194686/client.crt: no such file or directory" logger="UnhandledError"
E0917 17:50:26.639636   18778 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19662-12004/.minikube/profiles/calico-194686/client.crt: no such file or directory" logger="UnhandledError"
E0917 17:50:31.761585   18778 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19662-12004/.minikube/profiles/calico-194686/client.crt: no such file or directory" logger="UnhandledError"
E0917 17:50:32.225098   18778 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19662-12004/.minikube/profiles/custom-flannel-194686/client.crt: no such file or directory" logger="UnhandledError"
E0917 17:50:32.231488   18778 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19662-12004/.minikube/profiles/custom-flannel-194686/client.crt: no such file or directory" logger="UnhandledError"
E0917 17:50:32.242861   18778 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19662-12004/.minikube/profiles/custom-flannel-194686/client.crt: no such file or directory" logger="UnhandledError"
E0917 17:50:32.264238   18778 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19662-12004/.minikube/profiles/custom-flannel-194686/client.crt: no such file or directory" logger="UnhandledError"
E0917 17:50:32.305603   18778 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19662-12004/.minikube/profiles/custom-flannel-194686/client.crt: no such file or directory" logger="UnhandledError"
E0917 17:50:32.387208   18778 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19662-12004/.minikube/profiles/custom-flannel-194686/client.crt: no such file or directory" logger="UnhandledError"
E0917 17:50:32.548799   18778 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19662-12004/.minikube/profiles/custom-flannel-194686/client.crt: no such file or directory" logger="UnhandledError"
E0917 17:50:32.870403   18778 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19662-12004/.minikube/profiles/custom-flannel-194686/client.crt: no such file or directory" logger="UnhandledError"
E0917 17:50:33.512665   18778 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19662-12004/.minikube/profiles/custom-flannel-194686/client.crt: no such file or directory" logger="UnhandledError"
E0917 17:50:34.794792   18778 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19662-12004/.minikube/profiles/custom-flannel-194686/client.crt: no such file or directory" logger="UnhandledError"
E0917 17:50:37.356490   18778 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19662-12004/.minikube/profiles/custom-flannel-194686/client.crt: no such file or directory" logger="UnhandledError"
E0917 17:50:38.613661   18778 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19662-12004/.minikube/profiles/auto-194686/client.crt: no such file or directory" logger="UnhandledError"
E0917 17:50:42.003112   18778 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19662-12004/.minikube/profiles/calico-194686/client.crt: no such file or directory" logger="UnhandledError"
E0917 17:50:42.478480   18778 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19662-12004/.minikube/profiles/custom-flannel-194686/client.crt: no such file or directory" logger="UnhandledError"
E0917 17:50:44.435052   18778 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19662-12004/.minikube/profiles/kindnet-194686/client.crt: no such file or directory" logger="UnhandledError"
E0917 17:50:52.720263   18778 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19662-12004/.minikube/profiles/custom-flannel-194686/client.crt: no such file or directory" logger="UnhandledError"
E0917 17:51:02.484735   18778 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19662-12004/.minikube/profiles/calico-194686/client.crt: no such file or directory" logger="UnhandledError"
E0917 17:51:12.412698   18778 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19662-12004/.minikube/profiles/false-194686/client.crt: no such file or directory" logger="UnhandledError"
E0917 17:51:12.419069   18778 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19662-12004/.minikube/profiles/false-194686/client.crt: no such file or directory" logger="UnhandledError"
E0917 17:51:12.430393   18778 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19662-12004/.minikube/profiles/false-194686/client.crt: no such file or directory" logger="UnhandledError"
E0917 17:51:12.451763   18778 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19662-12004/.minikube/profiles/false-194686/client.crt: no such file or directory" logger="UnhandledError"
E0917 17:51:12.493128   18778 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19662-12004/.minikube/profiles/false-194686/client.crt: no such file or directory" logger="UnhandledError"
E0917 17:51:12.574510   18778 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19662-12004/.minikube/profiles/false-194686/client.crt: no such file or directory" logger="UnhandledError"
E0917 17:51:12.736044   18778 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19662-12004/.minikube/profiles/false-194686/client.crt: no such file or directory" logger="UnhandledError"
E0917 17:51:13.057673   18778 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19662-12004/.minikube/profiles/false-194686/client.crt: no such file or directory" logger="UnhandledError"
E0917 17:51:13.202159   18778 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19662-12004/.minikube/profiles/custom-flannel-194686/client.crt: no such file or directory" logger="UnhandledError"
E0917 17:51:13.699564   18778 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19662-12004/.minikube/profiles/false-194686/client.crt: no such file or directory" logger="UnhandledError"
E0917 17:51:14.981560   18778 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19662-12004/.minikube/profiles/false-194686/client.crt: no such file or directory" logger="UnhandledError"
E0917 17:51:17.543788   18778 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19662-12004/.minikube/profiles/false-194686/client.crt: no such file or directory" logger="UnhandledError"
E0917 17:51:22.666041   18778 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19662-12004/.minikube/profiles/false-194686/client.crt: no such file or directory" logger="UnhandledError"
E0917 17:51:23.265091   18778 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19662-12004/.minikube/profiles/enable-default-cni-194686/client.crt: no such file or directory" logger="UnhandledError"
E0917 17:51:23.271456   18778 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19662-12004/.minikube/profiles/enable-default-cni-194686/client.crt: no such file or directory" logger="UnhandledError"
E0917 17:51:23.283360   18778 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19662-12004/.minikube/profiles/enable-default-cni-194686/client.crt: no such file or directory" logger="UnhandledError"
E0917 17:51:23.305220   18778 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19662-12004/.minikube/profiles/enable-default-cni-194686/client.crt: no such file or directory" logger="UnhandledError"
E0917 17:51:23.346728   18778 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19662-12004/.minikube/profiles/enable-default-cni-194686/client.crt: no such file or directory" logger="UnhandledError"
E0917 17:51:23.428044   18778 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19662-12004/.minikube/profiles/enable-default-cni-194686/client.crt: no such file or directory" logger="UnhandledError"
E0917 17:51:23.589346   18778 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19662-12004/.minikube/profiles/enable-default-cni-194686/client.crt: no such file or directory" logger="UnhandledError"
E0917 17:51:23.910595   18778 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19662-12004/.minikube/profiles/enable-default-cni-194686/client.crt: no such file or directory" logger="UnhandledError"
E0917 17:51:24.552691   18778 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19662-12004/.minikube/profiles/enable-default-cni-194686/client.crt: no such file or directory" logger="UnhandledError"
E0917 17:51:25.396656   18778 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19662-12004/.minikube/profiles/kindnet-194686/client.crt: no such file or directory" logger="UnhandledError"
E0917 17:51:25.834484   18778 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19662-12004/.minikube/profiles/enable-default-cni-194686/client.crt: no such file or directory" logger="UnhandledError"
E0917 17:51:28.396115   18778 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19662-12004/.minikube/profiles/enable-default-cni-194686/client.crt: no such file or directory" logger="UnhandledError"
E0917 17:51:32.908206   18778 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19662-12004/.minikube/profiles/false-194686/client.crt: no such file or directory" logger="UnhandledError"
E0917 17:51:33.517535   18778 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19662-12004/.minikube/profiles/enable-default-cni-194686/client.crt: no such file or directory" logger="UnhandledError"
E0917 17:51:43.446713   18778 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19662-12004/.minikube/profiles/calico-194686/client.crt: no such file or directory" logger="UnhandledError"
E0917 17:51:43.759543   18778 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19662-12004/.minikube/profiles/enable-default-cni-194686/client.crt: no such file or directory" logger="UnhandledError"
E0917 17:51:46.060568   18778 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19662-12004/.minikube/profiles/flannel-194686/client.crt: no such file or directory" logger="UnhandledError"
E0917 17:51:46.066934   18778 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19662-12004/.minikube/profiles/flannel-194686/client.crt: no such file or directory" logger="UnhandledError"
E0917 17:51:46.078323   18778 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19662-12004/.minikube/profiles/flannel-194686/client.crt: no such file or directory" logger="UnhandledError"
E0917 17:51:46.099710   18778 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19662-12004/.minikube/profiles/flannel-194686/client.crt: no such file or directory" logger="UnhandledError"
E0917 17:51:46.141099   18778 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19662-12004/.minikube/profiles/flannel-194686/client.crt: no such file or directory" logger="UnhandledError"
E0917 17:51:46.222533   18778 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19662-12004/.minikube/profiles/flannel-194686/client.crt: no such file or directory" logger="UnhandledError"
E0917 17:51:46.384731   18778 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19662-12004/.minikube/profiles/flannel-194686/client.crt: no such file or directory" logger="UnhandledError"
E0917 17:51:46.706455   18778 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19662-12004/.minikube/profiles/flannel-194686/client.crt: no such file or directory" logger="UnhandledError"
E0917 17:51:47.348232   18778 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19662-12004/.minikube/profiles/flannel-194686/client.crt: no such file or directory" logger="UnhandledError"
E0917 17:51:48.630628   18778 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19662-12004/.minikube/profiles/flannel-194686/client.crt: no such file or directory" logger="UnhandledError"
E0917 17:51:51.192425   18778 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19662-12004/.minikube/profiles/flannel-194686/client.crt: no such file or directory" logger="UnhandledError"
E0917 17:51:53.390226   18778 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19662-12004/.minikube/profiles/false-194686/client.crt: no such file or directory" logger="UnhandledError"
E0917 17:51:54.164213   18778 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19662-12004/.minikube/profiles/custom-flannel-194686/client.crt: no such file or directory" logger="UnhandledError"
start_stop_delete_test.go:256: (dbg) Done: out/minikube-linux-amd64 start -p old-k8s-version-484551 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=docker --kubernetes-version=v1.20.0: (2m10.66133646s)
start_stop_delete_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-484551 -n old-k8s-version-484551
--- PASS: TestStartStop/group/old-k8s-version/serial/SecondStart (130.97s)

TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop (6.01s)

=== RUN   TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop
start_stop_delete_test.go:274: (dbg) TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-cd95d586-mc9wf" [87bd0187-9e98-4f2b-b8eb-73e793410a5f] Running
E0917 17:51:56.314535   18778 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19662-12004/.minikube/profiles/flannel-194686/client.crt: no such file or directory" logger="UnhandledError"
E0917 17:52:00.535882   18778 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19662-12004/.minikube/profiles/auto-194686/client.crt: no such file or directory" logger="UnhandledError"
start_stop_delete_test.go:274: (dbg) TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 6.003872618s
--- PASS: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop (6.01s)

TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop (5.08s)

=== RUN   TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop
start_stop_delete_test.go:287: (dbg) TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-cd95d586-mc9wf" [87bd0187-9e98-4f2b-b8eb-73e793410a5f] Running
E0917 17:52:04.241037   18778 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19662-12004/.minikube/profiles/enable-default-cni-194686/client.crt: no such file or directory" logger="UnhandledError"
E0917 17:52:05.682617   18778 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19662-12004/.minikube/profiles/functional-772451/client.crt: no such file or directory" logger="UnhandledError"
E0917 17:52:06.556721   18778 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19662-12004/.minikube/profiles/flannel-194686/client.crt: no such file or directory" logger="UnhandledError"
start_stop_delete_test.go:287: (dbg) TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.003514157s
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context old-k8s-version-484551 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop (5.08s)

TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages (0.21s)

=== RUN   TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-linux-amd64 -p old-k8s-version-484551 image list --format=json
start_stop_delete_test.go:304: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
--- PASS: TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages (0.21s)
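
Note: VerifyKubernetesImages lists the images loaded in the profile and reports anything outside the expected Kubernetes set; gcr.io/k8s-minikube/busybox:1.28.4-glibc shows up here because the earlier DeployApp step pulled it, so the finding is expected. A rough hand-run equivalent (the registry filter is an illustrative assumption, not the test's actual allow-list):

    # Show loaded images that do not come from the core registries.
    out/minikube-linux-amd64 -p old-k8s-version-484551 image list | grep -v -e '^registry.k8s.io/' -e '^k8s.gcr.io/'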

                                                
                                    
TestStartStop/group/old-k8s-version/serial/Pause (2.33s)

=== RUN   TestStartStop/group/old-k8s-version/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 pause -p old-k8s-version-484551 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-484551 -n old-k8s-version-484551
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-484551 -n old-k8s-version-484551: exit status 2 (290.41658ms)
-- stdout --
	Paused
-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p old-k8s-version-484551 -n old-k8s-version-484551
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Kubelet}} -p old-k8s-version-484551 -n old-k8s-version-484551: exit status 2 (283.433878ms)
-- stdout --
	Stopped
-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 unpause -p old-k8s-version-484551 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-484551 -n old-k8s-version-484551
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p old-k8s-version-484551 -n old-k8s-version-484551
E0917 17:52:09.756526   18778 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19662-12004/.minikube/profiles/bridge-194686/client.crt: no such file or directory" logger="UnhandledError"
--- PASS: TestStartStop/group/old-k8s-version/serial/Pause (2.33s)
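
Note: the Pause step is a full round trip: pause the profile, confirm via status that the API server reports Paused and the kubelet reports Stopped (each query exiting with status 2, which the harness accepts as "may be ok"), then unpause and confirm both status queries succeed again. Condensed to the underlying commands from this run:

    out/minikube-linux-amd64 pause -p old-k8s-version-484551 --alsologtostderr -v=1
    # While paused, these exit non-zero (status 2 in this run) by design.
    out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-484551 -n old-k8s-version-484551 || true
    out/minikube-linux-amd64 status --format={{.Kubelet}} -p old-k8s-version-484551 -n old-k8s-version-484551 || true
    out/minikube-linux-amd64 unpause -p old-k8s-version-484551 --alsologtostderr -v=1
    out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-484551 -n old-k8s-version-484551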

                                                
                                    
TestStartStop/group/newest-cni/serial/FirstStart (31.25s)

=== RUN   TestStartStop/group/newest-cni/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-linux-amd64 start -p newest-cni-388557 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=docker --kubernetes-version=v1.31.1
E0917 17:52:12.325686   18778 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19662-12004/.minikube/profiles/bridge-194686/client.crt: no such file or directory" logger="UnhandledError"
E0917 17:52:14.887113   18778 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19662-12004/.minikube/profiles/bridge-194686/client.crt: no such file or directory" logger="UnhandledError"
E0917 17:52:20.009300   18778 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19662-12004/.minikube/profiles/bridge-194686/client.crt: no such file or directory" logger="UnhandledError"
E0917 17:52:27.038704   18778 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19662-12004/.minikube/profiles/flannel-194686/client.crt: no such file or directory" logger="UnhandledError"
E0917 17:52:29.544372   18778 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19662-12004/.minikube/profiles/kubenet-194686/client.crt: no such file or directory" logger="UnhandledError"
E0917 17:52:29.550729   18778 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19662-12004/.minikube/profiles/kubenet-194686/client.crt: no such file or directory" logger="UnhandledError"
E0917 17:52:29.562072   18778 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19662-12004/.minikube/profiles/kubenet-194686/client.crt: no such file or directory" logger="UnhandledError"
E0917 17:52:29.584026   18778 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19662-12004/.minikube/profiles/kubenet-194686/client.crt: no such file or directory" logger="UnhandledError"
E0917 17:52:29.625408   18778 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19662-12004/.minikube/profiles/kubenet-194686/client.crt: no such file or directory" logger="UnhandledError"
E0917 17:52:29.706797   18778 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19662-12004/.minikube/profiles/kubenet-194686/client.crt: no such file or directory" logger="UnhandledError"
E0917 17:52:29.869075   18778 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19662-12004/.minikube/profiles/kubenet-194686/client.crt: no such file or directory" logger="UnhandledError"
E0917 17:52:30.190530   18778 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19662-12004/.minikube/profiles/kubenet-194686/client.crt: no such file or directory" logger="UnhandledError"
E0917 17:52:30.250938   18778 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19662-12004/.minikube/profiles/bridge-194686/client.crt: no such file or directory" logger="UnhandledError"
E0917 17:52:30.832481   18778 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19662-12004/.minikube/profiles/kubenet-194686/client.crt: no such file or directory" logger="UnhandledError"
E0917 17:52:32.114038   18778 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19662-12004/.minikube/profiles/kubenet-194686/client.crt: no such file or directory" logger="UnhandledError"
E0917 17:52:34.352274   18778 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19662-12004/.minikube/profiles/false-194686/client.crt: no such file or directory" logger="UnhandledError"
E0917 17:52:34.675954   18778 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19662-12004/.minikube/profiles/kubenet-194686/client.crt: no such file or directory" logger="UnhandledError"
E0917 17:52:39.797477   18778 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19662-12004/.minikube/profiles/kubenet-194686/client.crt: no such file or directory" logger="UnhandledError"
start_stop_delete_test.go:186: (dbg) Done: out/minikube-linux-amd64 start -p newest-cni-388557 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=docker --kubernetes-version=v1.31.1: (31.253156213s)
--- PASS: TestStartStop/group/newest-cni/serial/FirstStart (31.25s)
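
Note: this profile starts with --network-plugin=cni but never installs a CNI implementation, so ordinary pods cannot schedule; that is why the start command waits only on control-plane components and why later steps in this group print "cni mode requires additional setup before pods can schedule" and skip the app checks. The component-scoped wait is the key difference from the other profiles' --wait=true (other flags from the run above omitted for brevity):

    # Wait only for control-plane readiness, not for all pods to run.
    out/minikube-linux-amd64 start -p newest-cni-388557 --wait=apiserver,system_pods,default_sa --network-plugin=cni --driver=docker --container-runtime=docker --kubernetes-version=v1.31.1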

                                                
                                    
TestStartStop/group/newest-cni/serial/DeployApp (0s)

=== RUN   TestStartStop/group/newest-cni/serial/DeployApp
--- PASS: TestStartStop/group/newest-cni/serial/DeployApp (0.00s)

TestStartStop/group/newest-cni/serial/EnableAddonWhileActive (0.86s)

=== RUN   TestStartStop/group/newest-cni/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p newest-cni-388557 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:211: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/EnableAddonWhileActive (0.86s)

TestStartStop/group/newest-cni/serial/Stop (10.79s)

=== RUN   TestStartStop/group/newest-cni/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-linux-amd64 stop -p newest-cni-388557 --alsologtostderr -v=3
E0917 17:52:45.203211   18778 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19662-12004/.minikube/profiles/enable-default-cni-194686/client.crt: no such file or directory" logger="UnhandledError"
E0917 17:52:47.317943   18778 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19662-12004/.minikube/profiles/kindnet-194686/client.crt: no such file or directory" logger="UnhandledError"
E0917 17:52:50.039471   18778 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19662-12004/.minikube/profiles/kubenet-194686/client.crt: no such file or directory" logger="UnhandledError"
E0917 17:52:50.733047   18778 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19662-12004/.minikube/profiles/bridge-194686/client.crt: no such file or directory" logger="UnhandledError"
start_stop_delete_test.go:228: (dbg) Done: out/minikube-linux-amd64 stop -p newest-cni-388557 --alsologtostderr -v=3: (10.790343494s)
--- PASS: TestStartStop/group/newest-cni/serial/Stop (10.79s)

TestStartStop/group/newest-cni/serial/EnableAddonAfterStop (0.21s)

=== RUN   TestStartStop/group/newest-cni/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p newest-cni-388557 -n newest-cni-388557
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p newest-cni-388557 -n newest-cni-388557: exit status 7 (108.406965ms)
-- stdout --
	Stopped
-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p newest-cni-388557 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/newest-cni/serial/EnableAddonAfterStop (0.21s)

TestStartStop/group/newest-cni/serial/SecondStart (14.69s)

=== RUN   TestStartStop/group/newest-cni/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-linux-amd64 start -p newest-cni-388557 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=docker --kubernetes-version=v1.31.1
E0917 17:53:02.359692   18778 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19662-12004/.minikube/profiles/skaffold-388525/client.crt: no such file or directory" logger="UnhandledError"
E0917 17:53:05.368512   18778 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19662-12004/.minikube/profiles/calico-194686/client.crt: no such file or directory" logger="UnhandledError"
E0917 17:53:08.000670   18778 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19662-12004/.minikube/profiles/flannel-194686/client.crt: no such file or directory" logger="UnhandledError"
start_stop_delete_test.go:256: (dbg) Done: out/minikube-linux-amd64 start -p newest-cni-388557 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=docker --kubernetes-version=v1.31.1: (14.363585842s)
start_stop_delete_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p newest-cni-388557 -n newest-cni-388557
--- PASS: TestStartStop/group/newest-cni/serial/SecondStart (14.69s)

TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop (0s)

=== RUN   TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop
start_stop_delete_test.go:273: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop (0.00s)

TestStartStop/group/newest-cni/serial/AddonExistsAfterStop (0s)

=== RUN   TestStartStop/group/newest-cni/serial/AddonExistsAfterStop
start_stop_delete_test.go:284: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/AddonExistsAfterStop (0.00s)

TestStartStop/group/newest-cni/serial/VerifyKubernetesImages (0.21s)

=== RUN   TestStartStop/group/newest-cni/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-linux-amd64 -p newest-cni-388557 image list --format=json
--- PASS: TestStartStop/group/newest-cni/serial/VerifyKubernetesImages (0.21s)

TestStartStop/group/newest-cni/serial/Pause (2.55s)

=== RUN   TestStartStop/group/newest-cni/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 pause -p newest-cni-388557 --alsologtostderr -v=1
E0917 17:53:10.521325   18778 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19662-12004/.minikube/profiles/kubenet-194686/client.crt: no such file or directory" logger="UnhandledError"
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p newest-cni-388557 -n newest-cni-388557
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p newest-cni-388557 -n newest-cni-388557: exit status 2 (299.570037ms)
-- stdout --
	Paused
-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p newest-cni-388557 -n newest-cni-388557
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Kubelet}} -p newest-cni-388557 -n newest-cni-388557: exit status 2 (292.241173ms)
-- stdout --
	Stopped
-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 unpause -p newest-cni-388557 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p newest-cni-388557 -n newest-cni-388557
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p newest-cni-388557 -n newest-cni-388557
--- PASS: TestStartStop/group/newest-cni/serial/Pause (2.55s)

TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop (6.01s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop
start_stop_delete_test.go:274: (dbg) TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-695b96c756-b9rtc" [84318409-0b21-46d9-a02d-2b9814d41adf] Running
start_stop_delete_test.go:274: (dbg) TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 6.003979394s
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop (6.01s)

TestStartStop/group/no-preload/serial/UserAppExistsAfterStop (6.01s)

=== RUN   TestStartStop/group/no-preload/serial/UserAppExistsAfterStop
start_stop_delete_test.go:274: (dbg) TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-695b96c756-s92w7" [eb820463-3cb1-4b92-a0ba-c5965e08dd7c] Running
start_stop_delete_test.go:274: (dbg) TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 6.004249485s
--- PASS: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop (6.01s)

TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop (5.07s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop
start_stop_delete_test.go:287: (dbg) TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-695b96c756-b9rtc" [84318409-0b21-46d9-a02d-2b9814d41adf] Running
E0917 17:53:31.694383   18778 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19662-12004/.minikube/profiles/bridge-194686/client.crt: no such file or directory" logger="UnhandledError"
start_stop_delete_test.go:287: (dbg) TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.004452781s
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context default-k8s-diff-port-004975 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop (5.07s)

TestStartStop/group/no-preload/serial/AddonExistsAfterStop (5.1s)

=== RUN   TestStartStop/group/no-preload/serial/AddonExistsAfterStop
start_stop_delete_test.go:287: (dbg) TestStartStop/group/no-preload/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-695b96c756-s92w7" [eb820463-3cb1-4b92-a0ba-c5965e08dd7c] Running
start_stop_delete_test.go:287: (dbg) TestStartStop/group/no-preload/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.004596654s
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context no-preload-492823 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/no-preload/serial/AddonExistsAfterStop (5.10s)

TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages (0.21s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-linux-amd64 -p default-k8s-diff-port-004975 image list --format=json
start_stop_delete_test.go:304: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages (0.21s)

TestStartStop/group/default-k8s-diff-port/serial/Pause (2.55s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 pause -p default-k8s-diff-port-004975 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p default-k8s-diff-port-004975 -n default-k8s-diff-port-004975
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p default-k8s-diff-port-004975 -n default-k8s-diff-port-004975: exit status 2 (307.78386ms)

-- stdout --
	Paused
-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p default-k8s-diff-port-004975 -n default-k8s-diff-port-004975
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Kubelet}} -p default-k8s-diff-port-004975 -n default-k8s-diff-port-004975: exit status 2 (351.693946ms)

-- stdout --
	Stopped
-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 unpause -p default-k8s-diff-port-004975 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p default-k8s-diff-port-004975 -n default-k8s-diff-port-004975
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p default-k8s-diff-port-004975 -n default-k8s-diff-port-004975
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/Pause (2.55s)

TestStartStop/group/no-preload/serial/VerifyKubernetesImages (0.25s)

=== RUN   TestStartStop/group/no-preload/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-linux-amd64 -p no-preload-492823 image list --format=json
start_stop_delete_test.go:304: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
--- PASS: TestStartStop/group/no-preload/serial/VerifyKubernetesImages (0.25s)

TestStartStop/group/no-preload/serial/Pause (2.62s)

=== RUN   TestStartStop/group/no-preload/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 pause -p no-preload-492823 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p no-preload-492823 -n no-preload-492823
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p no-preload-492823 -n no-preload-492823: exit status 2 (356.508417ms)

-- stdout --
	Paused
-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p no-preload-492823 -n no-preload-492823
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Kubelet}} -p no-preload-492823 -n no-preload-492823: exit status 2 (310.89628ms)

-- stdout --
	Stopped
-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 unpause -p no-preload-492823 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p no-preload-492823 -n no-preload-492823
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p no-preload-492823 -n no-preload-492823
--- PASS: TestStartStop/group/no-preload/serial/Pause (2.62s)

TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop (6s)

=== RUN   TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop
start_stop_delete_test.go:274: (dbg) TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-695b96c756-jxngc" [7fcd05a8-beba-4b1b-a6fe-2d50d171a497] Running
E0917 17:53:51.483190   18778 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19662-12004/.minikube/profiles/kubenet-194686/client.crt: no such file or directory" logger="UnhandledError"
start_stop_delete_test.go:274: (dbg) TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 6.003441411s
--- PASS: TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop (6.00s)

TestStartStop/group/embed-certs/serial/AddonExistsAfterStop (5.07s)

=== RUN   TestStartStop/group/embed-certs/serial/AddonExistsAfterStop
start_stop_delete_test.go:287: (dbg) TestStartStop/group/embed-certs/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-695b96c756-jxngc" [7fcd05a8-beba-4b1b-a6fe-2d50d171a497] Running
E0917 17:53:56.274232   18778 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19662-12004/.minikube/profiles/false-194686/client.crt: no such file or directory" logger="UnhandledError"
start_stop_delete_test.go:287: (dbg) TestStartStop/group/embed-certs/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.003281157s
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context embed-certs-966407 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/embed-certs/serial/AddonExistsAfterStop (5.07s)

TestStartStop/group/embed-certs/serial/VerifyKubernetesImages (0.21s)

=== RUN   TestStartStop/group/embed-certs/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-linux-amd64 -p embed-certs-966407 image list --format=json
start_stop_delete_test.go:304: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
--- PASS: TestStartStop/group/embed-certs/serial/VerifyKubernetesImages (0.21s)

                                    
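To inspect the same image inventory by hand, the logged image-list command can be filtered with jq; note that the .repoTags field name is an assumption about minikube's JSON output shape, not something shown in this log:

	out/minikube-linux-amd64 -p embed-certs-966407 image list --format=json \
	  | jq -r '.[].repoTags[]'   # repoTags is assumed; check the raw JSON first
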
TestStartStop/group/embed-certs/serial/Pause (2.33s)

=== RUN   TestStartStop/group/embed-certs/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 pause -p embed-certs-966407 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p embed-certs-966407 -n embed-certs-966407
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p embed-certs-966407 -n embed-certs-966407: exit status 2 (278.419639ms)

-- stdout --
	Paused
-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p embed-certs-966407 -n embed-certs-966407
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Kubelet}} -p embed-certs-966407 -n embed-certs-966407: exit status 2 (284.082018ms)

-- stdout --
	Stopped
-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 unpause -p embed-certs-966407 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p embed-certs-966407 -n embed-certs-966407
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p embed-certs-966407 -n embed-certs-966407
--- PASS: TestStartStop/group/embed-certs/serial/Pause (2.33s)

Test skip (20/343)

TestDownloadOnly/v1.20.0/cached-images (0s)

=== RUN   TestDownloadOnly/v1.20.0/cached-images
aaa_download_only_test.go:129: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.20.0/cached-images (0.00s)

TestDownloadOnly/v1.20.0/binaries (0s)

=== RUN   TestDownloadOnly/v1.20.0/binaries
aaa_download_only_test.go:151: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.20.0/binaries (0.00s)

TestDownloadOnly/v1.20.0/kubectl (0s)

=== RUN   TestDownloadOnly/v1.20.0/kubectl
aaa_download_only_test.go:167: Test for darwin and windows
--- SKIP: TestDownloadOnly/v1.20.0/kubectl (0.00s)

TestDownloadOnly/v1.31.1/cached-images (0s)

=== RUN   TestDownloadOnly/v1.31.1/cached-images
aaa_download_only_test.go:129: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.31.1/cached-images (0.00s)

TestDownloadOnly/v1.31.1/binaries (0s)

=== RUN   TestDownloadOnly/v1.31.1/binaries
aaa_download_only_test.go:151: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.31.1/binaries (0.00s)

TestDownloadOnly/v1.31.1/kubectl (0s)

=== RUN   TestDownloadOnly/v1.31.1/kubectl
aaa_download_only_test.go:167: Test for darwin and windows
--- SKIP: TestDownloadOnly/v1.31.1/kubectl (0.00s)

TestAddons/parallel/Olm (0s)

=== RUN   TestAddons/parallel/Olm
=== PAUSE TestAddons/parallel/Olm

=== CONT  TestAddons/parallel/Olm
addons_test.go:500: Skipping OLM addon test until https://github.com/operator-framework/operator-lifecycle-manager/issues/2534 is resolved
--- SKIP: TestAddons/parallel/Olm (0.00s)

TestDockerEnvContainerd (0s)

=== RUN   TestDockerEnvContainerd
docker_test.go:170: running with docker true linux amd64
docker_test.go:172: skipping: TestDockerEnvContainerd can only be run with the containerd runtime on Docker driver
--- SKIP: TestDockerEnvContainerd (0.00s)

TestHyperKitDriverInstallOrUpdate (0s)

=== RUN   TestHyperKitDriverInstallOrUpdate
driver_install_or_update_test.go:105: Skip if not darwin.
--- SKIP: TestHyperKitDriverInstallOrUpdate (0.00s)

TestHyperkitDriverSkipUpgrade (0s)

=== RUN   TestHyperkitDriverSkipUpgrade
driver_install_or_update_test.go:169: Skip if not darwin.
--- SKIP: TestHyperkitDriverSkipUpgrade (0.00s)

TestFunctional/parallel/PodmanEnv (0s)

=== RUN   TestFunctional/parallel/PodmanEnv
=== PAUSE TestFunctional/parallel/PodmanEnv

=== CONT  TestFunctional/parallel/PodmanEnv
functional_test.go:550: only validate podman env with docker container runtime, currently testing docker
--- SKIP: TestFunctional/parallel/PodmanEnv (0.00s)

TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig (0s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig
functional_test_tunnel_test.go:99: DNS forwarding is only supported for Hyperkit on Darwin, skipping test DNS forwarding
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig (0.00s)

TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil
functional_test_tunnel_test.go:99: DNS forwarding is only supported for Hyperkit on Darwin, skipping test DNS forwarding
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0.00s)

TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS (0s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS
functional_test_tunnel_test.go:99: DNS forwarding is only supported for Hyperkit on Darwin, skipping test DNS forwarding
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS (0.00s)

TestGvisorAddon (0s)

=== RUN   TestGvisorAddon
gvisor_addon_test.go:34: skipping test because --gvisor=false
--- SKIP: TestGvisorAddon (0.00s)

TestImageBuild/serial/validateImageBuildWithBuildEnv (0s)

=== RUN   TestImageBuild/serial/validateImageBuildWithBuildEnv
image_test.go:114: skipping due to https://github.com/kubernetes/minikube/issues/12431
--- SKIP: TestImageBuild/serial/validateImageBuildWithBuildEnv (0.00s)

TestChangeNoneUser (0s)

=== RUN   TestChangeNoneUser
none_test.go:38: Test requires none driver and SUDO_USER env to not be empty
--- SKIP: TestChangeNoneUser (0.00s)

TestScheduledStopWindows (0s)

=== RUN   TestScheduledStopWindows
scheduled_stop_test.go:42: test only runs on windows
--- SKIP: TestScheduledStopWindows (0.00s)

TestNetworkPlugins/group/cilium (3.16s)

=== RUN   TestNetworkPlugins/group/cilium
net_test.go:102: Skipping the test as it's interfering with other tests and is outdated
panic.go:629: 
----------------------- debugLogs start: cilium-194686 [pass: true] --------------------------------
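Every probe below fails with a missing context or profile error: the skip at net_test.go:102 fires before a cilium-194686 cluster is ever created, so debugLogs has nothing to inspect; only the harness-wide kubectl config dump further down contains real data.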
>>> netcat: nslookup kubernetes.default:
Error in configuration: context was not found for specified context: cilium-194686

>>> netcat: nslookup debug kubernetes.default a-records:
Error in configuration: context was not found for specified context: cilium-194686

>>> netcat: dig search kubernetes.default:
Error in configuration: context was not found for specified context: cilium-194686

>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local udp/53:
Error in configuration: context was not found for specified context: cilium-194686

>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local tcp/53:
Error in configuration: context was not found for specified context: cilium-194686

>>> netcat: nc 10.96.0.10 udp/53:
Error in configuration: context was not found for specified context: cilium-194686

>>> netcat: nc 10.96.0.10 tcp/53:
Error in configuration: context was not found for specified context: cilium-194686

>>> netcat: /etc/nsswitch.conf:
Error in configuration: context was not found for specified context: cilium-194686

>>> netcat: /etc/hosts:
Error in configuration: context was not found for specified context: cilium-194686

>>> netcat: /etc/resolv.conf:
Error in configuration: context was not found for specified context: cilium-194686

>>> host: /etc/nsswitch.conf:
* Profile "cilium-194686" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-194686"

>>> host: /etc/hosts:
* Profile "cilium-194686" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-194686"

>>> host: /etc/resolv.conf:
* Profile "cilium-194686" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-194686"

>>> k8s: nodes, services, endpoints, daemon sets, deployments and pods, :
Error in configuration: context was not found for specified context: cilium-194686

>>> host: crictl pods:
* Profile "cilium-194686" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-194686"

>>> host: crictl containers:
* Profile "cilium-194686" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-194686"

>>> k8s: describe netcat deployment:
error: context "cilium-194686" does not exist

>>> k8s: describe netcat pod(s):
error: context "cilium-194686" does not exist

>>> k8s: netcat logs:
error: context "cilium-194686" does not exist

>>> k8s: describe coredns deployment:
error: context "cilium-194686" does not exist

>>> k8s: describe coredns pods:
error: context "cilium-194686" does not exist

>>> k8s: coredns logs:
error: context "cilium-194686" does not exist

>>> k8s: describe api server pod(s):
error: context "cilium-194686" does not exist

>>> k8s: api server logs:
error: context "cilium-194686" does not exist

>>> host: /etc/cni:
* Profile "cilium-194686" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-194686"

>>> host: ip a s:
* Profile "cilium-194686" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-194686"

>>> host: ip r s:
* Profile "cilium-194686" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-194686"

>>> host: iptables-save:
* Profile "cilium-194686" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-194686"

>>> host: iptables table nat:
* Profile "cilium-194686" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-194686"

>>> k8s: describe cilium daemon set:
Error in configuration: context was not found for specified context: cilium-194686

>>> k8s: describe cilium daemon set pod(s):
Error in configuration: context was not found for specified context: cilium-194686

>>> k8s: cilium daemon set container(s) logs (current):
error: context "cilium-194686" does not exist

>>> k8s: cilium daemon set container(s) logs (previous):
error: context "cilium-194686" does not exist

>>> k8s: describe cilium deployment:
Error in configuration: context was not found for specified context: cilium-194686

>>> k8s: describe cilium deployment pod(s):
Error in configuration: context was not found for specified context: cilium-194686

>>> k8s: cilium deployment container(s) logs (current):
error: context "cilium-194686" does not exist

>>> k8s: cilium deployment container(s) logs (previous):
error: context "cilium-194686" does not exist

>>> k8s: describe kube-proxy daemon set:
error: context "cilium-194686" does not exist

>>> k8s: describe kube-proxy pod(s):
error: context "cilium-194686" does not exist

>>> k8s: kube-proxy logs:
error: context "cilium-194686" does not exist

>>> host: kubelet daemon status:
* Profile "cilium-194686" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-194686"

>>> host: kubelet daemon config:
* Profile "cilium-194686" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-194686"

>>> k8s: kubelet logs:
* Profile "cilium-194686" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-194686"

>>> host: /etc/kubernetes/kubelet.conf:
* Profile "cilium-194686" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-194686"

>>> host: /var/lib/kubelet/config.yaml:
* Profile "cilium-194686" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-194686"

>>> k8s: kubectl config:
apiVersion: v1
clusters:
- cluster:
    certificate-authority: /home/jenkins/minikube-integration/19662-12004/.minikube/ca.crt
    extensions:
    - extension:
        last-update: Tue, 17 Sep 2024 17:39:27 UTC
        provider: minikube.sigs.k8s.io
        version: v1.34.0
      name: cluster_info
    server: https://192.168.76.2:8443
  name: cert-expiration-959431
- cluster:
    certificate-authority: /home/jenkins/minikube-integration/19662-12004/.minikube/ca.crt
    extensions:
    - extension:
        last-update: Tue, 17 Sep 2024 17:39:00 UTC
        provider: minikube.sigs.k8s.io
        version: v1.34.0
      name: cluster_info
    server: https://192.168.85.2:8443
  name: offline-docker-542235
contexts:
- context:
    cluster: cert-expiration-959431
    extensions:
    - extension:
        last-update: Tue, 17 Sep 2024 17:39:27 UTC
        provider: minikube.sigs.k8s.io
        version: v1.34.0
      name: context_info
    namespace: default
    user: cert-expiration-959431
  name: cert-expiration-959431
- context:
    cluster: offline-docker-542235
    extensions:
    - extension:
        last-update: Tue, 17 Sep 2024 17:39:00 UTC
        provider: minikube.sigs.k8s.io
        version: v1.34.0
      name: context_info
    namespace: default
    user: offline-docker-542235
  name: offline-docker-542235
current-context: ""
kind: Config
preferences: {}
users:
- name: cert-expiration-959431
  user:
    client-certificate: /home/jenkins/minikube-integration/19662-12004/.minikube/profiles/cert-expiration-959431/client.crt
    client-key: /home/jenkins/minikube-integration/19662-12004/.minikube/profiles/cert-expiration-959431/client.key
- name: offline-docker-542235
  user:
    client-certificate: /home/jenkins/minikube-integration/19662-12004/.minikube/profiles/offline-docker-542235/client.crt
    client-key: /home/jenkins/minikube-integration/19662-12004/.minikube/profiles/offline-docker-542235/client.key
                                                
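Outside the harness, the same merged view can be printed with stock kubectl; these two commands are generic tooling, not part of the debugLogs capture:

	kubectl config view           # print the merged kubeconfig
	kubectl config get-contexts   # list the two leftover contexts shown above
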
>>> k8s: cms:
Error in configuration: context was not found for specified context: cilium-194686

>>> host: docker daemon status:
* Profile "cilium-194686" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-194686"

>>> host: docker daemon config:
* Profile "cilium-194686" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-194686"

>>> host: /etc/docker/daemon.json:
* Profile "cilium-194686" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-194686"

>>> host: docker system info:
* Profile "cilium-194686" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-194686"

>>> host: cri-docker daemon status:
* Profile "cilium-194686" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-194686"

>>> host: cri-docker daemon config:
* Profile "cilium-194686" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-194686"

>>> host: /etc/systemd/system/cri-docker.service.d/10-cni.conf:
* Profile "cilium-194686" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-194686"

>>> host: /usr/lib/systemd/system/cri-docker.service:
* Profile "cilium-194686" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-194686"

>>> host: cri-dockerd version:
* Profile "cilium-194686" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-194686"

>>> host: containerd daemon status:
* Profile "cilium-194686" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-194686"

>>> host: containerd daemon config:
* Profile "cilium-194686" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-194686"

>>> host: /lib/systemd/system/containerd.service:
* Profile "cilium-194686" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-194686"

>>> host: /etc/containerd/config.toml:
* Profile "cilium-194686" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-194686"

>>> host: containerd config dump:
* Profile "cilium-194686" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-194686"

>>> host: crio daemon status:
* Profile "cilium-194686" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-194686"

>>> host: crio daemon config:
* Profile "cilium-194686" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-194686"

>>> host: /etc/crio:
* Profile "cilium-194686" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-194686"

>>> host: crio config:
* Profile "cilium-194686" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-194686"

----------------------- debugLogs end: cilium-194686 [took: 3.013698443s] --------------------------------
helpers_test.go:175: Cleaning up "cilium-194686" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p cilium-194686
--- SKIP: TestNetworkPlugins/group/cilium (3.16s)

TestStartStop/group/disable-driver-mounts (0.22s)

=== RUN   TestStartStop/group/disable-driver-mounts
=== PAUSE TestStartStop/group/disable-driver-mounts

=== CONT  TestStartStop/group/disable-driver-mounts
start_stop_delete_test.go:103: skipping TestStartStop/group/disable-driver-mounts - only runs on virtualbox
helpers_test.go:175: Cleaning up "disable-driver-mounts-730434" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p disable-driver-mounts-730434
--- SKIP: TestStartStop/group/disable-driver-mounts (0.22s)