Test Report: Docker_Linux 18943

a95fbdf9550db8c431fa5a4c330192118acd2cbf:2024-08-31:36027

Tests failed (1/353)

Order | Failed test                  | Duration
------|------------------------------|---------
33    | TestAddons/parallel/Registry | 72.58s

TestAddons/parallel/Registry (72.58s)

=== RUN   TestAddons/parallel/Registry
=== PAUSE TestAddons/parallel/Registry

=== CONT  TestAddons/parallel/Registry
addons_test.go:332: registry stabilized in 1.800684ms
addons_test.go:334: (dbg) TestAddons/parallel/Registry: waiting 6m0s for pods matching "actual-registry=true" in namespace "kube-system" ...
helpers_test.go:345: "registry-6fb4cdfc84-gvcgq" [ea0149ab-7745-43b6-8b62-1ea10549905c] Running
addons_test.go:334: (dbg) TestAddons/parallel/Registry: actual-registry=true healthy within 5.002905347s
addons_test.go:337: (dbg) TestAddons/parallel/Registry: waiting 10m0s for pods matching "registry-proxy=true" in namespace "kube-system" ...
helpers_test.go:345: "registry-proxy-h7plr" [8518e062-22c3-4792-8477-519c3acc1417] Running
addons_test.go:337: (dbg) TestAddons/parallel/Registry: registry-proxy=true healthy within 5.004282922s
addons_test.go:342: (dbg) Run:  kubectl --context addons-062019 delete po -l run=registry-test --now
addons_test.go:347: (dbg) Run:  kubectl --context addons-062019 run --rm registry-test --restart=Never --image=gcr.io/k8s-minikube/busybox -it -- sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local"
addons_test.go:347: (dbg) Non-zero exit: kubectl --context addons-062019 run --rm registry-test --restart=Never --image=gcr.io/k8s-minikube/busybox -it -- sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local": exit status 1 (1m0.076268272s)

-- stdout --
	pod "registry-test" deleted

-- /stdout --
** stderr ** 
	error: timed out waiting for the condition

** /stderr **
addons_test.go:349: failed to hit registry.kube-system.svc.cluster.local. args "kubectl --context addons-062019 run --rm registry-test --restart=Never --image=gcr.io/k8s-minikube/busybox -it -- sh -c \"wget --spider -S http://registry.kube-system.svc.cluster.local\"" failed: exit status 1
addons_test.go:353: expected curl response be "HTTP/1.1 200", but got *pod "registry-test" deleted
*
addons_test.go:361: (dbg) Run:  out/minikube-linux-amd64 -p addons-062019 ip
2024/08/31 22:19:44 [DEBUG] GET http://192.168.49.2:5000
addons_test.go:390: (dbg) Run:  out/minikube-linux-amd64 -p addons-062019 addons disable registry --alsologtostderr -v=1
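
The failing probe can be re-run by hand with the same command the test issues at addons_test.go:347 (copied verbatim from the invocation above). Note that the node-IP path (the DEBUG GET against http://192.168.49.2:5000 above) is a separate check; if only the in-cluster service name times out, that would suggest service DNS or kube-proxy rather than the registry itself:

	kubectl --context addons-062019 run --rm registry-test --restart=Never \
	  --image=gcr.io/k8s-minikube/busybox -it -- \
	  sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local"
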
helpers_test.go:223: -----------------------post-mortem--------------------------------
helpers_test.go:231: ======>  post-mortem[TestAddons/parallel/Registry]: docker inspect <======
helpers_test.go:232: (dbg) Run:  docker inspect addons-062019
helpers_test.go:236: (dbg) docker inspect addons-062019:

-- stdout --
	[
	    {
	        "Id": "b1368eac4d9703ae3eefa7a8df49c535de73f1e996b32da612752d4e3722f0f9",
	        "Created": "2024-08-31T22:06:42.091372817Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 21898,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2024-08-31T22:06:42.21626948Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:cf9874f1e25d62abde3fdda0022141a8ec82ded75077d073b80dc8f90194cf19",
	        "ResolvConfPath": "/var/lib/docker/containers/b1368eac4d9703ae3eefa7a8df49c535de73f1e996b32da612752d4e3722f0f9/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/b1368eac4d9703ae3eefa7a8df49c535de73f1e996b32da612752d4e3722f0f9/hostname",
	        "HostsPath": "/var/lib/docker/containers/b1368eac4d9703ae3eefa7a8df49c535de73f1e996b32da612752d4e3722f0f9/hosts",
	        "LogPath": "/var/lib/docker/containers/b1368eac4d9703ae3eefa7a8df49c535de73f1e996b32da612752d4e3722f0f9/b1368eac4d9703ae3eefa7a8df49c535de73f1e996b32da612752d4e3722f0f9-json.log",
	        "Name": "/addons-062019",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "addons-062019:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {
	                    "max-size": "100m"
	                }
	            },
	            "NetworkMode": "addons-062019",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 4194304000,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 8388608000,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "LowerDir": "/var/lib/docker/overlay2/4a965a7239b5e285bfe497839141576890829d12b0e26e6bd3aec85fb718db9b-init/diff:/var/lib/docker/overlay2/994a19aef5443340e3bd712b498efb089b0a5ad479e5bcb4270ba1aa2ef0acce/diff",
	                "MergedDir": "/var/lib/docker/overlay2/4a965a7239b5e285bfe497839141576890829d12b0e26e6bd3aec85fb718db9b/merged",
	                "UpperDir": "/var/lib/docker/overlay2/4a965a7239b5e285bfe497839141576890829d12b0e26e6bd3aec85fb718db9b/diff",
	                "WorkDir": "/var/lib/docker/overlay2/4a965a7239b5e285bfe497839141576890829d12b0e26e6bd3aec85fb718db9b/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "addons-062019",
	                "Source": "/var/lib/docker/volumes/addons-062019/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "addons-062019",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1724862063-19530@sha256:fd0f41868bf20a720502cce04c5201bfb064f3c267161af6fd5265d69c85c9f0",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "addons-062019",
	                "name.minikube.sigs.k8s.io": "addons-062019",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "104c098d4b299e8f185d943daf99841ae9d09d16d7c104d5dc17e9beb2c94efc",
	            "SandboxKey": "/var/run/docker/netns/104c098d4b29",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32768"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32769"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32772"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32770"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32771"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "addons-062019": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.49.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "02:42:c0:a8:31:02",
	                    "DriverOpts": null,
	                    "NetworkID": "0a4f5442f1b3bc7380613c4415191fd53ad2c2ecc42ecc17bf46e6f2b684a729",
	                    "EndpointID": "e5dcce394d17cf61c9b4ec8e4d380562b58a2316800109f798d38caf39884747",
	                    "Gateway": "192.168.49.1",
	                    "IPAddress": "192.168.49.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "addons-062019",
	                        "b1368eac4d97"
	                    ]
	                }
	            }
	        }
	    }
	]

-- /stdout --
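
The inspect output above confirms the host-side plumbing: the container is running, port 5000/tcp is published at 127.0.0.1:32770, and the node holds 192.168.49.2 on the addons-062019 network, consistent with the DEBUG GET against http://192.168.49.2:5000 logged earlier. A hypothetical follow-up check (not part of this run; the pod name "dnscheck" is illustrative) would be to verify that the service name resolves from inside the cluster:

	kubectl --context addons-062019 run dnscheck --rm --restart=Never \
	  --image=gcr.io/k8s-minikube/busybox -it -- \
	  nslookup registry.kube-system.svc.cluster.local
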
helpers_test.go:240: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p addons-062019 -n addons-062019
helpers_test.go:245: <<< TestAddons/parallel/Registry FAILED: start of post-mortem logs <<<
helpers_test.go:246: ======>  post-mortem[TestAddons/parallel/Registry]: minikube logs <======
helpers_test.go:248: (dbg) Run:  out/minikube-linux-amd64 -p addons-062019 logs -n 25
helpers_test.go:253: TestAddons/parallel/Registry logs: 
-- stdout --
	
	==> Audit <==
	|---------|---------------------------------------------------------------------------------------------|------------------------|---------|---------|---------------------|---------------------|
	| Command |                                            Args                                             |        Profile         |  User   | Version |     Start Time      |      End Time       |
	|---------|---------------------------------------------------------------------------------------------|------------------------|---------|---------|---------------------|---------------------|
	| start   | --download-only -p                                                                          | download-docker-159852 | jenkins | v1.33.1 | 31 Aug 24 22:06 UTC |                     |
	|         | download-docker-159852                                                                      |                        |         |         |                     |                     |
	|         | --alsologtostderr                                                                           |                        |         |         |                     |                     |
	|         | --driver=docker                                                                             |                        |         |         |                     |                     |
	|         | --container-runtime=docker                                                                  |                        |         |         |                     |                     |
	| delete  | -p download-docker-159852                                                                   | download-docker-159852 | jenkins | v1.33.1 | 31 Aug 24 22:06 UTC | 31 Aug 24 22:06 UTC |
	| start   | --download-only -p                                                                          | binary-mirror-680673   | jenkins | v1.33.1 | 31 Aug 24 22:06 UTC |                     |
	|         | binary-mirror-680673                                                                        |                        |         |         |                     |                     |
	|         | --alsologtostderr                                                                           |                        |         |         |                     |                     |
	|         | --binary-mirror                                                                             |                        |         |         |                     |                     |
	|         | http://127.0.0.1:45077                                                                      |                        |         |         |                     |                     |
	|         | --driver=docker                                                                             |                        |         |         |                     |                     |
	|         | --container-runtime=docker                                                                  |                        |         |         |                     |                     |
	| delete  | -p binary-mirror-680673                                                                     | binary-mirror-680673   | jenkins | v1.33.1 | 31 Aug 24 22:06 UTC | 31 Aug 24 22:06 UTC |
	| addons  | enable dashboard -p                                                                         | addons-062019          | jenkins | v1.33.1 | 31 Aug 24 22:06 UTC |                     |
	|         | addons-062019                                                                               |                        |         |         |                     |                     |
	| addons  | disable dashboard -p                                                                        | addons-062019          | jenkins | v1.33.1 | 31 Aug 24 22:06 UTC |                     |
	|         | addons-062019                                                                               |                        |         |         |                     |                     |
	| start   | -p addons-062019 --wait=true                                                                | addons-062019          | jenkins | v1.33.1 | 31 Aug 24 22:06 UTC | 31 Aug 24 22:09 UTC |
	|         | --memory=4000 --alsologtostderr                                                             |                        |         |         |                     |                     |
	|         | --addons=registry                                                                           |                        |         |         |                     |                     |
	|         | --addons=metrics-server                                                                     |                        |         |         |                     |                     |
	|         | --addons=volumesnapshots                                                                    |                        |         |         |                     |                     |
	|         | --addons=csi-hostpath-driver                                                                |                        |         |         |                     |                     |
	|         | --addons=gcp-auth                                                                           |                        |         |         |                     |                     |
	|         | --addons=cloud-spanner                                                                      |                        |         |         |                     |                     |
	|         | --addons=inspektor-gadget                                                                   |                        |         |         |                     |                     |
	|         | --addons=storage-provisioner-rancher                                                        |                        |         |         |                     |                     |
	|         | --addons=nvidia-device-plugin                                                               |                        |         |         |                     |                     |
	|         | --addons=yakd --addons=volcano                                                              |                        |         |         |                     |                     |
	|         | --driver=docker                                                                             |                        |         |         |                     |                     |
	|         | --container-runtime=docker                                                                  |                        |         |         |                     |                     |
	|         | --addons=ingress                                                                            |                        |         |         |                     |                     |
	|         | --addons=ingress-dns                                                                        |                        |         |         |                     |                     |
	|         | --addons=helm-tiller                                                                        |                        |         |         |                     |                     |
	| addons  | addons-062019 addons disable                                                                | addons-062019          | jenkins | v1.33.1 | 31 Aug 24 22:10 UTC | 31 Aug 24 22:10 UTC |
	|         | volcano --alsologtostderr -v=1                                                              |                        |         |         |                     |                     |
	| addons  | disable cloud-spanner -p                                                                    | addons-062019          | jenkins | v1.33.1 | 31 Aug 24 22:18 UTC | 31 Aug 24 22:18 UTC |
	|         | addons-062019                                                                               |                        |         |         |                     |                     |
	| addons  | disable inspektor-gadget -p                                                                 | addons-062019          | jenkins | v1.33.1 | 31 Aug 24 22:18 UTC | 31 Aug 24 22:18 UTC |
	|         | addons-062019                                                                               |                        |         |         |                     |                     |
	| addons  | addons-062019 addons disable                                                                | addons-062019          | jenkins | v1.33.1 | 31 Aug 24 22:18 UTC | 31 Aug 24 22:18 UTC |
	|         | helm-tiller --alsologtostderr                                                               |                        |         |         |                     |                     |
	|         | -v=1                                                                                        |                        |         |         |                     |                     |
	| addons  | disable nvidia-device-plugin                                                                | addons-062019          | jenkins | v1.33.1 | 31 Aug 24 22:18 UTC | 31 Aug 24 22:18 UTC |
	|         | -p addons-062019                                                                            |                        |         |         |                     |                     |
	| addons  | addons-062019 addons disable                                                                | addons-062019          | jenkins | v1.33.1 | 31 Aug 24 22:18 UTC | 31 Aug 24 22:18 UTC |
	|         | yakd --alsologtostderr -v=1                                                                 |                        |         |         |                     |                     |
	| ssh     | addons-062019 ssh curl -s                                                                   | addons-062019          | jenkins | v1.33.1 | 31 Aug 24 22:18 UTC | 31 Aug 24 22:18 UTC |
	|         | http://127.0.0.1/ -H 'Host:                                                                 |                        |         |         |                     |                     |
	|         | nginx.example.com'                                                                          |                        |         |         |                     |                     |
	| ip      | addons-062019 ip                                                                            | addons-062019          | jenkins | v1.33.1 | 31 Aug 24 22:18 UTC | 31 Aug 24 22:18 UTC |
	| addons  | addons-062019 addons disable                                                                | addons-062019          | jenkins | v1.33.1 | 31 Aug 24 22:18 UTC | 31 Aug 24 22:18 UTC |
	|         | ingress-dns --alsologtostderr                                                               |                        |         |         |                     |                     |
	|         | -v=1                                                                                        |                        |         |         |                     |                     |
	| addons  | addons-062019 addons disable                                                                | addons-062019          | jenkins | v1.33.1 | 31 Aug 24 22:18 UTC | 31 Aug 24 22:19 UTC |
	|         | ingress --alsologtostderr -v=1                                                              |                        |         |         |                     |                     |
	| addons  | addons-062019 addons                                                                        | addons-062019          | jenkins | v1.33.1 | 31 Aug 24 22:18 UTC | 31 Aug 24 22:18 UTC |
	|         | disable metrics-server                                                                      |                        |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                        |         |         |                     |                     |
	| addons  | enable headlamp                                                                             | addons-062019          | jenkins | v1.33.1 | 31 Aug 24 22:18 UTC | 31 Aug 24 22:18 UTC |
	|         | -p addons-062019                                                                            |                        |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                        |         |         |                     |                     |
	| addons  | addons-062019 addons disable                                                                | addons-062019          | jenkins | v1.33.1 | 31 Aug 24 22:19 UTC | 31 Aug 24 22:19 UTC |
	|         | headlamp --alsologtostderr                                                                  |                        |         |         |                     |                     |
	|         | -v=1                                                                                        |                        |         |         |                     |                     |
	| ssh     | addons-062019 ssh cat                                                                       | addons-062019          | jenkins | v1.33.1 | 31 Aug 24 22:19 UTC | 31 Aug 24 22:19 UTC |
	|         | /opt/local-path-provisioner/pvc-20dcccd8-e7fe-4ed6-82bc-9f7db35d0a45_default_test-pvc/file1 |                        |         |         |                     |                     |
	| addons  | addons-062019 addons disable                                                                | addons-062019          | jenkins | v1.33.1 | 31 Aug 24 22:19 UTC |                     |
	|         | storage-provisioner-rancher                                                                 |                        |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                        |         |         |                     |                     |
	| addons  | addons-062019 addons                                                                        | addons-062019          | jenkins | v1.33.1 | 31 Aug 24 22:19 UTC |                     |
	|         | disable csi-hostpath-driver                                                                 |                        |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                        |         |         |                     |                     |
	| ip      | addons-062019 ip                                                                            | addons-062019          | jenkins | v1.33.1 | 31 Aug 24 22:19 UTC | 31 Aug 24 22:19 UTC |
	| addons  | addons-062019 addons disable                                                                | addons-062019          | jenkins | v1.33.1 | 31 Aug 24 22:19 UTC | 31 Aug 24 22:19 UTC |
	|         | registry --alsologtostderr                                                                  |                        |         |         |                     |                     |
	|         | -v=1                                                                                        |                        |         |         |                     |                     |
	|---------|---------------------------------------------------------------------------------------------|------------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/08/31 22:06:20
	Running on machine: ubuntu-20-agent
	Binary: Built with gc go1.22.5 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0831 22:06:20.932265   21158 out.go:345] Setting OutFile to fd 1 ...
	I0831 22:06:20.932502   21158 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0831 22:06:20.932510   21158 out.go:358] Setting ErrFile to fd 2...
	I0831 22:06:20.932514   21158 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0831 22:06:20.932653   21158 root.go:338] Updating PATH: /home/jenkins/minikube-integration/18943-12963/.minikube/bin
	I0831 22:06:20.933236   21158 out.go:352] Setting JSON to false
	I0831 22:06:20.934048   21158 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent","uptime":2929,"bootTime":1725139052,"procs":172,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1067-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0831 22:06:20.934096   21158 start.go:139] virtualization: kvm guest
	I0831 22:06:20.935893   21158 out.go:177] * [addons-062019] minikube v1.33.1 on Ubuntu 20.04 (kvm/amd64)
	I0831 22:06:20.937083   21158 notify.go:220] Checking for updates...
	I0831 22:06:20.937091   21158 out.go:177]   - MINIKUBE_LOCATION=18943
	I0831 22:06:20.938457   21158 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0831 22:06:20.939687   21158 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/18943-12963/kubeconfig
	I0831 22:06:20.941055   21158 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/18943-12963/.minikube
	I0831 22:06:20.942201   21158 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0831 22:06:20.943316   21158 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0831 22:06:20.944596   21158 driver.go:392] Setting default libvirt URI to qemu:///system
	I0831 22:06:20.965108   21158 docker.go:123] docker version: linux-27.2.0:Docker Engine - Community
	I0831 22:06:20.965269   21158 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0831 22:06:21.011268   21158 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:30 OomKillDisable:true NGoroutines:45 SystemTime:2024-08-31 22:06:21.002816619 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1067-gcp OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33647943680 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent Labels:[] ExperimentalBuild:false ServerVersion:27.2.0 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:472731909fa34bd7bc9c087e4c27943f9835f111 Expected:472731909fa34bd7bc9c087e4c27943f9835f111} RuncCommit:{ID:v1.1.13-0-g58aa920 Expected:v1.1.13-0-g58aa920} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.16.2] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.29.2] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I0831 22:06:21.011387   21158 docker.go:307] overlay module found
	I0831 22:06:21.013127   21158 out.go:177] * Using the docker driver based on user configuration
	I0831 22:06:21.014457   21158 start.go:297] selected driver: docker
	I0831 22:06:21.014470   21158 start.go:901] validating driver "docker" against <nil>
	I0831 22:06:21.014483   21158 start.go:912] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0831 22:06:21.015202   21158 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0831 22:06:21.056428   21158 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:30 OomKillDisable:true NGoroutines:45 SystemTime:2024-08-31 22:06:21.048526861 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1067-gcp OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33647943680 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent Labels:[] ExperimentalBuild:false ServerVersion:27.2.0 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:472731909fa34bd7bc9c087e4c27943f9835f111 Expected:472731909fa34bd7bc9c087e4c27943f9835f111} RuncCommit:{ID:v1.1.13-0-g58aa920 Expected:v1.1.13-0-g58aa920} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.16.2] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.29.2] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I0831 22:06:21.056621   21158 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0831 22:06:21.056879   21158 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0831 22:06:21.058532   21158 out.go:177] * Using Docker driver with root privileges
	I0831 22:06:21.059951   21158 cni.go:84] Creating CNI manager for ""
	I0831 22:06:21.059979   21158 cni.go:158] "docker" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0831 22:06:21.059992   21158 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0831 22:06:21.060056   21158 start.go:340] cluster config:
	{Name:addons-062019 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1724862063-19530@sha256:fd0f41868bf20a720502cce04c5201bfb064f3c267161af6fd5265d69c85c9f0 Memory:4000 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:addons-062019 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0831 22:06:21.061362   21158 out.go:177] * Starting "addons-062019" primary control-plane node in "addons-062019" cluster
	I0831 22:06:21.062541   21158 cache.go:121] Beginning downloading kic base image for docker with docker
	I0831 22:06:21.063754   21158 out.go:177] * Pulling base image v0.0.44-1724862063-19530 ...
	I0831 22:06:21.064822   21158 preload.go:131] Checking if preload exists for k8s version v1.31.0 and runtime docker
	I0831 22:06:21.064847   21158 preload.go:146] Found local preload: /home/jenkins/minikube-integration/18943-12963/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-docker-overlay2-amd64.tar.lz4
	I0831 22:06:21.064852   21158 cache.go:56] Caching tarball of preloaded images
	I0831 22:06:21.064905   21158 image.go:79] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1724862063-19530@sha256:fd0f41868bf20a720502cce04c5201bfb064f3c267161af6fd5265d69c85c9f0 in local docker daemon
	I0831 22:06:21.064923   21158 preload.go:172] Found /home/jenkins/minikube-integration/18943-12963/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I0831 22:06:21.064930   21158 cache.go:59] Finished verifying existence of preloaded tar for v1.31.0 on docker
	I0831 22:06:21.065259   21158 profile.go:143] Saving config to /home/jenkins/minikube-integration/18943-12963/.minikube/profiles/addons-062019/config.json ...
	I0831 22:06:21.065287   21158 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18943-12963/.minikube/profiles/addons-062019/config.json: {Name:mk020beb400f9a6ff882a668859dab00749de6c2 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0831 22:06:21.079571   21158 cache.go:149] Downloading gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1724862063-19530@sha256:fd0f41868bf20a720502cce04c5201bfb064f3c267161af6fd5265d69c85c9f0 to local cache
	I0831 22:06:21.079672   21158 image.go:63] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1724862063-19530@sha256:fd0f41868bf20a720502cce04c5201bfb064f3c267161af6fd5265d69c85c9f0 in local cache directory
	I0831 22:06:21.079687   21158 image.go:66] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1724862063-19530@sha256:fd0f41868bf20a720502cce04c5201bfb064f3c267161af6fd5265d69c85c9f0 in local cache directory, skipping pull
	I0831 22:06:21.079691   21158 image.go:135] gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1724862063-19530@sha256:fd0f41868bf20a720502cce04c5201bfb064f3c267161af6fd5265d69c85c9f0 exists in cache, skipping pull
	I0831 22:06:21.079701   21158 cache.go:152] successfully saved gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1724862063-19530@sha256:fd0f41868bf20a720502cce04c5201bfb064f3c267161af6fd5265d69c85c9f0 as a tarball
	I0831 22:06:21.079708   21158 cache.go:162] Loading gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1724862063-19530@sha256:fd0f41868bf20a720502cce04c5201bfb064f3c267161af6fd5265d69c85c9f0 from local cache
	I0831 22:06:33.001315   21158 cache.go:164] successfully loaded and using gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1724862063-19530@sha256:fd0f41868bf20a720502cce04c5201bfb064f3c267161af6fd5265d69c85c9f0 from cached tarball
	I0831 22:06:33.001376   21158 cache.go:194] Successfully downloaded all kic artifacts
	I0831 22:06:33.001411   21158 start.go:360] acquireMachinesLock for addons-062019: {Name:mk5f9489bbe0a48f304cfe43a8cc77b0ed585225 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0831 22:06:33.001506   21158 start.go:364] duration metric: took 72.449µs to acquireMachinesLock for "addons-062019"
	I0831 22:06:33.001529   21158 start.go:93] Provisioning new machine with config: &{Name:addons-062019 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1724862063-19530@sha256:fd0f41868bf20a720502cce04c5201bfb064f3c267161af6fd5265d69c85c9f0 Memory:4000 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:addons-062019 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0831 22:06:33.001619   21158 start.go:125] createHost starting for "" (driver="docker")
	I0831 22:06:33.003583   21158 out.go:235] * Creating docker container (CPUs=2, Memory=4000MB) ...
	I0831 22:06:33.003805   21158 start.go:159] libmachine.API.Create for "addons-062019" (driver="docker")
	I0831 22:06:33.003840   21158 client.go:168] LocalClient.Create starting
	I0831 22:06:33.003926   21158 main.go:141] libmachine: Creating CA: /home/jenkins/minikube-integration/18943-12963/.minikube/certs/ca.pem
	I0831 22:06:33.103474   21158 main.go:141] libmachine: Creating client certificate: /home/jenkins/minikube-integration/18943-12963/.minikube/certs/cert.pem
	I0831 22:06:33.207476   21158 cli_runner.go:164] Run: docker network inspect addons-062019 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W0831 22:06:33.222953   21158 cli_runner.go:211] docker network inspect addons-062019 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I0831 22:06:33.223009   21158 network_create.go:284] running [docker network inspect addons-062019] to gather additional debugging logs...
	I0831 22:06:33.223025   21158 cli_runner.go:164] Run: docker network inspect addons-062019
	W0831 22:06:33.237704   21158 cli_runner.go:211] docker network inspect addons-062019 returned with exit code 1
	I0831 22:06:33.237733   21158 network_create.go:287] error running [docker network inspect addons-062019]: docker network inspect addons-062019: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network addons-062019 not found
	I0831 22:06:33.237744   21158 network_create.go:289] output of [docker network inspect addons-062019]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network addons-062019 not found
	
	** /stderr **
	I0831 22:06:33.237845   21158 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0831 22:06:33.253137   21158 network.go:206] using free private subnet 192.168.49.0/24: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc001b68790}
	I0831 22:06:33.253184   21158 network_create.go:124] attempt to create docker network addons-062019 192.168.49.0/24 with gateway 192.168.49.1 and MTU of 1500 ...
	I0831 22:06:33.253253   21158 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.49.0/24 --gateway=192.168.49.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=addons-062019 addons-062019
	I0831 22:06:33.310307   21158 network_create.go:108] docker network addons-062019 192.168.49.0/24 created
	I0831 22:06:33.310337   21158 kic.go:121] calculated static IP "192.168.49.2" for the "addons-062019" container
	I0831 22:06:33.310397   21158 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I0831 22:06:33.324409   21158 cli_runner.go:164] Run: docker volume create addons-062019 --label name.minikube.sigs.k8s.io=addons-062019 --label created_by.minikube.sigs.k8s.io=true
	I0831 22:06:33.340205   21158 oci.go:103] Successfully created a docker volume addons-062019
	I0831 22:06:33.340264   21158 cli_runner.go:164] Run: docker run --rm --name addons-062019-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=addons-062019 --entrypoint /usr/bin/test -v addons-062019:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1724862063-19530@sha256:fd0f41868bf20a720502cce04c5201bfb064f3c267161af6fd5265d69c85c9f0 -d /var/lib
	I0831 22:06:38.128372   21158 cli_runner.go:217] Completed: docker run --rm --name addons-062019-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=addons-062019 --entrypoint /usr/bin/test -v addons-062019:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1724862063-19530@sha256:fd0f41868bf20a720502cce04c5201bfb064f3c267161af6fd5265d69c85c9f0 -d /var/lib: (4.788068984s)
	I0831 22:06:38.128404   21158 oci.go:107] Successfully prepared a docker volume addons-062019
	I0831 22:06:38.128437   21158 preload.go:131] Checking if preload exists for k8s version v1.31.0 and runtime docker
	I0831 22:06:38.128462   21158 kic.go:194] Starting extracting preloaded images to volume ...
	I0831 22:06:38.128517   21158 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/18943-12963/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-docker-overlay2-amd64.tar.lz4:/preloaded.tar:ro -v addons-062019:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1724862063-19530@sha256:fd0f41868bf20a720502cce04c5201bfb064f3c267161af6fd5265d69c85c9f0 -I lz4 -xf /preloaded.tar -C /extractDir
	I0831 22:06:42.030506   21158 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/18943-12963/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-docker-overlay2-amd64.tar.lz4:/preloaded.tar:ro -v addons-062019:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1724862063-19530@sha256:fd0f41868bf20a720502cce04c5201bfb064f3c267161af6fd5265d69c85c9f0 -I lz4 -xf /preloaded.tar -C /extractDir: (3.901944159s)
	I0831 22:06:42.030539   21158 kic.go:203] duration metric: took 3.902076658s to extract preloaded images to volume ...
	W0831 22:06:42.030670   21158 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	I0831 22:06:42.030757   21158 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I0831 22:06:42.076647   21158 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname addons-062019 --name addons-062019 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=addons-062019 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=addons-062019 --network addons-062019 --ip 192.168.49.2 --volume addons-062019:/var --security-opt apparmor=unconfined --memory=4000mb --cpus=2 -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1724862063-19530@sha256:fd0f41868bf20a720502cce04c5201bfb064f3c267161af6fd5265d69c85c9f0
	I0831 22:06:42.382175   21158 cli_runner.go:164] Run: docker container inspect addons-062019 --format={{.State.Running}}
	I0831 22:06:42.400474   21158 cli_runner.go:164] Run: docker container inspect addons-062019 --format={{.State.Status}}
	I0831 22:06:42.417957   21158 cli_runner.go:164] Run: docker exec addons-062019 stat /var/lib/dpkg/alternatives/iptables
	I0831 22:06:42.458719   21158 oci.go:144] the created container "addons-062019" has a running status.
	I0831 22:06:42.458751   21158 kic.go:225] Creating ssh key for kic: /home/jenkins/minikube-integration/18943-12963/.minikube/machines/addons-062019/id_rsa...
	I0831 22:06:42.618769   21158 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/18943-12963/.minikube/machines/addons-062019/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I0831 22:06:42.637822   21158 cli_runner.go:164] Run: docker container inspect addons-062019 --format={{.State.Status}}
	I0831 22:06:42.655879   21158 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I0831 22:06:42.655901   21158 kic_runner.go:114] Args: [docker exec --privileged addons-062019 chown docker:docker /home/docker/.ssh/authorized_keys]
	I0831 22:06:42.709487   21158 cli_runner.go:164] Run: docker container inspect addons-062019 --format={{.State.Status}}
	I0831 22:06:42.726580   21158 machine.go:93] provisionDockerMachine start ...
	I0831 22:06:42.726656   21158 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-062019
	I0831 22:06:42.744385   21158 main.go:141] libmachine: Using SSH client type: native
	I0831 22:06:42.744577   21158 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 127.0.0.1 32768 <nil> <nil>}
	I0831 22:06:42.744590   21158 main.go:141] libmachine: About to run SSH command:
	hostname
	I0831 22:06:42.745195   21158 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:60166->127.0.0.1:32768: read: connection reset by peer
	I0831 22:06:45.868515   21158 main.go:141] libmachine: SSH cmd err, output: <nil>: addons-062019
	
	I0831 22:06:45.868540   21158 ubuntu.go:169] provisioning hostname "addons-062019"
	I0831 22:06:45.868593   21158 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-062019
	I0831 22:06:45.886105   21158 main.go:141] libmachine: Using SSH client type: native
	I0831 22:06:45.886308   21158 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 127.0.0.1 32768 <nil> <nil>}
	I0831 22:06:45.886324   21158 main.go:141] libmachine: About to run SSH command:
	sudo hostname addons-062019 && echo "addons-062019" | sudo tee /etc/hostname
	I0831 22:06:46.015094   21158 main.go:141] libmachine: SSH cmd err, output: <nil>: addons-062019
	
	I0831 22:06:46.015164   21158 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-062019
	I0831 22:06:46.031474   21158 main.go:141] libmachine: Using SSH client type: native
	I0831 22:06:46.031643   21158 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 127.0.0.1 32768 <nil> <nil>}
	I0831 22:06:46.031660   21158 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\saddons-062019' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 addons-062019/g' /etc/hosts;
				else 
					echo '127.0.1.1 addons-062019' | sudo tee -a /etc/hosts; 
				fi
			fi
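The SSH command above is an idempotent /etc/hosts patch: if no line already ends with the hostname, it either rewrites the 127.0.1.1 entry in place or appends one. A Go sketch of the same logic (ensureHostname is a hypothetical name; it shells out exactly as the log does):

package main

import (
	"fmt"
	"os/exec"
)

// ensureHostname mirrors the shell snippet above: add or rewrite the
// 127.0.1.1 line so the hostname resolves locally.
func ensureHostname(name string) error {
	script := fmt.Sprintf(`
if ! grep -xq '.*\s%[1]s' /etc/hosts; then
  if grep -xq '127.0.1.1\s.*' /etc/hosts; then
    sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 %[1]s/g' /etc/hosts
  else
    echo '127.0.1.1 %[1]s' | sudo tee -a /etc/hosts
  fi
fi`, name)
	return exec.Command("/bin/bash", "-c", script).Run()
}

func main() {
	if err := ensureHostname("addons-062019"); err != nil {
		fmt.Println("hostname setup failed:", err)
	}
}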
	I0831 22:06:46.152945   21158 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0831 22:06:46.152971   21158 ubuntu.go:175] set auth options {CertDir:/home/jenkins/minikube-integration/18943-12963/.minikube CaCertPath:/home/jenkins/minikube-integration/18943-12963/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/18943-12963/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/18943-12963/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/18943-12963/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/18943-12963/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/18943-12963/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/18943-12963/.minikube}
	I0831 22:06:46.153012   21158 ubuntu.go:177] setting up certificates
	I0831 22:06:46.153030   21158 provision.go:84] configureAuth start
	I0831 22:06:46.153085   21158 cli_runner.go:164] Run: docker container inspect -f "{{with (index .NetworkSettings.Networks "addons-062019")}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" addons-062019
	I0831 22:06:46.169434   21158 provision.go:143] copyHostCerts
	I0831 22:06:46.169506   21158 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18943-12963/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/18943-12963/.minikube/ca.pem (1078 bytes)
	I0831 22:06:46.169667   21158 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18943-12963/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/18943-12963/.minikube/cert.pem (1123 bytes)
	I0831 22:06:46.169742   21158 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18943-12963/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/18943-12963/.minikube/key.pem (1675 bytes)
	I0831 22:06:46.169805   21158 provision.go:117] generating server cert: /home/jenkins/minikube-integration/18943-12963/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/18943-12963/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/18943-12963/.minikube/certs/ca-key.pem org=jenkins.addons-062019 san=[127.0.0.1 192.168.49.2 addons-062019 localhost minikube]
	I0831 22:06:46.464398   21158 provision.go:177] copyRemoteCerts
	I0831 22:06:46.464457   21158 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0831 22:06:46.464491   21158 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-062019
	I0831 22:06:46.480877   21158 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/18943-12963/.minikube/machines/addons-062019/id_rsa Username:docker}
	I0831 22:06:46.569537   21158 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18943-12963/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0831 22:06:46.590647   21158 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18943-12963/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I0831 22:06:46.610777   21158 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18943-12963/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0831 22:06:46.630327   21158 provision.go:87] duration metric: took 477.285853ms to configureAuth
	I0831 22:06:46.630349   21158 ubuntu.go:193] setting minikube options for container-runtime
	I0831 22:06:46.630539   21158 config.go:182] Loaded profile config "addons-062019": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.31.0
	I0831 22:06:46.630591   21158 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-062019
	I0831 22:06:46.646598   21158 main.go:141] libmachine: Using SSH client type: native
	I0831 22:06:46.646817   21158 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 127.0.0.1 32768 <nil> <nil>}
	I0831 22:06:46.646832   21158 main.go:141] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I0831 22:06:46.765299   21158 main.go:141] libmachine: SSH cmd err, output: <nil>: overlay
	
	I0831 22:06:46.765323   21158 ubuntu.go:71] root file system type: overlay
	I0831 22:06:46.765480   21158 provision.go:314] Updating docker unit: /lib/systemd/system/docker.service ...
	I0831 22:06:46.765551   21158 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-062019
	I0831 22:06:46.781917   21158 main.go:141] libmachine: Using SSH client type: native
	I0831 22:06:46.782106   21158 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 127.0.0.1 32768 <nil> <nil>}
	I0831 22:06:46.782195   21158 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %s "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	BindsTo=containerd.service
	After=network-online.target firewalld.service containerd.service
	Wants=network-online.target
	Requires=docker.socket
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I0831 22:06:46.910793   21158 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	BindsTo=containerd.service
	After=network-online.target firewalld.service containerd.service
	Wants=network-online.target
	Requires=docker.socket
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
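The unit echoed back above replaces the stock docker.service wholesale. Note the paired ExecStart= lines: the empty one clears the inherited command first, since systemd permits multiple ExecStart= entries only for Type=oneshot services. A short text/template sketch of rendering such a unit (the opts struct and the trimmed unit body are assumptions for illustration, not minikube's actual template):

package main

import (
	"os"
	"text/template"
)

// A heavily trimmed docker.service body; the full unit in the log
// carries many more directives.
const unit = `[Service]
Type=notify
Restart=on-failure
# Clear the inherited ExecStart first: systemd allows several ExecStart=
# lines only for Type=oneshot services.
ExecStart=
ExecStart=/usr/bin/dockerd -H unix:///var/run/docker.sock --insecure-registry {{.ServiceCIDR}}
`

type opts struct{ ServiceCIDR string }

func main() {
	t := template.Must(template.New("docker").Parse(unit))
	if err := t.Execute(os.Stdout, opts{ServiceCIDR: "10.96.0.0/12"}); err != nil {
		panic(err)
	}
}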
	I0831 22:06:46.910857   21158 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-062019
	I0831 22:06:46.927564   21158 main.go:141] libmachine: Using SSH client type: native
	I0831 22:06:46.927736   21158 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 127.0.0.1 32768 <nil> <nil>}
	I0831 22:06:46.927753   21158 main.go:141] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I0831 22:06:47.595644   21158 main.go:141] libmachine: SSH cmd err, output: <nil>: --- /lib/systemd/system/docker.service	2024-08-27 14:13:43.000000000 +0000
	+++ /lib/systemd/system/docker.service.new	2024-08-31 22:06:46.906811619 +0000
	@@ -1,46 +1,49 @@
	 [Unit]
	 Description=Docker Application Container Engine
	 Documentation=https://docs.docker.com
	-After=network-online.target docker.socket firewalld.service containerd.service time-set.target
	-Wants=network-online.target containerd.service
	+BindsTo=containerd.service
	+After=network-online.target firewalld.service containerd.service
	+Wants=network-online.target
	 Requires=docker.socket
	+StartLimitBurst=3
	+StartLimitIntervalSec=60
	 
	 [Service]
	 Type=notify
	-# the default is not to use systemd for cgroups because the delegate issues still
	-# exists and systemd currently does not support the cgroup feature set required
	-# for containers run by docker
	-ExecStart=/usr/bin/dockerd -H fd:// --containerd=/run/containerd/containerd.sock
	-ExecReload=/bin/kill -s HUP $MAINPID
	-TimeoutStartSec=0
	-RestartSec=2
	-Restart=always
	+Restart=on-failure
	 
	-# Note that StartLimit* options were moved from "Service" to "Unit" in systemd 229.
	-# Both the old, and new location are accepted by systemd 229 and up, so using the old location
	-# to make them work for either version of systemd.
	-StartLimitBurst=3
	 
	-# Note that StartLimitInterval was renamed to StartLimitIntervalSec in systemd 230.
	-# Both the old, and new name are accepted by systemd 230 and up, so using the old name to make
	-# this option work for either version of systemd.
	-StartLimitInterval=60s
	+
	+# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	+# The base configuration already specifies an 'ExecStart=...' command. The first directive
	+# here is to clear out that command inherited from the base configuration. Without this,
	+# the command from the base configuration and the command specified here are treated as
	+# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	+# will catch this invalid input and refuse to start the service with an error like:
	+#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	+
	+# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	+# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	+ExecStart=
	+ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	+ExecReload=/bin/kill -s HUP $MAINPID
	 
	 # Having non-zero Limit*s causes performance problems due to accounting overhead
	 # in the kernel. We recommend using cgroups to do container-local accounting.
	+LimitNOFILE=infinity
	 LimitNPROC=infinity
	 LimitCORE=infinity
	 
	-# Comment TasksMax if your systemd version does not support it.
	-# Only systemd 226 and above support this option.
	+# Uncomment TasksMax if your systemd version supports it.
	+# Only systemd 226 and above support this version.
	 TasksMax=infinity
	+TimeoutStartSec=0
	 
	 # set delegate yes so that systemd does not reset the cgroups of docker containers
	 Delegate=yes
	 
	 # kill only the docker process, not all processes in the cgroup
	 KillMode=process
	-OOMScoreAdjust=-500
	 
	 [Install]
	 WantedBy=multi-user.target
	Synchronizing state of docker.service with SysV service script with /lib/systemd/systemd-sysv-install.
	Executing: /lib/systemd/systemd-sysv-install enable docker
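The exchange above is minikube's change-detection pattern for docker.service: write the candidate unit as docker.service.new, diff it against the live file, and only when they differ swap it in and daemon-reload/enable/restart docker. The non-empty diff here is why docker restarts during first boot. A native Go sketch of the same replace-only-on-change idea (replaceIfChanged is a hypothetical helper, not minikube's code):

package main

import (
	"bytes"
	"fmt"
	"os"
	"os/exec"
)

// replaceIfChanged swaps candidate into place and reloads systemd only
// when its contents differ from the live file.
func replaceIfChanged(current, candidate string) (bool, error) {
	old, _ := os.ReadFile(current) // a missing live file just reads as empty
	fresh, err := os.ReadFile(candidate)
	if err != nil {
		return false, err
	}
	if bytes.Equal(old, fresh) {
		return false, os.Remove(candidate)
	}
	if err := os.Rename(candidate, current); err != nil {
		return false, err
	}
	return true, exec.Command("sudo", "systemctl", "daemon-reload").Run()
}

func main() {
	changed, err := replaceIfChanged("/lib/systemd/system/docker.service", "/lib/systemd/system/docker.service.new")
	fmt.Println("changed:", changed, "err:", err)
}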
	
	I0831 22:06:47.595678   21158 machine.go:96] duration metric: took 4.869074359s to provisionDockerMachine
	I0831 22:06:47.595697   21158 client.go:171] duration metric: took 14.591844347s to LocalClient.Create
	I0831 22:06:47.595716   21158 start.go:167] duration metric: took 14.591912067s to libmachine.API.Create "addons-062019"
	I0831 22:06:47.595726   21158 start.go:293] postStartSetup for "addons-062019" (driver="docker")
	I0831 22:06:47.595740   21158 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0831 22:06:47.595801   21158 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0831 22:06:47.595834   21158 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-062019
	I0831 22:06:47.611490   21158 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/18943-12963/.minikube/machines/addons-062019/id_rsa Username:docker}
	I0831 22:06:47.697412   21158 ssh_runner.go:195] Run: cat /etc/os-release
	I0831 22:06:47.700269   21158 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I0831 22:06:47.700314   21158 main.go:141] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
	I0831 22:06:47.700329   21158 main.go:141] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
	I0831 22:06:47.700341   21158 info.go:137] Remote host: Ubuntu 22.04.4 LTS
	I0831 22:06:47.700351   21158 filesync.go:126] Scanning /home/jenkins/minikube-integration/18943-12963/.minikube/addons for local assets ...
	I0831 22:06:47.700418   21158 filesync.go:126] Scanning /home/jenkins/minikube-integration/18943-12963/.minikube/files for local assets ...
	I0831 22:06:47.700447   21158 start.go:296] duration metric: took 104.713035ms for postStartSetup
	I0831 22:06:47.700792   21158 cli_runner.go:164] Run: docker container inspect -f "{{with (index .NetworkSettings.Networks "addons-062019")}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" addons-062019
	I0831 22:06:47.717068   21158 profile.go:143] Saving config to /home/jenkins/minikube-integration/18943-12963/.minikube/profiles/addons-062019/config.json ...
	I0831 22:06:47.717380   21158 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0831 22:06:47.717450   21158 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-062019
	I0831 22:06:47.732662   21158 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/18943-12963/.minikube/machines/addons-062019/id_rsa Username:docker}
	I0831 22:06:47.817366   21158 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0831 22:06:47.821037   21158 start.go:128] duration metric: took 14.819406527s to createHost
	I0831 22:06:47.821061   21158 start.go:83] releasing machines lock for "addons-062019", held for 14.819542631s
	I0831 22:06:47.821115   21158 cli_runner.go:164] Run: docker container inspect -f "{{with (index .NetworkSettings.Networks "addons-062019")}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" addons-062019
	I0831 22:06:47.837340   21158 ssh_runner.go:195] Run: cat /version.json
	I0831 22:06:47.837397   21158 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-062019
	I0831 22:06:47.837439   21158 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0831 22:06:47.837491   21158 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-062019
	I0831 22:06:47.853421   21158 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/18943-12963/.minikube/machines/addons-062019/id_rsa Username:docker}
	I0831 22:06:47.854148   21158 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/18943-12963/.minikube/machines/addons-062019/id_rsa Username:docker}
	I0831 22:06:47.936267   21158 ssh_runner.go:195] Run: systemctl --version
	I0831 22:06:47.940003   21158 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I0831 22:06:48.007008   21158 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *loopback.conf* -not -name *.mk_disabled -exec sh -c "grep -q loopback {} && ( grep -q name {} || sudo sed -i '/"type": "loopback"/i \ \ \ \ "name": "loopback",' {} ) && sudo sed -i 's|"cniVersion": ".*"|"cniVersion": "1.0.0"|g' {}" ;
	I0831 22:06:48.028443   21158 cni.go:230] loopback cni configuration patched: "/etc/cni/net.d/*loopback.conf*" found
	I0831 22:06:48.028538   21158 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0831 22:06:48.051715   21158 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist, /etc/cni/net.d/100-crio-bridge.conf] bridge cni config(s)
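Conflicting bridge and podman CNI configs are not deleted, only renamed with a .mk_disabled suffix so they can be restored later. A Go sketch of that rename pass (illustrative; the glob patterns mirror the find expression above):

package main

import (
	"fmt"
	"os"
	"path/filepath"
	"strings"
)

func main() {
	for _, pattern := range []string{"/etc/cni/net.d/*bridge*", "/etc/cni/net.d/*podman*"} {
		matches, _ := filepath.Glob(pattern)
		for _, m := range matches {
			if strings.HasSuffix(m, ".mk_disabled") {
				continue // already disabled on a previous run
			}
			if err := os.Rename(m, m+".mk_disabled"); err == nil {
				fmt.Println("disabled", m)
			}
		}
	}
}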
	I0831 22:06:48.051738   21158 start.go:495] detecting cgroup driver to use...
	I0831 22:06:48.051765   21158 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I0831 22:06:48.051867   21158 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0831 22:06:48.065304   21158 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.10"|' /etc/containerd/config.toml"
	I0831 22:06:48.073357   21158 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0831 22:06:48.081591   21158 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I0831 22:06:48.081666   21158 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I0831 22:06:48.089620   21158 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0831 22:06:48.097770   21158 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0831 22:06:48.105952   21158 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0831 22:06:48.115098   21158 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0831 22:06:48.122859   21158 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I0831 22:06:48.131161   21158 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I0831 22:06:48.139244   21158 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
	I0831 22:06:48.147508   21158 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0831 22:06:48.154516   21158 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0831 22:06:48.161504   21158 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0831 22:06:48.242455   21158 ssh_runner.go:195] Run: sudo systemctl restart containerd
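The preceding run of sed commands rewrites /etc/containerd/config.toml in place: pin the pause image, force SystemdCgroup = false to match the detected cgroupfs driver, and migrate any v1 runtime entries to io.containerd.runc.v2, before the daemon-reload and restart. A condensed Go sketch applying an equivalent (abbreviated) edit list:

package main

import (
	"fmt"
	"os/exec"
)

func main() {
	// Extended-regex in-place edits, matching the log's sed invocations.
	edits := []string{
		`s|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.10"|`,
		`s|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g`,
		`s|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g`,
	}
	for _, e := range edits {
		out, err := exec.Command("sudo", "sed", "-i", "-r", "-e", e, "/etc/containerd/config.toml").CombinedOutput()
		if err != nil {
			fmt.Printf("sed failed: %v: %s\n", err, out)
		}
	}
}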
	I0831 22:06:48.326800   21158 start.go:495] detecting cgroup driver to use...
	I0831 22:06:48.326894   21158 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I0831 22:06:48.326959   21158 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I0831 22:06:48.337566   21158 cruntime.go:279] skipping containerd shutdown because we are bound to it
	I0831 22:06:48.337617   21158 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0831 22:06:48.347874   21158 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0831 22:06:48.362312   21158 ssh_runner.go:195] Run: which cri-dockerd
	I0831 22:06:48.365484   21158 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I0831 22:06:48.373303   21158 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (190 bytes)
	I0831 22:06:48.389382   21158 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I0831 22:06:48.485726   21158 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I0831 22:06:48.576939   21158 docker.go:574] configuring docker to use "cgroupfs" as cgroup driver...
	I0831 22:06:48.577072   21158 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (130 bytes)
	I0831 22:06:48.593019   21158 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0831 22:06:48.661687   21158 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0831 22:06:48.903033   21158 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.socket
	I0831 22:06:48.913392   21158 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0831 22:06:48.923329   21158 ssh_runner.go:195] Run: sudo systemctl unmask cri-docker.socket
	I0831 22:06:49.001869   21158 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I0831 22:06:49.082283   21158 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0831 22:06:49.158598   21158 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.socket
	I0831 22:06:49.170892   21158 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0831 22:06:49.181176   21158 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0831 22:06:49.256974   21158 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.service
	I0831 22:06:49.317566   21158 start.go:542] Will wait 60s for socket path /var/run/cri-dockerd.sock
	I0831 22:06:49.317649   21158 ssh_runner.go:195] Run: stat /var/run/cri-dockerd.sock
	I0831 22:06:49.321730   21158 start.go:563] Will wait 60s for crictl version
	I0831 22:06:49.321777   21158 ssh_runner.go:195] Run: which crictl
	I0831 22:06:49.324830   21158 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0831 22:06:49.356093   21158 start.go:579] Version:  0.1.0
	RuntimeName:  docker
	RuntimeVersion:  27.2.0
	RuntimeApiVersion:  v1
	I0831 22:06:49.356151   21158 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0831 22:06:49.378832   21158 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0831 22:06:49.404272   21158 out.go:235] * Preparing Kubernetes v1.31.0 on Docker 27.2.0 ...
	I0831 22:06:49.404367   21158 cli_runner.go:164] Run: docker network inspect addons-062019 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0831 22:06:49.419653   21158 ssh_runner.go:195] Run: grep 192.168.49.1	host.minikube.internal$ /etc/hosts
	I0831 22:06:49.423069   21158 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.49.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
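Pinning host.minikube.internal uses the same pattern as the earlier hostname fix: strip any existing entry, then append the desired one and copy the file back. A native Go version of that grep -v / append step (pinHost is a hypothetical name):

package main

import (
	"os"
	"strings"
)

// pinHost drops any existing line for host and appends "ip<TAB>host",
// the same effect as the bash one-liner above.
func pinHost(path, ip, host string) error {
	data, err := os.ReadFile(path)
	if err != nil {
		return err
	}
	var kept []string
	for _, line := range strings.Split(string(data), "\n") {
		if line != "" && !strings.HasSuffix(line, "\t"+host) {
			kept = append(kept, line)
		}
	}
	kept = append(kept, ip+"\t"+host)
	return os.WriteFile(path, []byte(strings.Join(kept, "\n")+"\n"), 0644)
}

func main() {
	_ = pinHost("/etc/hosts", "192.168.49.1", "host.minikube.internal")
}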
	I0831 22:06:49.432793   21158 kubeadm.go:883] updating cluster {Name:addons-062019 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1724862063-19530@sha256:fd0f41868bf20a720502cce04c5201bfb064f3c267161af6fd5265d69c85c9f0 Memory:4000 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:addons-062019 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0831 22:06:49.432892   21158 preload.go:131] Checking if preload exists for k8s version v1.31.0 and runtime docker
	I0831 22:06:49.432930   21158 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0831 22:06:49.449164   21158 docker.go:685] Got preloaded images: -- stdout --
	registry.k8s.io/kube-apiserver:v1.31.0
	registry.k8s.io/kube-controller-manager:v1.31.0
	registry.k8s.io/kube-scheduler:v1.31.0
	registry.k8s.io/kube-proxy:v1.31.0
	registry.k8s.io/etcd:3.5.15-0
	registry.k8s.io/pause:3.10
	registry.k8s.io/coredns/coredns:v1.11.1
	gcr.io/k8s-minikube/storage-provisioner:v5
	
	-- /stdout --
	I0831 22:06:49.449184   21158 docker.go:615] Images already preloaded, skipping extraction
	I0831 22:06:49.449255   21158 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0831 22:06:49.466758   21158 docker.go:685] Got preloaded images: -- stdout --
	registry.k8s.io/kube-apiserver:v1.31.0
	registry.k8s.io/kube-controller-manager:v1.31.0
	registry.k8s.io/kube-scheduler:v1.31.0
	registry.k8s.io/kube-proxy:v1.31.0
	registry.k8s.io/etcd:3.5.15-0
	registry.k8s.io/pause:3.10
	registry.k8s.io/coredns/coredns:v1.11.1
	gcr.io/k8s-minikube/storage-provisioner:v5
	
	-- /stdout --
	I0831 22:06:49.466780   21158 cache_images.go:84] Images are preloaded, skipping loading
	I0831 22:06:49.466796   21158 kubeadm.go:934] updating node { 192.168.49.2 8443 v1.31.0 docker true true} ...
	I0831 22:06:49.466905   21158 kubeadm.go:946] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=addons-062019 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.49.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.0 ClusterName:addons-062019 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0831 22:06:49.466961   21158 ssh_runner.go:195] Run: docker info --format {{.CgroupDriver}}
	I0831 22:06:49.510565   21158 cni.go:84] Creating CNI manager for ""
	I0831 22:06:49.510588   21158 cni.go:158] "docker" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0831 22:06:49.510603   21158 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0831 22:06:49.510622   21158 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.49.2 APIServerPort:8443 KubernetesVersion:v1.31.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:addons-062019 NodeName:addons-062019 DNSDomain:cluster.local CRISocket:/var/run/cri-dockerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.49.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.49.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/cri-dockerd.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0831 22:06:49.510751   21158 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.49.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/cri-dockerd.sock
	  name: "addons-062019"
	  kubeletExtraArgs:
	    node-ip: 192.168.49.2
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.49.2"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.31.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/cri-dockerd.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
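The generated kubeadm.yaml above is a multi-document file: InitConfiguration, ClusterConfiguration, KubeletConfiguration, and KubeProxyConfiguration in sequence. A quick sanity-check sketch that decodes each document's apiVersion and kind (assumes gopkg.in/yaml.v3; not part of minikube):

package main

import (
	"fmt"
	"io"
	"os"

	"gopkg.in/yaml.v3"
)

func main() {
	f, err := os.Open("/var/tmp/minikube/kubeadm.yaml")
	if err != nil {
		panic(err)
	}
	defer f.Close()
	dec := yaml.NewDecoder(f)
	for {
		var doc struct {
			APIVersion string `yaml:"apiVersion"`
			Kind       string `yaml:"kind"`
		}
		err := dec.Decode(&doc)
		if err == io.EOF {
			break
		}
		if err != nil {
			panic(err)
		}
		fmt.Printf("%s %s\n", doc.APIVersion, doc.Kind)
	}
}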
	I0831 22:06:49.510802   21158 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.0
	I0831 22:06:49.518632   21158 binaries.go:44] Found k8s binaries, skipping transfer
	I0831 22:06:49.518683   21158 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0831 22:06:49.526236   21158 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (312 bytes)
	I0831 22:06:49.542117   21158 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0831 22:06:49.557326   21158 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2155 bytes)
	I0831 22:06:49.572426   21158 ssh_runner.go:195] Run: grep 192.168.49.2	control-plane.minikube.internal$ /etc/hosts
	I0831 22:06:49.575463   21158 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.49.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0831 22:06:49.584895   21158 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0831 22:06:49.662934   21158 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0831 22:06:49.674562   21158 certs.go:68] Setting up /home/jenkins/minikube-integration/18943-12963/.minikube/profiles/addons-062019 for IP: 192.168.49.2
	I0831 22:06:49.674581   21158 certs.go:194] generating shared ca certs ...
	I0831 22:06:49.674594   21158 certs.go:226] acquiring lock for ca certs: {Name:mka49f0f71efa460cf24344abecafa4bd158c39c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0831 22:06:49.674716   21158 certs.go:240] generating "minikubeCA" ca cert: /home/jenkins/minikube-integration/18943-12963/.minikube/ca.key
	I0831 22:06:49.858121   21158 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/18943-12963/.minikube/ca.crt ...
	I0831 22:06:49.858148   21158 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18943-12963/.minikube/ca.crt: {Name:mka87a170c1bfe4965fa754a973bfa5343ecee72 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0831 22:06:49.858304   21158 crypto.go:164] Writing key to /home/jenkins/minikube-integration/18943-12963/.minikube/ca.key ...
	I0831 22:06:49.858315   21158 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18943-12963/.minikube/ca.key: {Name:mk29e1d888f414707ae6c9cb4e8818e3c0073c58 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0831 22:06:49.858390   21158 certs.go:240] generating "proxyClientCA" ca cert: /home/jenkins/minikube-integration/18943-12963/.minikube/proxy-client-ca.key
	I0831 22:06:49.993628   21158 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/18943-12963/.minikube/proxy-client-ca.crt ...
	I0831 22:06:49.993654   21158 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18943-12963/.minikube/proxy-client-ca.crt: {Name:mk25b4b26eab82e3b1937c06021c43314eb08066 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0831 22:06:49.993799   21158 crypto.go:164] Writing key to /home/jenkins/minikube-integration/18943-12963/.minikube/proxy-client-ca.key ...
	I0831 22:06:49.993809   21158 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18943-12963/.minikube/proxy-client-ca.key: {Name:mk2be0b3105ad48653ade09f6163e038b3b53462 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0831 22:06:49.993869   21158 certs.go:256] generating profile certs ...
	I0831 22:06:49.993917   21158 certs.go:363] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/18943-12963/.minikube/profiles/addons-062019/client.key
	I0831 22:06:49.993930   21158 crypto.go:68] Generating cert /home/jenkins/minikube-integration/18943-12963/.minikube/profiles/addons-062019/client.crt with IP's: []
	I0831 22:06:50.185645   21158 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/18943-12963/.minikube/profiles/addons-062019/client.crt ...
	I0831 22:06:50.185674   21158 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18943-12963/.minikube/profiles/addons-062019/client.crt: {Name:mk7b767a2e478062cb88b819d86ce8146cbac5ac Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0831 22:06:50.185830   21158 crypto.go:164] Writing key to /home/jenkins/minikube-integration/18943-12963/.minikube/profiles/addons-062019/client.key ...
	I0831 22:06:50.185841   21158 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18943-12963/.minikube/profiles/addons-062019/client.key: {Name:mk8f37aa0e5ec45248e36c9a1b8011bff939d86a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0831 22:06:50.185910   21158 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/18943-12963/.minikube/profiles/addons-062019/apiserver.key.43b489de
	I0831 22:06:50.185928   21158 crypto.go:68] Generating cert /home/jenkins/minikube-integration/18943-12963/.minikube/profiles/addons-062019/apiserver.crt.43b489de with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.49.2]
	I0831 22:06:50.404041   21158 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/18943-12963/.minikube/profiles/addons-062019/apiserver.crt.43b489de ...
	I0831 22:06:50.404068   21158 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18943-12963/.minikube/profiles/addons-062019/apiserver.crt.43b489de: {Name:mke6ca0ba73dc5e08041d3e8199e9e5d1e138d22 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0831 22:06:50.404213   21158 crypto.go:164] Writing key to /home/jenkins/minikube-integration/18943-12963/.minikube/profiles/addons-062019/apiserver.key.43b489de ...
	I0831 22:06:50.404225   21158 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18943-12963/.minikube/profiles/addons-062019/apiserver.key.43b489de: {Name:mk4858d119aee743cea9682612f4dd042d8acaea Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0831 22:06:50.404293   21158 certs.go:381] copying /home/jenkins/minikube-integration/18943-12963/.minikube/profiles/addons-062019/apiserver.crt.43b489de -> /home/jenkins/minikube-integration/18943-12963/.minikube/profiles/addons-062019/apiserver.crt
	I0831 22:06:50.404375   21158 certs.go:385] copying /home/jenkins/minikube-integration/18943-12963/.minikube/profiles/addons-062019/apiserver.key.43b489de -> /home/jenkins/minikube-integration/18943-12963/.minikube/profiles/addons-062019/apiserver.key
	I0831 22:06:50.404444   21158 certs.go:363] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/18943-12963/.minikube/profiles/addons-062019/proxy-client.key
	I0831 22:06:50.404462   21158 crypto.go:68] Generating cert /home/jenkins/minikube-integration/18943-12963/.minikube/profiles/addons-062019/proxy-client.crt with IP's: []
	I0831 22:06:50.461604   21158 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/18943-12963/.minikube/profiles/addons-062019/proxy-client.crt ...
	I0831 22:06:50.461632   21158 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18943-12963/.minikube/profiles/addons-062019/proxy-client.crt: {Name:mkdbdd421493a58ef8b1bc8095022c819f9a60c3 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0831 22:06:50.461774   21158 crypto.go:164] Writing key to /home/jenkins/minikube-integration/18943-12963/.minikube/profiles/addons-062019/proxy-client.key ...
	I0831 22:06:50.461785   21158 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18943-12963/.minikube/profiles/addons-062019/proxy-client.key: {Name:mkdf0a24ad5150f9bb86abb801d1f918b7cc3871 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
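Each of the writes above pairs a PEM-encoded certificate with its key under a file lock. A stripped-down sketch of how a CA like minikubeCA can be produced with the standard crypto/x509 package (field values such as the 10-year validity are assumptions for illustration, not minikube's exact parameters):

package main

import (
	"crypto/rand"
	"crypto/rsa"
	"crypto/x509"
	"crypto/x509/pkix"
	"encoding/pem"
	"math/big"
	"os"
	"time"
)

func main() {
	key, err := rsa.GenerateKey(rand.Reader, 2048)
	if err != nil {
		panic(err)
	}
	tmpl := &x509.Certificate{
		SerialNumber:          big.NewInt(1),
		Subject:               pkix.Name{CommonName: "minikubeCA"},
		NotBefore:             time.Now(),
		NotAfter:              time.Now().AddDate(10, 0, 0),
		IsCA:                  true,
		KeyUsage:              x509.KeyUsageCertSign | x509.KeyUsageDigitalSignature,
		BasicConstraintsValid: true,
	}
	// Self-signed: the template is its own parent.
	der, err := x509.CreateCertificate(rand.Reader, tmpl, tmpl, &key.PublicKey, key)
	if err != nil {
		panic(err)
	}
	_ = os.WriteFile("ca.crt", pem.EncodeToMemory(&pem.Block{Type: "CERTIFICATE", Bytes: der}), 0644)
	_ = os.WriteFile("ca.key", pem.EncodeToMemory(&pem.Block{Type: "RSA PRIVATE KEY", Bytes: x509.MarshalPKCS1PrivateKey(key)}), 0600)
}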
	I0831 22:06:50.461957   21158 certs.go:484] found cert: /home/jenkins/minikube-integration/18943-12963/.minikube/certs/ca-key.pem (1675 bytes)
	I0831 22:06:50.461987   21158 certs.go:484] found cert: /home/jenkins/minikube-integration/18943-12963/.minikube/certs/ca.pem (1078 bytes)
	I0831 22:06:50.462010   21158 certs.go:484] found cert: /home/jenkins/minikube-integration/18943-12963/.minikube/certs/cert.pem (1123 bytes)
	I0831 22:06:50.462032   21158 certs.go:484] found cert: /home/jenkins/minikube-integration/18943-12963/.minikube/certs/key.pem (1675 bytes)
	I0831 22:06:50.463133   21158 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18943-12963/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0831 22:06:50.485631   21158 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18943-12963/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0831 22:06:50.506889   21158 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18943-12963/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0831 22:06:50.528041   21158 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18943-12963/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0831 22:06:50.548929   21158 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18943-12963/.minikube/profiles/addons-062019/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1419 bytes)
	I0831 22:06:50.569502   21158 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18943-12963/.minikube/profiles/addons-062019/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0831 22:06:50.589748   21158 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18943-12963/.minikube/profiles/addons-062019/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0831 22:06:50.609640   21158 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18943-12963/.minikube/profiles/addons-062019/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0831 22:06:50.629430   21158 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18943-12963/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0831 22:06:50.649555   21158 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0831 22:06:50.664484   21158 ssh_runner.go:195] Run: openssl version
	I0831 22:06:50.669169   21158 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0831 22:06:50.677108   21158 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0831 22:06:50.679987   21158 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Aug 31 22:06 /usr/share/ca-certificates/minikubeCA.pem
	I0831 22:06:50.680026   21158 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0831 22:06:50.685920   21158 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0831 22:06:50.693756   21158 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0831 22:06:50.696493   21158 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0831 22:06:50.696539   21158 kubeadm.go:392] StartCluster: {Name:addons-062019 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1724862063-19530@sha256:fd0f41868bf20a720502cce04c5201bfb064f3c267161af6fd5265d69c85c9f0 Memory:4000 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:addons-062019 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0831 22:06:50.696634   21158 ssh_runner.go:195] Run: docker ps --filter status=paused --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I0831 22:06:50.712306   21158 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0831 22:06:50.719876   21158 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0831 22:06:50.727385   21158 kubeadm.go:214] ignoring SystemVerification for kubeadm because of docker driver
	I0831 22:06:50.727434   21158 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0831 22:06:50.734702   21158 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0831 22:06:50.734720   21158 kubeadm.go:157] found existing configuration files:
	
	I0831 22:06:50.734754   21158 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0831 22:06:50.741911   21158 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0831 22:06:50.741959   21158 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0831 22:06:50.749313   21158 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0831 22:06:50.756468   21158 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0831 22:06:50.756513   21158 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0831 22:06:50.763551   21158 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0831 22:06:50.770832   21158 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0831 22:06:50.770871   21158 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0831 22:06:50.777905   21158 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0831 22:06:50.785263   21158 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0831 22:06:50.785318   21158 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0831 22:06:50.792263   21158 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
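kubeadm init is run with the generated config and a long --ignore-preflight-errors list, since checks such as Swap, Mem and SystemVerification are expected to fail inside a docker-driver container. A trimmed Go sketch of building that invocation (the ignore list here is abbreviated, and the PATH override from the log is omitted):

package main

import (
	"os"
	"os/exec"
	"strings"
)

func main() {
	ignored := []string{"Swap", "NumCPU", "Mem", "SystemVerification"} // abbreviated
	cmd := exec.Command("sudo",
		"/var/lib/minikube/binaries/v1.31.0/kubeadm", "init",
		"--config", "/var/tmp/minikube/kubeadm.yaml",
		"--ignore-preflight-errors="+strings.Join(ignored, ","))
	cmd.Stdout, cmd.Stderr = os.Stdout, os.Stderr
	if err := cmd.Run(); err != nil {
		os.Exit(1)
	}
}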
	I0831 22:06:50.826693   21158 kubeadm.go:310] [init] Using Kubernetes version: v1.31.0
	I0831 22:06:50.826753   21158 kubeadm.go:310] [preflight] Running pre-flight checks
	I0831 22:06:50.844196   21158 kubeadm.go:310] [preflight] The system verification failed. Printing the output from the verification:
	I0831 22:06:50.844314   21158 kubeadm.go:310] KERNEL_VERSION: 5.15.0-1067-gcp
	I0831 22:06:50.844365   21158 kubeadm.go:310] OS: Linux
	I0831 22:06:50.844435   21158 kubeadm.go:310] CGROUPS_CPU: enabled
	I0831 22:06:50.844522   21158 kubeadm.go:310] CGROUPS_CPUACCT: enabled
	I0831 22:06:50.844568   21158 kubeadm.go:310] CGROUPS_CPUSET: enabled
	I0831 22:06:50.844627   21158 kubeadm.go:310] CGROUPS_DEVICES: enabled
	I0831 22:06:50.844697   21158 kubeadm.go:310] CGROUPS_FREEZER: enabled
	I0831 22:06:50.844767   21158 kubeadm.go:310] CGROUPS_MEMORY: enabled
	I0831 22:06:50.844838   21158 kubeadm.go:310] CGROUPS_PIDS: enabled
	I0831 22:06:50.844912   21158 kubeadm.go:310] CGROUPS_HUGETLB: enabled
	I0831 22:06:50.844989   21158 kubeadm.go:310] CGROUPS_BLKIO: enabled
	I0831 22:06:50.891952   21158 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0831 22:06:50.892067   21158 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0831 22:06:50.892218   21158 kubeadm.go:310] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I0831 22:06:50.901281   21158 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0831 22:06:50.904728   21158 out.go:235]   - Generating certificates and keys ...
	I0831 22:06:50.904804   21158 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0831 22:06:50.904870   21158 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0831 22:06:51.062646   21158 kubeadm.go:310] [certs] Generating "apiserver-kubelet-client" certificate and key
	I0831 22:06:51.240350   21158 kubeadm.go:310] [certs] Generating "front-proxy-ca" certificate and key
	I0831 22:06:51.321039   21158 kubeadm.go:310] [certs] Generating "front-proxy-client" certificate and key
	I0831 22:06:51.581954   21158 kubeadm.go:310] [certs] Generating "etcd/ca" certificate and key
	I0831 22:06:51.705401   21158 kubeadm.go:310] [certs] Generating "etcd/server" certificate and key
	I0831 22:06:51.705537   21158 kubeadm.go:310] [certs] etcd/server serving cert is signed for DNS names [addons-062019 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
	I0831 22:06:51.877145   21158 kubeadm.go:310] [certs] Generating "etcd/peer" certificate and key
	I0831 22:06:51.877326   21158 kubeadm.go:310] [certs] etcd/peer serving cert is signed for DNS names [addons-062019 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
	I0831 22:06:51.963476   21158 kubeadm.go:310] [certs] Generating "etcd/healthcheck-client" certificate and key
	I0831 22:06:52.073364   21158 kubeadm.go:310] [certs] Generating "apiserver-etcd-client" certificate and key
	I0831 22:06:52.143035   21158 kubeadm.go:310] [certs] Generating "sa" key and public key
	I0831 22:06:52.143117   21158 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0831 22:06:52.263852   21158 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0831 22:06:52.597162   21158 kubeadm.go:310] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I0831 22:06:52.786652   21158 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0831 22:06:52.951680   21158 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0831 22:06:53.074088   21158 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0831 22:06:53.074591   21158 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0831 22:06:53.076916   21158 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0831 22:06:53.078928   21158 out.go:235]   - Booting up control plane ...
	I0831 22:06:53.079019   21158 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0831 22:06:53.079099   21158 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0831 22:06:53.079177   21158 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0831 22:06:53.087755   21158 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0831 22:06:53.092907   21158 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0831 22:06:53.092974   21158 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0831 22:06:53.174135   21158 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I0831 22:06:53.174283   21158 kubeadm.go:310] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I0831 22:06:54.175556   21158 kubeadm.go:310] [kubelet-check] The kubelet is healthy after 1.001429966s
	I0831 22:06:54.175681   21158 kubeadm.go:310] [api-check] Waiting for a healthy API server. This can take up to 4m0s
	I0831 22:06:58.176563   21158 kubeadm.go:310] [api-check] The API server is healthy after 4.001025273s
	I0831 22:06:58.187563   21158 kubeadm.go:310] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0831 22:06:58.195627   21158 kubeadm.go:310] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0831 22:06:58.211130   21158 kubeadm.go:310] [upload-certs] Skipping phase. Please see --upload-certs
	I0831 22:06:58.211354   21158 kubeadm.go:310] [mark-control-plane] Marking the node addons-062019 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0831 22:06:58.217607   21158 kubeadm.go:310] [bootstrap-token] Using token: 0qpgr5.n8j8v2xl9llefuev
	I0831 22:06:58.219121   21158 out.go:235]   - Configuring RBAC rules ...
	I0831 22:06:58.219221   21158 kubeadm.go:310] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0831 22:06:58.221586   21158 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0831 22:06:58.226244   21158 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0831 22:06:58.229237   21158 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0831 22:06:58.231248   21158 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0831 22:06:58.233440   21158 kubeadm.go:310] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0831 22:06:58.581913   21158 kubeadm.go:310] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0831 22:06:59.050988   21158 kubeadm.go:310] [addons] Applied essential addon: CoreDNS
	I0831 22:06:59.581834   21158 kubeadm.go:310] [addons] Applied essential addon: kube-proxy
	I0831 22:06:59.582718   21158 kubeadm.go:310] 
	I0831 22:06:59.582823   21158 kubeadm.go:310] Your Kubernetes control-plane has initialized successfully!
	I0831 22:06:59.582841   21158 kubeadm.go:310] 
	I0831 22:06:59.582952   21158 kubeadm.go:310] To start using your cluster, you need to run the following as a regular user:
	I0831 22:06:59.582961   21158 kubeadm.go:310] 
	I0831 22:06:59.582994   21158 kubeadm.go:310]   mkdir -p $HOME/.kube
	I0831 22:06:59.583093   21158 kubeadm.go:310]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0831 22:06:59.583155   21158 kubeadm.go:310]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0831 22:06:59.583162   21158 kubeadm.go:310] 
	I0831 22:06:59.583223   21158 kubeadm.go:310] Alternatively, if you are the root user, you can run:
	I0831 22:06:59.583230   21158 kubeadm.go:310] 
	I0831 22:06:59.583293   21158 kubeadm.go:310]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0831 22:06:59.583306   21158 kubeadm.go:310] 
	I0831 22:06:59.583373   21158 kubeadm.go:310] You should now deploy a pod network to the cluster.
	I0831 22:06:59.583463   21158 kubeadm.go:310] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0831 22:06:59.583523   21158 kubeadm.go:310]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0831 22:06:59.583529   21158 kubeadm.go:310] 
	I0831 22:06:59.583656   21158 kubeadm.go:310] You can now join any number of control-plane nodes by copying certificate authorities
	I0831 22:06:59.583763   21158 kubeadm.go:310] and service account keys on each node and then running the following as root:
	I0831 22:06:59.583771   21158 kubeadm.go:310] 
	I0831 22:06:59.583886   21158 kubeadm.go:310]   kubeadm join control-plane.minikube.internal:8443 --token 0qpgr5.n8j8v2xl9llefuev \
	I0831 22:06:59.584025   21158 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:9acc5889cd80bc3b1801de920a50c1dd3e95945a571b5eb74b0c69b12eb7097d \
	I0831 22:06:59.584064   21158 kubeadm.go:310] 	--control-plane 
	I0831 22:06:59.584075   21158 kubeadm.go:310] 
	I0831 22:06:59.584163   21158 kubeadm.go:310] Then you can join any number of worker nodes by running the following on each as root:
	I0831 22:06:59.584169   21158 kubeadm.go:310] 
	I0831 22:06:59.584265   21158 kubeadm.go:310] kubeadm join control-plane.minikube.internal:8443 --token 0qpgr5.n8j8v2xl9llefuev \
	I0831 22:06:59.584427   21158 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:9acc5889cd80bc3b1801de920a50c1dd3e95945a571b5eb74b0c69b12eb7097d 
	I0831 22:06:59.586129   21158 kubeadm.go:310] W0831 22:06:50.823986    1919 common.go:101] your configuration file uses a deprecated API spec: "kubeadm.k8s.io/v1beta3" (kind: "ClusterConfiguration"). Please use 'kubeadm config migrate --old-config old.yaml --new-config new.yaml', which will write the new, similar spec using a newer API version.
	I0831 22:06:59.586389   21158 kubeadm.go:310] W0831 22:06:50.824743    1919 common.go:101] your configuration file uses a deprecated API spec: "kubeadm.k8s.io/v1beta3" (kind: "InitConfiguration"). Please use 'kubeadm config migrate --old-config old.yaml --new-config new.yaml', which will write the new, similar spec using a newer API version.
	I0831 22:06:59.586612   21158 kubeadm.go:310] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1067-gcp\n", err: exit status 1
	I0831 22:06:59.586710   21158 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
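
The --discovery-token-ca-cert-hash value printed in the join commands above is not arbitrary: it is the SHA-256 digest of the cluster CA certificate's DER-encoded Subject Public Key Info (the RFC 7469 pinning format kubeadm uses). A minimal Go sketch of that derivation, assuming the standard kubeadm CA path inside the node; this is illustrative, not minikube's code:

// cahash.go: recompute a kubeadm discovery-token-ca-cert-hash.
package main

import (
	"crypto/sha256"
	"crypto/x509"
	"encoding/hex"
	"encoding/pem"
	"fmt"
	"os"
)

func main() {
	// Standard kubeadm CA location; assumed, not taken from this log.
	pemBytes, err := os.ReadFile("/etc/kubernetes/pki/ca.crt")
	if err != nil {
		panic(err)
	}
	block, _ := pem.Decode(pemBytes)
	if block == nil {
		panic("no PEM block in ca.crt")
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		panic(err)
	}
	// kubeadm hashes the DER-encoded SubjectPublicKeyInfo of the CA key.
	spki, err := x509.MarshalPKIXPublicKey(cert.PublicKey)
	if err != nil {
		panic(err)
	}
	sum := sha256.Sum256(spki)
	fmt.Println("sha256:" + hex.EncodeToString(sum[:]))
}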
	I0831 22:06:59.586721   21158 cni.go:84] Creating CNI manager for ""
	I0831 22:06:59.586749   21158 cni.go:158] "docker" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0831 22:06:59.588359   21158 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0831 22:06:59.589605   21158 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0831 22:06:59.597454   21158 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
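
The 496-byte conflist just copied to /etc/cni/net.d/1-k8s.conflist is the bridge CNI configuration announced on the previous line. A sketch of what such a bridge conflist typically contains, emitted from Go for illustration; the exact field set is an assumption rather than the literal file, though the 10.244.0.0/16 pod subnet matches the PodIP 10.244.0.2 seen later in this log:

// conflist.go: print a representative bridge CNI conflist.
package main

import (
	"encoding/json"
	"fmt"
)

func main() {
	conflist := map[string]any{
		"cniVersion": "0.3.1",
		"name":       "bridge",
		"plugins": []any{
			map[string]any{
				"type":             "bridge",
				"bridge":           "bridge",
				"isDefaultGateway": true, // route pod traffic via the bridge
				"ipMasq":           true, // masquerade traffic leaving the node
				"hairpinMode":      true,
				"ipam": map[string]any{
					"type":   "host-local",
					"subnet": "10.244.0.0/16",
				},
			},
			map[string]any{
				"type":         "portmap",
				"capabilities": map[string]bool{"portMappings": true},
			},
		},
	}
	out, err := json.MarshalIndent(conflist, "", "  ")
	if err != nil {
		panic(err)
	}
	fmt.Println(string(out))
}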
	I0831 22:06:59.612904   21158 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0831 22:06:59.612984   21158 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0831 22:06:59.613002   21158 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes addons-062019 minikube.k8s.io/updated_at=2024_08_31T22_06_59_0700 minikube.k8s.io/version=v1.33.1 minikube.k8s.io/commit=8ab9a20c866aaad18bea6fac47c5d146303457d2 minikube.k8s.io/name=addons-062019 minikube.k8s.io/primary=true
	I0831 22:06:59.676233   21158 ops.go:34] apiserver oom_adj: -16
	I0831 22:06:59.676249   21158 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0831 22:07:00.176894   21158 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0831 22:07:00.676262   21158 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0831 22:07:01.177245   21158 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0831 22:07:01.677135   21158 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0831 22:07:02.176251   21158 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0831 22:07:02.676900   21158 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0831 22:07:03.176946   21158 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0831 22:07:03.676610   21158 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0831 22:07:04.176346   21158 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0831 22:07:04.676638   21158 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0831 22:07:04.758517   21158 kubeadm.go:1113] duration metric: took 5.14559018s to wait for elevateKubeSystemPrivileges
	I0831 22:07:04.758555   21158 kubeadm.go:394] duration metric: took 14.062019898s to StartCluster
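
The repeated "kubectl get sa default" runs above are a poll loop: minikube retries roughly every 500ms until the default ServiceAccount exists, and only then records elevateKubeSystemPrivileges as complete (5.14s here). A minimal Go sketch of that pattern, using plain kubectl via os/exec; the timeout is an assumption, and minikube actually runs the pinned binary over SSH:

// waitsa.go: poll until the default ServiceAccount exists.
package main

import (
	"fmt"
	"os/exec"
	"time"
)

func main() {
	deadline := time.Now().Add(2 * time.Minute) // timeout assumed for illustration
	for time.Now().Before(deadline) {
		cmd := exec.Command("kubectl",
			"--kubeconfig", "/var/lib/minikube/kubeconfig",
			"get", "sa", "default")
		if err := cmd.Run(); err == nil {
			// The ServiceAccount exists, so RBAC bindings against it will stick.
			fmt.Println("default ServiceAccount exists; safe to proceed")
			return
		}
		time.Sleep(500 * time.Millisecond)
	}
	fmt.Println("timed out waiting for default ServiceAccount")
}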
	I0831 22:07:04.758578   21158 settings.go:142] acquiring lock: {Name:mkd64713877abfa1f88ebaf8b2c2d6cceb5d6797 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0831 22:07:04.758690   21158 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/18943-12963/kubeconfig
	I0831 22:07:04.759013   21158 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18943-12963/kubeconfig: {Name:mkafef424239d09e3dbf07be45599ec2de072632 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0831 22:07:04.759176   21158 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0831 22:07:04.759189   21158 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0831 22:07:04.759252   21158 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:true csi-hostpath-driver:true dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:true gvisor:false headlamp:false helm-tiller:true inaccel:false ingress:true ingress-dns:true inspektor-gadget:true istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:true nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:true registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:true volcano:true volumesnapshots:true yakd:true]
	I0831 22:07:04.759356   21158 config.go:182] Loaded profile config "addons-062019": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.31.0
	I0831 22:07:04.759371   21158 addons.go:69] Setting metrics-server=true in profile "addons-062019"
	I0831 22:07:04.759414   21158 addons.go:69] Setting nvidia-device-plugin=true in profile "addons-062019"
	I0831 22:07:04.759431   21158 addons.go:69] Setting storage-provisioner-rancher=true in profile "addons-062019"
	I0831 22:07:04.759422   21158 addons.go:69] Setting csi-hostpath-driver=true in profile "addons-062019"
	I0831 22:07:04.759449   21158 addons.go:234] Setting addon nvidia-device-plugin=true in "addons-062019"
	I0831 22:07:04.759458   21158 addons.go:234] Setting addon metrics-server=true in "addons-062019"
	I0831 22:07:04.759459   21158 addons_storage_classes.go:33] enableOrDisableStorageClasses storage-provisioner-rancher=true on "addons-062019"
	I0831 22:07:04.759484   21158 host.go:66] Checking if "addons-062019" exists ...
	I0831 22:07:04.759488   21158 addons.go:234] Setting addon csi-hostpath-driver=true in "addons-062019"
	I0831 22:07:04.759493   21158 addons.go:69] Setting volumesnapshots=true in profile "addons-062019"
	I0831 22:07:04.759418   21158 addons.go:69] Setting registry=true in profile "addons-062019"
	I0831 22:07:04.759517   21158 host.go:66] Checking if "addons-062019" exists ...
	I0831 22:07:04.759378   21158 addons.go:69] Setting gcp-auth=true in profile "addons-062019"
	I0831 22:07:04.759536   21158 mustload.go:65] Loading cluster: addons-062019
	I0831 22:07:04.759389   21158 addons.go:69] Setting helm-tiller=true in profile "addons-062019"
	I0831 22:07:04.759642   21158 addons.go:234] Setting addon helm-tiller=true in "addons-062019"
	I0831 22:07:04.759686   21158 host.go:66] Checking if "addons-062019" exists ...
	I0831 22:07:04.759484   21158 host.go:66] Checking if "addons-062019" exists ...
	I0831 22:07:04.759815   21158 cli_runner.go:164] Run: docker container inspect addons-062019 --format={{.State.Status}}
	I0831 22:07:04.759969   21158 cli_runner.go:164] Run: docker container inspect addons-062019 --format={{.State.Status}}
	I0831 22:07:04.759983   21158 cli_runner.go:164] Run: docker container inspect addons-062019 --format={{.State.Status}}
	I0831 22:07:04.760082   21158 cli_runner.go:164] Run: docker container inspect addons-062019 --format={{.State.Status}}
	I0831 22:07:04.760098   21158 cli_runner.go:164] Run: docker container inspect addons-062019 --format={{.State.Status}}
	I0831 22:07:04.759536   21158 addons.go:234] Setting addon registry=true in "addons-062019"
	I0831 22:07:04.760176   21158 host.go:66] Checking if "addons-062019" exists ...
	I0831 22:07:04.759371   21158 addons.go:69] Setting inspektor-gadget=true in profile "addons-062019"
	I0831 22:07:04.760305   21158 addons.go:234] Setting addon inspektor-gadget=true in "addons-062019"
	I0831 22:07:04.760329   21158 host.go:66] Checking if "addons-062019" exists ...
	I0831 22:07:04.759689   21158 config.go:182] Loaded profile config "addons-062019": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.31.0
	I0831 22:07:04.760611   21158 cli_runner.go:164] Run: docker container inspect addons-062019 --format={{.State.Status}}
	I0831 22:07:04.760686   21158 cli_runner.go:164] Run: docker container inspect addons-062019 --format={{.State.Status}}
	I0831 22:07:04.760737   21158 cli_runner.go:164] Run: docker container inspect addons-062019 --format={{.State.Status}}
	I0831 22:07:04.759358   21158 addons.go:69] Setting yakd=true in profile "addons-062019"
	I0831 22:07:04.761173   21158 addons.go:234] Setting addon yakd=true in "addons-062019"
	I0831 22:07:04.759398   21158 addons.go:69] Setting ingress=true in profile "addons-062019"
	I0831 22:07:04.761241   21158 host.go:66] Checking if "addons-062019" exists ...
	I0831 22:07:04.761249   21158 addons.go:234] Setting addon ingress=true in "addons-062019"
	I0831 22:07:04.761285   21158 host.go:66] Checking if "addons-062019" exists ...
	I0831 22:07:04.761689   21158 cli_runner.go:164] Run: docker container inspect addons-062019 --format={{.State.Status}}
	I0831 22:07:04.761794   21158 cli_runner.go:164] Run: docker container inspect addons-062019 --format={{.State.Status}}
	I0831 22:07:04.759406   21158 addons.go:69] Setting ingress-dns=true in profile "addons-062019"
	I0831 22:07:04.761981   21158 addons.go:234] Setting addon ingress-dns=true in "addons-062019"
	I0831 22:07:04.762008   21158 host.go:66] Checking if "addons-062019" exists ...
	I0831 22:07:04.759420   21158 addons.go:69] Setting volcano=true in profile "addons-062019"
	I0831 22:07:04.762287   21158 addons.go:234] Setting addon volcano=true in "addons-062019"
	I0831 22:07:04.762334   21158 host.go:66] Checking if "addons-062019" exists ...
	I0831 22:07:04.759396   21158 addons.go:69] Setting default-storageclass=true in profile "addons-062019"
	I0831 22:07:04.763687   21158 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "addons-062019"
	I0831 22:07:04.763862   21158 cli_runner.go:164] Run: docker container inspect addons-062019 --format={{.State.Status}}
	I0831 22:07:04.765627   21158 out.go:177] * Verifying Kubernetes components...
	I0831 22:07:04.759405   21158 addons.go:69] Setting cloud-spanner=true in profile "addons-062019"
	I0831 22:07:04.766040   21158 addons.go:234] Setting addon cloud-spanner=true in "addons-062019"
	I0831 22:07:04.766108   21158 host.go:66] Checking if "addons-062019" exists ...
	I0831 22:07:04.766559   21158 cli_runner.go:164] Run: docker container inspect addons-062019 --format={{.State.Status}}
	I0831 22:07:04.759406   21158 addons.go:69] Setting storage-provisioner=true in profile "addons-062019"
	I0831 22:07:04.767901   21158 addons.go:234] Setting addon storage-provisioner=true in "addons-062019"
	I0831 22:07:04.767941   21158 host.go:66] Checking if "addons-062019" exists ...
	I0831 22:07:04.768406   21158 cli_runner.go:164] Run: docker container inspect addons-062019 --format={{.State.Status}}
	I0831 22:07:04.768606   21158 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0831 22:07:04.759512   21158 addons.go:234] Setting addon volumesnapshots=true in "addons-062019"
	I0831 22:07:04.768912   21158 host.go:66] Checking if "addons-062019" exists ...
	I0831 22:07:04.789884   21158 cli_runner.go:164] Run: docker container inspect addons-062019 --format={{.State.Status}}
	I0831 22:07:04.789906   21158 cli_runner.go:164] Run: docker container inspect addons-062019 --format={{.State.Status}}
	I0831 22:07:04.791452   21158 cli_runner.go:164] Run: docker container inspect addons-062019 --format={{.State.Status}}
	I0831 22:07:04.805846   21158 out.go:177]   - Using image ghcr.io/helm/tiller:v2.17.0
	I0831 22:07:04.807141   21158 addons.go:431] installing /etc/kubernetes/addons/helm-tiller-dp.yaml
	I0831 22:07:04.807165   21158 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/helm-tiller-dp.yaml (2422 bytes)
	I0831 22:07:04.807394   21158 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-062019
	I0831 22:07:04.825732   21158 out.go:177]   - Using image gcr.io/cloud-spanner-emulator/emulator:1.5.23
	I0831 22:07:04.827056   21158 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-provisioner:v3.3.0
	I0831 22:07:04.827079   21158 out.go:177]   - Using image nvcr.io/nvidia/k8s-device-plugin:v0.16.2
	I0831 22:07:04.827132   21158 out.go:177]   - Using image ghcr.io/inspektor-gadget/inspektor-gadget:v0.31.0
	I0831 22:07:04.829474   21158 addons.go:431] installing /etc/kubernetes/addons/ig-namespace.yaml
	I0831 22:07:04.829623   21158 addons.go:431] installing /etc/kubernetes/addons/nvidia-device-plugin.yaml
	I0831 22:07:04.829643   21158 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/nvidia-device-plugin.yaml (1966 bytes)
	I0831 22:07:04.829699   21158 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-062019
	I0831 22:07:04.829494   21158 ssh_runner.go:362] scp inspektor-gadget/ig-namespace.yaml --> /etc/kubernetes/addons/ig-namespace.yaml (55 bytes)
	I0831 22:07:04.829925   21158 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-062019
	I0831 22:07:04.827213   21158 addons.go:431] installing /etc/kubernetes/addons/deployment.yaml
	I0831 22:07:04.830806   21158 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/deployment.yaml (1004 bytes)
	I0831 22:07:04.830851   21158 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-062019
	I0831 22:07:04.834097   21158 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-attacher:v4.0.0
	I0831 22:07:04.834523   21158 out.go:177]   - Using image docker.io/registry:2.8.3
	I0831 22:07:04.836001   21158 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-external-health-monitor-controller:v0.7.0
	I0831 22:07:04.837147   21158 out.go:177]   - Using image gcr.io/k8s-minikube/kube-registry-proxy:0.0.6
	I0831 22:07:04.837323   21158 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-node-driver-registrar:v2.6.0
	I0831 22:07:04.838919   21158 addons.go:431] installing /etc/kubernetes/addons/registry-rc.yaml
	I0831 22:07:04.838937   21158 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-rc.yaml (860 bytes)
	I0831 22:07:04.838968   21158 addons.go:234] Setting addon storage-provisioner-rancher=true in "addons-062019"
	I0831 22:07:04.838992   21158 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-062019
	I0831 22:07:04.838994   21158 host.go:66] Checking if "addons-062019" exists ...
	I0831 22:07:04.839167   21158 out.go:177]   - Using image registry.k8s.io/sig-storage/hostpathplugin:v1.9.0
	I0831 22:07:04.839284   21158 cli_runner.go:164] Run: docker container inspect addons-062019 --format={{.State.Status}}
	I0831 22:07:04.840954   21158 host.go:66] Checking if "addons-062019" exists ...
	I0831 22:07:04.841343   21158 out.go:177]   - Using image registry.k8s.io/sig-storage/livenessprobe:v2.8.0
	I0831 22:07:04.842623   21158 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-resizer:v1.6.0
	I0831 22:07:04.846435   21158 out.go:177]   - Using image docker.io/volcanosh/vc-webhook-manager:v1.9.0
	I0831 22:07:04.847625   21158 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-snapshotter:v6.1.0
	I0831 22:07:04.847677   21158 out.go:177]   - Using image registry.k8s.io/metrics-server/metrics-server:v0.7.2
	I0831 22:07:04.850832   21158 addons.go:431] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0831 22:07:04.850853   21158 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0831 22:07:04.851407   21158 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-062019
	I0831 22:07:04.852378   21158 out.go:177]   - Using image docker.io/volcanosh/vc-controller-manager:v1.9.0
	I0831 22:07:04.851453   21158 addons.go:431] installing /etc/kubernetes/addons/rbac-external-attacher.yaml
	I0831 22:07:04.852503   21158 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-attacher.yaml --> /etc/kubernetes/addons/rbac-external-attacher.yaml (3073 bytes)
	I0831 22:07:04.852755   21158 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-062019
	I0831 22:07:04.855310   21158 out.go:177]   - Using image gcr.io/k8s-minikube/minikube-ingress-dns:0.0.3
	I0831 22:07:04.856733   21158 addons.go:431] installing /etc/kubernetes/addons/ingress-dns-pod.yaml
	I0831 22:07:04.856749   21158 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-dns-pod.yaml (2442 bytes)
	I0831 22:07:04.856793   21158 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-062019
	I0831 22:07:04.857317   21158 out.go:177]   - Using image docker.io/volcanosh/vc-scheduler:v1.9.0
	I0831 22:07:04.858478   21158 out.go:177]   - Using image docker.io/marcnuri/yakd:0.0.5
	I0831 22:07:04.859644   21158 addons.go:431] installing /etc/kubernetes/addons/yakd-ns.yaml
	I0831 22:07:04.859650   21158 addons.go:431] installing /etc/kubernetes/addons/volcano-deployment.yaml
	I0831 22:07:04.859660   21158 ssh_runner.go:362] scp yakd/yakd-ns.yaml --> /etc/kubernetes/addons/yakd-ns.yaml (171 bytes)
	I0831 22:07:04.859683   21158 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/volcano-deployment.yaml (434001 bytes)
	I0831 22:07:04.859732   21158 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-062019
	I0831 22:07:04.859757   21158 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-062019
	I0831 22:07:04.872687   21158 out.go:177]   - Using image registry.k8s.io/sig-storage/snapshot-controller:v6.1.0
	I0831 22:07:04.873948   21158 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml
	I0831 22:07:04.873976   21158 ssh_runner.go:362] scp volumesnapshots/csi-hostpath-snapshotclass.yaml --> /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml (934 bytes)
	I0831 22:07:04.874065   21158 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-062019
	I0831 22:07:04.881696   21158 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0831 22:07:04.883232   21158 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0831 22:07:04.883253   21158 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0831 22:07:04.883307   21158 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-062019
	I0831 22:07:04.885719   21158 addons.go:234] Setting addon default-storageclass=true in "addons-062019"
	I0831 22:07:04.885761   21158 host.go:66] Checking if "addons-062019" exists ...
	I0831 22:07:04.886218   21158 cli_runner.go:164] Run: docker container inspect addons-062019 --format={{.State.Status}}
	I0831 22:07:04.892925   21158 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/18943-12963/.minikube/machines/addons-062019/id_rsa Username:docker}
	I0831 22:07:04.894528   21158 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/18943-12963/.minikube/machines/addons-062019/id_rsa Username:docker}
	I0831 22:07:04.896329   21158 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/18943-12963/.minikube/machines/addons-062019/id_rsa Username:docker}
	I0831 22:07:04.909341   21158 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/18943-12963/.minikube/machines/addons-062019/id_rsa Username:docker}
	I0831 22:07:04.910284   21158 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/18943-12963/.minikube/machines/addons-062019/id_rsa Username:docker}
	I0831 22:07:04.919794   21158 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/18943-12963/.minikube/machines/addons-062019/id_rsa Username:docker}
	I0831 22:07:04.922937   21158 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/18943-12963/.minikube/machines/addons-062019/id_rsa Username:docker}
	I0831 22:07:04.923220   21158 out.go:177]   - Using image docker.io/rancher/local-path-provisioner:v0.0.22
	I0831 22:07:04.924696   21158 out.go:177]   - Using image docker.io/busybox:stable
	I0831 22:07:04.926102   21158 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner-rancher.yaml
	I0831 22:07:04.926123   21158 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner-rancher.yaml (3113 bytes)
	I0831 22:07:04.926175   21158 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-062019
	I0831 22:07:04.927911   21158 out.go:177]   - Using image registry.k8s.io/ingress-nginx/controller:v1.11.2
	I0831 22:07:04.929315   21158 out.go:177]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.4.3
	I0831 22:07:04.929549   21158 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I0831 22:07:04.929569   21158 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0831 22:07:04.929617   21158 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-062019
	I0831 22:07:04.931053   21158 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/18943-12963/.minikube/machines/addons-062019/id_rsa Username:docker}
	I0831 22:07:04.931497   21158 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/18943-12963/.minikube/machines/addons-062019/id_rsa Username:docker}
	I0831 22:07:04.932185   21158 out.go:177]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.4.3
	I0831 22:07:04.932841   21158 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/18943-12963/.minikube/machines/addons-062019/id_rsa Username:docker}
	I0831 22:07:04.933511   21158 addons.go:431] installing /etc/kubernetes/addons/ingress-deploy.yaml
	I0831 22:07:04.933528   21158 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-deploy.yaml (16078 bytes)
	I0831 22:07:04.933576   21158 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-062019
	I0831 22:07:04.933792   21158 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/18943-12963/.minikube/machines/addons-062019/id_rsa Username:docker}
	I0831 22:07:04.940241   21158 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/18943-12963/.minikube/machines/addons-062019/id_rsa Username:docker}
	I0831 22:07:04.947732   21158 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/18943-12963/.minikube/machines/addons-062019/id_rsa Username:docker}
	I0831 22:07:04.949921   21158 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/18943-12963/.minikube/machines/addons-062019/id_rsa Username:docker}
	W0831 22:07:04.955818   21158 sshutil.go:64] dial failure (will retry): ssh: handshake failed: EOF
	I0831 22:07:04.955847   21158 retry.go:31] will retry after 288.400573ms: ssh: handshake failed: EOF
	I0831 22:07:04.975958   21158 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/18943-12963/.minikube/machines/addons-062019/id_rsa Username:docker}
	W0831 22:07:04.978941   21158 sshutil.go:64] dial failure (will retry): ssh: handshake failed: EOF
	I0831 22:07:04.978964   21158 retry.go:31] will retry after 250.515056ms: ssh: handshake failed: EOF
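
The many "docker container inspect -f" commands above share one job: the Go template pulls out the host port Docker published for the node container's SSH port 22/tcp, which the sshutil lines then dial on 127.0.0.1 (port 32768 here). A hedged sketch of the same lookup, with the container name taken from this log and error handling simplified:

// sshport.go: resolve the host port mapped to a container's SSH port.
package main

import (
	"fmt"
	"os/exec"
	"strings"
)

func main() {
	// Same Go template the log lines above pass to docker inspect -f.
	format := `{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}`
	out, err := exec.Command("docker", "container", "inspect",
		"-f", format, "addons-062019").Output()
	if err != nil {
		panic(err)
	}
	// Prints e.g. 32768, matching the "new ssh client" lines above.
	fmt.Println("ssh host port:", strings.TrimSpace(string(out)))
}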
	I0831 22:07:05.168356   21158 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0831 22:07:05.168670   21158 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.49.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.31.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I0831 22:07:05.253159   21158 addons.go:431] installing /etc/kubernetes/addons/ig-serviceaccount.yaml
	I0831 22:07:05.253188   21158 ssh_runner.go:362] scp inspektor-gadget/ig-serviceaccount.yaml --> /etc/kubernetes/addons/ig-serviceaccount.yaml (80 bytes)
	I0831 22:07:05.259610   21158 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/nvidia-device-plugin.yaml
	I0831 22:07:05.260596   21158 addons.go:431] installing /etc/kubernetes/addons/helm-tiller-rbac.yaml
	I0831 22:07:05.260614   21158 ssh_runner.go:362] scp helm-tiller/helm-tiller-rbac.yaml --> /etc/kubernetes/addons/helm-tiller-rbac.yaml (1188 bytes)
	I0831 22:07:05.352096   21158 node_ready.go:35] waiting up to 6m0s for node "addons-062019" to be "Ready" ...
	I0831 22:07:05.355383   21158 node_ready.go:49] node "addons-062019" has status "Ready":"True"
	I0831 22:07:05.355414   21158 node_ready.go:38] duration metric: took 3.211841ms for node "addons-062019" to be "Ready" ...
	I0831 22:07:05.355428   21158 pod_ready.go:36] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0831 22:07:05.364152   21158 pod_ready.go:79] waiting up to 6m0s for pod "coredns-6f6b679f8f-4lbvv" in "kube-system" namespace to be "Ready" ...
	I0831 22:07:05.366197   21158 addons.go:431] installing /etc/kubernetes/addons/registry-svc.yaml
	I0831 22:07:05.366273   21158 ssh_runner.go:362] scp registry/registry-svc.yaml --> /etc/kubernetes/addons/registry-svc.yaml (398 bytes)
	I0831 22:07:05.451336   21158 addons.go:431] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0831 22:07:05.451436   21158 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1907 bytes)
	I0831 22:07:05.451964   21158 addons.go:431] installing /etc/kubernetes/addons/yakd-sa.yaml
	I0831 22:07:05.452031   21158 ssh_runner.go:362] scp yakd/yakd-sa.yaml --> /etc/kubernetes/addons/yakd-sa.yaml (247 bytes)
	I0831 22:07:05.552399   21158 addons.go:431] installing /etc/kubernetes/addons/ig-role.yaml
	I0831 22:07:05.552456   21158 ssh_runner.go:362] scp inspektor-gadget/ig-role.yaml --> /etc/kubernetes/addons/ig-role.yaml (210 bytes)
	I0831 22:07:05.552942   21158 addons.go:431] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml
	I0831 22:07:05.552976   21158 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshotclasses.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml (6471 bytes)
	I0831 22:07:05.563641   21158 addons.go:431] installing /etc/kubernetes/addons/rbac-hostpath.yaml
	I0831 22:07:05.563675   21158 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-hostpath.yaml --> /etc/kubernetes/addons/rbac-hostpath.yaml (4266 bytes)
	I0831 22:07:05.563713   21158 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml
	I0831 22:07:05.564699   21158 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0831 22:07:05.656672   21158 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/deployment.yaml
	I0831 22:07:05.657864   21158 addons.go:431] installing /etc/kubernetes/addons/yakd-crb.yaml
	I0831 22:07:05.657886   21158 ssh_runner.go:362] scp yakd/yakd-crb.yaml --> /etc/kubernetes/addons/yakd-crb.yaml (422 bytes)
	I0831 22:07:05.666298   21158 addons.go:431] installing /etc/kubernetes/addons/registry-proxy.yaml
	I0831 22:07:05.666324   21158 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-proxy.yaml (947 bytes)
	I0831 22:07:05.670774   21158 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0831 22:07:05.670937   21158 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/volcano-deployment.yaml
	I0831 22:07:05.751971   21158 addons.go:431] installing /etc/kubernetes/addons/helm-tiller-svc.yaml
	I0831 22:07:05.752003   21158 ssh_runner.go:362] scp helm-tiller/helm-tiller-svc.yaml --> /etc/kubernetes/addons/helm-tiller-svc.yaml (951 bytes)
	I0831 22:07:05.753897   21158 addons.go:431] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0831 22:07:05.753917   21158 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0831 22:07:05.850625   21158 addons.go:431] installing /etc/kubernetes/addons/ig-rolebinding.yaml
	I0831 22:07:05.850674   21158 ssh_runner.go:362] scp inspektor-gadget/ig-rolebinding.yaml --> /etc/kubernetes/addons/ig-rolebinding.yaml (244 bytes)
	I0831 22:07:05.866195   21158 addons.go:431] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml
	I0831 22:07:05.866226   21158 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshotcontents.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml (23126 bytes)
	I0831 22:07:05.868623   21158 addons.go:431] installing /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml
	I0831 22:07:05.868658   21158 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-health-monitor-controller.yaml --> /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml (3038 bytes)
	I0831 22:07:05.965147   21158 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner-rancher.yaml
	I0831 22:07:06.051466   21158 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml
	I0831 22:07:06.151926   21158 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/helm-tiller-dp.yaml -f /etc/kubernetes/addons/helm-tiller-rbac.yaml -f /etc/kubernetes/addons/helm-tiller-svc.yaml
	I0831 22:07:06.160574   21158 addons.go:431] installing /etc/kubernetes/addons/ig-clusterrole.yaml
	I0831 22:07:06.160601   21158 ssh_runner.go:362] scp inspektor-gadget/ig-clusterrole.yaml --> /etc/kubernetes/addons/ig-clusterrole.yaml (1485 bytes)
	I0831 22:07:06.162433   21158 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml
	I0831 22:07:06.162795   21158 addons.go:431] installing /etc/kubernetes/addons/yakd-svc.yaml
	I0831 22:07:06.162853   21158 ssh_runner.go:362] scp yakd/yakd-svc.yaml --> /etc/kubernetes/addons/yakd-svc.yaml (412 bytes)
	I0831 22:07:06.171624   21158 addons.go:431] installing /etc/kubernetes/addons/rbac-external-provisioner.yaml
	I0831 22:07:06.171668   21158 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-provisioner.yaml --> /etc/kubernetes/addons/rbac-external-provisioner.yaml (4442 bytes)
	I0831 22:07:06.172007   21158 addons.go:431] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0831 22:07:06.172174   21158 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0831 22:07:06.262428   21158 addons.go:431] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml
	I0831 22:07:06.262504   21158 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshots.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml (19582 bytes)
	I0831 22:07:06.457895   21158 addons.go:431] installing /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml
	I0831 22:07:06.457924   21158 ssh_runner.go:362] scp volumesnapshots/rbac-volume-snapshot-controller.yaml --> /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml (3545 bytes)
	I0831 22:07:06.467445   21158 addons.go:431] installing /etc/kubernetes/addons/ig-clusterrolebinding.yaml
	I0831 22:07:06.467472   21158 ssh_runner.go:362] scp inspektor-gadget/ig-clusterrolebinding.yaml --> /etc/kubernetes/addons/ig-clusterrolebinding.yaml (274 bytes)
	I0831 22:07:06.652754   21158 ssh_runner.go:235] Completed: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.49.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.31.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -": (1.484049947s)
	I0831 22:07:06.652850   21158 start.go:971] {"host.minikube.internal": 192.168.49.1} host record injected into CoreDNS's ConfigMap
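
The sed pipeline that just completed splices two things into the Corefile held in the coredns ConfigMap: a hosts block mapping host.minikube.internal to 192.168.49.1, inserted before the forward directive, and a log directive inserted before errors. A Go sketch of the same string transformation; only the inserted stanzas are taken from the command above, while the surrounding Corefile is a representative assumption:

// corednspatch.go: mirror the sed edits applied to the Corefile.
package main

import (
	"fmt"
	"strings"
)

func main() {
	// Representative Corefile body (assumed), indented as in the ConfigMap.
	corefile := ".:53 {\n        errors\n        health\n        forward . /etc/resolv.conf\n        cache 30\n}"
	hosts := "        hosts {\n           192.168.49.1 host.minikube.internal\n           fallthrough\n        }\n"
	// Insert the hosts block before the forward directive, as the first sed -e does.
	patched := strings.Replace(corefile,
		"        forward . /etc/resolv.conf",
		hosts+"        forward . /etc/resolv.conf", 1)
	// Insert "log" before "errors", as the second sed -e does.
	patched = strings.Replace(patched, "        errors", "        log\n        errors", 1)
	fmt.Println(patched)
}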
	I0831 22:07:06.654103   21158 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/nvidia-device-plugin.yaml: (1.394457008s)
	I0831 22:07:06.767282   21158 addons.go:431] installing /etc/kubernetes/addons/yakd-dp.yaml
	I0831 22:07:06.767306   21158 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/yakd-dp.yaml (2017 bytes)
	I0831 22:07:06.856069   21158 addons.go:431] installing /etc/kubernetes/addons/rbac-external-resizer.yaml
	I0831 22:07:06.856101   21158 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-resizer.yaml --> /etc/kubernetes/addons/rbac-external-resizer.yaml (2943 bytes)
	I0831 22:07:06.958476   21158 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0831 22:07:06.959521   21158 addons.go:431] installing /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I0831 22:07:06.959582   21158 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml (1475 bytes)
	I0831 22:07:07.154168   21158 addons.go:431] installing /etc/kubernetes/addons/ig-crd.yaml
	I0831 22:07:07.154197   21158 ssh_runner.go:362] scp inspektor-gadget/ig-crd.yaml --> /etc/kubernetes/addons/ig-crd.yaml (5216 bytes)
	I0831 22:07:07.156317   21158 kapi.go:214] "coredns" deployment in "kube-system" namespace and "addons-062019" context rescaled to 1 replicas
	I0831 22:07:07.170560   21158 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I0831 22:07:07.268417   21158 addons.go:431] installing /etc/kubernetes/addons/rbac-external-snapshotter.yaml
	I0831 22:07:07.268447   21158 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-snapshotter.yaml --> /etc/kubernetes/addons/rbac-external-snapshotter.yaml (3149 bytes)
	I0831 22:07:07.454123   21158 pod_ready.go:103] pod "coredns-6f6b679f8f-4lbvv" in "kube-system" namespace has status "Ready":"False"
	I0831 22:07:07.459403   21158 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/yakd-ns.yaml -f /etc/kubernetes/addons/yakd-sa.yaml -f /etc/kubernetes/addons/yakd-crb.yaml -f /etc/kubernetes/addons/yakd-svc.yaml -f /etc/kubernetes/addons/yakd-dp.yaml
	I0831 22:07:07.758792   21158 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-attacher.yaml
	I0831 22:07:07.758822   21158 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-attacher.yaml (2143 bytes)
	I0831 22:07:07.870115   21158 addons.go:431] installing /etc/kubernetes/addons/ig-daemonset.yaml
	I0831 22:07:07.870189   21158 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-daemonset.yaml (7735 bytes)
	I0831 22:07:08.257924   21158 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml
	I0831 22:07:08.258004   21158 ssh_runner.go:362] scp csi-hostpath-driver/deploy/csi-hostpath-driverinfo.yaml --> /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml (1274 bytes)
	I0831 22:07:08.452864   21158 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml: (2.889118491s)
	I0831 22:07:08.454901   21158 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/ig-namespace.yaml -f /etc/kubernetes/addons/ig-serviceaccount.yaml -f /etc/kubernetes/addons/ig-role.yaml -f /etc/kubernetes/addons/ig-rolebinding.yaml -f /etc/kubernetes/addons/ig-clusterrole.yaml -f /etc/kubernetes/addons/ig-clusterrolebinding.yaml -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-daemonset.yaml
	I0831 22:07:08.953894   21158 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (3.389156443s)
	I0831 22:07:08.953962   21158 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/deployment.yaml: (3.297231554s)
	I0831 22:07:08.954058   21158 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (3.283249044s)
	I0831 22:07:08.958030   21158 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-plugin.yaml
	I0831 22:07:08.958051   21158 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-plugin.yaml (8201 bytes)
	I0831 22:07:09.462218   21158 pod_ready.go:103] pod "coredns-6f6b679f8f-4lbvv" in "kube-system" namespace has status "Ready":"False"
	I0831 22:07:09.754190   21158 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-resizer.yaml
	I0831 22:07:09.754269   21158 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-resizer.yaml (2191 bytes)
	I0831 22:07:09.854663   21158 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
	I0831 22:07:09.854742   21158 ssh_runner.go:362] scp csi-hostpath-driver/deploy/csi-hostpath-storageclass.yaml --> /etc/kubernetes/addons/csi-hostpath-storageclass.yaml (846 bytes)
	I0831 22:07:09.872552   21158 pod_ready.go:93] pod "coredns-6f6b679f8f-4lbvv" in "kube-system" namespace has status "Ready":"True"
	I0831 22:07:09.872592   21158 pod_ready.go:82] duration metric: took 4.50834949s for pod "coredns-6f6b679f8f-4lbvv" in "kube-system" namespace to be "Ready" ...
	I0831 22:07:09.872606   21158 pod_ready.go:79] waiting up to 6m0s for pod "coredns-6f6b679f8f-rtdbs" in "kube-system" namespace to be "Ready" ...
	I0831 22:07:10.755803   21158 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
	I0831 22:07:11.856991   21158 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_application_credentials.json (162 bytes)
	I0831 22:07:11.857100   21158 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-062019
	I0831 22:07:11.881353   21158 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/18943-12963/.minikube/machines/addons-062019/id_rsa Username:docker}
	I0831 22:07:11.958423   21158 pod_ready.go:103] pod "coredns-6f6b679f8f-rtdbs" in "kube-system" namespace has status "Ready":"False"
	I0831 22:07:12.763479   21158 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_cloud_project (12 bytes)
	I0831 22:07:12.964905   21158 addons.go:234] Setting addon gcp-auth=true in "addons-062019"
	I0831 22:07:12.964964   21158 host.go:66] Checking if "addons-062019" exists ...
	I0831 22:07:12.965500   21158 cli_runner.go:164] Run: docker container inspect addons-062019 --format={{.State.Status}}
	I0831 22:07:12.984331   21158 ssh_runner.go:195] Run: cat /var/lib/minikube/google_application_credentials.json
	I0831 22:07:12.984378   21158 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-062019
	I0831 22:07:13.000179   21158 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/18943-12963/.minikube/machines/addons-062019/id_rsa Username:docker}
	I0831 22:07:14.460941   21158 pod_ready.go:103] pod "coredns-6f6b679f8f-rtdbs" in "kube-system" namespace has status "Ready":"False"
	I0831 22:07:15.456267   21158 pod_ready.go:98] pod "coredns-6f6b679f8f-rtdbs" in "kube-system" namespace has status phase "Succeeded" (skipping!): {Phase:Succeeded Conditions:[{Type:PodReadyToStartContainers Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2024-08-31 22:07:15 +0000 UTC Reason: Message:} {Type:Initialized Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2024-08-31 22:07:04 +0000 UTC Reason:PodCompleted Message:} {Type:Ready Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2024-08-31 22:07:04 +0000 UTC Reason:PodCompleted Message:} {Type:ContainersReady Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2024-08-31 22:07:04 +0000 UTC Reason:PodCompleted Message:} {Type:PodScheduled Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2024-08-31 22:07:04 +0000 UTC Reason: Message:}] Message: Reason: NominatedNodeName: HostIP:192.168.49.2 HostIPs:[{IP:192.168.49.2}] PodIP:10.244.0.2 PodIPs:[{IP:10.244.0.2}] StartTime:2024-08-31 22:07:04 +0000 UTC InitContainerStatuses:[] ContainerStatuses:[{Name:coredns State:{Waiting:nil Running:nil Terminated:&ContainerStateTerminated{ExitCode:0,Signal:0,Reason:Completed,Message:,StartedAt:2024-08-31 22:07:07 +0000 UTC,FinishedAt:2024-08-31 22:07:14 +0000 UTC,ContainerID:docker://60e735ffc1a476639bb8fd46831afde97e4946012a5bff9b66ab31ea2a1fbad1,}} LastTerminationState:{Waiting:nil Running:nil Terminated:nil} Ready:false RestartCount:0 Image:registry.k8s.io/coredns/coredns:v1.11.1 ImageID:docker-pullable://registry.k8s.io/coredns/coredns@sha256:1eeb4c7316bacb1d4c8ead65571cd92dd21e27359f0d4917f1a5822a73b75db1 ContainerID:docker://60e735ffc1a476639bb8fd46831afde97e4946012a5bff9b66ab31ea2a1fbad1 Started:0xc001a0b9f0 AllocatedResources:map[] Resources:nil VolumeMounts:[{Name:config-volume MountPath:/etc/coredns ReadOnly:true RecursiveReadOnly:0xc001a09f10} {Name:kube-api-access-b46hb MountPath:/var/run/secrets/kubernetes.io/serviceaccount ReadOnly:true RecursiveReadOnly:0xc001a09f20}] User:nil AllocatedResourcesStatus:[]}] QOSClass:Burstable EphemeralContainerStatuses:[] Resize: ResourceClaimStatuses:[]}
	I0831 22:07:15.456362   21158 pod_ready.go:82] duration metric: took 5.583745603s for pod "coredns-6f6b679f8f-rtdbs" in "kube-system" namespace to be "Ready" ...
	E0831 22:07:15.456385   21158 pod_ready.go:67] WaitExtra: waitPodCondition: pod "coredns-6f6b679f8f-rtdbs" in "kube-system" namespace has status phase "Succeeded" (skipping!): {Phase:Succeeded Conditions:[{Type:PodReadyToStartContainers Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2024-08-31 22:07:15 +0000 UTC Reason: Message:} {Type:Initialized Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2024-08-31 22:07:04 +0000 UTC Reason:PodCompleted Message:} {Type:Ready Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2024-08-31 22:07:04 +0000 UTC Reason:PodCompleted Message:} {Type:ContainersReady Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2024-08-31 22:07:04 +0000 UTC Reason:PodCompleted Message:} {Type:PodScheduled Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2024-08-31 22:07:04 +0000 UTC Reason: Message:}] Message: Reason: NominatedNodeName: HostIP:192.168.49.2 HostIPs:[{IP:192.168.49.2}] PodIP:10.244.0.2 PodIPs:[{IP:10.244.0.2}] StartTime:2024-08-31 22:07:04 +0000 UTC InitContainerStatuses:[] ContainerStatuses:[{Name:coredns State:{Waiting:nil Running:nil Terminated:&ContainerStateTerminated{ExitCode:0,Signal:0,Reason:Completed,Message:,StartedAt:2024-08-31 22:07:07 +0000 UTC,FinishedAt:2024-08-31 22:07:14 +0000 UTC,ContainerID:docker://60e735ffc1a476639bb8fd46831afde97e4946012a5bff9b66ab31ea2a1fbad1,}} LastTerminationState:{Waiting:nil Running:nil Terminated:nil} Ready:false RestartCount:0 Image:registry.k8s.io/coredns/coredns:v1.11.1 ImageID:docker-pullable://registry.k8s.io/coredns/coredns@sha256:1eeb4c7316bacb1d4c8ead65571cd92dd21e27359f0d4917f1a5822a73b75db1 ContainerID:docker://60e735ffc1a476639bb8fd46831afde97e4946012a5bff9b66ab31ea2a1fbad1 Started:0xc001a0b9f0 AllocatedResources:map[] Resources:nil VolumeMounts:[{Name:config-volume MountPath:/etc/coredns ReadOnly:true RecursiveReadOnly:0xc001a09f10} {Name:kube-api-access-b46hb MountPath:/var/run/secrets/kubernetes.io/serviceaccount ReadOnly:true RecursiveReadOnly:0xc001a09f20}] User:nil AllocatedResourcesStatus:[]}] QOSClass:Burstable EphemeralContainerStatuses:[] Resize: ResourceClaimStatuses:[]}
	I0831 22:07:15.456395   21158 pod_ready.go:79] waiting up to 6m0s for pod "etcd-addons-062019" in "kube-system" namespace to be "Ready" ...
	I0831 22:07:15.461308   21158 pod_ready.go:93] pod "etcd-addons-062019" in "kube-system" namespace has status "Ready":"True"
	I0831 22:07:15.461380   21158 pod_ready.go:82] duration metric: took 4.975442ms for pod "etcd-addons-062019" in "kube-system" namespace to be "Ready" ...
	I0831 22:07:15.461418   21158 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-addons-062019" in "kube-system" namespace to be "Ready" ...
	I0831 22:07:15.470498   21158 pod_ready.go:93] pod "kube-apiserver-addons-062019" in "kube-system" namespace has status "Ready":"True"
	I0831 22:07:15.470532   21158 pod_ready.go:82] duration metric: took 9.080704ms for pod "kube-apiserver-addons-062019" in "kube-system" namespace to be "Ready" ...
	I0831 22:07:15.470546   21158 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-addons-062019" in "kube-system" namespace to be "Ready" ...
	I0831 22:07:15.556051   21158 pod_ready.go:93] pod "kube-controller-manager-addons-062019" in "kube-system" namespace has status "Ready":"True"
	I0831 22:07:15.556133   21158 pod_ready.go:82] duration metric: took 85.578381ms for pod "kube-controller-manager-addons-062019" in "kube-system" namespace to be "Ready" ...
	I0831 22:07:15.556159   21158 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-fkhrj" in "kube-system" namespace to be "Ready" ...
	I0831 22:07:15.565150   21158 pod_ready.go:93] pod "kube-proxy-fkhrj" in "kube-system" namespace has status "Ready":"True"
	I0831 22:07:15.565248   21158 pod_ready.go:82] duration metric: took 9.069826ms for pod "kube-proxy-fkhrj" in "kube-system" namespace to be "Ready" ...
	I0831 22:07:15.565278   21158 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-addons-062019" in "kube-system" namespace to be "Ready" ...
	I0831 22:07:15.851091   21158 pod_ready.go:93] pod "kube-scheduler-addons-062019" in "kube-system" namespace has status "Ready":"True"
	I0831 22:07:15.851186   21158 pod_ready.go:82] duration metric: took 285.888767ms for pod "kube-scheduler-addons-062019" in "kube-system" namespace to be "Ready" ...
	I0831 22:07:15.851210   21158 pod_ready.go:39] duration metric: took 10.495766014s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0831 22:07:15.851265   21158 api_server.go:52] waiting for apiserver process to appear ...
	I0831 22:07:15.851359   21158 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0831 22:07:16.870235   21158 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/volcano-deployment.yaml: (11.199258544s)
	I0831 22:07:16.870422   21158 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml: (10.818863075s)
	I0831 22:07:16.870969   21158 addons.go:475] Verifying addon registry=true in "addons-062019"
	I0831 22:07:16.870460   21158 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/helm-tiller-dp.yaml -f /etc/kubernetes/addons/helm-tiller-rbac.yaml -f /etc/kubernetes/addons/helm-tiller-svc.yaml: (10.718456615s)
	I0831 22:07:16.870557   21158 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml: (10.708052587s)
	I0831 22:07:16.871103   21158 addons.go:475] Verifying addon ingress=true in "addons-062019"
	I0831 22:07:16.870620   21158 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (9.912056648s)
	I0831 22:07:16.871254   21158 addons.go:475] Verifying addon metrics-server=true in "addons-062019"
	I0831 22:07:16.870730   21158 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (9.700065711s)
	I0831 22:07:16.870834   21158 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/ig-namespace.yaml -f /etc/kubernetes/addons/ig-serviceaccount.yaml -f /etc/kubernetes/addons/ig-role.yaml -f /etc/kubernetes/addons/ig-rolebinding.yaml -f /etc/kubernetes/addons/ig-clusterrole.yaml -f /etc/kubernetes/addons/ig-clusterrolebinding.yaml -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-daemonset.yaml: (8.415904535s)
	I0831 22:07:16.870864   21158 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner-rancher.yaml: (10.905144853s)
	W0831 22:07:16.871301   21158 addons.go:457] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
	stdout:
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
	serviceaccount/snapshot-controller created
	clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
	clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
	role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	deployment.apps/snapshot-controller created
	
	stderr:
	error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
	ensure CRDs are installed first
	I0831 22:07:16.871402   21158 retry.go:31] will retry after 350.308725ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
	stdout:
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
	serviceaccount/snapshot-controller created
	clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
	clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
	role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	deployment.apps/snapshot-controller created
	
	stderr:
	error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
	ensure CRDs are installed first
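
Editor's note: the failure and retry above are the usual CRD ordering race. A single kubectl apply submits both the VolumeSnapshot CRDs and a VolumeSnapshotClass object, and the API server has no REST mapping for the new kind yet, hence "no matches for kind ... ensure CRDs are installed first". Minikube's answer, per the retry.go line above and the later apply --force run, is to back off and re-apply. A small sketch of that retry shape, assuming a hypothetical applyAddon helper in place of the real ssh_runner call:

package main

import (
	"fmt"
	"strings"
	"time"
)

// applyAddon is a hypothetical stand-in for the kubectl apply seen in
// the log; here it fails once the way the real run did, then succeeds,
// so the retry path can be exercised locally.
func applyAddon(attempt int) error {
	if attempt == 0 {
		return fmt.Errorf(`no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"`)
	}
	return nil
}

// retryable: only errors that look like the CRD race are worth re-applying.
func retryable(err error) bool {
	return err != nil && strings.Contains(err.Error(), "no matches for kind")
}

func main() {
	backoff := 350 * time.Millisecond // the log retries after ~350ms
	for attempt := 0; ; attempt++ {
		err := applyAddon(attempt)
		if err == nil {
			fmt.Println("apply succeeded on attempt", attempt+1)
			return
		}
		if !retryable(err) {
			fmt.Println("giving up:", err)
			return
		}
		fmt.Printf("will retry after %v: %v\n", backoff, err)
		time.Sleep(backoff)
		backoff *= 2 // assumed exponential growth between re-applies
	}
}

The 350ms figure and the --force re-apply both appear verbatim in the log; only the helper and the backoff doubling here are assumptions.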
	I0831 22:07:16.870770   21158 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/yakd-ns.yaml -f /etc/kubernetes/addons/yakd-sa.yaml -f /etc/kubernetes/addons/yakd-crb.yaml -f /etc/kubernetes/addons/yakd-svc.yaml -f /etc/kubernetes/addons/yakd-dp.yaml: (9.411289432s)
	I0831 22:07:16.951218   21158 out.go:177] * Verifying ingress addon...
	I0831 22:07:16.951222   21158 out.go:177] * To access YAKD - Kubernetes Dashboard, wait for Pod to be ready and run the following command:
	
		minikube -p addons-062019 service yakd-dashboard -n yakd-dashboard
	
	I0831 22:07:16.951275   21158 out.go:177] * Verifying registry addon...
	I0831 22:07:16.953635   21158 kapi.go:75] Waiting for pod with label "app.kubernetes.io/name=ingress-nginx" in ns "ingress-nginx" ...
	I0831 22:07:16.953634   21158 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=registry" in ns "kube-system" ...
	I0831 22:07:16.957264   21158 kapi.go:86] Found 2 Pods for label selector kubernetes.io/minikube-addons=registry
	I0831 22:07:16.957286   21158 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0831 22:07:16.958515   21158 kapi.go:86] Found 3 Pods for label selector app.kubernetes.io/name=ingress-nginx
	I0831 22:07:16.958533   21158 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0831 22:07:17.222138   21158 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I0831 22:07:17.463674   21158 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0831 22:07:17.464712   21158 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0831 22:07:17.878335   21158 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml: (7.12240761s)
	I0831 22:07:17.878380   21158 addons.go:475] Verifying addon csi-hostpath-driver=true in "addons-062019"
	I0831 22:07:17.878460   21158 ssh_runner.go:235] Completed: sudo pgrep -xnf kube-apiserver.*minikube.*: (2.027065592s)
	I0831 22:07:17.878490   21158 api_server.go:72] duration metric: took 13.119278265s to wait for apiserver process to appear ...
	I0831 22:07:17.878498   21158 api_server.go:88] waiting for apiserver healthz status ...
	I0831 22:07:17.878444   21158 ssh_runner.go:235] Completed: cat /var/lib/minikube/google_application_credentials.json: (4.894066615s)
	I0831 22:07:17.878520   21158 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I0831 22:07:17.879866   21158 out.go:177]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.4.3
	I0831 22:07:17.879935   21158 out.go:177] * Verifying csi-hostpath-driver addon...
	I0831 22:07:17.883365   21158 out.go:177]   - Using image gcr.io/k8s-minikube/gcp-auth-webhook:v0.1.2
	I0831 22:07:17.884009   21158 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=csi-hostpath-driver" in ns "kube-system" ...
	I0831 22:07:17.884455   21158 addons.go:431] installing /etc/kubernetes/addons/gcp-auth-ns.yaml
	I0831 22:07:17.884470   21158 ssh_runner.go:362] scp gcp-auth/gcp-auth-ns.yaml --> /etc/kubernetes/addons/gcp-auth-ns.yaml (700 bytes)
	I0831 22:07:17.950362   21158 api_server.go:279] https://192.168.49.2:8443/healthz returned 200:
	ok
	I0831 22:07:17.951502   21158 api_server.go:141] control plane version: v1.31.0
	I0831 22:07:17.951529   21158 api_server.go:131] duration metric: took 73.024821ms to wait for apiserver health ...
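
Editor's note: the two api_server.go lines above are the healthz probe. Once the kube-apiserver process is found, readiness is confirmed by polling /healthz until it answers 200 with body "ok". A self-contained sketch of that check, assuming an in-process test server instead of the real https://192.168.49.2:8443 endpoint (which would need the cluster CA and client certificates):

package main

import (
	"fmt"
	"io"
	"net/http"
	"net/http/httptest"
	"time"
)

// checkHealthz reports whether the apiserver-style endpoint answers
// 200 "ok", the same condition api_server.go logs above.
func checkHealthz(client *http.Client, url string) (bool, error) {
	resp, err := client.Get(url + "/healthz")
	if err != nil {
		return false, err
	}
	defer resp.Body.Close()
	body, err := io.ReadAll(resp.Body)
	if err != nil {
		return false, err
	}
	return resp.StatusCode == http.StatusOK && string(body) == "ok", nil
}

func main() {
	// Stand-in for the real apiserver; a production check would use a
	// client configured with the cluster CA instead of this test server.
	srv := httptest.NewServer(http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) {
		fmt.Fprint(w, "ok")
	}))
	defer srv.Close()

	client := &http.Client{Timeout: 5 * time.Second}
	start := time.Now()
	healthy, err := checkHealthz(client, srv.URL)
	fmt.Printf("healthy=%v err=%v (took %v)\n", healthy, err, time.Since(start))
}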
	I0831 22:07:17.951539   21158 system_pods.go:43] waiting for kube-system pods to appear ...
	I0831 22:07:17.956822   21158 kapi.go:86] Found 3 Pods for label selector kubernetes.io/minikube-addons=csi-hostpath-driver
	I0831 22:07:17.956848   21158 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0831 22:07:17.963113   21158 system_pods.go:59] 18 kube-system pods found
	I0831 22:07:17.963152   21158 system_pods.go:61] "coredns-6f6b679f8f-4lbvv" [b9733242-1108-4dbe-a7da-9da5a0d1dfbb] Running
	I0831 22:07:17.963165   21158 system_pods.go:61] "csi-hostpath-attacher-0" [0e809467-f79b-4cf4-a160-5c11d2658d13] Pending / Ready:ContainersNotReady (containers with unready status: [csi-attacher]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-attacher])
	I0831 22:07:17.963172   21158 system_pods.go:61] "csi-hostpath-resizer-0" [b5320479-86c8-454b-83f6-9be42e45358e] Pending
	I0831 22:07:17.963184   21158 system_pods.go:61] "csi-hostpathplugin-rf7ph" [0fd767b3-9371-481d-81de-b1f6028ffabb] Pending / Ready:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter])
	I0831 22:07:17.963195   21158 system_pods.go:61] "etcd-addons-062019" [ad613698-243d-4b5b-8b88-4db3b1c8efb7] Running
	I0831 22:07:17.963202   21158 system_pods.go:61] "kube-apiserver-addons-062019" [fd85b0bc-6e03-4696-987b-f497b07378f8] Running
	I0831 22:07:17.963208   21158 system_pods.go:61] "kube-controller-manager-addons-062019" [b468ff23-0947-4659-b605-85efc2c04798] Running
	I0831 22:07:17.963216   21158 system_pods.go:61] "kube-ingress-dns-minikube" [e3a9d474-2b73-41c4-8ff2-f54d53126f65] Pending / Ready:ContainersNotReady (containers with unready status: [minikube-ingress-dns]) / ContainersReady:ContainersNotReady (containers with unready status: [minikube-ingress-dns])
	I0831 22:07:17.963228   21158 system_pods.go:61] "kube-proxy-fkhrj" [5285b32f-ebf0-436b-8196-ef1eb5ba3ef9] Running
	I0831 22:07:17.963234   21158 system_pods.go:61] "kube-scheduler-addons-062019" [79b6ac55-0ed7-4aea-a71d-3b48c2106092] Running
	I0831 22:07:17.963242   21158 system_pods.go:61] "metrics-server-84c5f94fbc-f95vb" [f6b764bb-e039-4baa-bc7e-1cbb1dfa8c04] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0831 22:07:17.963255   21158 system_pods.go:61] "nvidia-device-plugin-daemonset-cvd8z" [d6872087-20a9-403a-8d49-aaa43c16db51] Pending / Ready:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr]) / ContainersReady:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr])
	I0831 22:07:17.963267   21158 system_pods.go:61] "registry-6fb4cdfc84-gvcgq" [ea0149ab-7745-43b6-8b62-1ea10549905c] Pending / Ready:ContainersNotReady (containers with unready status: [registry]) / ContainersReady:ContainersNotReady (containers with unready status: [registry])
	I0831 22:07:17.963276   21158 system_pods.go:61] "registry-proxy-h7plr" [8518e062-22c3-4792-8477-519c3acc1417] Pending / Ready:ContainersNotReady (containers with unready status: [registry-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-proxy])
	I0831 22:07:17.963288   21158 system_pods.go:61] "snapshot-controller-56fcc65765-bd79z" [62aa7a7e-5840-42e2-b0fd-52803a4b7a9f] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I0831 22:07:17.963306   21158 system_pods.go:61] "snapshot-controller-56fcc65765-qhqhv" [45356d72-8fa6-4870-a656-9f0eaedd62a6] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I0831 22:07:17.963315   21158 system_pods.go:61] "storage-provisioner" [1fc3cf5f-9344-4f82-9f79-05cc4049aaf9] Running
	I0831 22:07:17.963323   21158 system_pods.go:61] "tiller-deploy-b48cc5f79-89fbx" [7c41af58-b085-4eb2-97a3-5ec58bd639c0] Pending / Ready:ContainersNotReady (containers with unready status: [tiller]) / ContainersReady:ContainersNotReady (containers with unready status: [tiller])
	I0831 22:07:17.963332   21158 system_pods.go:74] duration metric: took 11.785728ms to wait for pod list to return data ...
	I0831 22:07:17.963346   21158 default_sa.go:34] waiting for default service account to be created ...
	I0831 22:07:17.967576   21158 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0831 22:07:17.968378   21158 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0831 22:07:17.969121   21158 default_sa.go:45] found service account: "default"
	I0831 22:07:17.969143   21158 default_sa.go:55] duration metric: took 5.786689ms for default service account to be created ...
	I0831 22:07:17.969155   21158 system_pods.go:116] waiting for k8s-apps to be running ...
	I0831 22:07:17.977573   21158 system_pods.go:86] 18 kube-system pods found
	I0831 22:07:17.977599   21158 system_pods.go:89] "coredns-6f6b679f8f-4lbvv" [b9733242-1108-4dbe-a7da-9da5a0d1dfbb] Running
	I0831 22:07:17.977611   21158 system_pods.go:89] "csi-hostpath-attacher-0" [0e809467-f79b-4cf4-a160-5c11d2658d13] Pending / Ready:ContainersNotReady (containers with unready status: [csi-attacher]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-attacher])
	I0831 22:07:17.977620   21158 system_pods.go:89] "csi-hostpath-resizer-0" [b5320479-86c8-454b-83f6-9be42e45358e] Pending / Ready:ContainersNotReady (containers with unready status: [csi-resizer]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-resizer])
	I0831 22:07:17.977635   21158 system_pods.go:89] "csi-hostpathplugin-rf7ph" [0fd767b3-9371-481d-81de-b1f6028ffabb] Pending / Ready:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter])
	I0831 22:07:17.977647   21158 system_pods.go:89] "etcd-addons-062019" [ad613698-243d-4b5b-8b88-4db3b1c8efb7] Running
	I0831 22:07:17.977653   21158 system_pods.go:89] "kube-apiserver-addons-062019" [fd85b0bc-6e03-4696-987b-f497b07378f8] Running
	I0831 22:07:17.977659   21158 system_pods.go:89] "kube-controller-manager-addons-062019" [b468ff23-0947-4659-b605-85efc2c04798] Running
	I0831 22:07:17.977668   21158 system_pods.go:89] "kube-ingress-dns-minikube" [e3a9d474-2b73-41c4-8ff2-f54d53126f65] Pending / Ready:ContainersNotReady (containers with unready status: [minikube-ingress-dns]) / ContainersReady:ContainersNotReady (containers with unready status: [minikube-ingress-dns])
	I0831 22:07:17.977676   21158 system_pods.go:89] "kube-proxy-fkhrj" [5285b32f-ebf0-436b-8196-ef1eb5ba3ef9] Running
	I0831 22:07:17.977683   21158 system_pods.go:89] "kube-scheduler-addons-062019" [79b6ac55-0ed7-4aea-a71d-3b48c2106092] Running
	I0831 22:07:17.977694   21158 system_pods.go:89] "metrics-server-84c5f94fbc-f95vb" [f6b764bb-e039-4baa-bc7e-1cbb1dfa8c04] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0831 22:07:17.977705   21158 system_pods.go:89] "nvidia-device-plugin-daemonset-cvd8z" [d6872087-20a9-403a-8d49-aaa43c16db51] Pending / Ready:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr]) / ContainersReady:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr])
	I0831 22:07:17.977714   21158 system_pods.go:89] "registry-6fb4cdfc84-gvcgq" [ea0149ab-7745-43b6-8b62-1ea10549905c] Pending / Ready:ContainersNotReady (containers with unready status: [registry]) / ContainersReady:ContainersNotReady (containers with unready status: [registry])
	I0831 22:07:17.977727   21158 system_pods.go:89] "registry-proxy-h7plr" [8518e062-22c3-4792-8477-519c3acc1417] Pending / Ready:ContainersNotReady (containers with unready status: [registry-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-proxy])
	I0831 22:07:17.977736   21158 system_pods.go:89] "snapshot-controller-56fcc65765-bd79z" [62aa7a7e-5840-42e2-b0fd-52803a4b7a9f] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I0831 22:07:17.977746   21158 system_pods.go:89] "snapshot-controller-56fcc65765-qhqhv" [45356d72-8fa6-4870-a656-9f0eaedd62a6] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I0831 22:07:17.977758   21158 system_pods.go:89] "storage-provisioner" [1fc3cf5f-9344-4f82-9f79-05cc4049aaf9] Running
	I0831 22:07:17.977766   21158 system_pods.go:89] "tiller-deploy-b48cc5f79-89fbx" [7c41af58-b085-4eb2-97a3-5ec58bd639c0] Pending / Ready:ContainersNotReady (containers with unready status: [tiller]) / ContainersReady:ContainersNotReady (containers with unready status: [tiller])
	I0831 22:07:17.977774   21158 system_pods.go:126] duration metric: took 8.612365ms to wait for k8s-apps to be running ...
	I0831 22:07:17.977786   21158 system_svc.go:44] waiting for kubelet service to be running ....
	I0831 22:07:17.977833   21158 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0831 22:07:17.980142   21158 addons.go:431] installing /etc/kubernetes/addons/gcp-auth-service.yaml
	I0831 22:07:17.980161   21158 ssh_runner.go:362] scp gcp-auth/gcp-auth-service.yaml --> /etc/kubernetes/addons/gcp-auth-service.yaml (788 bytes)
	I0831 22:07:18.067439   21158 addons.go:431] installing /etc/kubernetes/addons/gcp-auth-webhook.yaml
	I0831 22:07:18.067459   21158 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/gcp-auth-webhook.yaml (5421 bytes)
	I0831 22:07:18.087800   21158 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/gcp-auth-ns.yaml -f /etc/kubernetes/addons/gcp-auth-service.yaml -f /etc/kubernetes/addons/gcp-auth-webhook.yaml
	I0831 22:07:18.454235   21158 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0831 22:07:18.457761   21158 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0831 22:07:18.457928   21158 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0831 22:07:18.954750   21158 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0831 22:07:18.959699   21158 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0831 22:07:18.960658   21158 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0831 22:07:19.452220   21158 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0831 22:07:19.457877   21158 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0831 22:07:19.458864   21158 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0831 22:07:19.564281   21158 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (2.342098058s)
	I0831 22:07:19.564321   21158 ssh_runner.go:235] Completed: sudo systemctl is-active --quiet service kubelet: (1.586444243s)
	I0831 22:07:19.564348   21158 system_svc.go:56] duration metric: took 1.586558784s WaitForService to wait for kubelet
	I0831 22:07:19.564387   21158 kubeadm.go:582] duration metric: took 14.805173221s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0831 22:07:19.564429   21158 node_conditions.go:102] verifying NodePressure condition ...
	I0831 22:07:19.566982   21158 node_conditions.go:122] node storage ephemeral capacity is 304681132Ki
	I0831 22:07:19.567013   21158 node_conditions.go:123] node cpu capacity is 8
	I0831 22:07:19.567048   21158 node_conditions.go:105] duration metric: took 2.611537ms to run NodePressure ...
	I0831 22:07:19.567062   21158 start.go:241] waiting for startup goroutines ...
	I0831 22:07:19.755158   21158 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/gcp-auth-ns.yaml -f /etc/kubernetes/addons/gcp-auth-service.yaml -f /etc/kubernetes/addons/gcp-auth-webhook.yaml: (1.667314473s)
	I0831 22:07:19.757301   21158 addons.go:475] Verifying addon gcp-auth=true in "addons-062019"
	I0831 22:07:19.758877   21158 out.go:177] * Verifying gcp-auth addon...
	I0831 22:07:19.761019   21158 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=gcp-auth" in ns "gcp-auth" ...
	I0831 22:07:19.763326   21158 kapi.go:86] Found 0 Pods for label selector kubernetes.io/minikube-addons=gcp-auth
	I0831 22:07:19.888915   21158 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0831 22:07:19.957690   21158 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0831 22:07:19.958095   21158 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0831 22:07:20.389121   21158 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0831 22:07:20.457128   21158 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0831 22:07:20.457644   21158 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0831 22:07:20.888410   21158 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0831 22:07:20.957911   21158 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0831 22:07:20.958816   21158 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0831 22:07:21.388327   21158 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0831 22:07:21.457294   21158 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0831 22:07:21.457802   21158 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0831 22:07:21.888811   21158 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0831 22:07:21.958623   21158 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0831 22:07:21.958993   21158 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0831 22:07:22.388978   21158 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0831 22:07:22.457414   21158 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0831 22:07:22.457698   21158 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0831 22:07:22.889569   21158 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0831 22:07:22.957529   21158 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0831 22:07:22.957966   21158 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0831 22:07:23.388400   21158 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0831 22:07:23.457474   21158 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0831 22:07:23.457940   21158 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0831 22:07:23.888935   21158 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0831 22:07:23.988898   21158 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0831 22:07:23.989151   21158 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0831 22:07:24.388479   21158 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0831 22:07:24.457583   21158 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0831 22:07:24.457718   21158 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0831 22:07:24.888366   21158 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0831 22:07:24.957515   21158 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0831 22:07:24.958123   21158 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0831 22:07:25.388818   21158 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0831 22:07:25.456655   21158 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0831 22:07:25.456833   21158 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0831 22:07:25.887866   21158 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0831 22:07:25.957001   21158 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0831 22:07:25.957479   21158 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0831 22:07:26.387365   21158 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0831 22:07:26.458035   21158 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0831 22:07:26.458236   21158 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0831 22:07:26.888234   21158 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0831 22:07:26.956989   21158 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0831 22:07:26.957603   21158 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0831 22:07:27.388208   21158 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0831 22:07:27.457097   21158 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0831 22:07:27.457587   21158 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0831 22:07:27.888365   21158 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0831 22:07:27.957564   21158 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0831 22:07:27.957951   21158 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0831 22:07:28.388263   21158 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0831 22:07:28.457118   21158 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0831 22:07:28.457491   21158 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0831 22:07:28.887604   21158 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0831 22:07:28.957040   21158 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0831 22:07:28.957266   21158 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0831 22:07:29.388377   21158 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0831 22:07:29.457389   21158 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0831 22:07:29.457634   21158 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0831 22:07:29.887813   21158 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0831 22:07:29.957043   21158 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0831 22:07:29.957350   21158 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0831 22:07:30.392058   21158 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0831 22:07:30.457436   21158 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0831 22:07:30.457793   21158 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0831 22:07:30.888282   21158 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0831 22:07:30.956637   21158 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0831 22:07:30.957182   21158 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0831 22:07:31.388571   21158 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0831 22:07:31.457854   21158 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0831 22:07:31.458898   21158 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0831 22:07:31.888736   21158 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0831 22:07:31.957989   21158 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0831 22:07:31.958201   21158 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0831 22:07:32.388333   21158 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0831 22:07:32.457530   21158 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0831 22:07:32.457918   21158 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0831 22:07:32.888494   21158 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0831 22:07:32.957631   21158 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0831 22:07:32.957871   21158 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0831 22:07:33.388754   21158 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0831 22:07:33.456594   21158 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0831 22:07:33.456787   21158 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0831 22:07:33.888067   21158 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0831 22:07:33.957076   21158 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0831 22:07:33.957388   21158 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0831 22:07:34.387558   21158 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0831 22:07:34.456586   21158 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0831 22:07:34.457058   21158 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0831 22:07:34.887856   21158 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0831 22:07:34.956642   21158 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0831 22:07:34.957065   21158 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0831 22:07:35.387794   21158 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0831 22:07:35.456884   21158 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0831 22:07:35.457071   21158 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0831 22:07:35.888433   21158 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0831 22:07:35.956482   21158 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0831 22:07:35.956727   21158 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0831 22:07:36.388416   21158 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0831 22:07:36.457149   21158 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0831 22:07:36.457754   21158 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0831 22:07:36.887768   21158 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0831 22:07:36.956919   21158 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0831 22:07:36.957138   21158 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0831 22:07:37.388737   21158 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0831 22:07:37.456690   21158 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0831 22:07:37.456947   21158 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0831 22:07:37.888748   21158 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0831 22:07:37.957291   21158 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0831 22:07:37.957431   21158 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0831 22:07:38.387834   21158 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0831 22:07:38.457436   21158 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0831 22:07:38.457868   21158 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0831 22:07:38.887672   21158 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0831 22:07:38.956743   21158 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0831 22:07:38.957121   21158 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0831 22:07:39.388217   21158 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0831 22:07:39.457116   21158 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0831 22:07:39.457493   21158 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0831 22:07:39.887633   21158 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0831 22:07:39.956739   21158 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0831 22:07:39.956981   21158 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0831 22:07:40.388511   21158 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0831 22:07:40.457630   21158 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0831 22:07:40.457821   21158 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0831 22:07:40.888364   21158 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0831 22:07:40.957570   21158 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0831 22:07:40.957723   21158 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0831 22:07:41.388532   21158 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0831 22:07:41.457381   21158 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0831 22:07:41.457613   21158 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0831 22:07:41.888906   21158 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0831 22:07:41.957253   21158 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0831 22:07:41.957511   21158 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0831 22:07:42.388473   21158 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0831 22:07:42.457735   21158 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0831 22:07:42.458202   21158 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0831 22:07:42.887619   21158 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0831 22:07:42.977181   21158 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0831 22:07:42.977653   21158 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0831 22:07:43.387616   21158 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0831 22:07:43.457515   21158 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0831 22:07:43.458144   21158 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0831 22:07:43.888610   21158 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0831 22:07:43.957267   21158 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0831 22:07:43.957549   21158 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0831 22:07:44.388402   21158 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0831 22:07:44.457385   21158 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0831 22:07:44.457942   21158 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0831 22:07:44.888527   21158 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0831 22:07:44.956606   21158 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0831 22:07:44.956736   21158 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0831 22:07:45.388628   21158 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0831 22:07:45.457164   21158 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0831 22:07:45.457477   21158 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0831 22:07:45.887990   21158 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0831 22:07:45.957462   21158 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0831 22:07:45.957605   21158 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0831 22:07:46.388565   21158 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0831 22:07:46.457611   21158 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0831 22:07:46.457942   21158 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0831 22:07:46.887563   21158 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0831 22:07:46.957815   21158 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0831 22:07:46.958081   21158 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0831 22:07:47.388377   21158 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0831 22:07:47.457750   21158 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0831 22:07:47.457916   21158 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0831 22:07:47.888420   21158 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0831 22:07:47.958219   21158 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0831 22:07:47.958413   21158 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0831 22:07:48.388372   21158 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0831 22:07:48.457400   21158 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0831 22:07:48.457858   21158 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0831 22:07:48.888803   21158 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0831 22:07:48.957744   21158 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0831 22:07:48.958031   21158 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0831 22:07:49.388087   21158 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0831 22:07:49.457668   21158 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0831 22:07:49.457881   21158 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0831 22:07:49.888127   21158 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0831 22:07:49.957492   21158 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0831 22:07:49.957764   21158 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0831 22:07:50.388434   21158 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0831 22:07:50.457542   21158 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0831 22:07:50.457916   21158 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0831 22:07:50.888148   21158 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0831 22:07:50.957354   21158 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0831 22:07:50.957465   21158 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0831 22:07:51.387387   21158 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0831 22:07:51.457761   21158 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0831 22:07:51.458039   21158 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0831 22:07:51.888931   21158 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0831 22:07:51.957931   21158 kapi.go:107] duration metric: took 35.004299167s to wait for kubernetes.io/minikube-addons=registry ...
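
Editor's note: the duration metric above closes the polling loop responsible for the long run of "current state: Pending" lines. The waiter re-lists pods for the label selector on an interval and returns once every match is Running and Ready. A condensed sketch of that loop, with a hypothetical listPods helper standing in for the real client-go List call:

package main

import (
	"fmt"
	"time"
)

type podState struct {
	Name  string
	Phase string // "Pending" or "Running"
	Ready bool
}

// listPods is a hypothetical stand-in for listing pods by label
// selector; here the two registry pods flip to Ready after a few polls.
func listPods(poll int) []podState {
	phase, ready := "Pending", false
	if poll >= 3 {
		phase, ready = "Running", true
	}
	return []podState{
		{"registry-6fb4cdfc84-gvcgq", phase, ready},
		{"registry-proxy-h7plr", phase, ready},
	}
}

func waitForPods(selector string, interval, timeout time.Duration) error {
	start := time.Now()
	for poll := 0; time.Since(start) < timeout; poll++ {
		allReady := true
		for _, p := range listPods(poll) {
			if p.Phase != "Running" || !p.Ready {
				// Matches the repeated kapi.go:96 lines in the log.
				fmt.Printf("waiting for pod %q, current state: %s\n", selector, p.Phase)
				allReady = false
				break
			}
		}
		if allReady {
			fmt.Printf("duration metric: took %v to wait for %s\n", time.Since(start), selector)
			return nil
		}
		time.Sleep(interval)
	}
	return fmt.Errorf("timed out waiting for %s", selector)
}

func main() {
	_ = waitForPods("kubernetes.io/minikube-addons=registry", 100*time.Millisecond, 30*time.Second)
}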
	I0831 22:07:51.958628   21158 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0831 22:07:52.388530   21158 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0831 22:07:52.456923   21158 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0831 22:07:52.888807   21158 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0831 22:07:52.958501   21158 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0831 22:07:53.390521   21158 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0831 22:07:53.457466   21158 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0831 22:07:53.888230   21158 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0831 22:07:53.957071   21158 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0831 22:07:54.387612   21158 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0831 22:07:54.456993   21158 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0831 22:07:54.887997   21158 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0831 22:07:54.957063   21158 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0831 22:07:55.387926   21158 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0831 22:07:55.457436   21158 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0831 22:07:55.888181   21158 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0831 22:07:55.957711   21158 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0831 22:07:56.387930   21158 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0831 22:07:56.457803   21158 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0831 22:07:56.888674   21158 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0831 22:07:56.958069   21158 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0831 22:07:57.388637   21158 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0831 22:07:57.457915   21158 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0831 22:07:57.888270   21158 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0831 22:07:57.957916   21158 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0831 22:07:58.388187   21158 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0831 22:07:58.457713   21158 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0831 22:07:58.887866   21158 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0831 22:07:58.957379   21158 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0831 22:07:59.388878   21158 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0831 22:07:59.457496   21158 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0831 22:07:59.888625   21158 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0831 22:07:59.957855   21158 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0831 22:08:00.389072   21158 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0831 22:08:00.457353   21158 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0831 22:08:00.888653   21158 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0831 22:08:00.957702   21158 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0831 22:08:01.388600   21158 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0831 22:08:01.458099   21158 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0831 22:08:01.887721   21158 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0831 22:08:01.957344   21158 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0831 22:08:02.388805   21158 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0831 22:08:02.457813   21158 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0831 22:08:02.888984   21158 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0831 22:08:02.957467   21158 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0831 22:08:03.390083   21158 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0831 22:08:03.489396   21158 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0831 22:08:03.888128   21158 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0831 22:08:03.957474   21158 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0831 22:08:04.387728   21158 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0831 22:08:04.457326   21158 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0831 22:08:04.887475   21158 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0831 22:08:04.957882   21158 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0831 22:08:05.388997   21158 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0831 22:08:05.458017   21158 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0831 22:08:05.889224   21158 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0831 22:08:05.957609   21158 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0831 22:08:06.387512   21158 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0831 22:08:06.456841   21158 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0831 22:08:06.888653   21158 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0831 22:08:06.958301   21158 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0831 22:08:07.388624   21158 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0831 22:08:07.458377   21158 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0831 22:08:07.887686   21158 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0831 22:08:07.957390   21158 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0831 22:08:08.389159   21158 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0831 22:08:08.458259   21158 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0831 22:08:08.887828   21158 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0831 22:08:08.957833   21158 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0831 22:08:09.388845   21158 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0831 22:08:09.457588   21158 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0831 22:08:09.888400   21158 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0831 22:08:09.957812   21158 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0831 22:08:10.388871   21158 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0831 22:08:10.457584   21158 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0831 22:08:10.952737   21158 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0831 22:08:10.959449   21158 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0831 22:08:11.388652   21158 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0831 22:08:11.457154   21158 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0831 22:08:11.888448   21158 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0831 22:08:11.957664   21158 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0831 22:08:12.388859   21158 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0831 22:08:12.457285   21158 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0831 22:08:12.888834   21158 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0831 22:08:12.958354   21158 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0831 22:08:13.388689   21158 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0831 22:08:13.456953   21158 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0831 22:08:13.888484   21158 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0831 22:08:13.958166   21158 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0831 22:08:14.389089   21158 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0831 22:08:14.457410   21158 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0831 22:08:14.888189   21158 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0831 22:08:14.958084   21158 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0831 22:08:15.387828   21158 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0831 22:08:15.457300   21158 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0831 22:08:15.888494   21158 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0831 22:08:15.958335   21158 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0831 22:08:16.388459   21158 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0831 22:08:16.458176   21158 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0831 22:08:16.888295   21158 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0831 22:08:16.957946   21158 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0831 22:08:17.389070   21158 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0831 22:08:17.456879   21158 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0831 22:08:17.888397   21158 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0831 22:08:17.957554   21158 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0831 22:08:18.388206   21158 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0831 22:08:18.457858   21158 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0831 22:08:18.888906   21158 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0831 22:08:18.958269   21158 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0831 22:08:19.454007   21158 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0831 22:08:19.458241   21158 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0831 22:08:19.888462   21158 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0831 22:08:19.958340   21158 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0831 22:08:20.388751   21158 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0831 22:08:20.488072   21158 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0831 22:08:20.888967   21158 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0831 22:08:20.958284   21158 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0831 22:08:21.452871   21158 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0831 22:08:21.459024   21158 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0831 22:08:21.888750   21158 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0831 22:08:21.958471   21158 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0831 22:08:22.387802   21158 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0831 22:08:22.457874   21158 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0831 22:08:22.888211   21158 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0831 22:08:22.957792   21158 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0831 22:08:23.388191   21158 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0831 22:08:23.457582   21158 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0831 22:08:23.888965   21158 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0831 22:08:23.957445   21158 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0831 22:08:24.387737   21158 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0831 22:08:24.457004   21158 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0831 22:08:24.888801   21158 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0831 22:08:24.957041   21158 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0831 22:08:25.388861   21158 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0831 22:08:25.457997   21158 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0831 22:08:25.916411   21158 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0831 22:08:26.018723   21158 kapi.go:107] duration metric: took 1m9.065085269s to wait for app.kubernetes.io/name=ingress-nginx ...
	I0831 22:08:26.397171   21158 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0831 22:08:26.887482   21158 kapi.go:107] duration metric: took 1m9.003471513s to wait for kubernetes.io/minikube-addons=csi-hostpath-driver ...
	I0831 22:08:42.264190   21158 kapi.go:86] Found 1 Pods for label selector kubernetes.io/minikube-addons=gcp-auth
	I0831 22:08:42.264211   21158 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0831 22:08:42.764126   21158 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0831 22:08:43.263824   21158 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0831 22:08:43.765043   21158 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0831 22:08:44.263632   21158 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0831 22:08:44.764713   21158 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0831 22:08:45.263615   21158 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0831 22:08:45.764505   21158 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0831 22:08:46.264343   21158 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0831 22:08:46.764220   21158 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0831 22:08:47.264062   21158 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0831 22:08:47.764125   21158 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0831 22:08:48.264272   21158 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0831 22:08:48.764077   21158 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0831 22:08:49.264344   21158 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0831 22:08:49.764344   21158 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0831 22:08:50.263869   21158 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0831 22:08:50.764641   21158 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0831 22:08:51.264272   21158 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0831 22:08:51.764540   21158 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0831 22:08:52.265039   21158 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0831 22:08:52.763778   21158 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0831 22:08:53.264535   21158 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0831 22:08:53.764593   21158 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0831 22:08:54.264294   21158 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0831 22:08:54.764120   21158 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0831 22:08:55.264373   21158 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0831 22:08:55.764216   21158 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0831 22:08:56.264258   21158 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0831 22:08:56.763985   21158 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0831 22:08:57.263778   21158 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0831 22:08:57.764818   21158 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0831 22:08:58.265043   21158 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0831 22:08:58.763875   21158 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0831 22:08:59.263654   21158 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0831 22:08:59.764732   21158 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0831 22:09:00.264169   21158 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0831 22:09:00.764015   21158 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0831 22:09:01.264385   21158 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0831 22:09:01.764686   21158 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0831 22:09:02.264420   21158 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0831 22:09:02.764141   21158 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0831 22:09:03.263896   21158 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0831 22:09:03.764880   21158 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0831 22:09:04.264864   21158 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0831 22:09:04.765153   21158 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0831 22:09:05.263857   21158 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0831 22:09:05.765285   21158 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0831 22:09:06.264008   21158 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0831 22:09:06.763926   21158 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0831 22:09:07.263744   21158 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0831 22:09:07.764770   21158 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0831 22:09:08.264904   21158 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0831 22:09:08.764390   21158 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0831 22:09:09.263967   21158 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0831 22:09:09.764023   21158 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0831 22:09:10.263774   21158 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0831 22:09:10.764682   21158 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0831 22:09:11.263749   21158 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0831 22:09:11.764975   21158 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0831 22:09:12.263851   21158 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0831 22:09:12.764400   21158 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0831 22:09:13.264043   21158 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0831 22:09:13.764191   21158 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0831 22:09:14.264041   21158 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0831 22:09:14.764152   21158 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0831 22:09:15.263803   21158 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0831 22:09:15.764910   21158 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0831 22:09:16.263966   21158 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0831 22:09:16.763826   21158 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0831 22:09:17.264579   21158 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0831 22:09:17.764742   21158 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0831 22:09:18.264883   21158 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0831 22:09:18.764988   21158 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0831 22:09:19.266257   21158 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0831 22:09:19.764122   21158 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0831 22:09:20.264102   21158 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0831 22:09:20.764117   21158 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0831 22:09:21.264339   21158 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0831 22:09:21.764259   21158 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0831 22:09:22.264107   21158 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0831 22:09:22.764634   21158 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0831 22:09:23.264611   21158 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0831 22:09:23.764555   21158 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0831 22:09:24.264427   21158 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0831 22:09:24.764172   21158 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0831 22:09:25.263908   21158 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0831 22:09:25.763921   21158 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0831 22:09:26.263891   21158 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0831 22:09:26.764157   21158 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0831 22:09:27.263993   21158 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0831 22:09:27.764030   21158 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0831 22:09:28.264238   21158 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0831 22:09:28.764505   21158 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0831 22:09:29.264603   21158 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0831 22:09:29.764579   21158 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0831 22:09:30.264584   21158 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0831 22:09:30.764308   21158 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0831 22:09:31.264365   21158 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0831 22:09:31.764595   21158 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0831 22:09:32.264743   21158 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0831 22:09:32.764390   21158 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0831 22:09:33.263816   21158 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0831 22:09:33.765090   21158 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0831 22:09:34.264063   21158 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0831 22:09:34.764057   21158 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0831 22:09:35.263792   21158 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0831 22:09:35.765037   21158 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0831 22:09:36.264053   21158 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0831 22:09:36.763771   21158 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0831 22:09:37.264557   21158 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0831 22:09:37.764412   21158 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0831 22:09:38.264445   21158 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0831 22:09:38.764566   21158 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0831 22:09:39.264717   21158 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0831 22:09:39.764799   21158 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0831 22:09:40.264749   21158 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0831 22:09:40.764760   21158 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0831 22:09:41.263911   21158 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0831 22:09:41.764000   21158 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0831 22:09:42.264350   21158 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0831 22:09:42.764066   21158 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0831 22:09:43.263732   21158 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0831 22:09:43.765165   21158 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0831 22:09:44.263845   21158 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0831 22:09:44.765159   21158 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0831 22:09:45.263758   21158 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0831 22:09:45.764982   21158 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0831 22:09:46.264627   21158 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0831 22:09:46.764723   21158 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0831 22:09:47.264620   21158 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0831 22:09:47.765405   21158 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0831 22:09:48.264554   21158 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0831 22:09:48.764572   21158 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0831 22:09:49.265163   21158 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0831 22:09:49.764036   21158 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0831 22:09:50.264767   21158 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0831 22:09:50.764844   21158 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0831 22:09:51.264476   21158 kapi.go:107] duration metric: took 2m31.503451532s to wait for kubernetes.io/minikube-addons=gcp-auth ...
	I0831 22:09:51.266205   21158 out.go:177] * Your GCP credentials will now be mounted into every pod created in the addons-062019 cluster.
	I0831 22:09:51.267541   21158 out.go:177] * If you don't want your credentials mounted into a specific pod, add a label with the `gcp-auth-skip-secret` key to your pod configuration.
	I0831 22:09:51.268907   21158 out.go:177] * If you want existing pods to be mounted with credentials, either recreate them or rerun addons enable with --refresh.
	I0831 22:09:51.270229   21158 out.go:177] * Enabled addons: nvidia-device-plugin, ingress-dns, storage-provisioner, cloud-spanner, default-storageclass, volcano, helm-tiller, metrics-server, inspektor-gadget, yakd, storage-provisioner-rancher, volumesnapshots, registry, ingress, csi-hostpath-driver, gcp-auth
	I0831 22:09:51.271374   21158 addons.go:510] duration metric: took 2m46.512120707s for enable addons: enabled=[nvidia-device-plugin ingress-dns storage-provisioner cloud-spanner default-storageclass volcano helm-tiller metrics-server inspektor-gadget yakd storage-provisioner-rancher volumesnapshots registry ingress csi-hostpath-driver gcp-auth]
	I0831 22:09:51.271417   21158 start.go:246] waiting for cluster config update ...
	I0831 22:09:51.271443   21158 start.go:255] writing updated cluster config ...
	I0831 22:09:51.271685   21158 ssh_runner.go:195] Run: rm -f paused
	I0831 22:09:51.320621   21158 start.go:600] kubectl: 1.31.0, cluster: 1.31.0 (minor skew: 0)
	I0831 22:09:51.322214   21158 out.go:177] * Done! kubectl is now configured to use "addons-062019" cluster and "default" namespace by default
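	The kapi.go:96/kapi.go:107 entries that dominate the log above record minikube's addon wait pattern: list pods matching a label selector on a short interval until one reports Running, then emit the total wait as a duration metric. The out.go messages just before "Done!" also describe the gcp-auth-skip-secret label for opting a pod out of credential injection. Below is a minimal client-go sketch of both, not minikube's actual implementation: the 500ms interval and 10-minute cap are assumptions read off the log timestamps, and the pod name "no-creds-demo" is hypothetical.

	package main

	import (
		"context"
		"fmt"
		"time"

		corev1 "k8s.io/api/core/v1"
		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/apimachinery/pkg/util/wait"
		"k8s.io/client-go/kubernetes"
		"k8s.io/client-go/tools/clientcmd"
	)

	// waitForLabel polls pods matching selector until one reports Running,
	// mirroring the repeated kapi.go:96 "current state: Pending" lines above.
	// Interval and timeout are assumptions inferred from the log, not
	// minikube's real constants.
	func waitForLabel(ctx context.Context, c kubernetes.Interface, ns, selector string) error {
		return wait.PollUntilContextTimeout(ctx, 500*time.Millisecond, 10*time.Minute, true,
			func(ctx context.Context) (bool, error) {
				pods, err := c.CoreV1().Pods(ns).List(ctx, metav1.ListOptions{LabelSelector: selector})
				if err != nil {
					return false, nil // tolerate transient API errors and keep polling
				}
				for _, p := range pods.Items {
					if p.Status.Phase == corev1.PodRunning {
						return true, nil
					}
				}
				return false, nil
			})
	}

	func main() {
		cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
		if err != nil {
			panic(err)
		}
		client, err := kubernetes.NewForConfig(cfg)
		if err != nil {
			panic(err)
		}

		// Per the out.go message above, a pod carrying the gcp-auth-skip-secret
		// label is skipped by the gcp-auth webhook and gets no mounted credentials.
		pod := &corev1.Pod{
			ObjectMeta: metav1.ObjectMeta{
				Name: "no-creds-demo", // hypothetical example pod
				Labels: map[string]string{
					"gcp-auth-skip-secret": "true",
					"app":                  "no-creds-demo",
				},
			},
			Spec: corev1.PodSpec{
				Containers: []corev1.Container{{
					Name:    "main",
					Image:   "busybox",
					Command: []string{"sleep", "3600"},
				}},
			},
		}
		if _, err := client.CoreV1().Pods("default").Create(context.TODO(), pod, metav1.CreateOptions{}); err != nil {
			panic(err)
		}

		start := time.Now()
		if err := waitForLabel(context.TODO(), client, "default", "app=no-creds-demo"); err != nil {
			panic(err)
		}
		fmt.Printf("took %s to wait for app=no-creds-demo\n", time.Since(start))
	}

	Pointed at the addons-062019 kubeconfig context, a loop like this would produce wait/duration output of the same shape as the kapi.go lines above.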
	
	
	==> Docker <==
	Aug 31 22:19:26 addons-062019 dockerd[1339]: time="2024-08-31T22:19:26.994785502Z" level=info msg="Attempting next endpoint for pull after error: Head \"https://gcr.io/v2/k8s-minikube/busybox/manifests/latest\": unauthorized: authentication failed"
	Aug 31 22:19:26 addons-062019 dockerd[1339]: time="2024-08-31T22:19:26.996835999Z" level=error msg="Handler for POST /v1.43/images/create returned error: Head \"https://gcr.io/v2/k8s-minikube/busybox/manifests/latest\": unauthorized: authentication failed"
	Aug 31 22:19:33 addons-062019 cri-dockerd[1604]: time="2024-08-31T22:19:33Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/83bc162f801b094f1d9307bec1090e13d2f1a04b398c8fa020403f078f8e6bb8/resolv.conf as [nameserver 10.96.0.10 search default.svc.cluster.local svc.cluster.local cluster.local europe-west1-b.c.k8s-minikube.internal c.k8s-minikube.internal google.internal options ndots:5]"
	Aug 31 22:19:34 addons-062019 cri-dockerd[1604]: time="2024-08-31T22:19:34Z" level=info msg="Stop pulling image docker.io/nginx:latest: Status: Image is up to date for nginx:latest"
	Aug 31 22:19:39 addons-062019 dockerd[1339]: time="2024-08-31T22:19:39.856643117Z" level=info msg="ignoring event" container=524268bf8e8f75ed6e38d4e7f4ae22c5e51abf2ca5dd0d595d3d71bfb62de61b module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Aug 31 22:19:39 addons-062019 dockerd[1339]: time="2024-08-31T22:19:39.997848396Z" level=info msg="ignoring event" container=83bc162f801b094f1d9307bec1090e13d2f1a04b398c8fa020403f078f8e6bb8 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Aug 31 22:19:41 addons-062019 dockerd[1339]: time="2024-08-31T22:19:41.454490021Z" level=info msg="ignoring event" container=c374f63034969b2524322c33a590167bff98c1190478dd859af9aba70b890d7a module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Aug 31 22:19:41 addons-062019 dockerd[1339]: time="2024-08-31T22:19:41.557408132Z" level=info msg="ignoring event" container=5579b1c2eb62a4ee916c2e05ef0b013568da82eb9c1a942e906f85fa7c3c592e module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Aug 31 22:19:41 addons-062019 dockerd[1339]: time="2024-08-31T22:19:41.565838886Z" level=info msg="ignoring event" container=c835799197b6a8716a490eb6eb008e89ddcf1374b7e89543b9b24d3e6efa00d2 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Aug 31 22:19:41 addons-062019 dockerd[1339]: time="2024-08-31T22:19:41.567430973Z" level=info msg="ignoring event" container=9a9f3f819a1dd16aee3ac22c2743ae0ca7102904ef88f058a7506c8b3d178384 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Aug 31 22:19:41 addons-062019 dockerd[1339]: time="2024-08-31T22:19:41.650531124Z" level=info msg="ignoring event" container=a48587137ee9374fb459ad5a86a65cfc1604f202f346ce26088282bee19bedc9 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Aug 31 22:19:41 addons-062019 dockerd[1339]: time="2024-08-31T22:19:41.650605823Z" level=info msg="ignoring event" container=ed0a9325da01a015c7a280fd3fd62cc2c6a2ba0e916e02f0fad9ee7cb6e9aeb9 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Aug 31 22:19:41 addons-062019 dockerd[1339]: time="2024-08-31T22:19:41.651079756Z" level=info msg="ignoring event" container=f92c6994c98a3ebd291f0f872d941016ef39af30083a700f52626ce9bff1050d module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Aug 31 22:19:41 addons-062019 dockerd[1339]: time="2024-08-31T22:19:41.664086247Z" level=info msg="ignoring event" container=0713342a3a7d9ac507ef1cc3759b80ff1ce4eaa542d64e5cab3e8483c54206bc module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Aug 31 22:19:41 addons-062019 dockerd[1339]: time="2024-08-31T22:19:41.889050430Z" level=info msg="ignoring event" container=b1801ab12b79394001e4e999e7cd245f178784c0914bf4b16709daa10c75282c module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Aug 31 22:19:41 addons-062019 dockerd[1339]: time="2024-08-31T22:19:41.971468490Z" level=info msg="ignoring event" container=ca9e9443debe453a64d9fd33cd5807e35212ad02dcdbe2c3695744e33057a4e7 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Aug 31 22:19:41 addons-062019 dockerd[1339]: time="2024-08-31T22:19:41.982408002Z" level=info msg="ignoring event" container=072e5e04caf7b123a0eca52982f8a97d26db99f918e73e9bde3a1afa37f06966 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Aug 31 22:19:44 addons-062019 dockerd[1339]: time="2024-08-31T22:19:44.683503349Z" level=info msg="ignoring event" container=973c62fea2e196d858955c70a3904dc05a6987ff66697c60b2b2aff3df849e5c module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Aug 31 22:19:45 addons-062019 dockerd[1339]: time="2024-08-31T22:19:45.011836545Z" level=info msg="Container failed to exit within 30s of signal 15 - using the force" container=7a9aed6027d2a2ad85f199a21308ed36f94f4867c1aaa12579d852518233109e
	Aug 31 22:19:45 addons-062019 dockerd[1339]: time="2024-08-31T22:19:45.033052670Z" level=info msg="ignoring event" container=7a9aed6027d2a2ad85f199a21308ed36f94f4867c1aaa12579d852518233109e module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Aug 31 22:19:45 addons-062019 dockerd[1339]: time="2024-08-31T22:19:45.182361406Z" level=info msg="ignoring event" container=60bf11577d020b479bc772ad1f015fe2efbccfb0cdbbadafed74cfebbf7bfddd module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Aug 31 22:19:45 addons-062019 dockerd[1339]: time="2024-08-31T22:19:45.253922176Z" level=info msg="ignoring event" container=f6ea7a180980d9461439762ff0364002c6c50b8194121d5fc3d36db8fda37ab2 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Aug 31 22:19:45 addons-062019 dockerd[1339]: time="2024-08-31T22:19:45.265373354Z" level=info msg="ignoring event" container=5501940ec0809f13dba5092449d3150ed8a38bffa52300207e55e0d11eb9b0d3 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Aug 31 22:19:45 addons-062019 dockerd[1339]: time="2024-08-31T22:19:45.391799209Z" level=info msg="ignoring event" container=21677d2bef612cbab3161529209d498026626b04c2db12f58cb2cac68b4ac888 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Aug 31 22:19:45 addons-062019 dockerd[1339]: time="2024-08-31T22:19:45.475556403Z" level=info msg="ignoring event" container=20e21cb718a208d40e2693d52c633cc1e46b356ba6a548d2675391b4f1d2721e module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                                     CREATED              STATE               NAME                         ATTEMPT             POD ID              POD
	3fa6db8218f1a       a416a98b71e22                                                                                                             31 seconds ago       Exited              helper-pod                   0                   f576d1829ec06       helper-pod-delete-pvc-20dcccd8-e7fe-4ed6-82bc-9f7db35d0a45
	87aa5ce08d031       busybox@sha256:82742949a3709938cbeb9cec79f5eaf3e48b255389f2dcedf2de29ef96fd841c                                           35 seconds ago       Exited              busybox                      0                   3a6078fe68945       test-local-path
	d15400b5e3d76       busybox@sha256:3fbc632167424a6d997e74f52b878d7cc478225cffac6bc977eedfe51c7f4e79                                           41 seconds ago       Exited              helper-pod                   0                   b71801a5d33ca       helper-pod-create-pvc-20dcccd8-e7fe-4ed6-82bc-9f7db35d0a45
	b1adb372963c8       ghcr.io/headlamp-k8s/headlamp@sha256:899d106eeb55b0afc4ee6e51c03bc4418de0bd0e79c39744d4d0d751aae6a971                     43 seconds ago       Running             headlamp                     0                   fe724126f2f1a       headlamp-57fb76fcdb-6bdgb
	326882d6d52f6       kicbase/echo-server@sha256:127ac38a2bb9537b7f252addff209ea6801edcac8a92c8b1104dacd66a583ed6                               50 seconds ago       Running             hello-world-app              0                   c0c8c03a5cf54       hello-world-app-55bf9c44b4-x8t8x
	db7d0beeee49b       nginx@sha256:c04c18adc2a407740a397c8407c011fc6c90026a9b65cceddef7ae5484360158                                             About a minute ago   Running             nginx                        0                   6da2c2ef06b0e       nginx
	c1cfddd5d13d3       gcr.io/k8s-minikube/gcp-auth-webhook@sha256:e6c5b3bc32072ea370d34c27836efd11b3519d25bd444c2a8efc339cff0e20fb              9 minutes ago        Running             gcp-auth                     0                   ed3a00eb91237       gcp-auth-89d5ffd79-7lnx6
	d57c465ca0366       registry.k8s.io/sig-storage/snapshot-controller@sha256:823c75d0c45d1427f6d850070956d9ca657140a7bbf828381541d1d808475280   11 minutes ago       Running             volume-snapshot-controller   0                   0fd7fe104b32d       snapshot-controller-56fcc65765-bd79z
	6373f5a0110de       registry.k8s.io/sig-storage/snapshot-controller@sha256:823c75d0c45d1427f6d850070956d9ca657140a7bbf828381541d1d808475280   11 minutes ago       Running             volume-snapshot-controller   0                   0f4d0206e3ed7       snapshot-controller-56fcc65765-qhqhv
	5c0c260e16178       6e38f40d628db                                                                                                             12 minutes ago       Running             storage-provisioner          0                   ee6fa79562425       storage-provisioner
	4849dd90796f8       cbb01a7bd410d                                                                                                             12 minutes ago       Running             coredns                      0                   5c1698fed377c       coredns-6f6b679f8f-4lbvv
	fbd2270998c13       ad83b2ca7b09e                                                                                                             12 minutes ago       Running             kube-proxy                   0                   5c25dae1c4629       kube-proxy-fkhrj
	f16f281ad697d       2e96e5913fc06                                                                                                             12 minutes ago       Running             etcd                         0                   c6a6b17f58a04       etcd-addons-062019
	875cd63278f37       604f5db92eaa8                                                                                                             12 minutes ago       Running             kube-apiserver               0                   551fdc884635f       kube-apiserver-addons-062019
	41a1f48dbf7e6       045733566833c                                                                                                             12 minutes ago       Running             kube-controller-manager      0                   463dc2aef6530       kube-controller-manager-addons-062019
	890186fd639ae       1766f54c897f0                                                                                                             12 minutes ago       Running             kube-scheduler               0                   babf76cbca724       kube-scheduler-addons-062019
	
	
	==> coredns [4849dd90796f] <==
	[INFO] 10.244.0.7:53233 - 53501 "A IN registry.kube-system.svc.cluster.local.svc.cluster.local. udp 74 false 512" NXDOMAIN qr,aa,rd 167 0.000091428s
	[INFO] 10.244.0.7:40829 - 34761 "AAAA IN registry.kube-system.svc.cluster.local.cluster.local. udp 70 false 512" NXDOMAIN qr,aa,rd 163 0.000041127s
	[INFO] 10.244.0.7:40829 - 52687 "A IN registry.kube-system.svc.cluster.local.cluster.local. udp 70 false 512" NXDOMAIN qr,aa,rd 163 0.000063947s
	[INFO] 10.244.0.7:53517 - 28700 "A IN registry.kube-system.svc.cluster.local.europe-west1-b.c.k8s-minikube.internal. udp 95 false 512" NXDOMAIN qr,rd,ra 95 0.003124759s
	[INFO] 10.244.0.7:53517 - 49411 "AAAA IN registry.kube-system.svc.cluster.local.europe-west1-b.c.k8s-minikube.internal. udp 95 false 512" NXDOMAIN qr,rd,ra 95 0.004133257s
	[INFO] 10.244.0.7:34447 - 58055 "A IN registry.kube-system.svc.cluster.local.c.k8s-minikube.internal. udp 80 false 512" NXDOMAIN qr,rd,ra 80 0.003563301s
	[INFO] 10.244.0.7:34447 - 34506 "AAAA IN registry.kube-system.svc.cluster.local.c.k8s-minikube.internal. udp 80 false 512" NXDOMAIN qr,rd,ra 80 0.003658015s
	[INFO] 10.244.0.7:33356 - 47553 "A IN registry.kube-system.svc.cluster.local.google.internal. udp 72 false 512" NXDOMAIN qr,rd,ra 72 0.004621099s
	[INFO] 10.244.0.7:33356 - 48892 "AAAA IN registry.kube-system.svc.cluster.local.google.internal. udp 72 false 512" NXDOMAIN qr,rd,ra 72 0.005015944s
	[INFO] 10.244.0.7:46037 - 21616 "AAAA IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 149 0.000055085s
	[INFO] 10.244.0.7:46037 - 638 "A IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 110 0.00007634s
	[INFO] 10.244.0.26:46334 - 52217 "AAAA IN storage.googleapis.com.gcp-auth.svc.cluster.local. udp 78 false 1232" NXDOMAIN qr,aa,rd 160 0.000365643s
	[INFO] 10.244.0.26:33969 - 51268 "A IN storage.googleapis.com.gcp-auth.svc.cluster.local. udp 78 false 1232" NXDOMAIN qr,aa,rd 160 0.000428319s
	[INFO] 10.244.0.26:42727 - 39095 "A IN storage.googleapis.com.svc.cluster.local. udp 69 false 1232" NXDOMAIN qr,aa,rd 151 0.000134876s
	[INFO] 10.244.0.26:48684 - 48726 "AAAA IN storage.googleapis.com.svc.cluster.local. udp 69 false 1232" NXDOMAIN qr,aa,rd 151 0.000185944s
	[INFO] 10.244.0.26:48298 - 51572 "AAAA IN storage.googleapis.com.cluster.local. udp 65 false 1232" NXDOMAIN qr,aa,rd 147 0.00010291s
	[INFO] 10.244.0.26:54679 - 32059 "A IN storage.googleapis.com.cluster.local. udp 65 false 1232" NXDOMAIN qr,aa,rd 147 0.000153534s
	[INFO] 10.244.0.26:55510 - 35919 "A IN storage.googleapis.com.europe-west1-b.c.k8s-minikube.internal. udp 90 false 1232" NXDOMAIN qr,rd,ra 79 0.007150841s
	[INFO] 10.244.0.26:34721 - 48137 "AAAA IN storage.googleapis.com.europe-west1-b.c.k8s-minikube.internal. udp 90 false 1232" NXDOMAIN qr,rd,ra 79 0.007463103s
	[INFO] 10.244.0.26:55950 - 21091 "A IN storage.googleapis.com.c.k8s-minikube.internal. udp 75 false 1232" NXDOMAIN qr,rd,ra 64 0.004999863s
	[INFO] 10.244.0.26:44117 - 30057 "AAAA IN storage.googleapis.com.c.k8s-minikube.internal. udp 75 false 1232" NXDOMAIN qr,rd,ra 64 0.005267922s
	[INFO] 10.244.0.26:58815 - 14686 "A IN storage.googleapis.com.google.internal. udp 67 false 1232" NXDOMAIN qr,rd,ra 56 0.00389499s
	[INFO] 10.244.0.26:33164 - 12104 "AAAA IN storage.googleapis.com.google.internal. udp 67 false 1232" NXDOMAIN qr,rd,ra 56 0.005247348s
	[INFO] 10.244.0.26:39718 - 15975 "AAAA IN storage.googleapis.com. udp 51 false 1232" NOERROR qr,rd,ra 140 0.001576627s
	[INFO] 10.244.0.26:55154 - 715 "A IN storage.googleapis.com. udp 51 false 1232" NOERROR qr,rd,ra 116 0.001673367s
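	The long run of NXDOMAIN answers above is ordinary resolv.conf search-list expansion: the pod's resolv.conf (rewritten by cri-dockerd earlier in the Docker log) carries six search domains and options ndots:5, so a name such as registry.kube-system.svc.cluster.local (four dots, fewer than five) is tried with every search suffix appended before being tried verbatim. A small illustrative sketch of that resolver behavior follows; it reproduces the query sequence coredns logged but is not the actual glibc/musl resolver code.

	package main

	import (
		"fmt"
		"strings"
	)

	// expand mimics resolv.conf search-list behavior: with options ndots:N,
	// a name containing fewer than N dots is tried with each search suffix
	// first, then as-is -- the candidate order behind the NXDOMAIN run above.
	func expand(name string, search []string, ndots int) []string {
		var out []string
		if strings.Count(name, ".") < ndots {
			for _, s := range search {
				out = append(out, name+"."+s)
			}
		}
		return append(out, name)
	}

	func main() {
		// search domains copied from the rewritten resolv.conf in the Docker log
		search := []string{
			"default.svc.cluster.local",
			"svc.cluster.local",
			"cluster.local",
			"europe-west1-b.c.k8s-minikube.internal",
			"c.k8s-minikube.internal",
			"google.internal",
		}
		for _, q := range expand("registry.kube-system.svc.cluster.local", search, 5) {
			fmt.Println(q) // each candidate except the last drew an NXDOMAIN above
		}
	}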
	
	
	==> describe nodes <==
	Name:               addons-062019
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=addons-062019
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=8ab9a20c866aaad18bea6fac47c5d146303457d2
	                    minikube.k8s.io/name=addons-062019
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_08_31T22_06_59_0700
	                    minikube.k8s.io/version=v1.33.1
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	                    topology.hostpath.csi/node=addons-062019
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/cri-dockerd.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Sat, 31 Aug 2024 22:06:56 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  addons-062019
	  AcquireTime:     <unset>
	  RenewTime:       Sat, 31 Aug 2024 22:19:44 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Sat, 31 Aug 2024 22:19:34 +0000   Sat, 31 Aug 2024 22:06:55 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Sat, 31 Aug 2024 22:19:34 +0000   Sat, 31 Aug 2024 22:06:55 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Sat, 31 Aug 2024 22:19:34 +0000   Sat, 31 Aug 2024 22:06:55 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Sat, 31 Aug 2024 22:19:34 +0000   Sat, 31 Aug 2024 22:06:57 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.49.2
	  Hostname:    addons-062019
	Capacity:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32859320Ki
	  pods:               110
	Allocatable:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32859320Ki
	  pods:               110
	System Info:
	  Machine ID:                 2ca2e045a0634f37a9af25e128d79988
	  System UUID:                0dfa9fce-0710-4e20-b6fe-d1ed3f4d2808
	  Boot ID:                    42f24c18-34a3-41a9-b4d5-869da0da75be
	  Kernel Version:             5.15.0-1067-gcp
	  OS Image:                   Ubuntu 22.04.4 LTS
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  docker://27.2.0
	  Kubelet Version:            v1.31.0
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (14 in total)
	  Namespace                   Name                                     CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                     ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         9m14s
	  default                     hello-world-app-55bf9c44b4-x8t8x         0 (0%)        0 (0%)      0 (0%)           0 (0%)         53s
	  default                     nginx                                    0 (0%)        0 (0%)      0 (0%)           0 (0%)         66s
	  gcp-auth                    gcp-auth-89d5ffd79-7lnx6                 0 (0%)        0 (0%)      0 (0%)           0 (0%)         11m
	  headlamp                    headlamp-57fb76fcdb-6bdgb                0 (0%)        0 (0%)      0 (0%)           0 (0%)         47s
	  kube-system                 coredns-6f6b679f8f-4lbvv                 100m (1%)     0 (0%)      70Mi (0%)        170Mi (0%)     12m
	  kube-system                 etcd-addons-062019                       100m (1%)     0 (0%)      100Mi (0%)       0 (0%)         12m
	  kube-system                 kube-apiserver-addons-062019             250m (3%)     0 (0%)      0 (0%)           0 (0%)         12m
	  kube-system                 kube-controller-manager-addons-062019    200m (2%)     0 (0%)      0 (0%)           0 (0%)         12m
	  kube-system                 kube-proxy-fkhrj                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         12m
	  kube-system                 kube-scheduler-addons-062019             100m (1%)     0 (0%)      0 (0%)           0 (0%)         12m
	  kube-system                 snapshot-controller-56fcc65765-bd79z     0 (0%)        0 (0%)      0 (0%)           0 (0%)         12m
	  kube-system                 snapshot-controller-56fcc65765-qhqhv     0 (0%)        0 (0%)      0 (0%)           0 (0%)         12m
	  kube-system                 storage-provisioner                      0 (0%)        0 (0%)      0 (0%)           0 (0%)         12m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (9%)   0 (0%)
	  memory             170Mi (0%)  170Mi (0%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type     Reason                   Age                From             Message
	  ----     ------                   ----               ----             -------
	  Normal   Starting                 12m                kube-proxy       
	  Normal   NodeHasSufficientMemory  12m (x8 over 12m)  kubelet          Node addons-062019 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    12m (x7 over 12m)  kubelet          Node addons-062019 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     12m (x7 over 12m)  kubelet          Node addons-062019 status is now: NodeHasSufficientPID
	  Normal   NodeAllocatableEnforced  12m                kubelet          Updated Node Allocatable limit across pods
	  Normal   Starting                 12m                kubelet          Starting kubelet.
	  Warning  CgroupV1                 12m                kubelet          Cgroup v1 support is in maintenance mode, please migrate to Cgroup v2.
	  Normal   NodeAllocatableEnforced  12m                kubelet          Updated Node Allocatable limit across pods
	  Normal   NodeHasSufficientMemory  12m                kubelet          Node addons-062019 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    12m                kubelet          Node addons-062019 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     12m                kubelet          Node addons-062019 status is now: NodeHasSufficientPID
	  Normal   RegisteredNode           12m                node-controller  Node addons-062019 event: Registered Node addons-062019 in Controller
	
	
	==> dmesg <==
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff 32 f5 4f 17 b6 29 08 06
	[  +1.909587] IPv4: martian source 10.244.0.1 from 10.244.0.17, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff b6 b2 ba b2 b8 08 08 06
	[  +2.322268] IPv4: martian source 10.244.0.1 from 10.244.0.18, on dev eth0
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff be 26 2b 4c d7 53 08 06
	[  +5.187437] IPv4: martian source 10.244.0.1 from 10.244.0.19, on dev eth0
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff ce e4 4f 39 09 1b 08 06
	[  +0.926330] IPv4: martian source 10.244.0.1 from 10.244.0.20, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff fa 59 85 0c 0b 5e 08 06
	[  +0.089599] IPv4: martian source 10.244.0.1 from 10.244.0.21, on dev eth0
	[  +0.000008] ll header: 00000000: ff ff ff ff ff ff 36 c3 1c 1c 91 21 08 06
	[  +5.400519] IPv4: martian source 10.244.0.1 from 10.244.0.22, on dev eth0
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff 7e 6e 1a cb 7a 1e 08 06
	[Aug31 22:09] IPv4: martian source 10.244.0.1 from 10.244.0.24, on dev eth0
	[  +0.000013] ll header: 00000000: ff ff ff ff ff ff ee 46 5c c6 a8 f3 08 06
	[  +0.326102] IPv4: martian source 10.244.0.1 from 10.244.0.25, on dev eth0
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff 0a 07 c6 78 26 0e 08 06
	[ +27.045193] IPv4: martian source 10.244.0.1 from 10.244.0.26, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff da be 0b cd 89 71 08 06
	[  +0.000543] IPv4: martian source 10.244.0.26 from 10.244.0.3, on dev eth0
	[  +0.000004] ll header: 00000000: ff ff ff ff ff ff 9e e5 87 91 f6 10 08 06
	[Aug31 22:18] IPv4: martian source 10.244.0.1 from 10.244.0.29, on dev eth0
	[  +0.000005] ll header: 00000000: ff ff ff ff ff ff 26 c7 be bb 61 3b 08 06
	[ +11.021041] IPv4: martian source 10.244.0.30 from 10.244.0.22, on dev eth0
	[  +0.000005] ll header: 00000000: ff ff ff ff ff ff 7e 6e 1a cb 7a 1e 08 06
	
	
	==> etcd [f16f281ad697] <==
	{"level":"info","ts":"2024-08-31T22:06:54.853352Z","caller":"embed/etcd.go:870","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	{"level":"info","ts":"2024-08-31T22:06:55.082149Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aec36adc501070cc is starting a new election at term 1"}
	{"level":"info","ts":"2024-08-31T22:06:55.082203Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aec36adc501070cc became pre-candidate at term 1"}
	{"level":"info","ts":"2024-08-31T22:06:55.082231Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aec36adc501070cc received MsgPreVoteResp from aec36adc501070cc at term 1"}
	{"level":"info","ts":"2024-08-31T22:06:55.082253Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aec36adc501070cc became candidate at term 2"}
	{"level":"info","ts":"2024-08-31T22:06:55.082261Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aec36adc501070cc received MsgVoteResp from aec36adc501070cc at term 2"}
	{"level":"info","ts":"2024-08-31T22:06:55.082275Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aec36adc501070cc became leader at term 2"}
	{"level":"info","ts":"2024-08-31T22:06:55.082288Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: aec36adc501070cc elected leader aec36adc501070cc at term 2"}
	{"level":"info","ts":"2024-08-31T22:06:55.083297Z","caller":"etcdserver/server.go:2118","msg":"published local member to cluster through raft","local-member-id":"aec36adc501070cc","local-member-attributes":"{Name:addons-062019 ClientURLs:[https://192.168.49.2:2379]}","request-path":"/0/members/aec36adc501070cc/attributes","cluster-id":"fa54960ea34d58be","publish-timeout":"7s"}
	{"level":"info","ts":"2024-08-31T22:06:55.083310Z","caller":"etcdserver/server.go:2629","msg":"setting up initial cluster version using v2 API","cluster-version":"3.5"}
	{"level":"info","ts":"2024-08-31T22:06:55.083371Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-08-31T22:06:55.083407Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-08-31T22:06:55.083681Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2024-08-31T22:06:55.083718Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2024-08-31T22:06:55.084069Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"fa54960ea34d58be","local-member-id":"aec36adc501070cc","cluster-version":"3.5"}
	{"level":"info","ts":"2024-08-31T22:06:55.084149Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2024-08-31T22:06:55.084169Z","caller":"etcdserver/server.go:2653","msg":"cluster version is updated","cluster-version":"3.5"}
	{"level":"info","ts":"2024-08-31T22:06:55.084517Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2024-08-31T22:06:55.084546Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2024-08-31T22:06:55.085377Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2024-08-31T22:06:55.085825Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.49.2:2379"}
	{"level":"warn","ts":"2024-08-31T22:07:23.628548Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"111.717034ms","expected-duration":"100ms","prefix":"","request":"header:<ID:8128031588875978057 > lease_revoke:<id:70cc91aa7a1b40cf>","response":"size:29"}
	{"level":"info","ts":"2024-08-31T22:16:55.394564Z","caller":"mvcc/index.go:214","msg":"compact tree index","revision":1905}
	{"level":"info","ts":"2024-08-31T22:16:55.417739Z","caller":"mvcc/kvstore_compaction.go:69","msg":"finished scheduled compaction","compact-revision":1905,"took":"22.661495ms","hash":704430521,"current-db-size-bytes":8814592,"current-db-size":"8.8 MB","current-db-size-in-use-bytes":5050368,"current-db-size-in-use":"5.1 MB"}
	{"level":"info","ts":"2024-08-31T22:16:55.417781Z","caller":"mvcc/hash.go:137","msg":"storing new hash","hash":704430521,"revision":1905,"compact-revision":-1}
	
	
	==> gcp-auth [c1cfddd5d13d] <==
	2024/08/31 22:10:32 Ready to write response ...
	2024/08/31 22:18:39 Ready to marshal response ...
	2024/08/31 22:18:39 Ready to write response ...
	2024/08/31 22:18:40 Ready to marshal response ...
	2024/08/31 22:18:40 Ready to write response ...
	2024/08/31 22:18:44 Ready to marshal response ...
	2024/08/31 22:18:44 Ready to write response ...
	2024/08/31 22:18:53 Ready to marshal response ...
	2024/08/31 22:18:53 Ready to write response ...
	2024/08/31 22:18:59 Ready to marshal response ...
	2024/08/31 22:18:59 Ready to write response ...
	2024/08/31 22:18:59 Ready to marshal response ...
	2024/08/31 22:18:59 Ready to write response ...
	2024/08/31 22:18:59 Ready to marshal response ...
	2024/08/31 22:18:59 Ready to write response ...
	2024/08/31 22:19:02 Ready to marshal response ...
	2024/08/31 22:19:02 Ready to write response ...
	2024/08/31 22:19:02 Ready to marshal response ...
	2024/08/31 22:19:02 Ready to write response ...
	2024/08/31 22:19:10 Ready to marshal response ...
	2024/08/31 22:19:10 Ready to write response ...
	2024/08/31 22:19:14 Ready to marshal response ...
	2024/08/31 22:19:14 Ready to write response ...
	2024/08/31 22:19:32 Ready to marshal response ...
	2024/08/31 22:19:32 Ready to write response ...
	
	
	==> kernel <==
	 22:19:46 up  1:02,  0 users,  load average: 0.49, 0.30, 0.23
	Linux addons-062019 5.15.0-1067-gcp #75~20.04.1-Ubuntu SMP Wed Aug 7 20:43:22 UTC 2024 x86_64 x86_64 x86_64 GNU/Linux
	PRETTY_NAME="Ubuntu 22.04.4 LTS"
	
	
	==> kube-apiserver [875cd63278f3] <==
	I0831 22:10:06.614295       1 controller.go:615] quota admission added evaluator for: podgroups.scheduling.volcano.sh
	I0831 22:10:22.155492       1 handler.go:286] Adding GroupVersion batch.volcano.sh v1alpha1 to ResourceManager
	I0831 22:10:22.172585       1 handler.go:286] Adding GroupVersion bus.volcano.sh v1alpha1 to ResourceManager
	I0831 22:10:22.478490       1 handler.go:286] Adding GroupVersion scheduling.volcano.sh v1beta1 to ResourceManager
	I0831 22:10:22.485139       1 handler.go:286] Adding GroupVersion scheduling.volcano.sh v1beta1 to ResourceManager
	I0831 22:10:22.573454       1 handler.go:286] Adding GroupVersion nodeinfo.volcano.sh v1alpha1 to ResourceManager
	I0831 22:10:22.672790       1 handler.go:286] Adding GroupVersion scheduling.volcano.sh v1beta1 to ResourceManager
	I0831 22:10:23.062631       1 handler.go:286] Adding GroupVersion flow.volcano.sh v1alpha1 to ResourceManager
	I0831 22:10:23.173116       1 handler.go:286] Adding GroupVersion flow.volcano.sh v1alpha1 to ResourceManager
	I0831 22:10:23.268705       1 handler.go:286] Adding GroupVersion flow.volcano.sh v1alpha1 to ResourceManager
	W0831 22:10:23.352130       1 cacher.go:171] Terminating all watchers from cacher commands.bus.volcano.sh
	W0831 22:10:23.663071       1 cacher.go:171] Terminating all watchers from cacher numatopologies.nodeinfo.volcano.sh
	W0831 22:10:23.673165       1 cacher.go:171] Terminating all watchers from cacher podgroups.scheduling.volcano.sh
	W0831 22:10:23.757359       1 cacher.go:171] Terminating all watchers from cacher queues.scheduling.volcano.sh
	W0831 22:10:24.050662       1 cacher.go:171] Terminating all watchers from cacher jobs.batch.volcano.sh
	W0831 22:10:24.269307       1 cacher.go:171] Terminating all watchers from cacher jobflows.flow.volcano.sh
	W0831 22:10:24.571974       1 cacher.go:171] Terminating all watchers from cacher jobtemplates.flow.volcano.sh
	I0831 22:18:39.858287       1 handler.go:286] Adding GroupVersion gadget.kinvolk.io v1alpha1 to ResourceManager
	I0831 22:18:40.177282       1 controller.go:615] quota admission added evaluator for: ingresses.networking.k8s.io
	I0831 22:18:40.401304       1 alloc.go:330] "allocated clusterIPs" service="default/nginx" clusterIPs={"IPv4":"10.99.241.180"}
	W0831 22:18:40.885745       1 cacher.go:171] Terminating all watchers from cacher traces.gadget.kinvolk.io
	I0831 22:18:53.888559       1 alloc.go:330] "allocated clusterIPs" service="default/hello-world-app" clusterIPs={"IPv4":"10.101.188.147"}
	I0831 22:18:59.089652       1 alloc.go:330] "allocated clusterIPs" service="headlamp/headlamp" clusterIPs={"IPv4":"10.106.32.69"}
	I0831 22:19:19.411797       1 controller.go:615] quota admission added evaluator for: volumesnapshots.snapshot.storage.k8s.io
	E0831 22:19:30.461321       1 authentication.go:73] "Unable to authenticate the request" err="[invalid bearer token, serviceaccounts \"local-path-provisioner-service-account\" not found]"
	
	
	==> kube-controller-manager [41a1f48dbf7e] <==
	I0831 22:19:04.334366       1 shared_informer.go:320] Caches are synced for garbage collector
	I0831 22:19:05.023896       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="headlamp/headlamp-57fb76fcdb" duration="95.936µs"
	I0831 22:19:05.038749       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="headlamp/headlamp-57fb76fcdb" duration="5.284789ms"
	I0831 22:19:05.038885       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="headlamp/headlamp-57fb76fcdb" duration="96.213µs"
	I0831 22:19:05.664640       1 namespace_controller.go:187] "Namespace has been deleted" logger="namespace-controller" namespace="ingress-nginx"
	W0831 22:19:06.533592       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0831 22:19:06.533642       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	I0831 22:19:14.991322       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="local-path-storage/local-path-provisioner-86d989889c" duration="9.54µs"
	W0831 22:19:19.710545       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0831 22:19:19.710586       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W0831 22:19:19.959163       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0831 22:19:19.959198       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W0831 22:19:27.325563       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0831 22:19:27.325601       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W0831 22:19:27.519237       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0831 22:19:27.519275       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W0831 22:19:31.726011       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0831 22:19:31.726050       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	I0831 22:19:34.678404       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="addons-062019"
	W0831 22:19:35.419047       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0831 22:19:35.419094       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	I0831 22:19:41.330804       1 stateful_set.go:466] "StatefulSet has been deleted" logger="statefulset-controller" key="kube-system/csi-hostpath-attacher"
	I0831 22:19:41.380590       1 stateful_set.go:466] "StatefulSet has been deleted" logger="statefulset-controller" key="kube-system/csi-hostpath-resizer"
	I0831 22:19:41.673088       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="addons-062019"
	I0831 22:19:45.117253       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/registry-6fb4cdfc84" duration="8.476µs"
	
	
	==> kube-proxy [fbd2270998c1] <==
	I0831 22:07:04.708799       1 server_linux.go:66] "Using iptables proxy"
	I0831 22:07:04.881941       1 server.go:677] "Successfully retrieved node IP(s)" IPs=["192.168.49.2"]
	E0831 22:07:04.882015       1 server.go:234] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I0831 22:07:05.071312       1 server.go:243] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I0831 22:07:05.071391       1 server_linux.go:169] "Using iptables Proxier"
	I0831 22:07:05.149813       1 proxier.go:255] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I0831 22:07:05.151724       1 server.go:483] "Version info" version="v1.31.0"
	I0831 22:07:05.151749       1 server.go:485] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0831 22:07:05.154174       1 config.go:326] "Starting node config controller"
	I0831 22:07:05.154195       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0831 22:07:05.154317       1 config.go:197] "Starting service config controller"
	I0831 22:07:05.154326       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0831 22:07:05.154342       1 config.go:104] "Starting endpoint slice config controller"
	I0831 22:07:05.154347       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0831 22:07:05.259017       1 shared_informer.go:320] Caches are synced for node config
	I0831 22:07:05.259109       1 shared_informer.go:320] Caches are synced for endpoint slice config
	I0831 22:07:05.259163       1 shared_informer.go:320] Caches are synced for service config
	
	
	==> kube-scheduler [890186fd639a] <==
	W0831 22:06:56.858108       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	W0831 22:06:56.858185       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	W0831 22:06:56.858111       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E0831 22:06:56.858212       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError"
	E0831 22:06:56.858217       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError"
	E0831 22:06:56.858185       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError"
	W0831 22:06:57.663973       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E0831 22:06:57.664012       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W0831 22:06:57.705320       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E0831 22:06:57.705367       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0831 22:06:57.711612       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E0831 22:06:57.711649       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0831 22:06:57.724921       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E0831 22:06:57.724953       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0831 22:06:57.767444       1 reflector.go:561] runtime/asm_amd64.s:1695: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E0831 22:06:57.767476       1 reflector.go:158] "Unhandled Error" err="runtime/asm_amd64.s:1695: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError"
	W0831 22:06:57.824961       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E0831 22:06:57.825008       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0831 22:06:57.875728       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E0831 22:06:57.875770       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0831 22:06:57.902130       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E0831 22:06:57.902192       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError"
	W0831 22:06:57.903046       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E0831 22:06:57.903075       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	I0831 22:06:59.653754       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kubelet <==
	Aug 31 22:19:44 addons-062019 kubelet[2437]: I0831 22:19:44.977413    2437 reconciler_common.go:288] "Volume detached for volume \"kube-api-access-gxlld\" (UniqueName: \"kubernetes.io/projected/24e1bdde-ad75-42ca-978f-2075eb2cf751-kube-api-access-gxlld\") on node \"addons-062019\" DevicePath \"\""
	Aug 31 22:19:45 addons-062019 kubelet[2437]: I0831 22:19:45.481829    2437 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/a02556ea-994e-4673-8bd5-a2e2042f4691-config-volume\") pod \"a02556ea-994e-4673-8bd5-a2e2042f4691\" (UID: \"a02556ea-994e-4673-8bd5-a2e2042f4691\") "
	Aug 31 22:19:45 addons-062019 kubelet[2437]: I0831 22:19:45.481878    2437 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rzts7\" (UniqueName: \"kubernetes.io/projected/a02556ea-994e-4673-8bd5-a2e2042f4691-kube-api-access-rzts7\") pod \"a02556ea-994e-4673-8bd5-a2e2042f4691\" (UID: \"a02556ea-994e-4673-8bd5-a2e2042f4691\") "
	Aug 31 22:19:45 addons-062019 kubelet[2437]: I0831 22:19:45.482352    2437 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a02556ea-994e-4673-8bd5-a2e2042f4691-config-volume" (OuterVolumeSpecName: "config-volume") pod "a02556ea-994e-4673-8bd5-a2e2042f4691" (UID: "a02556ea-994e-4673-8bd5-a2e2042f4691"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue ""
	Aug 31 22:19:45 addons-062019 kubelet[2437]: I0831 22:19:45.483530    2437 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a02556ea-994e-4673-8bd5-a2e2042f4691-kube-api-access-rzts7" (OuterVolumeSpecName: "kube-api-access-rzts7") pod "a02556ea-994e-4673-8bd5-a2e2042f4691" (UID: "a02556ea-994e-4673-8bd5-a2e2042f4691"). InnerVolumeSpecName "kube-api-access-rzts7". PluginName "kubernetes.io/projected", VolumeGidValue ""
	Aug 31 22:19:45 addons-062019 kubelet[2437]: I0831 22:19:45.582380    2437 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xzx24\" (UniqueName: \"kubernetes.io/projected/ea0149ab-7745-43b6-8b62-1ea10549905c-kube-api-access-xzx24\") pod \"ea0149ab-7745-43b6-8b62-1ea10549905c\" (UID: \"ea0149ab-7745-43b6-8b62-1ea10549905c\") "
	Aug 31 22:19:45 addons-062019 kubelet[2437]: I0831 22:19:45.582489    2437 reconciler_common.go:288] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/a02556ea-994e-4673-8bd5-a2e2042f4691-config-volume\") on node \"addons-062019\" DevicePath \"\""
	Aug 31 22:19:45 addons-062019 kubelet[2437]: I0831 22:19:45.582506    2437 reconciler_common.go:288] "Volume detached for volume \"kube-api-access-rzts7\" (UniqueName: \"kubernetes.io/projected/a02556ea-994e-4673-8bd5-a2e2042f4691-kube-api-access-rzts7\") on node \"addons-062019\" DevicePath \"\""
	Aug 31 22:19:45 addons-062019 kubelet[2437]: I0831 22:19:45.584155    2437 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ea0149ab-7745-43b6-8b62-1ea10549905c-kube-api-access-xzx24" (OuterVolumeSpecName: "kube-api-access-xzx24") pod "ea0149ab-7745-43b6-8b62-1ea10549905c" (UID: "ea0149ab-7745-43b6-8b62-1ea10549905c"). InnerVolumeSpecName "kube-api-access-xzx24". PluginName "kubernetes.io/projected", VolumeGidValue ""
	Aug 31 22:19:45 addons-062019 kubelet[2437]: I0831 22:19:45.682894    2437 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xtcq5\" (UniqueName: \"kubernetes.io/projected/8518e062-22c3-4792-8477-519c3acc1417-kube-api-access-xtcq5\") pod \"8518e062-22c3-4792-8477-519c3acc1417\" (UID: \"8518e062-22c3-4792-8477-519c3acc1417\") "
	Aug 31 22:19:45 addons-062019 kubelet[2437]: I0831 22:19:45.683009    2437 reconciler_common.go:288] "Volume detached for volume \"kube-api-access-xzx24\" (UniqueName: \"kubernetes.io/projected/ea0149ab-7745-43b6-8b62-1ea10549905c-kube-api-access-xzx24\") on node \"addons-062019\" DevicePath \"\""
	Aug 31 22:19:45 addons-062019 kubelet[2437]: I0831 22:19:45.685025    2437 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8518e062-22c3-4792-8477-519c3acc1417-kube-api-access-xtcq5" (OuterVolumeSpecName: "kube-api-access-xtcq5") pod "8518e062-22c3-4792-8477-519c3acc1417" (UID: "8518e062-22c3-4792-8477-519c3acc1417"). InnerVolumeSpecName "kube-api-access-xtcq5". PluginName "kubernetes.io/projected", VolumeGidValue ""
	Aug 31 22:19:45 addons-062019 kubelet[2437]: I0831 22:19:45.753539    2437 scope.go:117] "RemoveContainer" containerID="60bf11577d020b479bc772ad1f015fe2efbccfb0cdbbadafed74cfebbf7bfddd"
	Aug 31 22:19:45 addons-062019 kubelet[2437]: I0831 22:19:45.774312    2437 scope.go:117] "RemoveContainer" containerID="60bf11577d020b479bc772ad1f015fe2efbccfb0cdbbadafed74cfebbf7bfddd"
	Aug 31 22:19:45 addons-062019 kubelet[2437]: E0831 22:19:45.775016    2437 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = Unknown desc = Error response from daemon: No such container: 60bf11577d020b479bc772ad1f015fe2efbccfb0cdbbadafed74cfebbf7bfddd" containerID="60bf11577d020b479bc772ad1f015fe2efbccfb0cdbbadafed74cfebbf7bfddd"
	Aug 31 22:19:45 addons-062019 kubelet[2437]: I0831 22:19:45.775060    2437 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"docker","ID":"60bf11577d020b479bc772ad1f015fe2efbccfb0cdbbadafed74cfebbf7bfddd"} err="failed to get container status \"60bf11577d020b479bc772ad1f015fe2efbccfb0cdbbadafed74cfebbf7bfddd\": rpc error: code = Unknown desc = Error response from daemon: No such container: 60bf11577d020b479bc772ad1f015fe2efbccfb0cdbbadafed74cfebbf7bfddd"
	Aug 31 22:19:45 addons-062019 kubelet[2437]: I0831 22:19:45.775087    2437 scope.go:117] "RemoveContainer" containerID="5501940ec0809f13dba5092449d3150ed8a38bffa52300207e55e0d11eb9b0d3"
	Aug 31 22:19:45 addons-062019 kubelet[2437]: I0831 22:19:45.783672    2437 reconciler_common.go:288] "Volume detached for volume \"kube-api-access-xtcq5\" (UniqueName: \"kubernetes.io/projected/8518e062-22c3-4792-8477-519c3acc1417-kube-api-access-xtcq5\") on node \"addons-062019\" DevicePath \"\""
	Aug 31 22:19:45 addons-062019 kubelet[2437]: I0831 22:19:45.790238    2437 scope.go:117] "RemoveContainer" containerID="5501940ec0809f13dba5092449d3150ed8a38bffa52300207e55e0d11eb9b0d3"
	Aug 31 22:19:45 addons-062019 kubelet[2437]: E0831 22:19:45.791806    2437 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = Unknown desc = Error response from daemon: No such container: 5501940ec0809f13dba5092449d3150ed8a38bffa52300207e55e0d11eb9b0d3" containerID="5501940ec0809f13dba5092449d3150ed8a38bffa52300207e55e0d11eb9b0d3"
	Aug 31 22:19:45 addons-062019 kubelet[2437]: I0831 22:19:45.791846    2437 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"docker","ID":"5501940ec0809f13dba5092449d3150ed8a38bffa52300207e55e0d11eb9b0d3"} err="failed to get container status \"5501940ec0809f13dba5092449d3150ed8a38bffa52300207e55e0d11eb9b0d3\": rpc error: code = Unknown desc = Error response from daemon: No such container: 5501940ec0809f13dba5092449d3150ed8a38bffa52300207e55e0d11eb9b0d3"
	Aug 31 22:19:45 addons-062019 kubelet[2437]: I0831 22:19:45.791877    2437 scope.go:117] "RemoveContainer" containerID="7a9aed6027d2a2ad85f199a21308ed36f94f4867c1aaa12579d852518233109e"
	Aug 31 22:19:45 addons-062019 kubelet[2437]: I0831 22:19:45.805304    2437 scope.go:117] "RemoveContainer" containerID="7a9aed6027d2a2ad85f199a21308ed36f94f4867c1aaa12579d852518233109e"
	Aug 31 22:19:45 addons-062019 kubelet[2437]: E0831 22:19:45.806116    2437 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = Unknown desc = Error response from daemon: No such container: 7a9aed6027d2a2ad85f199a21308ed36f94f4867c1aaa12579d852518233109e" containerID="7a9aed6027d2a2ad85f199a21308ed36f94f4867c1aaa12579d852518233109e"
	Aug 31 22:19:45 addons-062019 kubelet[2437]: I0831 22:19:45.806157    2437 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"docker","ID":"7a9aed6027d2a2ad85f199a21308ed36f94f4867c1aaa12579d852518233109e"} err="failed to get container status \"7a9aed6027d2a2ad85f199a21308ed36f94f4867c1aaa12579d852518233109e\": rpc error: code = Unknown desc = Error response from daemon: No such container: 7a9aed6027d2a2ad85f199a21308ed36f94f4867c1aaa12579d852518233109e"
	
	
	==> storage-provisioner [5c0c260e1617] <==
	I0831 22:07:12.665804       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I0831 22:07:12.765051       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I0831 22:07:12.765096       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I0831 22:07:12.851328       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I0831 22:07:12.852630       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"6cd8fa78-65f5-4d1d-975a-d14c31bc9809", APIVersion:"v1", ResourceVersion:"660", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' addons-062019_dff50ba6-72d7-4188-a1eb-d9a6b1e12984 became leader
	I0831 22:07:12.854050       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_addons-062019_dff50ba6-72d7-4188-a1eb-d9a6b1e12984!
	I0831 22:07:12.954877       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_addons-062019_dff50ba6-72d7-4188-a1eb-d9a6b1e12984!
	

                                                
                                                
-- /stdout --
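Note: the post-mortem dump above records node and component state but does not re-probe the registry Service itself. A manual follow-up sketch (not part of the recorded run; it assumes the addon's Service in kube-system is named "registry" and that minikube's usual kubernetes.io/minikube-addons=registry label is present on the addon pods):

	kubectl --context addons-062019 -n kube-system get svc,endpoints registry
	kubectl --context addons-062019 -n kube-system get pods -l kubernetes.io/minikube-addons=registry -o wide

An empty Endpoints object would suggest the registry pods were already gone (the kubelet log above shows their volumes being torn down at 22:19:45), while populated endpoints would shift suspicion to cluster DNS or kube-proxy.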
helpers_test.go:255: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p addons-062019 -n addons-062019
helpers_test.go:262: (dbg) Run:  kubectl --context addons-062019 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:273: non-running pods: busybox
helpers_test.go:275: ======> post-mortem[TestAddons/parallel/Registry]: describe non-running pods <======
helpers_test.go:278: (dbg) Run:  kubectl --context addons-062019 describe pod busybox
helpers_test.go:283: (dbg) kubectl --context addons-062019 describe pod busybox:

                                                
                                                
-- stdout --
	Name:             busybox
	Namespace:        default
	Priority:         0
	Service Account:  default
	Node:             addons-062019/192.168.49.2
	Start Time:       Sat, 31 Aug 2024 22:10:32 +0000
	Labels:           integration-test=busybox
	Annotations:      <none>
	Status:           Pending
	IP:               10.244.0.28
	IPs:
	  IP:  10.244.0.28
	Containers:
	  busybox:
	    Container ID:  
	    Image:         gcr.io/k8s-minikube/busybox:1.28.4-glibc
	    Image ID:      
	    Port:          <none>
	    Host Port:     <none>
	    Command:
	      sleep
	      3600
	    State:          Waiting
	      Reason:       ImagePullBackOff
	    Ready:          False
	    Restart Count:  0
	    Environment:
	      GOOGLE_APPLICATION_CREDENTIALS:  /google-app-creds.json
	      PROJECT_ID:                      this_is_fake
	      GCP_PROJECT:                     this_is_fake
	      GCLOUD_PROJECT:                  this_is_fake
	      GOOGLE_CLOUD_PROJECT:            this_is_fake
	      CLOUDSDK_CORE_PROJECT:           this_is_fake
	    Mounts:
	      /google-app-creds.json from gcp-creds (ro)
	      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-8b6b4 (ro)
	Conditions:
	  Type                        Status
	  PodReadyToStartContainers   True 
	  Initialized                 True 
	  Ready                       False 
	  ContainersReady             False 
	  PodScheduled                True 
	Volumes:
	  kube-api-access-8b6b4:
	    Type:                    Projected (a volume that contains injected data from multiple sources)
	    TokenExpirationSeconds:  3607
	    ConfigMapName:           kube-root-ca.crt
	    ConfigMapOptional:       <nil>
	    DownwardAPI:             true
	  gcp-creds:
	    Type:          HostPath (bare host directory volume)
	    Path:          /var/lib/minikube/google_application_credentials.json
	    HostPathType:  File
	QoS Class:         BestEffort
	Node-Selectors:    <none>
	Tolerations:       node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
	                   node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
	Events:
	  Type     Reason     Age                    From               Message
	  ----     ------     ----                   ----               -------
	  Normal   Scheduled  9m14s                  default-scheduler  Successfully assigned default/busybox to addons-062019
	  Normal   Pulling    7m44s (x4 over 9m14s)  kubelet            Pulling image "gcr.io/k8s-minikube/busybox:1.28.4-glibc"
	  Warning  Failed     7m44s (x4 over 9m14s)  kubelet            Failed to pull image "gcr.io/k8s-minikube/busybox:1.28.4-glibc": Error response from daemon: Head "https://gcr.io/v2/k8s-minikube/busybox/manifests/1.28.4-glibc": unauthorized: authentication failed
	  Warning  Failed     7m44s (x4 over 9m14s)  kubelet            Error: ErrImagePull
	  Warning  Failed     7m31s (x6 over 9m14s)  kubelet            Error: ImagePullBackOff
	  Normal   BackOff    4m8s (x21 over 9m14s)  kubelet            Back-off pulling image "gcr.io/k8s-minikube/busybox:1.28.4-glibc"

                                                
                                                
-- /stdout --
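The describe output above shows the busybox pod stuck in ImagePullBackOff with "unauthorized: authentication failed" from gcr.io, and its Environment section carries the fake project credentials injected by the gcp-auth addon; one plausible (unconfirmed) cause is the pull authenticating with those fake credentials rather than running anonymously. A sketch of reproducing the pull outside the kubelet, directly on the node:

	minikube -p addons-062019 ssh -- docker pull gcr.io/k8s-minikube/busybox:1.28.4-glibc

If the same "unauthorized" error appears there, the failure sits on the node/registry side rather than in the pod spec.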
helpers_test.go:286: <<< TestAddons/parallel/Registry FAILED: end of post-mortem logs <<<
helpers_test.go:287: ---------------------/post-mortem---------------------------------
--- FAIL: TestAddons/parallel/Registry (72.58s)

                                                
                                    

Test pass (332/353)

Order passed test Duration
3 TestDownloadOnly/v1.20.0/json-events 22.77
4 TestDownloadOnly/v1.20.0/preload-exists 0
8 TestDownloadOnly/v1.20.0/LogsDuration 0.06
9 TestDownloadOnly/v1.20.0/DeleteAll 0.19
10 TestDownloadOnly/v1.20.0/DeleteAlwaysSucceeds 0.12
12 TestDownloadOnly/v1.31.0/json-events 11.13
13 TestDownloadOnly/v1.31.0/preload-exists 0
17 TestDownloadOnly/v1.31.0/LogsDuration 0.05
18 TestDownloadOnly/v1.31.0/DeleteAll 0.18
19 TestDownloadOnly/v1.31.0/DeleteAlwaysSucceeds 0.12
20 TestDownloadOnlyKic 1.85
21 TestBinaryMirror 0.94
22 TestOffline 75.33
25 TestAddons/PreSetup/EnablingAddonOnNonExistingCluster 0.05
26 TestAddons/PreSetup/DisablingAddonOnNonExistingCluster 0.05
27 TestAddons/Setup 210.44
29 TestAddons/serial/Volcano 40.6
31 TestAddons/serial/GCPAuth/Namespaces 0.11
34 TestAddons/parallel/Ingress 22.83
35 TestAddons/parallel/InspektorGadget 10.98
36 TestAddons/parallel/MetricsServer 6.54
37 TestAddons/parallel/HelmTiller 10.98
39 TestAddons/parallel/CSI 51
40 TestAddons/parallel/Headlamp 12.05
41 TestAddons/parallel/CloudSpanner 5.48
42 TestAddons/parallel/LocalPath 55.3
43 TestAddons/parallel/NvidiaDevicePlugin 6.41
44 TestAddons/parallel/Yakd 11.7
45 TestAddons/StoppedEnableDisable 10.96
46 TestCertOptions 31.08
47 TestCertExpiration 227.82
48 TestDockerFlags 27.73
49 TestForceSystemdFlag 26.02
50 TestForceSystemdEnv 27.68
52 TestKVMDriverInstallOrUpdate 4.51
56 TestErrorSpam/setup 21.19
57 TestErrorSpam/start 0.56
58 TestErrorSpam/status 0.84
59 TestErrorSpam/pause 1.14
60 TestErrorSpam/unpause 1.38
61 TestErrorSpam/stop 1.34
64 TestFunctional/serial/CopySyncFile 0
65 TestFunctional/serial/StartWithProxy 35.17
66 TestFunctional/serial/AuditLog 0
67 TestFunctional/serial/SoftStart 35.05
68 TestFunctional/serial/KubeContext 0.05
69 TestFunctional/serial/KubectlGetPods 0.07
72 TestFunctional/serial/CacheCmd/cache/add_remote 2.34
73 TestFunctional/serial/CacheCmd/cache/add_local 1.4
74 TestFunctional/serial/CacheCmd/cache/CacheDelete 0.05
75 TestFunctional/serial/CacheCmd/cache/list 0.05
76 TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node 0.26
77 TestFunctional/serial/CacheCmd/cache/cache_reload 1.23
78 TestFunctional/serial/CacheCmd/cache/delete 0.09
79 TestFunctional/serial/MinikubeKubectlCmd 0.11
80 TestFunctional/serial/MinikubeKubectlCmdDirectly 0.1
81 TestFunctional/serial/ExtraConfig 38.11
82 TestFunctional/serial/ComponentHealth 0.07
83 TestFunctional/serial/LogsCmd 0.95
84 TestFunctional/serial/LogsFileCmd 0.96
85 TestFunctional/serial/InvalidService 4.74
87 TestFunctional/parallel/ConfigCmd 0.32
88 TestFunctional/parallel/DashboardCmd 25.81
89 TestFunctional/parallel/DryRun 0.34
90 TestFunctional/parallel/InternationalLanguage 0.14
91 TestFunctional/parallel/StatusCmd 1.12
95 TestFunctional/parallel/ServiceCmdConnect 7.8
96 TestFunctional/parallel/AddonsCmd 0.13
97 TestFunctional/parallel/PersistentVolumeClaim 44.7
99 TestFunctional/parallel/SSHCmd 0.56
100 TestFunctional/parallel/CpCmd 1.7
101 TestFunctional/parallel/MySQL 24.59
102 TestFunctional/parallel/FileSync 0.27
103 TestFunctional/parallel/CertSync 1.57
107 TestFunctional/parallel/NodeLabels 0.08
109 TestFunctional/parallel/NonActiveRuntimeDisabled 0.28
111 TestFunctional/parallel/License 0.44
112 TestFunctional/parallel/ServiceCmd/DeployApp 9.19
113 TestFunctional/parallel/Version/short 0.04
114 TestFunctional/parallel/Version/components 0.44
116 TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel 0.49
117 TestFunctional/parallel/TunnelCmd/serial/StartTunnel 0
119 TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup 11.24
120 TestFunctional/parallel/ImageCommands/ImageListShort 0.2
121 TestFunctional/parallel/ImageCommands/ImageListTable 0.21
122 TestFunctional/parallel/ImageCommands/ImageListJson 0.21
123 TestFunctional/parallel/ImageCommands/ImageListYaml 0.21
124 TestFunctional/parallel/ImageCommands/ImageBuild 4.94
125 TestFunctional/parallel/ImageCommands/Setup 1.84
126 TestFunctional/parallel/ImageCommands/ImageLoadDaemon 0.85
127 TestFunctional/parallel/ImageCommands/ImageReloadDaemon 0.76
128 TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon 1.59
129 TestFunctional/parallel/ImageCommands/ImageSaveToFile 0.29
130 TestFunctional/parallel/ImageCommands/ImageRemove 0.37
131 TestFunctional/parallel/ImageCommands/ImageLoadFromFile 0.54
132 TestFunctional/parallel/ImageCommands/ImageSaveDaemon 0.32
133 TestFunctional/parallel/ServiceCmd/List 0.42
134 TestFunctional/parallel/ServiceCmd/JSONOutput 0.33
135 TestFunctional/parallel/ProfileCmd/profile_not_create 0.4
136 TestFunctional/parallel/ServiceCmd/HTTPS 0.37
137 TestFunctional/parallel/ProfileCmd/profile_list 0.38
138 TestFunctional/parallel/ServiceCmd/Format 0.39
139 TestFunctional/parallel/ProfileCmd/profile_json_output 0.41
140 TestFunctional/parallel/ServiceCmd/URL 0.44
141 TestFunctional/parallel/DockerEnv/bash 1.11
142 TestFunctional/parallel/MountCmd/any-port 7.92
143 TestFunctional/parallel/UpdateContextCmd/no_changes 0.13
144 TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster 0.13
145 TestFunctional/parallel/UpdateContextCmd/no_clusters 0.11
146 TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP 0.06
147 TestFunctional/parallel/TunnelCmd/serial/AccessDirect 0
151 TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel 0.11
152 TestFunctional/parallel/MountCmd/specific-port 1.64
153 TestFunctional/parallel/MountCmd/VerifyCleanup 1.72
154 TestFunctional/delete_echo-server_images 0.04
155 TestFunctional/delete_my-image_image 0.02
156 TestFunctional/delete_minikube_cached_images 0.02
160 TestMultiControlPlane/serial/StartCluster 96.34
161 TestMultiControlPlane/serial/DeployApp 5.41
162 TestMultiControlPlane/serial/PingHostFromPods 1.01
163 TestMultiControlPlane/serial/AddWorkerNode 23.37
164 TestMultiControlPlane/serial/NodeLabels 0.06
165 TestMultiControlPlane/serial/HAppyAfterClusterStart 0.61
166 TestMultiControlPlane/serial/CopyFile 15.05
167 TestMultiControlPlane/serial/StopSecondaryNode 11.36
168 TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop 0.46
169 TestMultiControlPlane/serial/RestartSecondaryNode 21.46
170 TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart 3.32
171 TestMultiControlPlane/serial/RestartClusterKeepsNodes 211.8
172 TestMultiControlPlane/serial/DeleteSecondaryNode 9.24
173 TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete 0.45
174 TestMultiControlPlane/serial/StopCluster 32.47
175 TestMultiControlPlane/serial/RestartCluster 85.03
176 TestMultiControlPlane/serial/DegradedAfterClusterRestart 0.44
177 TestMultiControlPlane/serial/AddSecondaryNode 34.78
178 TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd 0.61
181 TestImageBuild/serial/Setup 21.51
182 TestImageBuild/serial/NormalBuild 2.68
183 TestImageBuild/serial/BuildWithBuildArg 0.97
184 TestImageBuild/serial/BuildWithDockerIgnore 0.77
185 TestImageBuild/serial/BuildWithSpecifiedDockerfile 0.69
189 TestJSONOutput/start/Command 38.72
190 TestJSONOutput/start/Audit 0
192 TestJSONOutput/start/parallel/DistinctCurrentSteps 0
193 TestJSONOutput/start/parallel/IncreasingCurrentSteps 0
195 TestJSONOutput/pause/Command 0.53
196 TestJSONOutput/pause/Audit 0
198 TestJSONOutput/pause/parallel/DistinctCurrentSteps 0
199 TestJSONOutput/pause/parallel/IncreasingCurrentSteps 0
201 TestJSONOutput/unpause/Command 0.41
202 TestJSONOutput/unpause/Audit 0
204 TestJSONOutput/unpause/parallel/DistinctCurrentSteps 0
205 TestJSONOutput/unpause/parallel/IncreasingCurrentSteps 0
207 TestJSONOutput/stop/Command 5.72
208 TestJSONOutput/stop/Audit 0
210 TestJSONOutput/stop/parallel/DistinctCurrentSteps 0
211 TestJSONOutput/stop/parallel/IncreasingCurrentSteps 0
212 TestErrorJSONOutput 0.19
214 TestKicCustomNetwork/create_custom_network 23.3
215 TestKicCustomNetwork/use_default_bridge_network 25.68
216 TestKicExistingNetwork 22.47
217 TestKicCustomSubnet 22.46
218 TestKicStaticIP 25.84
219 TestMainNoArgs 0.04
220 TestMinikubeProfile 53.17
223 TestMountStart/serial/StartWithMountFirst 7
224 TestMountStart/serial/VerifyMountFirst 0.23
225 TestMountStart/serial/StartWithMountSecond 7.23
226 TestMountStart/serial/VerifyMountSecond 0.23
227 TestMountStart/serial/DeleteFirst 1.44
228 TestMountStart/serial/VerifyMountPostDelete 0.23
229 TestMountStart/serial/Stop 1.16
230 TestMountStart/serial/RestartStopped 8.62
231 TestMountStart/serial/VerifyMountPostStop 0.23
234 TestContainerIPsMultiNetwork/serial/CreateExtnet 0.06
235 TestContainerIPsMultiNetwork/serial/FreshStart 61.2
236 TestContainerIPsMultiNetwork/serial/ConnectExtnet 0.11
237 TestContainerIPsMultiNetwork/serial/Stop 10.9
238 TestContainerIPsMultiNetwork/serial/VerifyStatus 0.12
239 TestContainerIPsMultiNetwork/serial/Start 12.19
240 TestContainerIPsMultiNetwork/serial/VerifyNetworks 0.02
241 TestContainerIPsMultiNetwork/serial/Delete 2.27
242 TestContainerIPsMultiNetwork/serial/DeleteExtnet 0.11
243 TestContainerIPsMultiNetwork/serial/VerifyDeletedResources 0.1
246 TestMultiNode/serial/FreshStart2Nodes 73.54
247 TestMultiNode/serial/DeployApp2Nodes 38.69
248 TestMultiNode/serial/PingHostFrom2Pods 0.68
249 TestMultiNode/serial/AddNode 18.39
250 TestMultiNode/serial/MultiNodeLabels 0.07
251 TestMultiNode/serial/ProfileList 0.31
252 TestMultiNode/serial/CopyFile 8.56
253 TestMultiNode/serial/StopNode 2.04
254 TestMultiNode/serial/StartAfterStop 9.68
255 TestMultiNode/serial/RestartKeepsNodes 111.54
256 TestMultiNode/serial/DeleteNode 5.1
257 TestMultiNode/serial/StopMultiNode 21.39
258 TestMultiNode/serial/RestartMultiNode 53.68
259 TestMultiNode/serial/ValidateNameConflict 26.06
264 TestPreload 136.7
266 TestScheduledStopUnix 97.01
267 TestSkaffold 99.98
269 TestInsufficientStorage 12.31
270 TestRunningBinaryUpgrade 82.95
272 TestKubernetesUpgrade 335.47
273 TestMissingContainerUpgrade 195.32
274 TestStoppedBinaryUpgrade/Setup 2.48
275 TestStoppedBinaryUpgrade/Upgrade 153.8
276 TestStoppedBinaryUpgrade/MinikubeLogs 1.08
285 TestPause/serial/Start 65.97
287 TestNoKubernetes/serial/StartNoK8sWithVersion 0.07
288 TestNoKubernetes/serial/StartWithK8s 28.5
289 TestNoKubernetes/serial/StartWithStopK8s 16.63
301 TestNoKubernetes/serial/Start 8.36
302 TestNoKubernetes/serial/VerifyK8sNotRunning 0.28
303 TestNoKubernetes/serial/ProfileList 1.48
304 TestNoKubernetes/serial/Stop 1.18
305 TestNoKubernetes/serial/StartNoArgs 7.49
306 TestNoKubernetes/serial/VerifyK8sNotRunningSecond 0.24
307 TestPause/serial/SecondStartNoReconfiguration 36.5
308 TestPause/serial/Pause 1.02
309 TestPause/serial/VerifyStatus 0.27
310 TestPause/serial/Unpause 0.57
311 TestPause/serial/PauseAgain 0.66
312 TestPause/serial/DeletePaused 2.14
313 TestPause/serial/VerifyDeletedResources 0.5
315 TestStartStop/group/old-k8s-version/serial/FirstStart 131.93
317 TestStartStop/group/no-preload/serial/FirstStart 67.12
319 TestStartStop/group/default-k8s-diff-port/serial/FirstStart 60.78
320 TestStartStop/group/no-preload/serial/DeployApp 8.24
321 TestStartStop/group/no-preload/serial/EnableAddonWhileActive 0.78
322 TestStartStop/group/no-preload/serial/Stop 10.79
323 TestStartStop/group/no-preload/serial/EnableAddonAfterStop 0.2
324 TestStartStop/group/no-preload/serial/SecondStart 262.63
325 TestStartStop/group/default-k8s-diff-port/serial/DeployApp 9.27
326 TestStartStop/group/old-k8s-version/serial/DeployApp 10.42
327 TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive 0.73
328 TestStartStop/group/default-k8s-diff-port/serial/Stop 10.65
329 TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive 0.74
330 TestStartStop/group/old-k8s-version/serial/Stop 10.67
331 TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop 0.18
332 TestStartStop/group/default-k8s-diff-port/serial/SecondStart 263.12
333 TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop 0.16
334 TestStartStop/group/old-k8s-version/serial/SecondStart 140.99
336 TestStartStop/group/newest-cni/serial/FirstStart 26.06
337 TestStartStop/group/newest-cni/serial/DeployApp 0
338 TestStartStop/group/newest-cni/serial/EnableAddonWhileActive 0.85
339 TestStartStop/group/newest-cni/serial/Stop 10.68
340 TestStartStop/group/newest-cni/serial/EnableAddonAfterStop 0.19
341 TestStartStop/group/newest-cni/serial/SecondStart 13.44
342 TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop 0
343 TestStartStop/group/newest-cni/serial/AddonExistsAfterStop 0
344 TestStartStop/group/newest-cni/serial/VerifyKubernetesImages 0.21
345 TestStartStop/group/newest-cni/serial/Pause 2.54
347 TestStartStop/group/embed-certs/serial/FirstStart 69.16
348 TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop 6.01
349 TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop 5.07
350 TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages 0.2
351 TestStartStop/group/old-k8s-version/serial/Pause 2.32
352 TestNetworkPlugins/group/auto/Start 68.32
353 TestStartStop/group/embed-certs/serial/DeployApp 9.24
354 TestStartStop/group/embed-certs/serial/EnableAddonWhileActive 0.82
355 TestStartStop/group/embed-certs/serial/Stop 10.77
356 TestStartStop/group/embed-certs/serial/EnableAddonAfterStop 0.21
357 TestStartStop/group/embed-certs/serial/SecondStart 262.45
358 TestStartStop/group/no-preload/serial/UserAppExistsAfterStop 6.01
359 TestStartStop/group/no-preload/serial/AddonExistsAfterStop 5.07
360 TestNetworkPlugins/group/auto/KubeletFlags 0.3
361 TestNetworkPlugins/group/auto/NetCatPod 10.21
362 TestStartStop/group/no-preload/serial/VerifyKubernetesImages 0.19
363 TestStartStop/group/no-preload/serial/Pause 2.26
364 TestNetworkPlugins/group/kindnet/Start 56
365 TestNetworkPlugins/group/auto/DNS 0.16
366 TestNetworkPlugins/group/auto/Localhost 0.13
367 TestNetworkPlugins/group/auto/HairPin 0.12
368 TestNetworkPlugins/group/calico/Start 55.35
369 TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop 6
370 TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop 5.12
371 TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages 0.25
372 TestStartStop/group/default-k8s-diff-port/serial/Pause 3.06
373 TestNetworkPlugins/group/custom-flannel/Start 48.8
374 TestNetworkPlugins/group/kindnet/ControllerPod 6.01
375 TestNetworkPlugins/group/kindnet/KubeletFlags 0.35
376 TestNetworkPlugins/group/kindnet/NetCatPod 10.25
377 TestNetworkPlugins/group/kindnet/DNS 0.13
378 TestNetworkPlugins/group/kindnet/Localhost 0.13
379 TestNetworkPlugins/group/kindnet/HairPin 0.13
380 TestNetworkPlugins/group/calico/ControllerPod 6.01
381 TestNetworkPlugins/group/calico/KubeletFlags 0.35
382 TestNetworkPlugins/group/calico/NetCatPod 10.24
383 TestNetworkPlugins/group/custom-flannel/KubeletFlags 0.29
384 TestNetworkPlugins/group/custom-flannel/NetCatPod 10.2
385 TestNetworkPlugins/group/calico/DNS 0.15
386 TestNetworkPlugins/group/calico/Localhost 0.11
387 TestNetworkPlugins/group/calico/HairPin 0.11
388 TestNetworkPlugins/group/false/Start 42.61
389 TestNetworkPlugins/group/custom-flannel/DNS 0.14
390 TestNetworkPlugins/group/custom-flannel/Localhost 0.11
391 TestNetworkPlugins/group/custom-flannel/HairPin 0.14
392 TestNetworkPlugins/group/enable-default-cni/Start 36.52
393 TestNetworkPlugins/group/flannel/Start 44.82
394 TestNetworkPlugins/group/false/KubeletFlags 0.26
395 TestNetworkPlugins/group/false/NetCatPod 10.2
396 TestNetworkPlugins/group/false/DNS 21.05
397 TestNetworkPlugins/group/enable-default-cni/KubeletFlags 0.25
398 TestNetworkPlugins/group/enable-default-cni/NetCatPod 9.17
399 TestNetworkPlugins/group/enable-default-cni/DNS 0.15
400 TestNetworkPlugins/group/enable-default-cni/Localhost 0.12
401 TestNetworkPlugins/group/enable-default-cni/HairPin 0.14
402 TestNetworkPlugins/group/false/Localhost 0.12
403 TestNetworkPlugins/group/false/HairPin 0.11
404 TestNetworkPlugins/group/flannel/ControllerPod 6.01
405 TestNetworkPlugins/group/flannel/KubeletFlags 0.3
406 TestNetworkPlugins/group/flannel/NetCatPod 9.18
407 TestNetworkPlugins/group/bridge/Start 39
408 TestNetworkPlugins/group/flannel/DNS 0.16
409 TestNetworkPlugins/group/flannel/Localhost 0.14
410 TestNetworkPlugins/group/flannel/HairPin 0.12
411 TestNetworkPlugins/group/kubenet/Start 34.46
412 TestNetworkPlugins/group/bridge/KubeletFlags 0.28
413 TestNetworkPlugins/group/bridge/NetCatPod 9.19
414 TestNetworkPlugins/group/kubenet/KubeletFlags 0.25
415 TestNetworkPlugins/group/kubenet/NetCatPod 10.17
416 TestNetworkPlugins/group/bridge/DNS 21.69
417 TestNetworkPlugins/group/kubenet/DNS 0.13
418 TestNetworkPlugins/group/kubenet/Localhost 0.1
419 TestNetworkPlugins/group/kubenet/HairPin 0.1
420 TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop 6
421 TestStartStop/group/embed-certs/serial/AddonExistsAfterStop 5.08
422 TestStartStop/group/embed-certs/serial/VerifyKubernetesImages 0.2
423 TestStartStop/group/embed-certs/serial/Pause 2.44
424 TestNetworkPlugins/group/bridge/Localhost 0.13
425 TestNetworkPlugins/group/bridge/HairPin 0.12
TestDownloadOnly/v1.20.0/json-events (22.77s)

=== RUN   TestDownloadOnly/v1.20.0/json-events
aaa_download_only_test.go:81: (dbg) Run:  out/minikube-linux-amd64 start -o=json --download-only -p download-only-526099 --force --alsologtostderr --kubernetes-version=v1.20.0 --container-runtime=docker --driver=docker  --container-runtime=docker
aaa_download_only_test.go:81: (dbg) Done: out/minikube-linux-amd64 start -o=json --download-only -p download-only-526099 --force --alsologtostderr --kubernetes-version=v1.20.0 --container-runtime=docker --driver=docker  --container-runtime=docker: (22.766251984s)
--- PASS: TestDownloadOnly/v1.20.0/json-events (22.77s)

TestDownloadOnly/v1.20.0/preload-exists (0s)

=== RUN   TestDownloadOnly/v1.20.0/preload-exists
--- PASS: TestDownloadOnly/v1.20.0/preload-exists (0.00s)

TestDownloadOnly/v1.20.0/LogsDuration (0.06s)

=== RUN   TestDownloadOnly/v1.20.0/LogsDuration
aaa_download_only_test.go:184: (dbg) Run:  out/minikube-linux-amd64 logs -p download-only-526099
aaa_download_only_test.go:184: (dbg) Non-zero exit: out/minikube-linux-amd64 logs -p download-only-526099: exit status 85 (57.031018ms)

-- stdout --
	
	==> Audit <==
	|---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	| Command |              Args              |       Profile        |  User   | Version |     Start Time      | End Time |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	| start   | -o=json --download-only        | download-only-526099 | jenkins | v1.33.1 | 31 Aug 24 22:05 UTC |          |
	|         | -p download-only-526099        |                      |         |         |                     |          |
	|         | --force --alsologtostderr      |                      |         |         |                     |          |
	|         | --kubernetes-version=v1.20.0   |                      |         |         |                     |          |
	|         | --container-runtime=docker     |                      |         |         |                     |          |
	|         | --driver=docker                |                      |         |         |                     |          |
	|         | --container-runtime=docker     |                      |         |         |                     |          |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	
	
	==> Last Start <==
	Log file created at: 2024/08/31 22:05:43
	Running on machine: ubuntu-20-agent
	Binary: Built with gc go1.22.5 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0831 22:05:43.231592   19789 out.go:345] Setting OutFile to fd 1 ...
	I0831 22:05:43.231834   19789 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0831 22:05:43.231842   19789 out.go:358] Setting ErrFile to fd 2...
	I0831 22:05:43.231847   19789 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0831 22:05:43.232004   19789 root.go:338] Updating PATH: /home/jenkins/minikube-integration/18943-12963/.minikube/bin
	W0831 22:05:43.232102   19789 root.go:314] Error reading config file at /home/jenkins/minikube-integration/18943-12963/.minikube/config/config.json: open /home/jenkins/minikube-integration/18943-12963/.minikube/config/config.json: no such file or directory
	I0831 22:05:43.232643   19789 out.go:352] Setting JSON to true
	I0831 22:05:43.233498   19789 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent","uptime":2891,"bootTime":1725139052,"procs":174,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1067-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0831 22:05:43.233568   19789 start.go:139] virtualization: kvm guest
	I0831 22:05:43.235755   19789 out.go:97] [download-only-526099] minikube v1.33.1 on Ubuntu 20.04 (kvm/amd64)
	W0831 22:05:43.235850   19789 preload.go:293] Failed to list preload files: open /home/jenkins/minikube-integration/18943-12963/.minikube/cache/preloaded-tarball: no such file or directory
	I0831 22:05:43.235890   19789 notify.go:220] Checking for updates...
	I0831 22:05:43.237174   19789 out.go:169] MINIKUBE_LOCATION=18943
	I0831 22:05:43.238410   19789 out.go:169] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0831 22:05:43.239640   19789 out.go:169] KUBECONFIG=/home/jenkins/minikube-integration/18943-12963/kubeconfig
	I0831 22:05:43.240929   19789 out.go:169] MINIKUBE_HOME=/home/jenkins/minikube-integration/18943-12963/.minikube
	I0831 22:05:43.242034   19789 out.go:169] MINIKUBE_BIN=out/minikube-linux-amd64
	W0831 22:05:43.244117   19789 out.go:321] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I0831 22:05:43.244363   19789 driver.go:392] Setting default libvirt URI to qemu:///system
	I0831 22:05:43.265373   19789 docker.go:123] docker version: linux-27.2.0:Docker Engine - Community
	I0831 22:05:43.265493   19789 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0831 22:05:43.625733   19789 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:30 OomKillDisable:true NGoroutines:53 SystemTime:2024-08-31 22:05:43.616957172 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1067-gcp OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33647943680 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent Labels:[] ExperimentalBuild:false ServerVersion:27.2.0 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:472731909fa34bd7bc9c087e4c27943f9835f111 Expected:472731909fa34bd7bc9c087e4c27943f9835f111} RuncCommit:{ID:v1.1.13-0-g58aa920 Expected:v1.1.13-0-g58aa920} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.16.2] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.29.2] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I0831 22:05:43.625835   19789 docker.go:307] overlay module found
	I0831 22:05:43.627420   19789 out.go:97] Using the docker driver based on user configuration
	I0831 22:05:43.627446   19789 start.go:297] selected driver: docker
	I0831 22:05:43.627451   19789 start.go:901] validating driver "docker" against <nil>
	I0831 22:05:43.627531   19789 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0831 22:05:43.676033   19789 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:30 OomKillDisable:true NGoroutines:53 SystemTime:2024-08-31 22:05:43.667660609 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1067-gcp OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33647943680 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent Labels:[] ExperimentalBuild:false ServerVersion:27.2.0 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:472731909fa34bd7bc9c087e4c27943f9835f111 Expected:472731909fa34bd7bc9c087e4c27943f9835f111} RuncCommit:{ID:v1.1.13-0-g58aa920 Expected:v1.1.13-0-g58aa920} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.16.2] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.29.2] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I0831 22:05:43.676194   19789 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0831 22:05:43.676719   19789 start_flags.go:393] Using suggested 8000MB memory alloc based on sys=32089MB, container=32089MB
	I0831 22:05:43.676897   19789 start_flags.go:929] Wait components to verify : map[apiserver:true system_pods:true]
	I0831 22:05:43.678487   19789 out.go:169] Using Docker driver with root privileges
	I0831 22:05:43.679586   19789 cni.go:84] Creating CNI manager for ""
	I0831 22:05:43.679614   19789 cni.go:162] CNI unnecessary in this configuration, recommending no CNI
	I0831 22:05:43.679693   19789 start.go:340] cluster config:
	{Name:download-only-526099 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1724862063-19530@sha256:fd0f41868bf20a720502cce04c5201bfb064f3c267161af6fd5265d69c85c9f0 Memory:8000 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:download-only-526099 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0831 22:05:43.681153   19789 out.go:97] Starting "download-only-526099" primary control-plane node in "download-only-526099" cluster
	I0831 22:05:43.681179   19789 cache.go:121] Beginning downloading kic base image for docker with docker
	I0831 22:05:43.682521   19789 out.go:97] Pulling base image v0.0.44-1724862063-19530 ...
	I0831 22:05:43.682549   19789 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime docker
	I0831 22:05:43.682577   19789 image.go:79] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1724862063-19530@sha256:fd0f41868bf20a720502cce04c5201bfb064f3c267161af6fd5265d69c85c9f0 in local docker daemon
	I0831 22:05:43.697827   19789 cache.go:149] Downloading gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1724862063-19530@sha256:fd0f41868bf20a720502cce04c5201bfb064f3c267161af6fd5265d69c85c9f0 to local cache
	I0831 22:05:43.697999   19789 image.go:63] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1724862063-19530@sha256:fd0f41868bf20a720502cce04c5201bfb064f3c267161af6fd5265d69c85c9f0 in local cache directory
	I0831 22:05:43.698107   19789 image.go:148] Writing gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1724862063-19530@sha256:fd0f41868bf20a720502cce04c5201bfb064f3c267161af6fd5265d69c85c9f0 to local cache
	I0831 22:05:43.855187   19789 preload.go:118] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.20.0/preloaded-images-k8s-v18-v1.20.0-docker-overlay2-amd64.tar.lz4
	I0831 22:05:43.855215   19789 cache.go:56] Caching tarball of preloaded images
	I0831 22:05:43.855365   19789 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime docker
	I0831 22:05:43.857156   19789 out.go:97] Downloading Kubernetes v1.20.0 preload ...
	I0831 22:05:43.857174   19789 preload.go:236] getting checksum for preloaded-images-k8s-v18-v1.20.0-docker-overlay2-amd64.tar.lz4 ...
	I0831 22:05:43.962301   19789 download.go:107] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.20.0/preloaded-images-k8s-v18-v1.20.0-docker-overlay2-amd64.tar.lz4?checksum=md5:9a82241e9b8b4ad2b5cca73108f2c7a3 -> /home/jenkins/minikube-integration/18943-12963/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-docker-overlay2-amd64.tar.lz4
	I0831 22:05:54.559794   19789 preload.go:247] saving checksum for preloaded-images-k8s-v18-v1.20.0-docker-overlay2-amd64.tar.lz4 ...
	I0831 22:05:54.559890   19789 preload.go:254] verifying checksum of /home/jenkins/minikube-integration/18943-12963/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-docker-overlay2-amd64.tar.lz4 ...
	I0831 22:05:55.328839   19789 cache.go:59] Finished verifying existence of preloaded tar for v1.20.0 on docker
	I0831 22:05:55.329271   19789 profile.go:143] Saving config to /home/jenkins/minikube-integration/18943-12963/.minikube/profiles/download-only-526099/config.json ...
	I0831 22:05:55.329302   19789 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18943-12963/.minikube/profiles/download-only-526099/config.json: {Name:mkb2f62bd00f01418497adb7de062e841699f1a0 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0831 22:05:55.329471   19789 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime docker
	I0831 22:05:55.329646   19789 download.go:107] Downloading: https://dl.k8s.io/release/v1.20.0/bin/linux/amd64/kubectl?checksum=file:https://dl.k8s.io/release/v1.20.0/bin/linux/amd64/kubectl.sha256 -> /home/jenkins/minikube-integration/18943-12963/.minikube/cache/linux/amd64/v1.20.0/kubectl
	
	
	* The control-plane node download-only-526099 host does not exist
	  To start a cluster, run: "minikube start -p download-only-526099"

-- /stdout --
aaa_download_only_test.go:185: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.20.0/LogsDuration (0.06s)

TestDownloadOnly/v1.20.0/DeleteAll (0.19s)

=== RUN   TestDownloadOnly/v1.20.0/DeleteAll
aaa_download_only_test.go:197: (dbg) Run:  out/minikube-linux-amd64 delete --all
--- PASS: TestDownloadOnly/v1.20.0/DeleteAll (0.19s)

TestDownloadOnly/v1.20.0/DeleteAlwaysSucceeds (0.12s)

=== RUN   TestDownloadOnly/v1.20.0/DeleteAlwaysSucceeds
aaa_download_only_test.go:208: (dbg) Run:  out/minikube-linux-amd64 delete -p download-only-526099
--- PASS: TestDownloadOnly/v1.20.0/DeleteAlwaysSucceeds (0.12s)

TestDownloadOnly/v1.31.0/json-events (11.13s)

=== RUN   TestDownloadOnly/v1.31.0/json-events
aaa_download_only_test.go:81: (dbg) Run:  out/minikube-linux-amd64 start -o=json --download-only -p download-only-316100 --force --alsologtostderr --kubernetes-version=v1.31.0 --container-runtime=docker --driver=docker  --container-runtime=docker
aaa_download_only_test.go:81: (dbg) Done: out/minikube-linux-amd64 start -o=json --download-only -p download-only-316100 --force --alsologtostderr --kubernetes-version=v1.31.0 --container-runtime=docker --driver=docker  --container-runtime=docker: (11.132328243s)
--- PASS: TestDownloadOnly/v1.31.0/json-events (11.13s)

TestDownloadOnly/v1.31.0/preload-exists (0s)

=== RUN   TestDownloadOnly/v1.31.0/preload-exists
--- PASS: TestDownloadOnly/v1.31.0/preload-exists (0.00s)

TestDownloadOnly/v1.31.0/LogsDuration (0.05s)

=== RUN   TestDownloadOnly/v1.31.0/LogsDuration
aaa_download_only_test.go:184: (dbg) Run:  out/minikube-linux-amd64 logs -p download-only-316100
aaa_download_only_test.go:184: (dbg) Non-zero exit: out/minikube-linux-amd64 logs -p download-only-316100: exit status 85 (53.58156ms)

-- stdout --
	
	==> Audit <==
	|---------|--------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| Command |              Args              |       Profile        |  User   | Version |     Start Time      |      End Time       |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| start   | -o=json --download-only        | download-only-526099 | jenkins | v1.33.1 | 31 Aug 24 22:05 UTC |                     |
	|         | -p download-only-526099        |                      |         |         |                     |                     |
	|         | --force --alsologtostderr      |                      |         |         |                     |                     |
	|         | --kubernetes-version=v1.20.0   |                      |         |         |                     |                     |
	|         | --container-runtime=docker     |                      |         |         |                     |                     |
	|         | --driver=docker                |                      |         |         |                     |                     |
	|         | --container-runtime=docker     |                      |         |         |                     |                     |
	| delete  | --all                          | minikube             | jenkins | v1.33.1 | 31 Aug 24 22:06 UTC | 31 Aug 24 22:06 UTC |
	| delete  | -p download-only-526099        | download-only-526099 | jenkins | v1.33.1 | 31 Aug 24 22:06 UTC | 31 Aug 24 22:06 UTC |
	| start   | -o=json --download-only        | download-only-316100 | jenkins | v1.33.1 | 31 Aug 24 22:06 UTC |                     |
	|         | -p download-only-316100        |                      |         |         |                     |                     |
	|         | --force --alsologtostderr      |                      |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.0   |                      |         |         |                     |                     |
	|         | --container-runtime=docker     |                      |         |         |                     |                     |
	|         | --driver=docker                |                      |         |         |                     |                     |
	|         | --container-runtime=docker     |                      |         |         |                     |                     |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/08/31 22:06:06
	Running on machine: ubuntu-20-agent
	Binary: Built with gc go1.22.5 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0831 22:06:06.368136   20187 out.go:345] Setting OutFile to fd 1 ...
	I0831 22:06:06.368364   20187 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0831 22:06:06.368372   20187 out.go:358] Setting ErrFile to fd 2...
	I0831 22:06:06.368376   20187 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0831 22:06:06.368536   20187 root.go:338] Updating PATH: /home/jenkins/minikube-integration/18943-12963/.minikube/bin
	I0831 22:06:06.369049   20187 out.go:352] Setting JSON to true
	I0831 22:06:06.369881   20187 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent","uptime":2914,"bootTime":1725139052,"procs":172,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1067-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0831 22:06:06.369940   20187 start.go:139] virtualization: kvm guest
	I0831 22:06:06.372213   20187 out.go:97] [download-only-316100] minikube v1.33.1 on Ubuntu 20.04 (kvm/amd64)
	I0831 22:06:06.372324   20187 notify.go:220] Checking for updates...
	I0831 22:06:06.373802   20187 out.go:169] MINIKUBE_LOCATION=18943
	I0831 22:06:06.375344   20187 out.go:169] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0831 22:06:06.376811   20187 out.go:169] KUBECONFIG=/home/jenkins/minikube-integration/18943-12963/kubeconfig
	I0831 22:06:06.378329   20187 out.go:169] MINIKUBE_HOME=/home/jenkins/minikube-integration/18943-12963/.minikube
	I0831 22:06:06.379584   20187 out.go:169] MINIKUBE_BIN=out/minikube-linux-amd64
	W0831 22:06:06.381837   20187 out.go:321] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I0831 22:06:06.382041   20187 driver.go:392] Setting default libvirt URI to qemu:///system
	I0831 22:06:06.403183   20187 docker.go:123] docker version: linux-27.2.0:Docker Engine - Community
	I0831 22:06:06.403304   20187 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0831 22:06:06.449603   20187 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:30 OomKillDisable:true NGoroutines:45 SystemTime:2024-08-31 22:06:06.440888391 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1067-gcp OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33647943680 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent Labels:[] ExperimentalBuild:false ServerVersion:27.2.0 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:472731909fa34bd7bc9c087e4c27943f9835f111 Expected:472731909fa34bd7bc9c087e4c27943f9835f111} RuncCommit:{ID:v1.1.13-0-g58aa920 Expected:v1.1.13-0-g58aa920} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.16.2] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.29.2] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I0831 22:06:06.449704   20187 docker.go:307] overlay module found
	I0831 22:06:06.451687   20187 out.go:97] Using the docker driver based on user configuration
	I0831 22:06:06.451722   20187 start.go:297] selected driver: docker
	I0831 22:06:06.451731   20187 start.go:901] validating driver "docker" against <nil>
	I0831 22:06:06.451803   20187 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0831 22:06:06.498373   20187 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:30 OomKillDisable:true NGoroutines:45 SystemTime:2024-08-31 22:06:06.490218225 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1067-gcp OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33647943680 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent Labels:[] ExperimentalBuild:false ServerVersion:27.2.0 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:472731909fa34bd7bc9c087e4c27943f9835f111 Expected:472731909fa34bd7bc9c087e4c27943f9835f111} RuncCommit:{ID:v1.1.13-0-g58aa920 Expected:v1.1.13-0-g58aa920} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.16.2] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.29.2] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I0831 22:06:06.498518   20187 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0831 22:06:06.498998   20187 start_flags.go:393] Using suggested 8000MB memory alloc based on sys=32089MB, container=32089MB
	I0831 22:06:06.499120   20187 start_flags.go:929] Wait components to verify : map[apiserver:true system_pods:true]
	I0831 22:06:06.500839   20187 out.go:169] Using Docker driver with root privileges
	I0831 22:06:06.502035   20187 cni.go:84] Creating CNI manager for ""
	I0831 22:06:06.502065   20187 cni.go:158] "docker" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0831 22:06:06.502075   20187 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0831 22:06:06.502146   20187 start.go:340] cluster config:
	{Name:download-only-316100 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1724862063-19530@sha256:fd0f41868bf20a720502cce04c5201bfb064f3c267161af6fd5265d69c85c9f0 Memory:8000 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:download-only-316100 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0831 22:06:06.503336   20187 out.go:97] Starting "download-only-316100" primary control-plane node in "download-only-316100" cluster
	I0831 22:06:06.503348   20187 cache.go:121] Beginning downloading kic base image for docker with docker
	I0831 22:06:06.504569   20187 out.go:97] Pulling base image v0.0.44-1724862063-19530 ...
	I0831 22:06:06.504590   20187 preload.go:131] Checking if preload exists for k8s version v1.31.0 and runtime docker
	I0831 22:06:06.504632   20187 image.go:79] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1724862063-19530@sha256:fd0f41868bf20a720502cce04c5201bfb064f3c267161af6fd5265d69c85c9f0 in local docker daemon
	I0831 22:06:06.520491   20187 cache.go:149] Downloading gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1724862063-19530@sha256:fd0f41868bf20a720502cce04c5201bfb064f3c267161af6fd5265d69c85c9f0 to local cache
	I0831 22:06:06.520612   20187 image.go:63] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1724862063-19530@sha256:fd0f41868bf20a720502cce04c5201bfb064f3c267161af6fd5265d69c85c9f0 in local cache directory
	I0831 22:06:06.520627   20187 image.go:66] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1724862063-19530@sha256:fd0f41868bf20a720502cce04c5201bfb064f3c267161af6fd5265d69c85c9f0 in local cache directory, skipping pull
	I0831 22:06:06.520631   20187 image.go:135] gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1724862063-19530@sha256:fd0f41868bf20a720502cce04c5201bfb064f3c267161af6fd5265d69c85c9f0 exists in cache, skipping pull
	I0831 22:06:06.520638   20187 cache.go:152] successfully saved gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1724862063-19530@sha256:fd0f41868bf20a720502cce04c5201bfb064f3c267161af6fd5265d69c85c9f0 as a tarball
	I0831 22:06:06.613227   20187 preload.go:118] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.31.0/preloaded-images-k8s-v18-v1.31.0-docker-overlay2-amd64.tar.lz4
	I0831 22:06:06.613257   20187 cache.go:56] Caching tarball of preloaded images
	I0831 22:06:06.613410   20187 preload.go:131] Checking if preload exists for k8s version v1.31.0 and runtime docker
	I0831 22:06:06.615243   20187 out.go:97] Downloading Kubernetes v1.31.0 preload ...
	I0831 22:06:06.615259   20187 preload.go:236] getting checksum for preloaded-images-k8s-v18-v1.31.0-docker-overlay2-amd64.tar.lz4 ...
	I0831 22:06:06.722540   20187 download.go:107] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.31.0/preloaded-images-k8s-v18-v1.31.0-docker-overlay2-amd64.tar.lz4?checksum=md5:2dd98f97b896d7a4f012ee403b477cc8 -> /home/jenkins/minikube-integration/18943-12963/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-docker-overlay2-amd64.tar.lz4
	I0831 22:06:15.900494   20187 preload.go:247] saving checksum for preloaded-images-k8s-v18-v1.31.0-docker-overlay2-amd64.tar.lz4 ...
	I0831 22:06:15.900589   20187 preload.go:254] verifying checksum of /home/jenkins/minikube-integration/18943-12963/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-docker-overlay2-amd64.tar.lz4 ...
	
	
	* The control-plane node download-only-316100 host does not exist
	  To start a cluster, run: "minikube start -p download-only-316100"

-- /stdout --
aaa_download_only_test.go:185: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.31.0/LogsDuration (0.05s)

TestDownloadOnly/v1.31.0/DeleteAll (0.18s)

=== RUN   TestDownloadOnly/v1.31.0/DeleteAll
aaa_download_only_test.go:197: (dbg) Run:  out/minikube-linux-amd64 delete --all
--- PASS: TestDownloadOnly/v1.31.0/DeleteAll (0.18s)

TestDownloadOnly/v1.31.0/DeleteAlwaysSucceeds (0.12s)

=== RUN   TestDownloadOnly/v1.31.0/DeleteAlwaysSucceeds
aaa_download_only_test.go:208: (dbg) Run:  out/minikube-linux-amd64 delete -p download-only-316100
--- PASS: TestDownloadOnly/v1.31.0/DeleteAlwaysSucceeds (0.12s)

TestDownloadOnlyKic (1.85s)

=== RUN   TestDownloadOnlyKic
aaa_download_only_test.go:232: (dbg) Run:  out/minikube-linux-amd64 start --download-only -p download-docker-159852 --alsologtostderr --driver=docker  --container-runtime=docker
helpers_test.go:176: Cleaning up "download-docker-159852" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-linux-amd64 delete -p download-docker-159852
--- PASS: TestDownloadOnlyKic (1.85s)

TestBinaryMirror (0.94s)

=== RUN   TestBinaryMirror
aaa_download_only_test.go:314: (dbg) Run:  out/minikube-linux-amd64 start --download-only -p binary-mirror-680673 --alsologtostderr --binary-mirror http://127.0.0.1:45077 --driver=docker  --container-runtime=docker
helpers_test.go:176: Cleaning up "binary-mirror-680673" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-linux-amd64 delete -p binary-mirror-680673
--- PASS: TestBinaryMirror (0.94s)

TestOffline (75.33s)

=== RUN   TestOffline
=== PAUSE TestOffline

=== CONT  TestOffline
aab_offline_test.go:55: (dbg) Run:  out/minikube-linux-amd64 start -p offline-docker-125185 --alsologtostderr -v=1 --memory=2048 --wait=true --driver=docker  --container-runtime=docker
aab_offline_test.go:55: (dbg) Done: out/minikube-linux-amd64 start -p offline-docker-125185 --alsologtostderr -v=1 --memory=2048 --wait=true --driver=docker  --container-runtime=docker: (1m13.260670426s)
helpers_test.go:176: Cleaning up "offline-docker-125185" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-linux-amd64 delete -p offline-docker-125185
helpers_test.go:179: (dbg) Done: out/minikube-linux-amd64 delete -p offline-docker-125185: (2.064777925s)
--- PASS: TestOffline (75.33s)

TestAddons/PreSetup/EnablingAddonOnNonExistingCluster (0.05s)

=== RUN   TestAddons/PreSetup/EnablingAddonOnNonExistingCluster
=== PAUSE TestAddons/PreSetup/EnablingAddonOnNonExistingCluster

=== CONT  TestAddons/PreSetup/EnablingAddonOnNonExistingCluster
addons_test.go:1037: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p addons-062019
addons_test.go:1037: (dbg) Non-zero exit: out/minikube-linux-amd64 addons enable dashboard -p addons-062019: exit status 85 (50.180597ms)

-- stdout --
	* Profile "addons-062019" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p addons-062019"

-- /stdout --
--- PASS: TestAddons/PreSetup/EnablingAddonOnNonExistingCluster (0.05s)

TestAddons/PreSetup/DisablingAddonOnNonExistingCluster (0.05s)

=== RUN   TestAddons/PreSetup/DisablingAddonOnNonExistingCluster
=== PAUSE TestAddons/PreSetup/DisablingAddonOnNonExistingCluster

=== CONT  TestAddons/PreSetup/DisablingAddonOnNonExistingCluster
addons_test.go:1048: (dbg) Run:  out/minikube-linux-amd64 addons disable dashboard -p addons-062019
addons_test.go:1048: (dbg) Non-zero exit: out/minikube-linux-amd64 addons disable dashboard -p addons-062019: exit status 85 (49.141729ms)

-- stdout --
	* Profile "addons-062019" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p addons-062019"

-- /stdout --
--- PASS: TestAddons/PreSetup/DisablingAddonOnNonExistingCluster (0.05s)

TestAddons/Setup (210.44s)

=== RUN   TestAddons/Setup
addons_test.go:110: (dbg) Run:  out/minikube-linux-amd64 start -p addons-062019 --wait=true --memory=4000 --alsologtostderr --addons=registry --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=storage-provisioner-rancher --addons=nvidia-device-plugin --addons=yakd --addons=volcano --driver=docker  --container-runtime=docker --addons=ingress --addons=ingress-dns --addons=helm-tiller
addons_test.go:110: (dbg) Done: out/minikube-linux-amd64 start -p addons-062019 --wait=true --memory=4000 --alsologtostderr --addons=registry --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=storage-provisioner-rancher --addons=nvidia-device-plugin --addons=yakd --addons=volcano --driver=docker  --container-runtime=docker --addons=ingress --addons=ingress-dns --addons=helm-tiller: (3m30.435829681s)
--- PASS: TestAddons/Setup (210.44s)

TestAddons/serial/Volcano (40.6s)

=== RUN   TestAddons/serial/Volcano
addons_test.go:905: volcano-admission stabilized in 10.176538ms
addons_test.go:913: volcano-controller stabilized in 10.215448ms
addons_test.go:897: volcano-scheduler stabilized in 10.276986ms
addons_test.go:919: (dbg) TestAddons/serial/Volcano: waiting 6m0s for pods matching "app=volcano-scheduler" in namespace "volcano-system" ...
helpers_test.go:345: "volcano-scheduler-576bc46687-88xbt" [fa551289-ec2f-47bc-b371-9d714cc1cfdf] Running
addons_test.go:919: (dbg) TestAddons/serial/Volcano: app=volcano-scheduler healthy within 5.00295529s
addons_test.go:923: (dbg) TestAddons/serial/Volcano: waiting 6m0s for pods matching "app=volcano-admission" in namespace "volcano-system" ...
helpers_test.go:345: "volcano-admission-77d7d48b68-7flvc" [ad3555cb-d09e-4d17-b3f3-78e168aa49c5] Running
addons_test.go:923: (dbg) TestAddons/serial/Volcano: app=volcano-admission healthy within 5.003886124s
addons_test.go:927: (dbg) TestAddons/serial/Volcano: waiting 6m0s for pods matching "app=volcano-controller" in namespace "volcano-system" ...
helpers_test.go:345: "volcano-controllers-56675bb4d5-4k54k" [ad6df19d-54b8-4749-9ca5-d9c0313365a6] Running
addons_test.go:927: (dbg) TestAddons/serial/Volcano: app=volcano-controller healthy within 5.003522174s
addons_test.go:932: (dbg) Run:  kubectl --context addons-062019 delete -n volcano-system job volcano-admission-init
addons_test.go:938: (dbg) Run:  kubectl --context addons-062019 create -f testdata/vcjob.yaml
addons_test.go:946: (dbg) Run:  kubectl --context addons-062019 get vcjob -n my-volcano
addons_test.go:964: (dbg) TestAddons/serial/Volcano: waiting 3m0s for pods matching "volcano.sh/job-name=test-job" in namespace "my-volcano" ...
helpers_test.go:345: "test-job-nginx-0" [e230477b-009b-443e-9aca-d3ff466a1cb3] Pending
helpers_test.go:345: "test-job-nginx-0" [e230477b-009b-443e-9aca-d3ff466a1cb3] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
helpers_test.go:345: "test-job-nginx-0" [e230477b-009b-443e-9aca-d3ff466a1cb3] Running
addons_test.go:964: (dbg) TestAddons/serial/Volcano: volcano.sh/job-name=test-job healthy within 15.003891831s
addons_test.go:968: (dbg) Run:  out/minikube-linux-amd64 -p addons-062019 addons disable volcano --alsologtostderr -v=1
addons_test.go:968: (dbg) Done: out/minikube-linux-amd64 -p addons-062019 addons disable volcano --alsologtostderr -v=1: (10.262678897s)
--- PASS: TestAddons/serial/Volcano (40.60s)

TestAddons/serial/GCPAuth/Namespaces (0.11s)

=== RUN   TestAddons/serial/GCPAuth/Namespaces
addons_test.go:656: (dbg) Run:  kubectl --context addons-062019 create ns new-namespace
addons_test.go:670: (dbg) Run:  kubectl --context addons-062019 get secret gcp-auth -n new-namespace
--- PASS: TestAddons/serial/GCPAuth/Namespaces (0.11s)

TestAddons/parallel/Ingress (22.83s)

=== RUN   TestAddons/parallel/Ingress
=== PAUSE TestAddons/parallel/Ingress

=== CONT  TestAddons/parallel/Ingress
addons_test.go:209: (dbg) Run:  kubectl --context addons-062019 wait --for=condition=ready --namespace=ingress-nginx pod --selector=app.kubernetes.io/component=controller --timeout=90s
addons_test.go:234: (dbg) Run:  kubectl --context addons-062019 replace --force -f testdata/nginx-ingress-v1.yaml
addons_test.go:247: (dbg) Run:  kubectl --context addons-062019 replace --force -f testdata/nginx-pod-svc.yaml
addons_test.go:252: (dbg) TestAddons/parallel/Ingress: waiting 8m0s for pods matching "run=nginx" in namespace "default" ...
helpers_test.go:345: "nginx" [4a83af42-16e6-4c62-a49e-c909f0082378] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
helpers_test.go:345: "nginx" [4a83af42-16e6-4c62-a49e-c909f0082378] Running
addons_test.go:252: (dbg) TestAddons/parallel/Ingress: run=nginx healthy within 13.003461492s
addons_test.go:264: (dbg) Run:  out/minikube-linux-amd64 -p addons-062019 ssh "curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'"
addons_test.go:288: (dbg) Run:  kubectl --context addons-062019 replace --force -f testdata/ingress-dns-example-v1.yaml
addons_test.go:293: (dbg) Run:  out/minikube-linux-amd64 -p addons-062019 ip
addons_test.go:299: (dbg) Run:  nslookup hello-john.test 192.168.49.2
addons_test.go:308: (dbg) Run:  out/minikube-linux-amd64 -p addons-062019 addons disable ingress-dns --alsologtostderr -v=1
addons_test.go:308: (dbg) Done: out/minikube-linux-amd64 -p addons-062019 addons disable ingress-dns --alsologtostderr -v=1: (1.01164403s)
addons_test.go:313: (dbg) Run:  out/minikube-linux-amd64 -p addons-062019 addons disable ingress --alsologtostderr -v=1
addons_test.go:313: (dbg) Done: out/minikube-linux-amd64 -p addons-062019 addons disable ingress --alsologtostderr -v=1: (7.60348795s)
--- PASS: TestAddons/parallel/Ingress (22.83s)

TestAddons/parallel/InspektorGadget (10.98s)

=== RUN   TestAddons/parallel/InspektorGadget
=== PAUSE TestAddons/parallel/InspektorGadget

=== CONT  TestAddons/parallel/InspektorGadget
addons_test.go:848: (dbg) TestAddons/parallel/InspektorGadget: waiting 8m0s for pods matching "k8s-app=gadget" in namespace "gadget" ...
helpers_test.go:345: "gadget-sr7l7" [1460fbfd-68b4-4fe5-bb6e-84daa32d21b9] Running / Ready:ContainersNotReady (containers with unready status: [gadget]) / ContainersReady:ContainersNotReady (containers with unready status: [gadget])
addons_test.go:848: (dbg) TestAddons/parallel/InspektorGadget: k8s-app=gadget healthy within 5.00423759s
addons_test.go:851: (dbg) Run:  out/minikube-linux-amd64 addons disable inspektor-gadget -p addons-062019
addons_test.go:851: (dbg) Done: out/minikube-linux-amd64 addons disable inspektor-gadget -p addons-062019: (5.979706909s)
--- PASS: TestAddons/parallel/InspektorGadget (10.98s)

TestAddons/parallel/MetricsServer (6.54s)

=== RUN   TestAddons/parallel/MetricsServer
=== PAUSE TestAddons/parallel/MetricsServer

=== CONT  TestAddons/parallel/MetricsServer
addons_test.go:409: metrics-server stabilized in 2.36431ms
addons_test.go:411: (dbg) TestAddons/parallel/MetricsServer: waiting 6m0s for pods matching "k8s-app=metrics-server" in namespace "kube-system" ...
helpers_test.go:345: "metrics-server-84c5f94fbc-f95vb" [f6b764bb-e039-4baa-bc7e-1cbb1dfa8c04] Running
addons_test.go:411: (dbg) TestAddons/parallel/MetricsServer: k8s-app=metrics-server healthy within 6.003224088s
addons_test.go:417: (dbg) Run:  kubectl --context addons-062019 top pods -n kube-system
addons_test.go:434: (dbg) Run:  out/minikube-linux-amd64 -p addons-062019 addons disable metrics-server --alsologtostderr -v=1
--- PASS: TestAddons/parallel/MetricsServer (6.54s)

TestAddons/parallel/HelmTiller (10.98s)

=== RUN   TestAddons/parallel/HelmTiller
=== PAUSE TestAddons/parallel/HelmTiller

=== CONT  TestAddons/parallel/HelmTiller
addons_test.go:458: tiller-deploy stabilized in 1.998583ms
addons_test.go:460: (dbg) TestAddons/parallel/HelmTiller: waiting 6m0s for pods matching "app=helm" in namespace "kube-system" ...
helpers_test.go:345: "tiller-deploy-b48cc5f79-89fbx" [7c41af58-b085-4eb2-97a3-5ec58bd639c0] Running
addons_test.go:460: (dbg) TestAddons/parallel/HelmTiller: app=helm healthy within 5.002714869s
addons_test.go:475: (dbg) Run:  kubectl --context addons-062019 run --rm helm-test --restart=Never --image=docker.io/alpine/helm:2.16.3 -it --namespace=kube-system -- version
addons_test.go:475: (dbg) Done: kubectl --context addons-062019 run --rm helm-test --restart=Never --image=docker.io/alpine/helm:2.16.3 -it --namespace=kube-system -- version: (5.199072032s)
addons_test.go:492: (dbg) Run:  out/minikube-linux-amd64 -p addons-062019 addons disable helm-tiller --alsologtostderr -v=1
--- PASS: TestAddons/parallel/HelmTiller (10.98s)

TestAddons/parallel/CSI (51s)

=== RUN   TestAddons/parallel/CSI
=== PAUSE TestAddons/parallel/CSI

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/CSI
addons_test.go:567: csi-hostpath-driver pods stabilized in 4.171238ms
addons_test.go:570: (dbg) Run:  kubectl --context addons-062019 create -f testdata/csi-hostpath-driver/pvc.yaml
addons_test.go:575: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pvc "hpvc" in namespace "default" ...
helpers_test.go:395: (dbg) Run:  kubectl --context addons-062019 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:395: (dbg) Run:  kubectl --context addons-062019 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:395: (dbg) Run:  kubectl --context addons-062019 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:395: (dbg) Run:  kubectl --context addons-062019 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:395: (dbg) Run:  kubectl --context addons-062019 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:395: (dbg) Run:  kubectl --context addons-062019 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:395: (dbg) Run:  kubectl --context addons-062019 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:395: (dbg) Run:  kubectl --context addons-062019 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:395: (dbg) Run:  kubectl --context addons-062019 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:395: (dbg) Run:  kubectl --context addons-062019 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:395: (dbg) Run:  kubectl --context addons-062019 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:395: (dbg) Run:  kubectl --context addons-062019 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:395: (dbg) Run:  kubectl --context addons-062019 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:395: (dbg) Run:  kubectl --context addons-062019 get pvc hpvc -o jsonpath={.status.phase} -n default
addons_test.go:580: (dbg) Run:  kubectl --context addons-062019 create -f testdata/csi-hostpath-driver/pv-pod.yaml
addons_test.go:585: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pods matching "app=task-pv-pod" in namespace "default" ...
helpers_test.go:345: "task-pv-pod" [1b428e9b-6d8c-44f8-badb-eac70d8eed07] Pending
helpers_test.go:345: "task-pv-pod" [1b428e9b-6d8c-44f8-badb-eac70d8eed07] Pending / Ready:ContainersNotReady (containers with unready status: [task-pv-container]) / ContainersReady:ContainersNotReady (containers with unready status: [task-pv-container])
helpers_test.go:345: "task-pv-pod" [1b428e9b-6d8c-44f8-badb-eac70d8eed07] Running
addons_test.go:585: (dbg) TestAddons/parallel/CSI: app=task-pv-pod healthy within 9.003473341s
addons_test.go:590: (dbg) Run:  kubectl --context addons-062019 create -f testdata/csi-hostpath-driver/snapshot.yaml
addons_test.go:595: (dbg) TestAddons/parallel/CSI: waiting 6m0s for volume snapshot "new-snapshot-demo" in namespace "default" ...
helpers_test.go:420: (dbg) Run:  kubectl --context addons-062019 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
helpers_test.go:420: (dbg) Run:  kubectl --context addons-062019 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
addons_test.go:600: (dbg) Run:  kubectl --context addons-062019 delete pod task-pv-pod
addons_test.go:606: (dbg) Run:  kubectl --context addons-062019 delete pvc hpvc
addons_test.go:612: (dbg) Run:  kubectl --context addons-062019 create -f testdata/csi-hostpath-driver/pvc-restore.yaml
addons_test.go:617: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pvc "hpvc-restore" in namespace "default" ...
helpers_test.go:395: (dbg) Run:  kubectl --context addons-062019 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:395: (dbg) Run:  kubectl --context addons-062019 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:395: (dbg) Run:  kubectl --context addons-062019 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:395: (dbg) Run:  kubectl --context addons-062019 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:395: (dbg) Run:  kubectl --context addons-062019 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:395: (dbg) Run:  kubectl --context addons-062019 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:395: (dbg) Run:  kubectl --context addons-062019 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:395: (dbg) Run:  kubectl --context addons-062019 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:395: (dbg) Run:  kubectl --context addons-062019 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:395: (dbg) Run:  kubectl --context addons-062019 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:395: (dbg) Run:  kubectl --context addons-062019 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:395: (dbg) Run:  kubectl --context addons-062019 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
addons_test.go:622: (dbg) Run:  kubectl --context addons-062019 create -f testdata/csi-hostpath-driver/pv-pod-restore.yaml
addons_test.go:627: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pods matching "app=task-pv-pod-restore" in namespace "default" ...
helpers_test.go:345: "task-pv-pod-restore" [a1ec13a6-45ad-4e21-a2af-6e1bff0a8bc4] Pending
helpers_test.go:345: "task-pv-pod-restore" [a1ec13a6-45ad-4e21-a2af-6e1bff0a8bc4] Pending / Ready:ContainersNotReady (containers with unready status: [task-pv-container]) / ContainersReady:ContainersNotReady (containers with unready status: [task-pv-container])
helpers_test.go:345: "task-pv-pod-restore" [a1ec13a6-45ad-4e21-a2af-6e1bff0a8bc4] Running
addons_test.go:627: (dbg) TestAddons/parallel/CSI: app=task-pv-pod-restore healthy within 7.003742625s
addons_test.go:632: (dbg) Run:  kubectl --context addons-062019 delete pod task-pv-pod-restore
addons_test.go:636: (dbg) Run:  kubectl --context addons-062019 delete pvc hpvc-restore
addons_test.go:640: (dbg) Run:  kubectl --context addons-062019 delete volumesnapshot new-snapshot-demo
addons_test.go:644: (dbg) Run:  out/minikube-linux-amd64 -p addons-062019 addons disable csi-hostpath-driver --alsologtostderr -v=1
addons_test.go:644: (dbg) Done: out/minikube-linux-amd64 -p addons-062019 addons disable csi-hostpath-driver --alsologtostderr -v=1: (6.502253647s)
addons_test.go:648: (dbg) Run:  out/minikube-linux-amd64 -p addons-062019 addons disable volumesnapshots --alsologtostderr -v=1
--- PASS: TestAddons/parallel/CSI (51.00s)
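
The runs of repeated helpers_test.go:395 lines above are a poll loop on the PVC's .status.phase, re-running the same jsonpath query until the claim reports Bound. A minimal Go sketch of that loop; the context, PVC name, and jsonpath expression come from the log, while the two-second interval is an assumption and kubectl is assumed to be on PATH:

// Sketch of the PVC polling that produces the repeated lines above: query
// .status.phase until it reads "Bound" or the deadline passes.
package main

import (
	"fmt"
	"os/exec"
	"strings"
	"time"
)

func waitPVCBound(ctx, name, ns string, timeout time.Duration) error {
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		out, _ := exec.Command("kubectl", "--context", ctx, "get", "pvc", name,
			"-o", "jsonpath={.status.phase}", "-n", ns).Output()
		if strings.TrimSpace(string(out)) == "Bound" {
			return nil
		}
		time.Sleep(2 * time.Second)
	}
	return fmt.Errorf("pvc %s/%s not Bound within %v", ns, name, timeout)
}

func main() {
	if err := waitPVCBound("addons-062019", "hpvc", "default", 6*time.Minute); err != nil {
		fmt.Println(err)
	}
}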

TestAddons/parallel/Headlamp (12.05s)

=== RUN   TestAddons/parallel/Headlamp
=== PAUSE TestAddons/parallel/Headlamp

=== CONT  TestAddons/parallel/Headlamp
addons_test.go:830: (dbg) Run:  out/minikube-linux-amd64 addons enable headlamp -p addons-062019 --alsologtostderr -v=1
addons_test.go:835: (dbg) TestAddons/parallel/Headlamp: waiting 8m0s for pods matching "app.kubernetes.io/name=headlamp" in namespace "headlamp" ...
helpers_test.go:345: "headlamp-57fb76fcdb-6bdgb" [7fbeb495-84a4-49a8-8af2-f8aae0c9ee81] Pending / Ready:ContainersNotReady (containers with unready status: [headlamp]) / ContainersReady:ContainersNotReady (containers with unready status: [headlamp])
helpers_test.go:345: "headlamp-57fb76fcdb-6bdgb" [7fbeb495-84a4-49a8-8af2-f8aae0c9ee81] Running
addons_test.go:835: (dbg) TestAddons/parallel/Headlamp: app.kubernetes.io/name=headlamp healthy within 11.003735004s
addons_test.go:839: (dbg) Run:  out/minikube-linux-amd64 -p addons-062019 addons disable headlamp --alsologtostderr -v=1
--- PASS: TestAddons/parallel/Headlamp (12.05s)

TestAddons/parallel/CloudSpanner (5.48s)

=== RUN   TestAddons/parallel/CloudSpanner
=== PAUSE TestAddons/parallel/CloudSpanner

=== CONT  TestAddons/parallel/CloudSpanner
addons_test.go:867: (dbg) TestAddons/parallel/CloudSpanner: waiting 6m0s for pods matching "app=cloud-spanner-emulator" in namespace "default" ...
helpers_test.go:345: "cloud-spanner-emulator-769b77f747-79d9z" [d971c16a-ca42-4fac-9add-e05f19663b4e] Running
addons_test.go:867: (dbg) TestAddons/parallel/CloudSpanner: app=cloud-spanner-emulator healthy within 5.002889749s
addons_test.go:870: (dbg) Run:  out/minikube-linux-amd64 addons disable cloud-spanner -p addons-062019
--- PASS: TestAddons/parallel/CloudSpanner (5.48s)

TestAddons/parallel/LocalPath (55.3s)

=== RUN   TestAddons/parallel/LocalPath
=== PAUSE TestAddons/parallel/LocalPath

=== CONT  TestAddons/parallel/LocalPath
addons_test.go:982: (dbg) Run:  kubectl --context addons-062019 apply -f testdata/storage-provisioner-rancher/pvc.yaml
addons_test.go:988: (dbg) Run:  kubectl --context addons-062019 apply -f testdata/storage-provisioner-rancher/pod.yaml
addons_test.go:992: (dbg) TestAddons/parallel/LocalPath: waiting 5m0s for pvc "test-pvc" in namespace "default" ...
helpers_test.go:395: (dbg) Run:  kubectl --context addons-062019 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:395: (dbg) Run:  kubectl --context addons-062019 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:395: (dbg) Run:  kubectl --context addons-062019 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:395: (dbg) Run:  kubectl --context addons-062019 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:395: (dbg) Run:  kubectl --context addons-062019 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:395: (dbg) Run:  kubectl --context addons-062019 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:395: (dbg) Run:  kubectl --context addons-062019 get pvc test-pvc -o jsonpath={.status.phase} -n default
addons_test.go:995: (dbg) TestAddons/parallel/LocalPath: waiting 3m0s for pods matching "run=test-local-path" in namespace "default" ...
helpers_test.go:345: "test-local-path" [7623af69-47e6-419f-b0b9-c01d2c1cdf69] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:345: "test-local-path" [7623af69-47e6-419f-b0b9-c01d2c1cdf69] Pending / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
helpers_test.go:345: "test-local-path" [7623af69-47e6-419f-b0b9-c01d2c1cdf69] Succeeded / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
addons_test.go:995: (dbg) TestAddons/parallel/LocalPath: run=test-local-path healthy within 5.004220599s
addons_test.go:1000: (dbg) Run:  kubectl --context addons-062019 get pvc test-pvc -o=json
addons_test.go:1009: (dbg) Run:  out/minikube-linux-amd64 -p addons-062019 ssh "cat /opt/local-path-provisioner/pvc-20dcccd8-e7fe-4ed6-82bc-9f7db35d0a45_default_test-pvc/file1"
addons_test.go:1021: (dbg) Run:  kubectl --context addons-062019 delete pod test-local-path
addons_test.go:1025: (dbg) Run:  kubectl --context addons-062019 delete pvc test-pvc
addons_test.go:1029: (dbg) Run:  out/minikube-linux-amd64 -p addons-062019 addons disable storage-provisioner-rancher --alsologtostderr -v=1
addons_test.go:1029: (dbg) Done: out/minikube-linux-amd64 -p addons-062019 addons disable storage-provisioner-rancher --alsologtostderr -v=1: (43.470126794s)
--- PASS: TestAddons/parallel/LocalPath (55.30s)

TestAddons/parallel/NvidiaDevicePlugin (6.41s)

=== RUN   TestAddons/parallel/NvidiaDevicePlugin
=== PAUSE TestAddons/parallel/NvidiaDevicePlugin

=== CONT  TestAddons/parallel/NvidiaDevicePlugin
addons_test.go:1061: (dbg) TestAddons/parallel/NvidiaDevicePlugin: waiting 6m0s for pods matching "name=nvidia-device-plugin-ds" in namespace "kube-system" ...
helpers_test.go:345: "nvidia-device-plugin-daemonset-cvd8z" [d6872087-20a9-403a-8d49-aaa43c16db51] Running
addons_test.go:1061: (dbg) TestAddons/parallel/NvidiaDevicePlugin: name=nvidia-device-plugin-ds healthy within 6.003208843s
addons_test.go:1064: (dbg) Run:  out/minikube-linux-amd64 addons disable nvidia-device-plugin -p addons-062019
--- PASS: TestAddons/parallel/NvidiaDevicePlugin (6.41s)

TestAddons/parallel/Yakd (11.7s)

=== RUN   TestAddons/parallel/Yakd
=== PAUSE TestAddons/parallel/Yakd

=== CONT  TestAddons/parallel/Yakd
addons_test.go:1072: (dbg) TestAddons/parallel/Yakd: waiting 2m0s for pods matching "app.kubernetes.io/name=yakd-dashboard" in namespace "yakd-dashboard" ...
helpers_test.go:345: "yakd-dashboard-67d98fc6b-vl842" [b971eb83-b4ec-4ee9-bcbe-2213e474f7ab] Running
addons_test.go:1072: (dbg) TestAddons/parallel/Yakd: app.kubernetes.io/name=yakd-dashboard healthy within 6.002968666s
addons_test.go:1076: (dbg) Run:  out/minikube-linux-amd64 -p addons-062019 addons disable yakd --alsologtostderr -v=1
addons_test.go:1076: (dbg) Done: out/minikube-linux-amd64 -p addons-062019 addons disable yakd --alsologtostderr -v=1: (5.691413395s)
--- PASS: TestAddons/parallel/Yakd (11.70s)

TestAddons/StoppedEnableDisable (10.96s)

=== RUN   TestAddons/StoppedEnableDisable
addons_test.go:174: (dbg) Run:  out/minikube-linux-amd64 stop -p addons-062019
addons_test.go:174: (dbg) Done: out/minikube-linux-amd64 stop -p addons-062019: (10.721026024s)
addons_test.go:178: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p addons-062019
addons_test.go:182: (dbg) Run:  out/minikube-linux-amd64 addons disable dashboard -p addons-062019
addons_test.go:187: (dbg) Run:  out/minikube-linux-amd64 addons disable gvisor -p addons-062019
--- PASS: TestAddons/StoppedEnableDisable (10.96s)

TestCertOptions (31.08s)

=== RUN   TestCertOptions
=== PAUSE TestCertOptions

=== CONT  TestCertOptions
cert_options_test.go:49: (dbg) Run:  out/minikube-linux-amd64 start -p cert-options-038493 --memory=2048 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=docker  --container-runtime=docker
cert_options_test.go:49: (dbg) Done: out/minikube-linux-amd64 start -p cert-options-038493 --memory=2048 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=docker  --container-runtime=docker: (28.441158728s)
cert_options_test.go:60: (dbg) Run:  out/minikube-linux-amd64 -p cert-options-038493 ssh "openssl x509 -text -noout -in /var/lib/minikube/certs/apiserver.crt"
cert_options_test.go:88: (dbg) Run:  kubectl --context cert-options-038493 config view
cert_options_test.go:100: (dbg) Run:  out/minikube-linux-amd64 ssh -p cert-options-038493 -- "sudo cat /etc/kubernetes/admin.conf"
helpers_test.go:176: Cleaning up "cert-options-038493" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-linux-amd64 delete -p cert-options-038493
helpers_test.go:179: (dbg) Done: out/minikube-linux-amd64 delete -p cert-options-038493: (2.076341572s)
--- PASS: TestCertOptions (31.08s)
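
The openssl step above is checking that the extra --apiserver-ips and --apiserver-names values ended up as SANs in the apiserver certificate, and that the cert's config reflects port 8555. A standalone Go sketch of the same SAN check, assuming the certificate has first been copied out of the node to a local PEM file (the local path is illustrative):

// Sketch: parse a PEM copy of /var/lib/minikube/certs/apiserver.crt and
// list its SANs, which is what the logged openssl invocation inspects.
package main

import (
	"crypto/x509"
	"encoding/pem"
	"fmt"
	"os"
)

func main() {
	data, err := os.ReadFile("apiserver.crt") // e.g. fetched via `minikube ssh` + copy
	if err != nil {
		panic(err)
	}
	block, _ := pem.Decode(data)
	if block == nil {
		panic("no PEM block found")
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		panic(err)
	}
	fmt.Println("DNS SANs:", cert.DNSNames)    // expect localhost and www.google.com among them
	fmt.Println("IP SANs: ", cert.IPAddresses) // expect 127.0.0.1 and 192.168.15.15
}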

TestCertExpiration (227.82s)

=== RUN   TestCertExpiration
=== PAUSE TestCertExpiration

=== CONT  TestCertExpiration
cert_options_test.go:123: (dbg) Run:  out/minikube-linux-amd64 start -p cert-expiration-105807 --memory=2048 --cert-expiration=3m --driver=docker  --container-runtime=docker
cert_options_test.go:123: (dbg) Done: out/minikube-linux-amd64 start -p cert-expiration-105807 --memory=2048 --cert-expiration=3m --driver=docker  --container-runtime=docker: (26.206779636s)
E0831 22:55:31.636665   19777 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/18943-12963/.minikube/profiles/skaffold-503448/client.crt: no such file or directory" logger="UnhandledError"
E0831 22:55:36.758683   19777 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/18943-12963/.minikube/profiles/skaffold-503448/client.crt: no such file or directory" logger="UnhandledError"
cert_options_test.go:131: (dbg) Run:  out/minikube-linux-amd64 start -p cert-expiration-105807 --memory=2048 --cert-expiration=8760h --driver=docker  --container-runtime=docker
cert_options_test.go:131: (dbg) Done: out/minikube-linux-amd64 start -p cert-expiration-105807 --memory=2048 --cert-expiration=8760h --driver=docker  --container-runtime=docker: (19.590733796s)
helpers_test.go:176: Cleaning up "cert-expiration-105807" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-linux-amd64 delete -p cert-expiration-105807
helpers_test.go:179: (dbg) Done: out/minikube-linux-amd64 delete -p cert-expiration-105807: (2.017854561s)
--- PASS: TestCertExpiration (227.82s)

TestDockerFlags (27.73s)

=== RUN   TestDockerFlags
=== PAUSE TestDockerFlags

=== CONT  TestDockerFlags
docker_test.go:51: (dbg) Run:  out/minikube-linux-amd64 start -p docker-flags-033103 --cache-images=false --memory=2048 --install-addons=false --wait=false --docker-env=FOO=BAR --docker-env=BAZ=BAT --docker-opt=debug --docker-opt=icc=true --alsologtostderr -v=5 --driver=docker  --container-runtime=docker
E0831 22:54:51.335509   19777 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/18943-12963/.minikube/profiles/addons-062019/client.crt: no such file or directory" logger="UnhandledError"
docker_test.go:51: (dbg) Done: out/minikube-linux-amd64 start -p docker-flags-033103 --cache-images=false --memory=2048 --install-addons=false --wait=false --docker-env=FOO=BAR --docker-env=BAZ=BAT --docker-opt=debug --docker-opt=icc=true --alsologtostderr -v=5 --driver=docker  --container-runtime=docker: (25.478737704s)
docker_test.go:56: (dbg) Run:  out/minikube-linux-amd64 -p docker-flags-033103 ssh "sudo systemctl show docker --property=Environment --no-pager"
docker_test.go:67: (dbg) Run:  out/minikube-linux-amd64 -p docker-flags-033103 ssh "sudo systemctl show docker --property=ExecStart --no-pager"
helpers_test.go:176: Cleaning up "docker-flags-033103" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-linux-amd64 delete -p docker-flags-033103
helpers_test.go:179: (dbg) Done: out/minikube-linux-amd64 delete -p docker-flags-033103: (1.702491155s)
--- PASS: TestDockerFlags (27.73s)
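
The two systemctl probes above are substring checks: the test asserts that each --docker-env value appears in the docker unit's Environment property and each --docker-opt in its ExecStart. A minimal Go sketch of the Environment half, reusing the binary path and profile name from the log (the containment check is the whole assertion):

// Sketch: read the docker unit's Environment through `minikube ssh` and
// confirm the env vars passed at start are present.
package main

import (
	"fmt"
	"os/exec"
	"strings"
)

func main() {
	out, err := exec.Command("out/minikube-linux-amd64", "-p", "docker-flags-033103",
		"ssh", "sudo systemctl show docker --property=Environment --no-pager").CombinedOutput()
	if err != nil {
		panic(err)
	}
	for _, want := range []string{"FOO=BAR", "BAZ=BAT"} {
		fmt.Printf("%s present: %v\n", want, strings.Contains(string(out), want))
	}
}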

TestForceSystemdFlag (26.02s)

=== RUN   TestForceSystemdFlag
=== PAUSE TestForceSystemdFlag

=== CONT  TestForceSystemdFlag
docker_test.go:91: (dbg) Run:  out/minikube-linux-amd64 start -p force-systemd-flag-801162 --memory=2048 --force-systemd --alsologtostderr -v=5 --driver=docker  --container-runtime=docker
docker_test.go:91: (dbg) Done: out/minikube-linux-amd64 start -p force-systemd-flag-801162 --memory=2048 --force-systemd --alsologtostderr -v=5 --driver=docker  --container-runtime=docker: (23.497157818s)
docker_test.go:110: (dbg) Run:  out/minikube-linux-amd64 -p force-systemd-flag-801162 ssh "docker info --format {{.CgroupDriver}}"
helpers_test.go:176: Cleaning up "force-systemd-flag-801162" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-linux-amd64 delete -p force-systemd-flag-801162
helpers_test.go:179: (dbg) Done: out/minikube-linux-amd64 delete -p force-systemd-flag-801162: (2.17248474s)
--- PASS: TestForceSystemdFlag (26.02s)

TestForceSystemdEnv (27.68s)

=== RUN   TestForceSystemdEnv
=== PAUSE TestForceSystemdEnv

=== CONT  TestForceSystemdEnv
docker_test.go:155: (dbg) Run:  out/minikube-linux-amd64 start -p force-systemd-env-703975 --memory=2048 --alsologtostderr -v=5 --driver=docker  --container-runtime=docker
docker_test.go:155: (dbg) Done: out/minikube-linux-amd64 start -p force-systemd-env-703975 --memory=2048 --alsologtostderr -v=5 --driver=docker  --container-runtime=docker: (24.97313886s)
docker_test.go:110: (dbg) Run:  out/minikube-linux-amd64 -p force-systemd-env-703975 ssh "docker info --format {{.CgroupDriver}}"
helpers_test.go:176: Cleaning up "force-systemd-env-703975" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-linux-amd64 delete -p force-systemd-env-703975
helpers_test.go:179: (dbg) Done: out/minikube-linux-amd64 delete -p force-systemd-env-703975: (2.383218884s)
--- PASS: TestForceSystemdEnv (27.68s)

TestKVMDriverInstallOrUpdate (4.51s)

=== RUN   TestKVMDriverInstallOrUpdate
=== PAUSE TestKVMDriverInstallOrUpdate

=== CONT  TestKVMDriverInstallOrUpdate
--- PASS: TestKVMDriverInstallOrUpdate (4.51s)

TestErrorSpam/setup (21.19s)

=== RUN   TestErrorSpam/setup
error_spam_test.go:81: (dbg) Run:  out/minikube-linux-amd64 start -p nospam-285197 -n=1 --memory=2250 --wait=false --log_dir=/tmp/nospam-285197 --driver=docker  --container-runtime=docker
error_spam_test.go:81: (dbg) Done: out/minikube-linux-amd64 start -p nospam-285197 -n=1 --memory=2250 --wait=false --log_dir=/tmp/nospam-285197 --driver=docker  --container-runtime=docker: (21.185227725s)
--- PASS: TestErrorSpam/setup (21.19s)

TestErrorSpam/start (0.56s)

=== RUN   TestErrorSpam/start
error_spam_test.go:216: Cleaning up 1 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-285197 --log_dir /tmp/nospam-285197 start --dry-run
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-285197 --log_dir /tmp/nospam-285197 start --dry-run
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-amd64 -p nospam-285197 --log_dir /tmp/nospam-285197 start --dry-run
--- PASS: TestErrorSpam/start (0.56s)

TestErrorSpam/status (0.84s)

=== RUN   TestErrorSpam/status
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-285197 --log_dir /tmp/nospam-285197 status
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-285197 --log_dir /tmp/nospam-285197 status
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-amd64 -p nospam-285197 --log_dir /tmp/nospam-285197 status
--- PASS: TestErrorSpam/status (0.84s)

TestErrorSpam/pause (1.14s)

=== RUN   TestErrorSpam/pause
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-285197 --log_dir /tmp/nospam-285197 pause
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-285197 --log_dir /tmp/nospam-285197 pause
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-amd64 -p nospam-285197 --log_dir /tmp/nospam-285197 pause
--- PASS: TestErrorSpam/pause (1.14s)

TestErrorSpam/unpause (1.38s)

=== RUN   TestErrorSpam/unpause
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-285197 --log_dir /tmp/nospam-285197 unpause
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-285197 --log_dir /tmp/nospam-285197 unpause
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-amd64 -p nospam-285197 --log_dir /tmp/nospam-285197 unpause
--- PASS: TestErrorSpam/unpause (1.38s)

TestErrorSpam/stop (1.34s)

=== RUN   TestErrorSpam/stop
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-285197 --log_dir /tmp/nospam-285197 stop
error_spam_test.go:159: (dbg) Done: out/minikube-linux-amd64 -p nospam-285197 --log_dir /tmp/nospam-285197 stop: (1.164689113s)
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-285197 --log_dir /tmp/nospam-285197 stop
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-amd64 -p nospam-285197 --log_dir /tmp/nospam-285197 stop
--- PASS: TestErrorSpam/stop (1.34s)

TestFunctional/serial/CopySyncFile (0s)

=== RUN   TestFunctional/serial/CopySyncFile
functional_test.go:1855: local sync path: /home/jenkins/minikube-integration/18943-12963/.minikube/files/etc/test/nested/copy/19777/hosts
--- PASS: TestFunctional/serial/CopySyncFile (0.00s)

TestFunctional/serial/StartWithProxy (35.17s)

=== RUN   TestFunctional/serial/StartWithProxy
functional_test.go:2234: (dbg) Run:  out/minikube-linux-amd64 start -p functional-369865 --memory=4000 --apiserver-port=8441 --wait=all --driver=docker  --container-runtime=docker
functional_test.go:2234: (dbg) Done: out/minikube-linux-amd64 start -p functional-369865 --memory=4000 --apiserver-port=8441 --wait=all --driver=docker  --container-runtime=docker: (35.166155617s)
--- PASS: TestFunctional/serial/StartWithProxy (35.17s)

TestFunctional/serial/AuditLog (0s)

=== RUN   TestFunctional/serial/AuditLog
--- PASS: TestFunctional/serial/AuditLog (0.00s)

TestFunctional/serial/SoftStart (35.05s)

=== RUN   TestFunctional/serial/SoftStart
functional_test.go:659: (dbg) Run:  out/minikube-linux-amd64 start -p functional-369865 --alsologtostderr -v=8
functional_test.go:659: (dbg) Done: out/minikube-linux-amd64 start -p functional-369865 --alsologtostderr -v=8: (35.049013827s)
functional_test.go:663: soft start took 35.049860527s for "functional-369865" cluster.
--- PASS: TestFunctional/serial/SoftStart (35.05s)

TestFunctional/serial/KubeContext (0.05s)

=== RUN   TestFunctional/serial/KubeContext
functional_test.go:681: (dbg) Run:  kubectl config current-context
--- PASS: TestFunctional/serial/KubeContext (0.05s)

TestFunctional/serial/KubectlGetPods (0.07s)

=== RUN   TestFunctional/serial/KubectlGetPods
functional_test.go:696: (dbg) Run:  kubectl --context functional-369865 get po -A
--- PASS: TestFunctional/serial/KubectlGetPods (0.07s)

TestFunctional/serial/CacheCmd/cache/add_remote (2.34s)

=== RUN   TestFunctional/serial/CacheCmd/cache/add_remote
functional_test.go:1049: (dbg) Run:  out/minikube-linux-amd64 -p functional-369865 cache add registry.k8s.io/pause:3.1
functional_test.go:1049: (dbg) Run:  out/minikube-linux-amd64 -p functional-369865 cache add registry.k8s.io/pause:3.3
functional_test.go:1049: (dbg) Run:  out/minikube-linux-amd64 -p functional-369865 cache add registry.k8s.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/add_remote (2.34s)

TestFunctional/serial/CacheCmd/cache/add_local (1.4s)

=== RUN   TestFunctional/serial/CacheCmd/cache/add_local
functional_test.go:1077: (dbg) Run:  docker build -t minikube-local-cache-test:functional-369865 /tmp/TestFunctionalserialCacheCmdcacheadd_local3165805890/001
functional_test.go:1089: (dbg) Run:  out/minikube-linux-amd64 -p functional-369865 cache add minikube-local-cache-test:functional-369865
functional_test.go:1089: (dbg) Done: out/minikube-linux-amd64 -p functional-369865 cache add minikube-local-cache-test:functional-369865: (1.078008673s)
functional_test.go:1094: (dbg) Run:  out/minikube-linux-amd64 -p functional-369865 cache delete minikube-local-cache-test:functional-369865
functional_test.go:1083: (dbg) Run:  docker rmi minikube-local-cache-test:functional-369865
--- PASS: TestFunctional/serial/CacheCmd/cache/add_local (1.40s)

TestFunctional/serial/CacheCmd/cache/CacheDelete (0.05s)

=== RUN   TestFunctional/serial/CacheCmd/cache/CacheDelete
functional_test.go:1102: (dbg) Run:  out/minikube-linux-amd64 cache delete registry.k8s.io/pause:3.3
--- PASS: TestFunctional/serial/CacheCmd/cache/CacheDelete (0.05s)

TestFunctional/serial/CacheCmd/cache/list (0.05s)

=== RUN   TestFunctional/serial/CacheCmd/cache/list
functional_test.go:1110: (dbg) Run:  out/minikube-linux-amd64 cache list
--- PASS: TestFunctional/serial/CacheCmd/cache/list (0.05s)

TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node (0.26s)

=== RUN   TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node
functional_test.go:1124: (dbg) Run:  out/minikube-linux-amd64 -p functional-369865 ssh sudo crictl images
--- PASS: TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node (0.26s)

TestFunctional/serial/CacheCmd/cache/cache_reload (1.23s)

=== RUN   TestFunctional/serial/CacheCmd/cache/cache_reload
functional_test.go:1147: (dbg) Run:  out/minikube-linux-amd64 -p functional-369865 ssh sudo docker rmi registry.k8s.io/pause:latest
functional_test.go:1153: (dbg) Run:  out/minikube-linux-amd64 -p functional-369865 ssh sudo crictl inspecti registry.k8s.io/pause:latest
functional_test.go:1153: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-369865 ssh sudo crictl inspecti registry.k8s.io/pause:latest: exit status 1 (263.912573ms)

-- stdout --
	FATA[0000] no such image "registry.k8s.io/pause:latest" present 

-- /stdout --
** stderr ** 
	ssh: Process exited with status 1

** /stderr **
functional_test.go:1158: (dbg) Run:  out/minikube-linux-amd64 -p functional-369865 cache reload
functional_test.go:1163: (dbg) Run:  out/minikube-linux-amd64 -p functional-369865 ssh sudo crictl inspecti registry.k8s.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/cache_reload (1.23s)
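
The cache_reload sequence above is: remove the cached image inside the node, confirm crictl no longer sees it (the expected non-zero exit), run `cache reload`, then confirm the image is back. A compact Go sketch of that sequence, using the binary path, profile, and image from the log, with error handling trimmed to the two assertions:

// Sketch of the cache_reload flow: rmi in the node, expect inspecti to
// fail, reload the cache, expect inspecti to succeed.
package main

import (
	"fmt"
	"os/exec"
)

func run(args ...string) error {
	out, err := exec.Command("out/minikube-linux-amd64", args...).CombinedOutput()
	fmt.Printf("$ minikube %v\n%s", args, out)
	return err
}

func main() {
	p := "functional-369865"
	run("-p", p, "ssh", "sudo docker rmi registry.k8s.io/pause:latest")
	if run("-p", p, "ssh", "sudo crictl inspecti registry.k8s.io/pause:latest") == nil {
		fmt.Println("expected inspecti to fail after rmi")
	}
	run("-p", p, "cache", "reload") // pushes the cached images back into the node
	if err := run("-p", p, "ssh", "sudo crictl inspecti registry.k8s.io/pause:latest"); err != nil {
		fmt.Println("image still missing after reload:", err)
	}
}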

TestFunctional/serial/CacheCmd/cache/delete (0.09s)

=== RUN   TestFunctional/serial/CacheCmd/cache/delete
functional_test.go:1172: (dbg) Run:  out/minikube-linux-amd64 cache delete registry.k8s.io/pause:3.1
functional_test.go:1172: (dbg) Run:  out/minikube-linux-amd64 cache delete registry.k8s.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/delete (0.09s)

TestFunctional/serial/MinikubeKubectlCmd (0.11s)

=== RUN   TestFunctional/serial/MinikubeKubectlCmd
functional_test.go:716: (dbg) Run:  out/minikube-linux-amd64 -p functional-369865 kubectl -- --context functional-369865 get pods
--- PASS: TestFunctional/serial/MinikubeKubectlCmd (0.11s)

TestFunctional/serial/MinikubeKubectlCmdDirectly (0.1s)

=== RUN   TestFunctional/serial/MinikubeKubectlCmdDirectly
functional_test.go:741: (dbg) Run:  out/kubectl --context functional-369865 get pods
--- PASS: TestFunctional/serial/MinikubeKubectlCmdDirectly (0.10s)

TestFunctional/serial/ExtraConfig (38.11s)

=== RUN   TestFunctional/serial/ExtraConfig
functional_test.go:757: (dbg) Run:  out/minikube-linux-amd64 start -p functional-369865 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all
functional_test.go:757: (dbg) Done: out/minikube-linux-amd64 start -p functional-369865 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all: (38.113671732s)
functional_test.go:761: restart took 38.113864546s for "functional-369865" cluster.
--- PASS: TestFunctional/serial/ExtraConfig (38.11s)

TestFunctional/serial/ComponentHealth (0.07s)

=== RUN   TestFunctional/serial/ComponentHealth
functional_test.go:810: (dbg) Run:  kubectl --context functional-369865 get po -l tier=control-plane -n kube-system -o=json
functional_test.go:825: etcd phase: Running
functional_test.go:835: etcd status: Ready
functional_test.go:825: kube-apiserver phase: Running
functional_test.go:835: kube-apiserver status: Ready
functional_test.go:825: kube-controller-manager phase: Running
functional_test.go:835: kube-controller-manager status: Ready
functional_test.go:825: kube-scheduler phase: Running
functional_test.go:835: kube-scheduler status: Ready
--- PASS: TestFunctional/serial/ComponentHealth (0.07s)
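
The phase/status pairs above come from parsing the control-plane pod list as JSON: each pod's .status.phase plus its Ready condition. A small Go sketch of that parse, assuming kubectl is on PATH; the struct covers only the fields the check reads (json.Unmarshal matches the lowercase Kubernetes keys case-insensitively):

// Sketch: list tier=control-plane pods in kube-system and report each
// pod's phase and Ready condition, as the logged check does.
package main

import (
	"encoding/json"
	"fmt"
	"os/exec"
)

type podList struct {
	Items []struct {
		Metadata struct{ Name string }
		Status   struct {
			Phase      string
			Conditions []struct{ Type, Status string }
		}
	}
}

func main() {
	out, err := exec.Command("kubectl", "--context", "functional-369865",
		"get", "po", "-l", "tier=control-plane", "-n", "kube-system", "-o", "json").Output()
	if err != nil {
		panic(err)
	}
	var pods podList
	if err := json.Unmarshal(out, &pods); err != nil {
		panic(err)
	}
	for _, p := range pods.Items {
		ready := "Unknown"
		for _, c := range p.Status.Conditions {
			if c.Type == "Ready" {
				ready = c.Status
			}
		}
		fmt.Printf("%s phase=%s ready=%s\n", p.Metadata.Name, p.Status.Phase, ready)
	}
}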

TestFunctional/serial/LogsCmd (0.95s)

=== RUN   TestFunctional/serial/LogsCmd
functional_test.go:1236: (dbg) Run:  out/minikube-linux-amd64 -p functional-369865 logs
--- PASS: TestFunctional/serial/LogsCmd (0.95s)

TestFunctional/serial/LogsFileCmd (0.96s)

=== RUN   TestFunctional/serial/LogsFileCmd
functional_test.go:1250: (dbg) Run:  out/minikube-linux-amd64 -p functional-369865 logs --file /tmp/TestFunctionalserialLogsFileCmd854709085/001/logs.txt
--- PASS: TestFunctional/serial/LogsFileCmd (0.96s)

TestFunctional/serial/InvalidService (4.74s)

=== RUN   TestFunctional/serial/InvalidService
functional_test.go:2321: (dbg) Run:  kubectl --context functional-369865 apply -f testdata/invalidsvc.yaml
functional_test.go:2335: (dbg) Run:  out/minikube-linux-amd64 service invalid-svc -p functional-369865
functional_test.go:2335: (dbg) Non-zero exit: out/minikube-linux-amd64 service invalid-svc -p functional-369865: exit status 115 (305.966859ms)

-- stdout --
	|-----------|-------------|-------------|---------------------------|
	| NAMESPACE |    NAME     | TARGET PORT |            URL            |
	|-----------|-------------|-------------|---------------------------|
	| default   | invalid-svc |          80 | http://192.168.49.2:31822 |
	|-----------|-------------|-------------|---------------------------|

-- /stdout --
** stderr ** 
	X Exiting due to SVC_UNREACHABLE: service not available: no running pod for service invalid-svc found
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_service_96b204199e3191fa1740d4430b018a3c8028d52d_0.log                 │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
functional_test.go:2327: (dbg) Run:  kubectl --context functional-369865 delete -f testdata/invalidsvc.yaml
functional_test.go:2327: (dbg) Done: kubectl --context functional-369865 delete -f testdata/invalidsvc.yaml: (1.24770749s)
--- PASS: TestFunctional/serial/InvalidService (4.74s)

TestFunctional/parallel/ConfigCmd (0.32s)

=== RUN   TestFunctional/parallel/ConfigCmd
=== PAUSE TestFunctional/parallel/ConfigCmd

=== CONT  TestFunctional/parallel/ConfigCmd
functional_test.go:1199: (dbg) Run:  out/minikube-linux-amd64 -p functional-369865 config unset cpus
functional_test.go:1199: (dbg) Run:  out/minikube-linux-amd64 -p functional-369865 config get cpus
functional_test.go:1199: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-369865 config get cpus: exit status 14 (66.056683ms)

** stderr ** 
	Error: specified key could not be found in config

** /stderr **
functional_test.go:1199: (dbg) Run:  out/minikube-linux-amd64 -p functional-369865 config set cpus 2
functional_test.go:1199: (dbg) Run:  out/minikube-linux-amd64 -p functional-369865 config get cpus
functional_test.go:1199: (dbg) Run:  out/minikube-linux-amd64 -p functional-369865 config unset cpus
functional_test.go:1199: (dbg) Run:  out/minikube-linux-amd64 -p functional-369865 config get cpus
functional_test.go:1199: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-369865 config get cpus: exit status 14 (46.576715ms)

** stderr ** 
	Error: specified key could not be found in config

** /stderr **
--- PASS: TestFunctional/parallel/ConfigCmd (0.32s)

TestFunctional/parallel/DashboardCmd (25.81s)

=== RUN   TestFunctional/parallel/DashboardCmd
=== PAUSE TestFunctional/parallel/DashboardCmd

=== CONT  TestFunctional/parallel/DashboardCmd
functional_test.go:905: (dbg) daemon: [out/minikube-linux-amd64 dashboard --url --port 36195 -p functional-369865 --alsologtostderr -v=1]
functional_test.go:910: (dbg) stopping [out/minikube-linux-amd64 dashboard --url --port 36195 -p functional-369865 --alsologtostderr -v=1] ...
helpers_test.go:509: unable to kill pid 74226: os: process already finished
--- PASS: TestFunctional/parallel/DashboardCmd (25.81s)

TestFunctional/parallel/DryRun (0.34s)

=== RUN   TestFunctional/parallel/DryRun
=== PAUSE TestFunctional/parallel/DryRun

=== CONT  TestFunctional/parallel/DryRun
functional_test.go:974: (dbg) Run:  out/minikube-linux-amd64 start -p functional-369865 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=docker
functional_test.go:974: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p functional-369865 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=docker: exit status 23 (150.361585ms)

-- stdout --
	* [functional-369865] minikube v1.33.1 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=18943
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/18943-12963/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/18943-12963/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the docker driver based on existing profile

-- /stdout --
** stderr ** 
	I0831 22:22:54.134737   73658 out.go:345] Setting OutFile to fd 1 ...
	I0831 22:22:54.134840   73658 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0831 22:22:54.134850   73658 out.go:358] Setting ErrFile to fd 2...
	I0831 22:22:54.134854   73658 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0831 22:22:54.135029   73658 root.go:338] Updating PATH: /home/jenkins/minikube-integration/18943-12963/.minikube/bin
	I0831 22:22:54.135651   73658 out.go:352] Setting JSON to false
	I0831 22:22:54.136908   73658 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent","uptime":3922,"bootTime":1725139052,"procs":369,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1067-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0831 22:22:54.136996   73658 start.go:139] virtualization: kvm guest
	I0831 22:22:54.139792   73658 out.go:177] * [functional-369865] minikube v1.33.1 on Ubuntu 20.04 (kvm/amd64)
	I0831 22:22:54.141334   73658 out.go:177]   - MINIKUBE_LOCATION=18943
	I0831 22:22:54.141405   73658 notify.go:220] Checking for updates...
	I0831 22:22:54.143824   73658 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0831 22:22:54.144939   73658 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/18943-12963/kubeconfig
	I0831 22:22:54.145996   73658 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/18943-12963/.minikube
	I0831 22:22:54.147103   73658 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0831 22:22:54.148368   73658 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0831 22:22:54.149976   73658 config.go:182] Loaded profile config "functional-369865": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.31.0
	I0831 22:22:54.150518   73658 driver.go:392] Setting default libvirt URI to qemu:///system
	I0831 22:22:54.173914   73658 docker.go:123] docker version: linux-27.2.0:Docker Engine - Community
	I0831 22:22:54.174079   73658 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0831 22:22:54.231151   73658 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:33 OomKillDisable:true NGoroutines:53 SystemTime:2024-08-31 22:22:54.220631161 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1067-gcp OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33647943680 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent Labels:[] ExperimentalBuild:false ServerVersion:27.2.0 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:472731909fa34bd7bc9c087e4c27943f9835f111 Expected:472731909fa34bd7bc9c087e4c27943f9835f111} RuncCommit:{ID:v1.1.13-0-g58aa920 Expected:v1.1.13-0-g58aa920} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.16.2] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.29.2] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I0831 22:22:54.231295   73658 docker.go:307] overlay module found
	I0831 22:22:54.232944   73658 out.go:177] * Using the docker driver based on existing profile
	I0831 22:22:54.234147   73658 start.go:297] selected driver: docker
	I0831 22:22:54.234174   73658 start.go:901] validating driver "docker" against &{Name:functional-369865 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1724862063-19530@sha256:fd0f41868bf20a720502cce04c5201bfb064f3c267161af6fd5265d69c85c9f0 Memory:4000 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:functional-369865 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0831 22:22:54.234296   73658 start.go:912] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0831 22:22:54.236967   73658 out.go:201] 
	W0831 22:22:54.238478   73658 out.go:270] X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	I0831 22:22:54.239700   73658 out.go:201] 

** /stderr **
functional_test.go:991: (dbg) Run:  out/minikube-linux-amd64 start -p functional-369865 --dry-run --alsologtostderr -v=1 --driver=docker  --container-runtime=docker
--- PASS: TestFunctional/parallel/DryRun (0.34s)
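
The RSRC_INSUFFICIENT_REQ_MEMORY exit above comes from a pure pre-flight comparison: no resources are touched, the requested figure is simply checked against a floor before the driver starts anything. A toy Go sketch of that validation; the 1800MB floor is quoted from the error text, everything else is illustrative rather than minikube's own code:

// Sketch: reject a memory request below the usable minimum, as the
// dry-run start above does for --memory 250MB.
package main

import "fmt"

const minUsableMB = 1800 // floor quoted in the error message above

func checkMemory(requestedMB int) error {
	if requestedMB < minUsableMB {
		return fmt.Errorf("requested memory allocation %dMiB is less than the usable minimum of %dMB",
			requestedMB, minUsableMB)
	}
	return nil
}

func main() {
	fmt.Println(checkMemory(250))  // fails, as in the log
	fmt.Println(checkMemory(4000)) // passes: prints <nil>
}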

TestFunctional/parallel/InternationalLanguage (0.14s)

=== RUN   TestFunctional/parallel/InternationalLanguage
=== PAUSE TestFunctional/parallel/InternationalLanguage

=== CONT  TestFunctional/parallel/InternationalLanguage
functional_test.go:1020: (dbg) Run:  out/minikube-linux-amd64 start -p functional-369865 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=docker
functional_test.go:1020: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p functional-369865 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=docker: exit status 23 (140.384838ms)

-- stdout --
	* [functional-369865] minikube v1.33.1 sur Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=18943
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/18943-12963/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/18943-12963/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Utilisation du pilote docker basé sur le profil existant
	
	

-- /stdout --
** stderr ** 
	I0831 22:22:54.472982   73889 out.go:345] Setting OutFile to fd 1 ...
	I0831 22:22:54.473321   73889 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0831 22:22:54.473333   73889 out.go:358] Setting ErrFile to fd 2...
	I0831 22:22:54.473339   73889 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0831 22:22:54.473728   73889 root.go:338] Updating PATH: /home/jenkins/minikube-integration/18943-12963/.minikube/bin
	I0831 22:22:54.474414   73889 out.go:352] Setting JSON to false
	I0831 22:22:54.475545   73889 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent","uptime":3922,"bootTime":1725139052,"procs":368,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1067-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0831 22:22:54.475604   73889 start.go:139] virtualization: kvm guest
	I0831 22:22:54.477970   73889 out.go:177] * [functional-369865] minikube v1.33.1 sur Ubuntu 20.04 (kvm/amd64)
	I0831 22:22:54.479505   73889 notify.go:220] Checking for updates...
	I0831 22:22:54.479516   73889 out.go:177]   - MINIKUBE_LOCATION=18943
	I0831 22:22:54.480847   73889 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0831 22:22:54.482244   73889 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/18943-12963/kubeconfig
	I0831 22:22:54.483700   73889 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/18943-12963/.minikube
	I0831 22:22:54.485029   73889 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0831 22:22:54.486448   73889 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0831 22:22:54.487964   73889 config.go:182] Loaded profile config "functional-369865": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.31.0
	I0831 22:22:54.488426   73889 driver.go:392] Setting default libvirt URI to qemu:///system
	I0831 22:22:54.511001   73889 docker.go:123] docker version: linux-27.2.0:Docker Engine - Community
	I0831 22:22:54.511134   73889 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0831 22:22:54.557759   73889 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:33 OomKillDisable:true NGoroutines:53 SystemTime:2024-08-31 22:22:54.548220733 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1067-gcp OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33647943680 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent Labels:[] ExperimentalBuild:false ServerVersion:27.2.0 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:472731909fa34bd7bc9c087e4c27943f9835f111 Expected:472731909fa34bd7bc9c087e4c27943f9835f111} RuncCommit:{ID:v1.1.13-0-g58aa920 Expected:v1.1.13-0-g58aa920} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.16.2] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.29.2] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I0831 22:22:54.557871   73889 docker.go:307] overlay module found
	I0831 22:22:54.559564   73889 out.go:177] * Utilisation du pilote docker basé sur le profil existant
	I0831 22:22:54.561657   73889 start.go:297] selected driver: docker
	I0831 22:22:54.561678   73889 start.go:901] validating driver "docker" against &{Name:functional-369865 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1724862063-19530@sha256:fd0f41868bf20a720502cce04c5201bfb064f3c267161af6fd5265d69c85c9f0 Memory:4000 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:functional-369865 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0831 22:22:54.561796   73889 start.go:912] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0831 22:22:54.564690   73889 out.go:201] 
	W0831 22:22:54.566115   73889 out.go:270] X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	I0831 22:22:54.568044   73889 out.go:201] 

** /stderr **
--- PASS: TestFunctional/parallel/InternationalLanguage (0.14s)
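
The French stdout/stderr above is the locale-selected rendering of the same RSRC_INSUFFICIENT_REQ_MEMORY failure that DryRun exercises in English ("Requested memory allocation 250MiB is less than the usable minimum of 1800MB"); minikube chooses the translation from the caller's locale environment. A minimal sketch for reproducing the localized dry run by hand, assuming a French locale is installed on the host and that LC_ALL/LANG drive the translation lookup:

	# Force a French locale, then request less than the 1800MB minimum so the
	# start aborts with RSRC_INSUFFICIENT_REQ_MEMORY (exit status 23, as above).
	LC_ALL=fr_FR.UTF-8 LANG=fr_FR.UTF-8 \
	  out/minikube-linux-amd64 start -p functional-369865 --dry-run \
	  --memory 250MB --alsologtostderr --driver=docker --container-runtime=docker
	echo "exit status: $?"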

TestFunctional/parallel/StatusCmd (1.12s)

=== RUN   TestFunctional/parallel/StatusCmd
=== PAUSE TestFunctional/parallel/StatusCmd

=== CONT  TestFunctional/parallel/StatusCmd
functional_test.go:854: (dbg) Run:  out/minikube-linux-amd64 -p functional-369865 status
functional_test.go:860: (dbg) Run:  out/minikube-linux-amd64 -p functional-369865 status -f host:{{.Host}},kublet:{{.Kubelet}},apiserver:{{.APIServer}},kubeconfig:{{.Kubeconfig}}
functional_test.go:872: (dbg) Run:  out/minikube-linux-amd64 -p functional-369865 status -o json
--- PASS: TestFunctional/parallel/StatusCmd (1.12s)
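
The three invocations above cover the default human-readable output, a caller-supplied Go template (-f), and machine-readable JSON (-o json). A small scripting sketch on the JSON form; the jq filter is an assumption of this sketch, not something the test runs:

	# Default output, a custom Go template, then JSON filtered with jq.
	out/minikube-linux-amd64 -p functional-369865 status
	out/minikube-linux-amd64 -p functional-369865 status -f 'host:{{.Host}},apiserver:{{.APIServer}}'
	out/minikube-linux-amd64 -p functional-369865 status -o json | jq -r '.Host'  # jq not used by the test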

TestFunctional/parallel/ServiceCmdConnect (7.8s)

=== RUN   TestFunctional/parallel/ServiceCmdConnect
=== PAUSE TestFunctional/parallel/ServiceCmdConnect

=== CONT  TestFunctional/parallel/ServiceCmdConnect
functional_test.go:1629: (dbg) Run:  kubectl --context functional-369865 create deployment hello-node-connect --image=registry.k8s.io/echoserver:1.8
functional_test.go:1635: (dbg) Run:  kubectl --context functional-369865 expose deployment hello-node-connect --type=NodePort --port=8080
functional_test.go:1640: (dbg) TestFunctional/parallel/ServiceCmdConnect: waiting 10m0s for pods matching "app=hello-node-connect" in namespace "default" ...
helpers_test.go:345: "hello-node-connect-67bdd5bbb4-clqrr" [b8b19033-c661-46bf-ae36-ead1222d46bc] Pending / Ready:ContainersNotReady (containers with unready status: [echoserver]) / ContainersReady:ContainersNotReady (containers with unready status: [echoserver])
helpers_test.go:345: "hello-node-connect-67bdd5bbb4-clqrr" [b8b19033-c661-46bf-ae36-ead1222d46bc] Running
functional_test.go:1640: (dbg) TestFunctional/parallel/ServiceCmdConnect: app=hello-node-connect healthy within 7.098351196s
functional_test.go:1649: (dbg) Run:  out/minikube-linux-amd64 -p functional-369865 service hello-node-connect --url
functional_test.go:1655: found endpoint for hello-node-connect: http://192.168.49.2:31209
functional_test.go:1675: http://192.168.49.2:31209: success! body:

Hostname: hello-node-connect-67bdd5bbb4-clqrr

Pod Information:
	-no pod information available-

Server values:
	server_version=nginx: 1.13.3 - lua: 10008

Request Information:
	client_address=10.244.0.1
	method=GET
	real path=/
	query=
	request_version=1.1
	request_uri=http://192.168.49.2:8080/

Request Headers:
	accept-encoding=gzip
	host=192.168.49.2:31209
	user-agent=Go-http-client/1.1

Request Body:
	-no body in request-

--- PASS: TestFunctional/parallel/ServiceCmdConnect (7.80s)
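
Condensed, the flow exercised here is: create a deployment, expose it as a NodePort service, let minikube resolve the node URL, then probe it. A sketch of the same steps run by hand; the kubectl wait call is an addition of this sketch (the test polls the pod label itself):

	kubectl --context functional-369865 create deployment hello-node-connect \
	  --image=registry.k8s.io/echoserver:1.8
	kubectl --context functional-369865 expose deployment hello-node-connect \
	  --type=NodePort --port=8080
	kubectl --context functional-369865 wait pod -l app=hello-node-connect \
	  --for=condition=Ready --timeout=120s
	URL=$(out/minikube-linux-amd64 -p functional-369865 service hello-node-connect --url)
	curl -s "$URL"  # echoserver reflects the request, as in the body above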

TestFunctional/parallel/AddonsCmd (0.13s)

=== RUN   TestFunctional/parallel/AddonsCmd
=== PAUSE TestFunctional/parallel/AddonsCmd

=== CONT  TestFunctional/parallel/AddonsCmd
functional_test.go:1690: (dbg) Run:  out/minikube-linux-amd64 -p functional-369865 addons list
functional_test.go:1702: (dbg) Run:  out/minikube-linux-amd64 -p functional-369865 addons list -o json
--- PASS: TestFunctional/parallel/AddonsCmd (0.13s)

TestFunctional/parallel/PersistentVolumeClaim (44.7s)

=== RUN   TestFunctional/parallel/PersistentVolumeClaim
=== PAUSE TestFunctional/parallel/PersistentVolumeClaim

=== CONT  TestFunctional/parallel/PersistentVolumeClaim
functional_test_pvc_test.go:44: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 4m0s for pods matching "integration-test=storage-provisioner" in namespace "kube-system" ...
helpers_test.go:345: "storage-provisioner" [b996fc5e-e603-44f8-8e02-8e0983d91c4f] Running
functional_test_pvc_test.go:44: (dbg) TestFunctional/parallel/PersistentVolumeClaim: integration-test=storage-provisioner healthy within 6.004577069s
functional_test_pvc_test.go:49: (dbg) Run:  kubectl --context functional-369865 get storageclass -o=json
functional_test_pvc_test.go:69: (dbg) Run:  kubectl --context functional-369865 apply -f testdata/storage-provisioner/pvc.yaml
functional_test_pvc_test.go:76: (dbg) Run:  kubectl --context functional-369865 get pvc myclaim -o=json
functional_test_pvc_test.go:125: (dbg) Run:  kubectl --context functional-369865 apply -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 3m0s for pods matching "test=storage-provisioner" in namespace "default" ...
helpers_test.go:345: "sp-pod" [beabf202-26a9-4c4f-9c97-ceb5bed6932a] Pending
helpers_test.go:345: "sp-pod" [beabf202-26a9-4c4f-9c97-ceb5bed6932a] Pending / Ready:ContainersNotReady (containers with unready status: [myfrontend]) / ContainersReady:ContainersNotReady (containers with unready status: [myfrontend])
helpers_test.go:345: "sp-pod" [beabf202-26a9-4c4f-9c97-ceb5bed6932a] Running
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: test=storage-provisioner healthy within 12.029564169s
functional_test_pvc_test.go:100: (dbg) Run:  kubectl --context functional-369865 exec sp-pod -- touch /tmp/mount/foo
functional_test_pvc_test.go:106: (dbg) Run:  kubectl --context functional-369865 delete -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:106: (dbg) Done: kubectl --context functional-369865 delete -f testdata/storage-provisioner/pod.yaml: (1.856398725s)
functional_test_pvc_test.go:125: (dbg) Run:  kubectl --context functional-369865 apply -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 3m0s for pods matching "test=storage-provisioner" in namespace "default" ...
helpers_test.go:345: "sp-pod" [26dd7662-9541-451e-90ab-16decf224f80] Pending
helpers_test.go:345: "sp-pod" [26dd7662-9541-451e-90ab-16decf224f80] Pending / Ready:ContainersNotReady (containers with unready status: [myfrontend]) / ContainersReady:ContainersNotReady (containers with unready status: [myfrontend])
helpers_test.go:345: "sp-pod" [26dd7662-9541-451e-90ab-16decf224f80] Running
2024/08/31 22:23:20 [DEBUG] GET http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: test=storage-provisioner healthy within 24.002909498s
functional_test_pvc_test.go:114: (dbg) Run:  kubectl --context functional-369865 exec sp-pod -- ls /tmp/mount
--- PASS: TestFunctional/parallel/PersistentVolumeClaim (44.70s)
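
The check here is durability: a file touched on the PVC-backed mount (/tmp/mount/foo) must still be listed after the pod is deleted and recreated against the same claim, because the claim outlives the pod. The testdata manifests are not reproduced in the log; the sketch below is a hypothetical stand-in that reuses the names visible above (myclaim, sp-pod, myfrontend, test=storage-provisioner) and assumes the default storage class provisions the volume; the nginx image is likewise an assumption:

	# Hypothetical equivalent of testdata/storage-provisioner/{pvc,pod}.yaml.
	# <<- strips the leading tabs used for indentation here.
	kubectl --context functional-369865 apply -f - <<-'EOF'
	apiVersion: v1
	kind: PersistentVolumeClaim
	metadata:
	  name: myclaim
	spec:
	  accessModes: ["ReadWriteOnce"]
	  resources:
	    requests:
	      storage: 500Mi
	---
	apiVersion: v1
	kind: Pod
	metadata:
	  name: sp-pod
	  labels:
	    test: storage-provisioner
	spec:
	  containers:
	  - name: myfrontend
	    image: nginx  # image is an assumption of this sketch
	    volumeMounts:
	    - mountPath: /tmp/mount
	      name: mypd
	  volumes:
	  - name: mypd
	    persistentVolumeClaim:
	      claimName: myclaim
	EOF
	# Write through the mount; delete and recreate sp-pod, and the file survives:
	kubectl --context functional-369865 exec sp-pod -- touch /tmp/mount/foo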

TestFunctional/parallel/SSHCmd (0.56s)

=== RUN   TestFunctional/parallel/SSHCmd
=== PAUSE TestFunctional/parallel/SSHCmd

=== CONT  TestFunctional/parallel/SSHCmd
functional_test.go:1725: (dbg) Run:  out/minikube-linux-amd64 -p functional-369865 ssh "echo hello"
functional_test.go:1742: (dbg) Run:  out/minikube-linux-amd64 -p functional-369865 ssh "cat /etc/hostname"
--- PASS: TestFunctional/parallel/SSHCmd (0.56s)

TestFunctional/parallel/CpCmd (1.7s)

=== RUN   TestFunctional/parallel/CpCmd
=== PAUSE TestFunctional/parallel/CpCmd

=== CONT  TestFunctional/parallel/CpCmd
helpers_test.go:557: (dbg) Run:  out/minikube-linux-amd64 -p functional-369865 cp testdata/cp-test.txt /home/docker/cp-test.txt
helpers_test.go:535: (dbg) Run:  out/minikube-linux-amd64 -p functional-369865 ssh -n functional-369865 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:557: (dbg) Run:  out/minikube-linux-amd64 -p functional-369865 cp functional-369865:/home/docker/cp-test.txt /tmp/TestFunctionalparallelCpCmd2933196449/001/cp-test.txt
helpers_test.go:535: (dbg) Run:  out/minikube-linux-amd64 -p functional-369865 ssh -n functional-369865 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:557: (dbg) Run:  out/minikube-linux-amd64 -p functional-369865 cp testdata/cp-test.txt /tmp/does/not/exist/cp-test.txt
helpers_test.go:535: (dbg) Run:  out/minikube-linux-amd64 -p functional-369865 ssh -n functional-369865 "sudo cat /tmp/does/not/exist/cp-test.txt"
--- PASS: TestFunctional/parallel/CpCmd (1.70s)
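
minikube cp handles both directions shown above: a host path copied into the node, and <node>:<path> copied back out. A sketch; /tmp/cp-test-copy.txt is a hypothetical destination invented for it:

	# Host -> node (target path is inside the minikube container)
	out/minikube-linux-amd64 -p functional-369865 cp testdata/cp-test.txt /home/docker/cp-test.txt
	# Node -> host
	out/minikube-linux-amd64 -p functional-369865 cp \
	  functional-369865:/home/docker/cp-test.txt /tmp/cp-test-copy.txt
	# Verify inside the node; -n picks the target node, as in the helpers above
	out/minikube-linux-amd64 -p functional-369865 ssh -n functional-369865 "sudo cat /home/docker/cp-test.txt"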

TestFunctional/parallel/MySQL (24.59s)

=== RUN   TestFunctional/parallel/MySQL
=== PAUSE TestFunctional/parallel/MySQL

=== CONT  TestFunctional/parallel/MySQL
functional_test.go:1793: (dbg) Run:  kubectl --context functional-369865 replace --force -f testdata/mysql.yaml
functional_test.go:1799: (dbg) TestFunctional/parallel/MySQL: waiting 10m0s for pods matching "app=mysql" in namespace "default" ...
helpers_test.go:345: "mysql-6cdb49bbb-ktst4" [8081fd4f-9fa0-4ce0-93b2-6e605790840c] Pending / Ready:ContainersNotReady (containers with unready status: [mysql]) / ContainersReady:ContainersNotReady (containers with unready status: [mysql])
helpers_test.go:345: "mysql-6cdb49bbb-ktst4" [8081fd4f-9fa0-4ce0-93b2-6e605790840c] Running
functional_test.go:1799: (dbg) TestFunctional/parallel/MySQL: app=mysql healthy within 20.016780468s
functional_test.go:1807: (dbg) Run:  kubectl --context functional-369865 exec mysql-6cdb49bbb-ktst4 -- mysql -ppassword -e "show databases;"
functional_test.go:1807: (dbg) Non-zero exit: kubectl --context functional-369865 exec mysql-6cdb49bbb-ktst4 -- mysql -ppassword -e "show databases;": exit status 1 (225.291521ms)

** stderr ** 
	mysql: [Warning] Using a password on the command line interface can be insecure.
	ERROR 1045 (28000): Access denied for user 'root'@'localhost' (using password: YES)
	command terminated with exit code 1

** /stderr **
functional_test.go:1807: (dbg) Run:  kubectl --context functional-369865 exec mysql-6cdb49bbb-ktst4 -- mysql -ppassword -e "show databases;"
functional_test.go:1807: (dbg) Non-zero exit: kubectl --context functional-369865 exec mysql-6cdb49bbb-ktst4 -- mysql -ppassword -e "show databases;": exit status 1 (143.960903ms)

** stderr ** 
	mysql: [Warning] Using a password on the command line interface can be insecure.
	ERROR 1045 (28000): Access denied for user 'root'@'localhost' (using password: YES)
	command terminated with exit code 1

** /stderr **
functional_test.go:1807: (dbg) Run:  kubectl --context functional-369865 exec mysql-6cdb49bbb-ktst4 -- mysql -ppassword -e "show databases;"
functional_test.go:1807: (dbg) Non-zero exit: kubectl --context functional-369865 exec mysql-6cdb49bbb-ktst4 -- mysql -ppassword -e "show databases;": exit status 1 (121.267443ms)

** stderr ** 
	mysql: [Warning] Using a password on the command line interface can be insecure.
	ERROR 2002 (HY000): Can't connect to local MySQL server through socket '/var/run/mysqld/mysqld.sock' (2)
	command terminated with exit code 1

** /stderr **
functional_test.go:1807: (dbg) Run:  kubectl --context functional-369865 exec mysql-6cdb49bbb-ktst4 -- mysql -ppassword -e "show databases;"
--- PASS: TestFunctional/parallel/MySQL (24.59s)
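
The three failed exec attempts are expected noise: mysqld in the pod is still initializing, so early queries hit authentication rejections (ERROR 1045) and then a not-yet-created socket (ERROR 2002) before the last attempt succeeds, and the test simply retries until it does. A sketch of the same retry idea; the pod name is the one from this run, and the 30 x 2s budget is an arbitrary choice:

	for i in $(seq 1 30); do
	  if kubectl --context functional-369865 exec mysql-6cdb49bbb-ktst4 -- \
	       mysql -ppassword -e "show databases;" 2>/dev/null; then
	    break  # mysqld is up and answering
	  fi
	  sleep 2
	done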

TestFunctional/parallel/FileSync (0.27s)

=== RUN   TestFunctional/parallel/FileSync
=== PAUSE TestFunctional/parallel/FileSync

=== CONT  TestFunctional/parallel/FileSync
functional_test.go:1929: Checking for existence of /etc/test/nested/copy/19777/hosts within VM
functional_test.go:1931: (dbg) Run:  out/minikube-linux-amd64 -p functional-369865 ssh "sudo cat /etc/test/nested/copy/19777/hosts"
functional_test.go:1936: file sync test content: Test file for checking file sync process
--- PASS: TestFunctional/parallel/FileSync (0.27s)
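
FileSync checks minikube's file-sync convention: a file placed under $MINIKUBE_HOME/files/ is copied into the node at the mirrored absolute path, so /etc/test/nested/copy/19777/hosts in the node corresponds to files/etc/test/nested/copy/19777/hosts on the host. A sketch of seeding such a file, assuming the MINIKUBE_HOME of this run and that the sync is applied when the profile starts:

	MINIKUBE_HOME=/home/jenkins/minikube-integration/18943-12963/.minikube
	mkdir -p "$MINIKUBE_HOME/files/etc/test/nested/copy/19777"
	echo "Test file for checking file sync process" \
	  > "$MINIKUBE_HOME/files/etc/test/nested/copy/19777/hosts"
	# After the profile (re)starts, the file is visible inside the node:
	out/minikube-linux-amd64 -p functional-369865 ssh "sudo cat /etc/test/nested/copy/19777/hosts"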

TestFunctional/parallel/CertSync (1.57s)

=== RUN   TestFunctional/parallel/CertSync
=== PAUSE TestFunctional/parallel/CertSync

=== CONT  TestFunctional/parallel/CertSync
functional_test.go:1972: Checking for existence of /etc/ssl/certs/19777.pem within VM
functional_test.go:1973: (dbg) Run:  out/minikube-linux-amd64 -p functional-369865 ssh "sudo cat /etc/ssl/certs/19777.pem"
functional_test.go:1972: Checking for existence of /usr/share/ca-certificates/19777.pem within VM
functional_test.go:1973: (dbg) Run:  out/minikube-linux-amd64 -p functional-369865 ssh "sudo cat /usr/share/ca-certificates/19777.pem"
functional_test.go:1972: Checking for existence of /etc/ssl/certs/51391683.0 within VM
functional_test.go:1973: (dbg) Run:  out/minikube-linux-amd64 -p functional-369865 ssh "sudo cat /etc/ssl/certs/51391683.0"
functional_test.go:1999: Checking for existence of /etc/ssl/certs/197772.pem within VM
functional_test.go:2000: (dbg) Run:  out/minikube-linux-amd64 -p functional-369865 ssh "sudo cat /etc/ssl/certs/197772.pem"
functional_test.go:1999: Checking for existence of /usr/share/ca-certificates/197772.pem within VM
functional_test.go:2000: (dbg) Run:  out/minikube-linux-amd64 -p functional-369865 ssh "sudo cat /usr/share/ca-certificates/197772.pem"
functional_test.go:1999: Checking for existence of /etc/ssl/certs/3ec20f2e.0 within VM
functional_test.go:2000: (dbg) Run:  out/minikube-linux-amd64 -p functional-369865 ssh "sudo cat /etc/ssl/certs/3ec20f2e.0"
--- PASS: TestFunctional/parallel/CertSync (1.57s)
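
CertSync is the certificate flavor of the same mechanism: a .pem placed under $MINIKUBE_HOME/certs/ is installed into the node under /etc/ssl/certs and /usr/share/ca-certificates, and the 51391683.0 / 3ec20f2e.0 entries above appear to be the OpenSSL subject-hash names for those same certificates. A sketch; my-ca.pem is a hypothetical certificate file:

	MINIKUBE_HOME=/home/jenkins/minikube-integration/18943-12963/.minikube
	cp my-ca.pem "$MINIKUBE_HOME/certs/"  # my-ca.pem is hypothetical
	# On the next start the cert shows up in the node's trust locations:
	out/minikube-linux-amd64 -p functional-369865 ssh "sudo ls /etc/ssl/certs /usr/share/ca-certificates"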

TestFunctional/parallel/NodeLabels (0.08s)

=== RUN   TestFunctional/parallel/NodeLabels
=== PAUSE TestFunctional/parallel/NodeLabels

=== CONT  TestFunctional/parallel/NodeLabels
functional_test.go:219: (dbg) Run:  kubectl --context functional-369865 get nodes --output=go-template "--template='{{range $k, $v := (index .items 0).metadata.labels}}{{$k}} {{end}}'"
--- PASS: TestFunctional/parallel/NodeLabels (0.08s)
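
The go-template above iterates the first node's .metadata.labels map and prints each key. The built-in label column gives the same information without a template:

	kubectl --context functional-369865 get nodes --show-labels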

TestFunctional/parallel/NonActiveRuntimeDisabled (0.28s)

=== RUN   TestFunctional/parallel/NonActiveRuntimeDisabled
=== PAUSE TestFunctional/parallel/NonActiveRuntimeDisabled

=== CONT  TestFunctional/parallel/NonActiveRuntimeDisabled
functional_test.go:2027: (dbg) Run:  out/minikube-linux-amd64 -p functional-369865 ssh "sudo systemctl is-active crio"
functional_test.go:2027: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-369865 ssh "sudo systemctl is-active crio": exit status 1 (281.295347ms)

-- stdout --
	inactive

-- /stdout --
** stderr ** 
	ssh: Process exited with status 3

** /stderr **
--- PASS: TestFunctional/parallel/NonActiveRuntimeDisabled (0.28s)
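
The non-zero exit is the passing case here: systemctl is-active prints the unit state and exits 0 only for "active" (3 means inactive), so on a cluster whose container runtime is docker the crio unit is expected to be inactive, and minikube ssh surfaces that as its own non-zero status:

	# Exit 0 would mean crio is active; "inactive" (remote exit 3) is the
	# expected result when the cluster's container runtime is docker.
	out/minikube-linux-amd64 -p functional-369865 ssh "sudo systemctl is-active crio" \
	  || echo "crio not active, as expected"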

TestFunctional/parallel/License (0.44s)

=== RUN   TestFunctional/parallel/License
=== PAUSE TestFunctional/parallel/License

=== CONT  TestFunctional/parallel/License
functional_test.go:2288: (dbg) Run:  out/minikube-linux-amd64 license
--- PASS: TestFunctional/parallel/License (0.44s)

TestFunctional/parallel/ServiceCmd/DeployApp (9.19s)

=== RUN   TestFunctional/parallel/ServiceCmd/DeployApp
functional_test.go:1439: (dbg) Run:  kubectl --context functional-369865 create deployment hello-node --image=registry.k8s.io/echoserver:1.8
functional_test.go:1445: (dbg) Run:  kubectl --context functional-369865 expose deployment hello-node --type=NodePort --port=8080
functional_test.go:1450: (dbg) TestFunctional/parallel/ServiceCmd/DeployApp: waiting 10m0s for pods matching "app=hello-node" in namespace "default" ...
helpers_test.go:345: "hello-node-6b9f76b5c7-wf24x" [50fb8e6d-0ae3-4137-bfba-1c73069b8b63] Pending / Ready:ContainersNotReady (containers with unready status: [echoserver]) / ContainersReady:ContainersNotReady (containers with unready status: [echoserver])
helpers_test.go:345: "hello-node-6b9f76b5c7-wf24x" [50fb8e6d-0ae3-4137-bfba-1c73069b8b63] Running
functional_test.go:1450: (dbg) TestFunctional/parallel/ServiceCmd/DeployApp: app=hello-node healthy within 9.004708684s
--- PASS: TestFunctional/parallel/ServiceCmd/DeployApp (9.19s)

TestFunctional/parallel/Version/short (0.04s)

=== RUN   TestFunctional/parallel/Version/short
=== PAUSE TestFunctional/parallel/Version/short

=== CONT  TestFunctional/parallel/Version/short
functional_test.go:2256: (dbg) Run:  out/minikube-linux-amd64 -p functional-369865 version --short
--- PASS: TestFunctional/parallel/Version/short (0.04s)

TestFunctional/parallel/Version/components (0.44s)

=== RUN   TestFunctional/parallel/Version/components
=== PAUSE TestFunctional/parallel/Version/components

=== CONT  TestFunctional/parallel/Version/components
functional_test.go:2270: (dbg) Run:  out/minikube-linux-amd64 -p functional-369865 version -o=json --components
--- PASS: TestFunctional/parallel/Version/components (0.44s)

TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel (0.49s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel
functional_test_tunnel_test.go:154: (dbg) daemon: [out/minikube-linux-amd64 -p functional-369865 tunnel --alsologtostderr]
functional_test_tunnel_test.go:154: (dbg) daemon: [out/minikube-linux-amd64 -p functional-369865 tunnel --alsologtostderr]
functional_test_tunnel_test.go:194: (dbg) stopping [out/minikube-linux-amd64 -p functional-369865 tunnel --alsologtostderr] ...
helpers_test.go:509: unable to kill pid 69025: os: process already finished
functional_test_tunnel_test.go:194: (dbg) stopping [out/minikube-linux-amd64 -p functional-369865 tunnel --alsologtostderr] ...
helpers_test.go:491: unable to find parent, assuming dead: process does not exist
--- PASS: TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel (0.49s)

TestFunctional/parallel/TunnelCmd/serial/StartTunnel (0s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/StartTunnel
functional_test_tunnel_test.go:129: (dbg) daemon: [out/minikube-linux-amd64 -p functional-369865 tunnel --alsologtostderr]
--- PASS: TestFunctional/parallel/TunnelCmd/serial/StartTunnel (0.00s)

TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup (11.24s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup
functional_test_tunnel_test.go:212: (dbg) Run:  kubectl --context functional-369865 apply -f testdata/testsvc.yaml
functional_test_tunnel_test.go:216: (dbg) TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup: waiting 4m0s for pods matching "run=nginx-svc" in namespace "default" ...
helpers_test.go:345: "nginx-svc" [36a3fc0f-2c7a-434b-905f-de288e9832f2] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
helpers_test.go:345: "nginx-svc" [36a3fc0f-2c7a-434b-905f-de288e9832f2] Running
functional_test_tunnel_test.go:216: (dbg) TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup: run=nginx-svc healthy within 11.002995799s
--- PASS: TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup (11.24s)
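
minikube tunnel creates a route from the host into the cluster so that LoadBalancer services become reachable, and the nginx-svc pod above is the backend the subsequent WaitService checks probe. A sketch of driving it by hand, assuming testdata/testsvc.yaml exposes a LoadBalancer service named nginx-svc (suggested by the run=nginx-svc label, though the manifest itself is not shown in the log):

	out/minikube-linux-amd64 -p functional-369865 tunnel --alsologtostderr &
	TUNNEL_PID=$!
	# Once the tunnel is up, the service gets an ingress IP that can be curled:
	kubectl --context functional-369865 get svc nginx-svc \
	  -o jsonpath='{.status.loadBalancer.ingress[0].ip}'
	kill "$TUNNEL_PID"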

TestFunctional/parallel/ImageCommands/ImageListShort (0.2s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageListShort
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListShort

=== CONT  TestFunctional/parallel/ImageCommands/ImageListShort
functional_test.go:261: (dbg) Run:  out/minikube-linux-amd64 -p functional-369865 image ls --format short --alsologtostderr
functional_test.go:266: (dbg) Stdout: out/minikube-linux-amd64 -p functional-369865 image ls --format short --alsologtostderr:
registry.k8s.io/pause:latest
registry.k8s.io/pause:3.3
registry.k8s.io/pause:3.10
registry.k8s.io/pause:3.1
registry.k8s.io/kube-scheduler:v1.31.0
registry.k8s.io/kube-proxy:v1.31.0
registry.k8s.io/kube-controller-manager:v1.31.0
registry.k8s.io/kube-apiserver:v1.31.0
registry.k8s.io/etcd:3.5.15-0
registry.k8s.io/echoserver:1.8
registry.k8s.io/coredns/coredns:v1.11.1
gcr.io/k8s-minikube/storage-provisioner:v5
gcr.io/k8s-minikube/busybox:1.28.4-glibc
docker.io/library/nginx:latest
docker.io/library/nginx:alpine
docker.io/library/mysql:5.7
docker.io/library/minikube-local-cache-test:functional-369865
docker.io/kubernetesui/metrics-scraper:<none>
docker.io/kicbase/echo-server:functional-369865
functional_test.go:269: (dbg) Stderr: out/minikube-linux-amd64 -p functional-369865 image ls --format short --alsologtostderr:
I0831 22:23:11.211791   77040 out.go:345] Setting OutFile to fd 1 ...
I0831 22:23:11.212021   77040 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0831 22:23:11.212034   77040 out.go:358] Setting ErrFile to fd 2...
I0831 22:23:11.212040   77040 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0831 22:23:11.212412   77040 root.go:338] Updating PATH: /home/jenkins/minikube-integration/18943-12963/.minikube/bin
I0831 22:23:11.213286   77040 config.go:182] Loaded profile config "functional-369865": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.31.0
I0831 22:23:11.213387   77040 config.go:182] Loaded profile config "functional-369865": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.31.0
I0831 22:23:11.214053   77040 cli_runner.go:164] Run: docker container inspect functional-369865 --format={{.State.Status}}
I0831 22:23:11.231901   77040 ssh_runner.go:195] Run: systemctl --version
I0831 22:23:11.231967   77040 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-369865
I0831 22:23:11.250456   77040 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32778 SSHKeyPath:/home/jenkins/minikube-integration/18943-12963/.minikube/machines/functional-369865/id_rsa Username:docker}
I0831 22:23:11.333585   77040 ssh_runner.go:195] Run: docker images --no-trunc --format "{{json .}}"
--- PASS: TestFunctional/parallel/ImageCommands/ImageListShort (0.20s)
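
image ls supports short, table, json, and yaml output; the three sibling tests below exercise the other formats against the same image set. A scripting sketch on the json form; the .id and .repoTags field names match the ImageListJson output further down, but the jq filter itself is an assumption of this sketch:

	out/minikube-linux-amd64 -p functional-369865 image ls --format json \
	  | jq -r '.[] | "\(.id[0:12])  \(.repoTags[0])"'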

TestFunctional/parallel/ImageCommands/ImageListTable (0.21s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageListTable
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListTable

=== CONT  TestFunctional/parallel/ImageCommands/ImageListTable
functional_test.go:261: (dbg) Run:  out/minikube-linux-amd64 -p functional-369865 image ls --format table --alsologtostderr
functional_test.go:266: (dbg) Stdout: out/minikube-linux-amd64 -p functional-369865 image ls --format table --alsologtostderr:
|---------------------------------------------|-------------------|---------------|--------|
|                    Image                    |        Tag        |   Image ID    |  Size  |
|---------------------------------------------|-------------------|---------------|--------|
| registry.k8s.io/kube-apiserver              | v1.31.0           | 604f5db92eaa8 | 94.2MB |
| registry.k8s.io/pause                       | 3.10              | 873ed75102791 | 736kB  |
| registry.k8s.io/coredns/coredns             | v1.11.1           | cbb01a7bd410d | 59.8MB |
| gcr.io/k8s-minikube/storage-provisioner     | v5                | 6e38f40d628db | 31.5MB |
| registry.k8s.io/kube-controller-manager     | v1.31.0           | 045733566833c | 88.4MB |
| registry.k8s.io/kube-proxy                  | v1.31.0           | ad83b2ca7b09e | 91.5MB |
| docker.io/library/mysql                     | 5.7               | 5107333e08a87 | 501MB  |
| registry.k8s.io/pause                       | 3.3               | 0184c1613d929 | 683kB  |
| registry.k8s.io/pause                       | latest            | 350b164e7ae1d | 240kB  |
| docker.io/library/nginx                     | latest            | 5ef79149e0ec8 | 188MB  |
| registry.k8s.io/kube-scheduler              | v1.31.0           | 1766f54c897f0 | 67.4MB |
| registry.k8s.io/etcd                        | 3.5.15-0          | 2e96e5913fc06 | 148MB  |
| docker.io/kicbase/echo-server               | functional-369865 | 9056ab77afb8e | 4.94MB |
| docker.io/kubernetesui/metrics-scraper      | <none>            | 115053965e86b | 43.8MB |
| gcr.io/k8s-minikube/busybox                 | 1.28.4-glibc      | 56cc512116c8f | 4.4MB  |
| registry.k8s.io/echoserver                  | 1.8               | 82e4c8a736a4f | 95.4MB |
| docker.io/library/minikube-local-cache-test | functional-369865 | d08fc6c53ee44 | 30B    |
| docker.io/library/nginx                     | alpine            | 0f0eda053dc5c | 43.3MB |
| registry.k8s.io/pause                       | 3.1               | da86e6ba6ca19 | 742kB  |
|---------------------------------------------|-------------------|---------------|--------|
functional_test.go:269: (dbg) Stderr: out/minikube-linux-amd64 -p functional-369865 image ls --format table --alsologtostderr:
I0831 22:23:11.826343   77188 out.go:345] Setting OutFile to fd 1 ...
I0831 22:23:11.826455   77188 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0831 22:23:11.826464   77188 out.go:358] Setting ErrFile to fd 2...
I0831 22:23:11.826469   77188 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0831 22:23:11.826659   77188 root.go:338] Updating PATH: /home/jenkins/minikube-integration/18943-12963/.minikube/bin
I0831 22:23:11.827225   77188 config.go:182] Loaded profile config "functional-369865": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.31.0
I0831 22:23:11.827337   77188 config.go:182] Loaded profile config "functional-369865": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.31.0
I0831 22:23:11.827732   77188 cli_runner.go:164] Run: docker container inspect functional-369865 --format={{.State.Status}}
I0831 22:23:11.844722   77188 ssh_runner.go:195] Run: systemctl --version
I0831 22:23:11.844781   77188 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-369865
I0831 22:23:11.866845   77188 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32778 SSHKeyPath:/home/jenkins/minikube-integration/18943-12963/.minikube/machines/functional-369865/id_rsa Username:docker}
I0831 22:23:11.958450   77188 ssh_runner.go:195] Run: docker images --no-trunc --format "{{json .}}"
--- PASS: TestFunctional/parallel/ImageCommands/ImageListTable (0.21s)

TestFunctional/parallel/ImageCommands/ImageListJson (0.21s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageListJson
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListJson

=== CONT  TestFunctional/parallel/ImageCommands/ImageListJson
functional_test.go:261: (dbg) Run:  out/minikube-linux-amd64 -p functional-369865 image ls --format json --alsologtostderr
functional_test.go:266: (dbg) Stdout: out/minikube-linux-amd64 -p functional-369865 image ls --format json --alsologtostderr:
[{"id":"5107333e08a87b836d48ff7528b1e84b9c86781cc9f1748bbc1b8c42a870d933","repoDigests":[],"repoTags":["docker.io/library/mysql:5.7"],"size":"501000000"},{"id":"5ef79149e0ec84a7a9f9284c3f91aa3c20608f8391f5445eabe92ef07dbda03c","repoDigests":[],"repoTags":["docker.io/library/nginx:latest"],"size":"188000000"},{"id":"da86e6ba6ca197bf6bc5e9d900febd906b133eaa4750e6bed647b0fbe50ed43e","repoDigests":[],"repoTags":["registry.k8s.io/pause:3.1"],"size":"742000"},{"id":"2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4","repoDigests":[],"repoTags":["registry.k8s.io/etcd:3.5.15-0"],"size":"148000000"},{"id":"873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136","repoDigests":[],"repoTags":["registry.k8s.io/pause:3.10"],"size":"736000"},{"id":"cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4","repoDigests":[],"repoTags":["registry.k8s.io/coredns/coredns:v1.11.1"],"size":"59800000"},{"id":"d08fc6c53ee441d8f9230c07a39e9d98d74abfec3e301022f63bacddfae3fc31","repoDigests":[],"repoTags":["docker.io/library/minikube-local-cache-test:functional-369865"],"size":"30"},{"id":"0f0eda053dc5c4c8240f11542cb4d200db6a11d476a4189b1eb0a3afa5684a9a","repoDigests":[],"repoTags":["docker.io/library/nginx:alpine"],"size":"43300000"},{"id":"045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1","repoDigests":[],"repoTags":["registry.k8s.io/kube-controller-manager:v1.31.0"],"size":"88400000"},{"id":"1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94","repoDigests":[],"repoTags":["registry.k8s.io/kube-scheduler:v1.31.0"],"size":"67400000"},{"id":"ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494","repoDigests":[],"repoTags":["registry.k8s.io/kube-proxy:v1.31.0"],"size":"91500000"},{"id":"9056ab77afb8e18e04303f11000a9d31b3f16b74c59475b899ae1b342d328d30","repoDigests":[],"repoTags":["docker.io/kicbase/echo-server:functional-369865"],"size":"4940000"},{"id":"115053965e86b2df4d78af78d7951b8644839d20a03820c6df59a261103315f7","repoDigests":[],"repoTags":["docker.io/kubernetesui/metrics-scraper:\u003cnone\u003e"],"size":"43800000"},{"id":"0184c1613d92931126feb4c548e5da11015513b9e4c104e7305ee8b53b50a9da","repoDigests":[],"repoTags":["registry.k8s.io/pause:3.3"],"size":"683000"},{"id":"604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3","repoDigests":[],"repoTags":["registry.k8s.io/kube-apiserver:v1.31.0"],"size":"94200000"},{"id":"6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562","repoDigests":[],"repoTags":["gcr.io/k8s-minikube/storage-provisioner:v5"],"size":"31500000"},{"id":"56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c","repoDigests":[],"repoTags":["gcr.io/k8s-minikube/busybox:1.28.4-glibc"],"size":"4400000"},{"id":"82e4c8a736a4fcf22b5ef9f6a4ff6207064c7187d7694bf97bd561605a538410","repoDigests":[],"repoTags":["registry.k8s.io/echoserver:1.8"],"size":"95400000"},{"id":"350b164e7ae1dcddeffadd65c76226c9b6dc5553f5179153fb0e36b78f2a5e06","repoDigests":[],"repoTags":["registry.k8s.io/pause:latest"],"size":"240000"}]
functional_test.go:269: (dbg) Stderr: out/minikube-linux-amd64 -p functional-369865 image ls --format json --alsologtostderr:
I0831 22:23:11.618777   77142 out.go:345] Setting OutFile to fd 1 ...
I0831 22:23:11.619186   77142 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0831 22:23:11.619200   77142 out.go:358] Setting ErrFile to fd 2...
I0831 22:23:11.619209   77142 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0831 22:23:11.619658   77142 root.go:338] Updating PATH: /home/jenkins/minikube-integration/18943-12963/.minikube/bin
I0831 22:23:11.620833   77142 config.go:182] Loaded profile config "functional-369865": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.31.0
I0831 22:23:11.621156   77142 config.go:182] Loaded profile config "functional-369865": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.31.0
I0831 22:23:11.621892   77142 cli_runner.go:164] Run: docker container inspect functional-369865 --format={{.State.Status}}
I0831 22:23:11.639278   77142 ssh_runner.go:195] Run: systemctl --version
I0831 22:23:11.639321   77142 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-369865
I0831 22:23:11.656845   77142 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32778 SSHKeyPath:/home/jenkins/minikube-integration/18943-12963/.minikube/machines/functional-369865/id_rsa Username:docker}
I0831 22:23:11.749343   77142 ssh_runner.go:195] Run: docker images --no-trunc --format "{{json .}}"
--- PASS: TestFunctional/parallel/ImageCommands/ImageListJson (0.21s)

TestFunctional/parallel/ImageCommands/ImageListYaml (0.21s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageListYaml
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListYaml

=== CONT  TestFunctional/parallel/ImageCommands/ImageListYaml
functional_test.go:261: (dbg) Run:  out/minikube-linux-amd64 -p functional-369865 image ls --format yaml --alsologtostderr
functional_test.go:266: (dbg) Stdout: out/minikube-linux-amd64 -p functional-369865 image ls --format yaml --alsologtostderr:
- id: 9056ab77afb8e18e04303f11000a9d31b3f16b74c59475b899ae1b342d328d30
repoDigests: []
repoTags:
- docker.io/kicbase/echo-server:functional-369865
size: "4940000"
- id: 115053965e86b2df4d78af78d7951b8644839d20a03820c6df59a261103315f7
repoDigests: []
repoTags:
- docker.io/kubernetesui/metrics-scraper:<none>
size: "43800000"
- id: 6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562
repoDigests: []
repoTags:
- gcr.io/k8s-minikube/storage-provisioner:v5
size: "31500000"
- id: da86e6ba6ca197bf6bc5e9d900febd906b133eaa4750e6bed647b0fbe50ed43e
repoDigests: []
repoTags:
- registry.k8s.io/pause:3.1
size: "742000"
- id: 350b164e7ae1dcddeffadd65c76226c9b6dc5553f5179153fb0e36b78f2a5e06
repoDigests: []
repoTags:
- registry.k8s.io/pause:latest
size: "240000"
- id: d08fc6c53ee441d8f9230c07a39e9d98d74abfec3e301022f63bacddfae3fc31
repoDigests: []
repoTags:
- docker.io/library/minikube-local-cache-test:functional-369865
size: "30"
- id: 2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4
repoDigests: []
repoTags:
- registry.k8s.io/etcd:3.5.15-0
size: "148000000"
- id: ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494
repoDigests: []
repoTags:
- registry.k8s.io/kube-proxy:v1.31.0
size: "91500000"
- id: 873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136
repoDigests: []
repoTags:
- registry.k8s.io/pause:3.10
size: "736000"
- id: cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4
repoDigests: []
repoTags:
- registry.k8s.io/coredns/coredns:v1.11.1
size: "59800000"
- id: 56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c
repoDigests: []
repoTags:
- gcr.io/k8s-minikube/busybox:1.28.4-glibc
size: "4400000"
- id: 0f0eda053dc5c4c8240f11542cb4d200db6a11d476a4189b1eb0a3afa5684a9a
repoDigests: []
repoTags:
- docker.io/library/nginx:alpine
size: "43300000"
- id: 5ef79149e0ec84a7a9f9284c3f91aa3c20608f8391f5445eabe92ef07dbda03c
repoDigests: []
repoTags:
- docker.io/library/nginx:latest
size: "188000000"
- id: 1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94
repoDigests: []
repoTags:
- registry.k8s.io/kube-scheduler:v1.31.0
size: "67400000"
- id: 5107333e08a87b836d48ff7528b1e84b9c86781cc9f1748bbc1b8c42a870d933
repoDigests: []
repoTags:
- docker.io/library/mysql:5.7
size: "501000000"
- id: 0184c1613d92931126feb4c548e5da11015513b9e4c104e7305ee8b53b50a9da
repoDigests: []
repoTags:
- registry.k8s.io/pause:3.3
size: "683000"
- id: 82e4c8a736a4fcf22b5ef9f6a4ff6207064c7187d7694bf97bd561605a538410
repoDigests: []
repoTags:
- registry.k8s.io/echoserver:1.8
size: "95400000"
- id: 045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1
repoDigests: []
repoTags:
- registry.k8s.io/kube-controller-manager:v1.31.0
size: "88400000"
- id: 604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3
repoDigests: []
repoTags:
- registry.k8s.io/kube-apiserver:v1.31.0
size: "94200000"

functional_test.go:269: (dbg) Stderr: out/minikube-linux-amd64 -p functional-369865 image ls --format yaml --alsologtostderr:
I0831 22:23:11.404816   77090 out.go:345] Setting OutFile to fd 1 ...
I0831 22:23:11.404938   77090 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0831 22:23:11.404948   77090 out.go:358] Setting ErrFile to fd 2...
I0831 22:23:11.404952   77090 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0831 22:23:11.405231   77090 root.go:338] Updating PATH: /home/jenkins/minikube-integration/18943-12963/.minikube/bin
I0831 22:23:11.406055   77090 config.go:182] Loaded profile config "functional-369865": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.31.0
I0831 22:23:11.406195   77090 config.go:182] Loaded profile config "functional-369865": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.31.0
I0831 22:23:11.406725   77090 cli_runner.go:164] Run: docker container inspect functional-369865 --format={{.State.Status}}
I0831 22:23:11.428426   77090 ssh_runner.go:195] Run: systemctl --version
I0831 22:23:11.428504   77090 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-369865
I0831 22:23:11.448865   77090 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32778 SSHKeyPath:/home/jenkins/minikube-integration/18943-12963/.minikube/machines/functional-369865/id_rsa Username:docker}
I0831 22:23:11.538239   77090 ssh_runner.go:195] Run: docker images --no-trunc --format "{{json .}}"
--- PASS: TestFunctional/parallel/ImageCommands/ImageListYaml (0.21s)

TestFunctional/parallel/ImageCommands/ImageBuild (4.94s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageBuild
=== PAUSE TestFunctional/parallel/ImageCommands/ImageBuild

=== CONT  TestFunctional/parallel/ImageCommands/ImageBuild
functional_test.go:308: (dbg) Run:  out/minikube-linux-amd64 -p functional-369865 ssh pgrep buildkitd
functional_test.go:308: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-369865 ssh pgrep buildkitd: exit status 1 (267.037025ms)

** stderr ** 
	ssh: Process exited with status 1

** /stderr **
functional_test.go:315: (dbg) Run:  out/minikube-linux-amd64 -p functional-369865 image build -t localhost/my-image:functional-369865 testdata/build --alsologtostderr
functional_test.go:315: (dbg) Done: out/minikube-linux-amd64 -p functional-369865 image build -t localhost/my-image:functional-369865 testdata/build --alsologtostderr: (4.478544391s)
functional_test.go:323: (dbg) Stderr: out/minikube-linux-amd64 -p functional-369865 image build -t localhost/my-image:functional-369865 testdata/build --alsologtostderr:
I0831 22:23:12.310323   77336 out.go:345] Setting OutFile to fd 1 ...
I0831 22:23:12.310453   77336 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0831 22:23:12.310463   77336 out.go:358] Setting ErrFile to fd 2...
I0831 22:23:12.310467   77336 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0831 22:23:12.310653   77336 root.go:338] Updating PATH: /home/jenkins/minikube-integration/18943-12963/.minikube/bin
I0831 22:23:12.311192   77336 config.go:182] Loaded profile config "functional-369865": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.31.0
I0831 22:23:12.311772   77336 config.go:182] Loaded profile config "functional-369865": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.31.0
I0831 22:23:12.312147   77336 cli_runner.go:164] Run: docker container inspect functional-369865 --format={{.State.Status}}
I0831 22:23:12.328697   77336 ssh_runner.go:195] Run: systemctl --version
I0831 22:23:12.328736   77336 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-369865
I0831 22:23:12.347605   77336 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32778 SSHKeyPath:/home/jenkins/minikube-integration/18943-12963/.minikube/machines/functional-369865/id_rsa Username:docker}
I0831 22:23:12.453010   77336 build_images.go:161] Building image from path: /tmp/build.1878743318.tar
I0831 22:23:12.453103   77336 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/build
I0831 22:23:12.463432   77336 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/build/build.1878743318.tar
I0831 22:23:12.467112   77336 ssh_runner.go:352] existence check for /var/lib/minikube/build/build.1878743318.tar: stat -c "%s %y" /var/lib/minikube/build/build.1878743318.tar: Process exited with status 1
stdout:

                                                
                                                
stderr:
stat: cannot statx '/var/lib/minikube/build/build.1878743318.tar': No such file or directory
I0831 22:23:12.467146   77336 ssh_runner.go:362] scp /tmp/build.1878743318.tar --> /var/lib/minikube/build/build.1878743318.tar (3072 bytes)
I0831 22:23:12.493395   77336 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/build/build.1878743318
I0831 22:23:12.503703   77336 ssh_runner.go:195] Run: sudo tar -C /var/lib/minikube/build/build.1878743318 -xf /var/lib/minikube/build/build.1878743318.tar
I0831 22:23:12.550648   77336 docker.go:360] Building image: /var/lib/minikube/build/build.1878743318
I0831 22:23:12.550732   77336 ssh_runner.go:195] Run: docker build -t localhost/my-image:functional-369865 /var/lib/minikube/build/build.1878743318
#0 building with "default" instance using docker driver

                                                
                                                
#1 [internal] load build definition from Dockerfile
#1 transferring dockerfile: 97B done
#1 DONE 0.0s

                                                
                                                
#2 [internal] load metadata for gcr.io/k8s-minikube/busybox:latest
#2 DONE 1.6s

                                                
                                                
#3 [internal] load .dockerignore
#3 transferring context: 2B 0.0s done
#3 DONE 0.1s

                                                
                                                
#4 [internal] load build context
#4 transferring context: 62B done
#4 DONE 0.0s

                                                
                                                
#5 [1/3] FROM gcr.io/k8s-minikube/busybox:latest@sha256:ca5ae90100d50772da31f3b5016209e25ad61972404e2ccd83d44f10dee7e79b
#5 resolve gcr.io/k8s-minikube/busybox:latest@sha256:ca5ae90100d50772da31f3b5016209e25ad61972404e2ccd83d44f10dee7e79b 0.0s done
#5 sha256:ca5ae90100d50772da31f3b5016209e25ad61972404e2ccd83d44f10dee7e79b 770B / 770B done
#5 sha256:62ffc2ed7554e4c6d360bce40bbcf196573dd27c4ce080641a2c59867e732dee 527B / 527B done
#5 sha256:beae173ccac6ad749f76713cf4440fe3d21d1043fe616dfbe30775815d1d0f6a 1.46kB / 1.46kB done
#5 sha256:5cc84ad355aaa64f46ea9c7bbcc319a9d808ab15088a27209c9e70ef86e5a2aa 0B / 772.79kB 0.1s
#5 sha256:5cc84ad355aaa64f46ea9c7bbcc319a9d808ab15088a27209c9e70ef86e5a2aa 772.79kB / 772.79kB 0.6s done
#5 extracting sha256:5cc84ad355aaa64f46ea9c7bbcc319a9d808ab15088a27209c9e70ef86e5a2aa 0.0s done
#5 DONE 0.8s

                                                
                                                
#6 [2/3] RUN true
#6 DONE 1.3s

                                                
                                                
#7 [3/3] ADD content.txt /
#7 DONE 0.0s

                                                
                                                
#8 exporting to image
#8 exporting layers 0.0s done
#8 writing image sha256:d841961f9bd527ba110a9d1f8c15fa4f353e8b04c622d8b04de217eaeaa74311 done
#8 naming to localhost/my-image:functional-369865 done
#8 DONE 0.0s
I0831 22:23:16.717600   77336 ssh_runner.go:235] Completed: docker build -t localhost/my-image:functional-369865 /var/lib/minikube/build/build.1878743318: (4.166843719s)
I0831 22:23:16.717668   77336 ssh_runner.go:195] Run: sudo rm -rf /var/lib/minikube/build/build.1878743318
I0831 22:23:16.725831   77336 ssh_runner.go:195] Run: sudo rm -f /var/lib/minikube/build/build.1878743318.tar
I0831 22:23:16.733310   77336 build_images.go:217] Built localhost/my-image:functional-369865 from /tmp/build.1878743318.tar
I0831 22:23:16.733336   77336 build_images.go:133] succeeded building to: functional-369865
I0831 22:23:16.733341   77336 build_images.go:134] failed building to: 
functional_test.go:451: (dbg) Run:  out/minikube-linux-amd64 -p functional-369865 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageBuild (4.94s)
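The transcript above is enough to reconstruct the Dockerfile being exercised (a sketch inferred from build steps #1-#7, not copied from testdata/build): a busybox base, a no-op RUN, and a single ADD, which matches the 97B dockerfile transfer and the three build stages.

	$ cat testdata/build/Dockerfile        # reconstructed for illustration; actual contents may differ
	FROM gcr.io/k8s-minikube/busybox
	RUN true
	ADD content.txt /
	$ out/minikube-linux-amd64 -p functional-369865 image build -t localhost/my-image:functional-369865 testdata/build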

                                                
                                    
TestFunctional/parallel/ImageCommands/Setup (1.84s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/Setup
functional_test.go:342: (dbg) Run:  docker pull kicbase/echo-server:1.0
functional_test.go:342: (dbg) Done: docker pull kicbase/echo-server:1.0: (1.825904239s)
functional_test.go:347: (dbg) Run:  docker tag kicbase/echo-server:1.0 kicbase/echo-server:functional-369865
--- PASS: TestFunctional/parallel/ImageCommands/Setup (1.84s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageLoadDaemon (0.85s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageLoadDaemon
functional_test.go:355: (dbg) Run:  out/minikube-linux-amd64 -p functional-369865 image load --daemon kicbase/echo-server:functional-369865 --alsologtostderr
functional_test.go:451: (dbg) Run:  out/minikube-linux-amd64 -p functional-369865 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageLoadDaemon (0.85s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageReloadDaemon (0.76s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageReloadDaemon
functional_test.go:365: (dbg) Run:  out/minikube-linux-amd64 -p functional-369865 image load --daemon kicbase/echo-server:functional-369865 --alsologtostderr
functional_test.go:451: (dbg) Run:  out/minikube-linux-amd64 -p functional-369865 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageReloadDaemon (0.76s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon (1.59s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon
functional_test.go:235: (dbg) Run:  docker pull kicbase/echo-server:latest
functional_test.go:240: (dbg) Run:  docker tag kicbase/echo-server:latest kicbase/echo-server:functional-369865
functional_test.go:245: (dbg) Run:  out/minikube-linux-amd64 -p functional-369865 image load --daemon kicbase/echo-server:functional-369865 --alsologtostderr
functional_test.go:451: (dbg) Run:  out/minikube-linux-amd64 -p functional-369865 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon (1.59s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageSaveToFile (0.29s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageSaveToFile
functional_test.go:380: (dbg) Run:  out/minikube-linux-amd64 -p functional-369865 image save kicbase/echo-server:functional-369865 /home/jenkins/workspace/Docker_Linux_integration/echo-server-save.tar --alsologtostderr
--- PASS: TestFunctional/parallel/ImageCommands/ImageSaveToFile (0.29s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageRemove (0.37s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageRemove
functional_test.go:392: (dbg) Run:  out/minikube-linux-amd64 -p functional-369865 image rm kicbase/echo-server:functional-369865 --alsologtostderr
functional_test.go:451: (dbg) Run:  out/minikube-linux-amd64 -p functional-369865 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageRemove (0.37s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageLoadFromFile (0.54s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageLoadFromFile
functional_test.go:409: (dbg) Run:  out/minikube-linux-amd64 -p functional-369865 image load /home/jenkins/workspace/Docker_Linux_integration/echo-server-save.tar --alsologtostderr
functional_test.go:451: (dbg) Run:  out/minikube-linux-amd64 -p functional-369865 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageLoadFromFile (0.54s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageSaveDaemon (0.32s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageSaveDaemon
functional_test.go:419: (dbg) Run:  docker rmi kicbase/echo-server:functional-369865
functional_test.go:424: (dbg) Run:  out/minikube-linux-amd64 -p functional-369865 image save --daemon kicbase/echo-server:functional-369865 --alsologtostderr
functional_test.go:432: (dbg) Run:  docker image inspect kicbase/echo-server:functional-369865
--- PASS: TestFunctional/parallel/ImageCommands/ImageSaveDaemon (0.32s)
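Taken together, ImageSaveToFile, ImageRemove, ImageLoadFromFile and ImageSaveDaemon above exercise a full save/load round trip. A condensed sketch of the same flow (tarball path shortened to /tmp for illustration):

	$ out/minikube-linux-amd64 -p functional-369865 image save kicbase/echo-server:functional-369865 /tmp/echo-server-save.tar
	$ out/minikube-linux-amd64 -p functional-369865 image rm kicbase/echo-server:functional-369865      # removed from the node
	$ out/minikube-linux-amd64 -p functional-369865 image load /tmp/echo-server-save.tar                # restored from the tarball
	$ out/minikube-linux-amd64 -p functional-369865 image save --daemon kicbase/echo-server:functional-369865
	$ docker image inspect kicbase/echo-server:functional-369865                                        # now in the host daemon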

                                                
                                    
TestFunctional/parallel/ServiceCmd/List (0.42s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/List
functional_test.go:1459: (dbg) Run:  out/minikube-linux-amd64 -p functional-369865 service list
--- PASS: TestFunctional/parallel/ServiceCmd/List (0.42s)

                                                
                                    
TestFunctional/parallel/ServiceCmd/JSONOutput (0.33s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/JSONOutput
functional_test.go:1489: (dbg) Run:  out/minikube-linux-amd64 -p functional-369865 service list -o json
functional_test.go:1494: Took "329.795119ms" to run "out/minikube-linux-amd64 -p functional-369865 service list -o json"
--- PASS: TestFunctional/parallel/ServiceCmd/JSONOutput (0.33s)

                                                
                                    
TestFunctional/parallel/ProfileCmd/profile_not_create (0.4s)

                                                
                                                
=== RUN   TestFunctional/parallel/ProfileCmd/profile_not_create
functional_test.go:1270: (dbg) Run:  out/minikube-linux-amd64 profile lis
functional_test.go:1275: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestFunctional/parallel/ProfileCmd/profile_not_create (0.40s)

                                                
                                    
TestFunctional/parallel/ServiceCmd/HTTPS (0.37s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/HTTPS
functional_test.go:1509: (dbg) Run:  out/minikube-linux-amd64 -p functional-369865 service --namespace=default --https --url hello-node
functional_test.go:1522: found endpoint: https://192.168.49.2:31539
--- PASS: TestFunctional/parallel/ServiceCmd/HTTPS (0.37s)

                                                
                                    
TestFunctional/parallel/ProfileCmd/profile_list (0.38s)

                                                
                                                
=== RUN   TestFunctional/parallel/ProfileCmd/profile_list
functional_test.go:1310: (dbg) Run:  out/minikube-linux-amd64 profile list
functional_test.go:1315: Took "326.97951ms" to run "out/minikube-linux-amd64 profile list"
functional_test.go:1324: (dbg) Run:  out/minikube-linux-amd64 profile list -l
functional_test.go:1329: Took "49.726829ms" to run "out/minikube-linux-amd64 profile list -l"
--- PASS: TestFunctional/parallel/ProfileCmd/profile_list (0.38s)

                                                
                                    
TestFunctional/parallel/ServiceCmd/Format (0.39s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/Format
functional_test.go:1540: (dbg) Run:  out/minikube-linux-amd64 -p functional-369865 service hello-node --url --format={{.IP}}
--- PASS: TestFunctional/parallel/ServiceCmd/Format (0.39s)

                                                
                                    
TestFunctional/parallel/ProfileCmd/profile_json_output (0.41s)

                                                
                                                
=== RUN   TestFunctional/parallel/ProfileCmd/profile_json_output
functional_test.go:1361: (dbg) Run:  out/minikube-linux-amd64 profile list -o json
functional_test.go:1366: Took "363.011402ms" to run "out/minikube-linux-amd64 profile list -o json"
functional_test.go:1374: (dbg) Run:  out/minikube-linux-amd64 profile list -o json --light
functional_test.go:1379: Took "49.44368ms" to run "out/minikube-linux-amd64 profile list -o json --light"
--- PASS: TestFunctional/parallel/ProfileCmd/profile_json_output (0.41s)

                                                
                                    
TestFunctional/parallel/ServiceCmd/URL (0.44s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/URL
functional_test.go:1559: (dbg) Run:  out/minikube-linux-amd64 -p functional-369865 service hello-node --url
functional_test.go:1565: found endpoint for hello-node: http://192.168.49.2:31539
--- PASS: TestFunctional/parallel/ServiceCmd/URL (0.44s)
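The ServiceCmd subtests resolve the same hello-node NodePort several ways: a plain listing, JSON output, and direct URL queries. A sketch, with the endpoint values taken from this run (the final curl is a hypothetical follow-up, not part of the test):

	$ out/minikube-linux-amd64 -p functional-369865 service list -o json
	$ out/minikube-linux-amd64 -p functional-369865 service hello-node --url                              # -> http://192.168.49.2:31539
	$ out/minikube-linux-amd64 -p functional-369865 service --namespace=default --https --url hello-node  # -> https://192.168.49.2:31539
	$ curl -sI http://192.168.49.2:31539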

                                                
                                    
TestFunctional/parallel/DockerEnv/bash (1.11s)

                                                
                                                
=== RUN   TestFunctional/parallel/DockerEnv/bash
functional_test.go:499: (dbg) Run:  /bin/bash -c "eval $(out/minikube-linux-amd64 -p functional-369865 docker-env) && out/minikube-linux-amd64 status -p functional-369865"
functional_test.go:522: (dbg) Run:  /bin/bash -c "eval $(out/minikube-linux-amd64 -p functional-369865 docker-env) && docker images"
--- PASS: TestFunctional/parallel/DockerEnv/bash (1.11s)
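docker-env prints the exports (DOCKER_HOST and friends) that point the host docker CLI at the daemon inside the functional-369865 node, which is why the final command lists the cluster's images rather than the host's:

	$ eval $(out/minikube-linux-amd64 -p functional-369865 docker-env)   # exports DOCKER_HOST etc. for this shell
	$ out/minikube-linux-amd64 status -p functional-369865               # cluster still healthy with the env applied
	$ docker images                                                      # now served by the node's docker daemon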

                                                
                                    
TestFunctional/parallel/MountCmd/any-port (7.92s)

                                                
                                                
=== RUN   TestFunctional/parallel/MountCmd/any-port
functional_test_mount_test.go:73: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-369865 /tmp/TestFunctionalparallelMountCmdany-port2372175766/001:/mount-9p --alsologtostderr -v=1]
functional_test_mount_test.go:107: wrote "test-1725142971606497430" to /tmp/TestFunctionalparallelMountCmdany-port2372175766/001/created-by-test
functional_test_mount_test.go:107: wrote "test-1725142971606497430" to /tmp/TestFunctionalparallelMountCmdany-port2372175766/001/created-by-test-removed-by-pod
functional_test_mount_test.go:107: wrote "test-1725142971606497430" to /tmp/TestFunctionalparallelMountCmdany-port2372175766/001/test-1725142971606497430
functional_test_mount_test.go:115: (dbg) Run:  out/minikube-linux-amd64 -p functional-369865 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:115: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-369865 ssh "findmnt -T /mount-9p | grep 9p": exit status 1 (319.436641ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
functional_test_mount_test.go:115: (dbg) Run:  out/minikube-linux-amd64 -p functional-369865 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:129: (dbg) Run:  out/minikube-linux-amd64 -p functional-369865 ssh -- ls -la /mount-9p
functional_test_mount_test.go:133: guest mount directory contents
total 2
-rw-r--r-- 1 docker docker 24 Aug 31 22:22 created-by-test
-rw-r--r-- 1 docker docker 24 Aug 31 22:22 created-by-test-removed-by-pod
-rw-r--r-- 1 docker docker 24 Aug 31 22:22 test-1725142971606497430
functional_test_mount_test.go:137: (dbg) Run:  out/minikube-linux-amd64 -p functional-369865 ssh cat /mount-9p/test-1725142971606497430
functional_test_mount_test.go:148: (dbg) Run:  kubectl --context functional-369865 replace --force -f testdata/busybox-mount-test.yaml
functional_test_mount_test.go:153: (dbg) TestFunctional/parallel/MountCmd/any-port: waiting 4m0s for pods matching "integration-test=busybox-mount" in namespace "default" ...
helpers_test.go:345: "busybox-mount" [9416de5c-94e7-4a47-94aa-9cdf300db35f] Pending
helpers_test.go:345: "busybox-mount" [9416de5c-94e7-4a47-94aa-9cdf300db35f] Pending / Ready:ContainersNotReady (containers with unready status: [mount-munger]) / ContainersReady:ContainersNotReady (containers with unready status: [mount-munger])
helpers_test.go:345: "busybox-mount" [9416de5c-94e7-4a47-94aa-9cdf300db35f] Pending / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
helpers_test.go:345: "busybox-mount" [9416de5c-94e7-4a47-94aa-9cdf300db35f] Succeeded / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
functional_test_mount_test.go:153: (dbg) TestFunctional/parallel/MountCmd/any-port: integration-test=busybox-mount healthy within 5.003129651s
functional_test_mount_test.go:169: (dbg) Run:  kubectl --context functional-369865 logs busybox-mount
functional_test_mount_test.go:181: (dbg) Run:  out/minikube-linux-amd64 -p functional-369865 ssh stat /mount-9p/created-by-test
functional_test_mount_test.go:181: (dbg) Run:  out/minikube-linux-amd64 -p functional-369865 ssh stat /mount-9p/created-by-pod
functional_test_mount_test.go:90: (dbg) Run:  out/minikube-linux-amd64 -p functional-369865 ssh "sudo umount -f /mount-9p"
functional_test_mount_test.go:94: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-369865 /tmp/TestFunctionalparallelMountCmdany-port2372175766/001:/mount-9p --alsologtostderr -v=1] ...
--- PASS: TestFunctional/parallel/MountCmd/any-port (7.92s)
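The mount test backgrounds a host-side 9p server and then verifies it from the guest; the first findmnt probe above appears to fail only because it races the mount becoming ready, and the immediate retry succeeds. The same flow by hand (/tmp/src is a placeholder host directory):

	$ out/minikube-linux-amd64 mount -p functional-369865 /tmp/src:/mount-9p &              # host-side 9p server
	$ out/minikube-linux-amd64 -p functional-369865 ssh "findmnt -T /mount-9p | grep 9p"    # may need one retry while the mount settles
	$ out/minikube-linux-amd64 -p functional-369865 ssh -- ls -la /mount-9p
	$ out/minikube-linux-amd64 -p functional-369865 ssh "sudo umount -f /mount-9p"          # cleanup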

                                                
                                    
TestFunctional/parallel/UpdateContextCmd/no_changes (0.13s)

                                                
                                                
=== RUN   TestFunctional/parallel/UpdateContextCmd/no_changes
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_changes

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/UpdateContextCmd/no_changes
functional_test.go:2119: (dbg) Run:  out/minikube-linux-amd64 -p functional-369865 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_changes (0.13s)

                                                
                                    
TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster (0.13s)

                                                
                                                
=== RUN   TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
functional_test.go:2119: (dbg) Run:  out/minikube-linux-amd64 -p functional-369865 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster (0.13s)

                                                
                                    
TestFunctional/parallel/UpdateContextCmd/no_clusters (0.11s)

                                                
                                                
=== RUN   TestFunctional/parallel/UpdateContextCmd/no_clusters
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_clusters

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/UpdateContextCmd/no_clusters
functional_test.go:2119: (dbg) Run:  out/minikube-linux-amd64 -p functional-369865 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_clusters (0.11s)
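All three UpdateContextCmd variants run the same command and differ only in the kubeconfig state they start from; update-context rewrites the profile's kubeconfig entry so it matches the cluster's current endpoint (its documented purpose is handling IP or port changes):

	$ out/minikube-linux-amd64 -p functional-369865 update-context --alsologtostderr -v=2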

                                                
                                    
TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP (0.06s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP
functional_test_tunnel_test.go:234: (dbg) Run:  kubectl --context functional-369865 get svc nginx-svc -o jsonpath={.status.loadBalancer.ingress[0].ip}
--- PASS: TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP (0.06s)

                                                
                                    
TestFunctional/parallel/TunnelCmd/serial/AccessDirect (0s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/AccessDirect
functional_test_tunnel_test.go:299: tunnel at http://10.108.132.222 is working!
--- PASS: TestFunctional/parallel/TunnelCmd/serial/AccessDirect (0.00s)

                                                
                                    
TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel (0.11s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel
functional_test_tunnel_test.go:434: (dbg) stopping [out/minikube-linux-amd64 -p functional-369865 tunnel --alsologtostderr] ...
--- PASS: TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel (0.11s)
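These serial tunnel subtests confirm that, with the tunnel running, the nginx-svc LoadBalancer receives an in-cluster ingress IP that is reachable from the host, and that the tunnel process can be torn down cleanly. In outline (IP from this run; the curl is a hypothetical spot check, the test itself only reports the tunnel as working):

	$ out/minikube-linux-amd64 -p functional-369865 tunnel &    # routes service traffic from the host into the cluster
	$ kubectl --context functional-369865 get svc nginx-svc -o jsonpath='{.status.loadBalancer.ingress[0].ip}'   # -> 10.108.132.222
	$ curl -s http://10.108.132.222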

                                                
                                    
TestFunctional/parallel/MountCmd/specific-port (1.64s)

                                                
                                                
=== RUN   TestFunctional/parallel/MountCmd/specific-port
functional_test_mount_test.go:213: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-369865 /tmp/TestFunctionalparallelMountCmdspecific-port2927496661/001:/mount-9p --alsologtostderr -v=1 --port 46464]
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-linux-amd64 -p functional-369865 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:243: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-369865 ssh "findmnt -T /mount-9p | grep 9p": exit status 1 (275.97461ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-linux-amd64 -p functional-369865 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:257: (dbg) Run:  out/minikube-linux-amd64 -p functional-369865 ssh -- ls -la /mount-9p
functional_test_mount_test.go:261: guest mount directory contents
total 0
functional_test_mount_test.go:263: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-369865 /tmp/TestFunctionalparallelMountCmdspecific-port2927496661/001:/mount-9p --alsologtostderr -v=1 --port 46464] ...
functional_test_mount_test.go:264: reading mount text
functional_test_mount_test.go:278: done reading mount text
functional_test_mount_test.go:230: (dbg) Run:  out/minikube-linux-amd64 -p functional-369865 ssh "sudo umount -f /mount-9p"
functional_test_mount_test.go:230: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-369865 ssh "sudo umount -f /mount-9p": exit status 1 (278.843658ms)

                                                
                                                
-- stdout --
	umount: /mount-9p: not mounted.

                                                
                                                
-- /stdout --
** stderr ** 
	ssh: Process exited with status 32

                                                
                                                
** /stderr **
functional_test_mount_test.go:232: "out/minikube-linux-amd64 -p functional-369865 ssh \"sudo umount -f /mount-9p\"": exit status 1
functional_test_mount_test.go:234: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-369865 /tmp/TestFunctionalparallelMountCmdspecific-port2927496661/001:/mount-9p --alsologtostderr -v=1 --port 46464] ...
--- PASS: TestFunctional/parallel/MountCmd/specific-port (1.64s)

                                                
                                    
TestFunctional/parallel/MountCmd/VerifyCleanup (1.72s)

                                                
                                                
=== RUN   TestFunctional/parallel/MountCmd/VerifyCleanup
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-369865 /tmp/TestFunctionalparallelMountCmdVerifyCleanup1612766229/001:/mount1 --alsologtostderr -v=1]
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-369865 /tmp/TestFunctionalparallelMountCmdVerifyCleanup1612766229/001:/mount2 --alsologtostderr -v=1]
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-369865 /tmp/TestFunctionalparallelMountCmdVerifyCleanup1612766229/001:/mount3 --alsologtostderr -v=1]
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-amd64 -p functional-369865 ssh "findmnt -T" /mount1
functional_test_mount_test.go:325: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-369865 ssh "findmnt -T" /mount1: exit status 1 (356.444336ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-amd64 -p functional-369865 ssh "findmnt -T" /mount1
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-amd64 -p functional-369865 ssh "findmnt -T" /mount2
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-amd64 -p functional-369865 ssh "findmnt -T" /mount3
functional_test_mount_test.go:370: (dbg) Run:  out/minikube-linux-amd64 mount -p functional-369865 --kill=true
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-369865 /tmp/TestFunctionalparallelMountCmdVerifyCleanup1612766229/001:/mount1 --alsologtostderr -v=1] ...
helpers_test.go:491: unable to find parent, assuming dead: process does not exist
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-369865 /tmp/TestFunctionalparallelMountCmdVerifyCleanup1612766229/001:/mount2 --alsologtostderr -v=1] ...
helpers_test.go:491: unable to find parent, assuming dead: process does not exist
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-369865 /tmp/TestFunctionalparallelMountCmdVerifyCleanup1612766229/001:/mount3 --alsologtostderr -v=1] ...
helpers_test.go:491: unable to find parent, assuming dead: process does not exist
--- PASS: TestFunctional/parallel/MountCmd/VerifyCleanup (1.72s)
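VerifyCleanup backgrounds three mounts of the same host directory and tears them all down with a single --kill=true, which is why the later per-mount stop attempts find no parent process left to kill. In outline (/tmp/src is a placeholder):

	$ out/minikube-linux-amd64 mount -p functional-369865 /tmp/src:/mount1 &
	$ out/minikube-linux-amd64 mount -p functional-369865 /tmp/src:/mount2 &
	$ out/minikube-linux-amd64 mount -p functional-369865 /tmp/src:/mount3 &
	$ out/minikube-linux-amd64 mount -p functional-369865 --kill=true        # kills every mount helper for the profile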

                                                
                                    
TestFunctional/delete_echo-server_images (0.04s)

                                                
                                                
=== RUN   TestFunctional/delete_echo-server_images
functional_test.go:190: (dbg) Run:  docker rmi -f kicbase/echo-server:1.0
functional_test.go:190: (dbg) Run:  docker rmi -f kicbase/echo-server:functional-369865
--- PASS: TestFunctional/delete_echo-server_images (0.04s)

                                                
                                    
TestFunctional/delete_my-image_image (0.02s)

                                                
                                                
=== RUN   TestFunctional/delete_my-image_image
functional_test.go:198: (dbg) Run:  docker rmi -f localhost/my-image:functional-369865
--- PASS: TestFunctional/delete_my-image_image (0.02s)

                                                
                                    
TestFunctional/delete_minikube_cached_images (0.02s)

                                                
                                                
=== RUN   TestFunctional/delete_minikube_cached_images
functional_test.go:206: (dbg) Run:  docker rmi -f minikube-local-cache-test:functional-369865
--- PASS: TestFunctional/delete_minikube_cached_images (0.02s)

                                                
                                    
TestMultiControlPlane/serial/StartCluster (96.34s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/StartCluster
ha_test.go:101: (dbg) Run:  out/minikube-linux-amd64 start -p ha-555577 --wait=true --memory=2200 --ha -v=7 --alsologtostderr --driver=docker  --container-runtime=docker
E0831 22:24:51.334780   19777 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/18943-12963/.minikube/profiles/addons-062019/client.crt: no such file or directory" logger="UnhandledError"
E0831 22:24:51.341656   19777 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/18943-12963/.minikube/profiles/addons-062019/client.crt: no such file or directory" logger="UnhandledError"
E0831 22:24:51.353056   19777 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/18943-12963/.minikube/profiles/addons-062019/client.crt: no such file or directory" logger="UnhandledError"
E0831 22:24:51.374445   19777 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/18943-12963/.minikube/profiles/addons-062019/client.crt: no such file or directory" logger="UnhandledError"
E0831 22:24:51.415839   19777 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/18943-12963/.minikube/profiles/addons-062019/client.crt: no such file or directory" logger="UnhandledError"
E0831 22:24:51.497246   19777 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/18943-12963/.minikube/profiles/addons-062019/client.crt: no such file or directory" logger="UnhandledError"
E0831 22:24:51.658658   19777 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/18943-12963/.minikube/profiles/addons-062019/client.crt: no such file or directory" logger="UnhandledError"
E0831 22:24:51.980334   19777 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/18943-12963/.minikube/profiles/addons-062019/client.crt: no such file or directory" logger="UnhandledError"
E0831 22:24:52.622387   19777 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/18943-12963/.minikube/profiles/addons-062019/client.crt: no such file or directory" logger="UnhandledError"
E0831 22:24:53.904208   19777 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/18943-12963/.minikube/profiles/addons-062019/client.crt: no such file or directory" logger="UnhandledError"
E0831 22:24:56.466166   19777 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/18943-12963/.minikube/profiles/addons-062019/client.crt: no such file or directory" logger="UnhandledError"
E0831 22:25:01.587965   19777 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/18943-12963/.minikube/profiles/addons-062019/client.crt: no such file or directory" logger="UnhandledError"
ha_test.go:101: (dbg) Done: out/minikube-linux-amd64 start -p ha-555577 --wait=true --memory=2200 --ha -v=7 --alsologtostderr --driver=docker  --container-runtime=docker: (1m35.695724419s)
ha_test.go:107: (dbg) Run:  out/minikube-linux-amd64 -p ha-555577 status -v=7 --alsologtostderr
--- PASS: TestMultiControlPlane/serial/StartCluster (96.34s)
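The --ha flag provisions multiple control-plane nodes behind a shared apiserver endpoint (the 192.168.49.254:8443 address that shows up in the status logs further down). The invocation from this run, plus the follow-up status check:

	$ out/minikube-linux-amd64 start -p ha-555577 --wait=true --memory=2200 --ha -v=7 --alsologtostderr --driver=docker --container-runtime=docker
	$ out/minikube-linux-amd64 -p ha-555577 status -v=7 --alsologtostderr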

                                                
                                    
TestMultiControlPlane/serial/DeployApp (5.41s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/DeployApp
ha_test.go:128: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-555577 -- apply -f ./testdata/ha/ha-pod-dns-test.yaml
ha_test.go:133: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-555577 -- rollout status deployment/busybox
ha_test.go:133: (dbg) Done: out/minikube-linux-amd64 kubectl -p ha-555577 -- rollout status deployment/busybox: (3.567777454s)
ha_test.go:140: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-555577 -- get pods -o jsonpath='{.items[*].status.podIP}'
ha_test.go:163: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-555577 -- get pods -o jsonpath='{.items[*].metadata.name}'
ha_test.go:171: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-555577 -- exec busybox-7dff88458-cslsd -- nslookup kubernetes.io
ha_test.go:171: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-555577 -- exec busybox-7dff88458-nq7tv -- nslookup kubernetes.io
ha_test.go:171: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-555577 -- exec busybox-7dff88458-q277k -- nslookup kubernetes.io
ha_test.go:181: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-555577 -- exec busybox-7dff88458-cslsd -- nslookup kubernetes.default
ha_test.go:181: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-555577 -- exec busybox-7dff88458-nq7tv -- nslookup kubernetes.default
ha_test.go:181: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-555577 -- exec busybox-7dff88458-q277k -- nslookup kubernetes.default
ha_test.go:189: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-555577 -- exec busybox-7dff88458-cslsd -- nslookup kubernetes.default.svc.cluster.local
ha_test.go:189: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-555577 -- exec busybox-7dff88458-nq7tv -- nslookup kubernetes.default.svc.cluster.local
ha_test.go:189: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-555577 -- exec busybox-7dff88458-q277k -- nslookup kubernetes.default.svc.cluster.local
--- PASS: TestMultiControlPlane/serial/DeployApp (5.41s)
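DeployApp rolls out a three-replica busybox deployment and checks in-cluster DNS from every pod. Condensed, with <pod> standing in for each busybox replica name:

	$ out/minikube-linux-amd64 kubectl -p ha-555577 -- apply -f ./testdata/ha/ha-pod-dns-test.yaml
	$ out/minikube-linux-amd64 kubectl -p ha-555577 -- rollout status deployment/busybox
	$ out/minikube-linux-amd64 kubectl -p ha-555577 -- exec <pod> -- nslookup kubernetes.default.svc.cluster.local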

                                                
                                    
TestMultiControlPlane/serial/PingHostFromPods (1.01s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/PingHostFromPods
ha_test.go:199: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-555577 -- get pods -o jsonpath='{.items[*].metadata.name}'
ha_test.go:207: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-555577 -- exec busybox-7dff88458-cslsd -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
ha_test.go:218: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-555577 -- exec busybox-7dff88458-cslsd -- sh -c "ping -c 1 192.168.49.1"
ha_test.go:207: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-555577 -- exec busybox-7dff88458-nq7tv -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
ha_test.go:218: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-555577 -- exec busybox-7dff88458-nq7tv -- sh -c "ping -c 1 192.168.49.1"
ha_test.go:207: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-555577 -- exec busybox-7dff88458-q277k -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
ha_test.go:218: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-555577 -- exec busybox-7dff88458-q277k -- sh -c "ping -c 1 192.168.49.1"
--- PASS: TestMultiControlPlane/serial/PingHostFromPods (1.01s)
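Each pod resolves host.minikube.internal and pings the resulting gateway address; the awk 'NR==5' | cut pipeline simply plucks the resolved IP out of nslookup's fixed-format output. Per pod:

	$ out/minikube-linux-amd64 kubectl -p ha-555577 -- exec <pod> -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
	$ out/minikube-linux-amd64 kubectl -p ha-555577 -- exec <pod> -- sh -c "ping -c 1 192.168.49.1"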

                                                
                                    
TestMultiControlPlane/serial/AddWorkerNode (23.37s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/AddWorkerNode
ha_test.go:228: (dbg) Run:  out/minikube-linux-amd64 node add -p ha-555577 -v=7 --alsologtostderr
E0831 22:25:11.829470   19777 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/18943-12963/.minikube/profiles/addons-062019/client.crt: no such file or directory" logger="UnhandledError"
E0831 22:25:32.311621   19777 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/18943-12963/.minikube/profiles/addons-062019/client.crt: no such file or directory" logger="UnhandledError"
ha_test.go:228: (dbg) Done: out/minikube-linux-amd64 node add -p ha-555577 -v=7 --alsologtostderr: (22.584821003s)
ha_test.go:234: (dbg) Run:  out/minikube-linux-amd64 -p ha-555577 status -v=7 --alsologtostderr
--- PASS: TestMultiControlPlane/serial/AddWorkerNode (23.37s)

                                                
                                    
TestMultiControlPlane/serial/NodeLabels (0.06s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/NodeLabels
ha_test.go:255: (dbg) Run:  kubectl --context ha-555577 get nodes -o "jsonpath=[{range .items[*]}{.metadata.labels},{end}]"
--- PASS: TestMultiControlPlane/serial/NodeLabels (0.06s)

                                                
                                    
TestMultiControlPlane/serial/HAppyAfterClusterStart (0.61s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/HAppyAfterClusterStart
ha_test.go:281: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiControlPlane/serial/HAppyAfterClusterStart (0.61s)

                                                
                                    
TestMultiControlPlane/serial/CopyFile (15.05s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/CopyFile
ha_test.go:326: (dbg) Run:  out/minikube-linux-amd64 -p ha-555577 status --output json -v=7 --alsologtostderr
helpers_test.go:557: (dbg) Run:  out/minikube-linux-amd64 -p ha-555577 cp testdata/cp-test.txt ha-555577:/home/docker/cp-test.txt
helpers_test.go:535: (dbg) Run:  out/minikube-linux-amd64 -p ha-555577 ssh -n ha-555577 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:557: (dbg) Run:  out/minikube-linux-amd64 -p ha-555577 cp ha-555577:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile1438342887/001/cp-test_ha-555577.txt
helpers_test.go:535: (dbg) Run:  out/minikube-linux-amd64 -p ha-555577 ssh -n ha-555577 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:557: (dbg) Run:  out/minikube-linux-amd64 -p ha-555577 cp ha-555577:/home/docker/cp-test.txt ha-555577-m02:/home/docker/cp-test_ha-555577_ha-555577-m02.txt
helpers_test.go:535: (dbg) Run:  out/minikube-linux-amd64 -p ha-555577 ssh -n ha-555577 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:535: (dbg) Run:  out/minikube-linux-amd64 -p ha-555577 ssh -n ha-555577-m02 "sudo cat /home/docker/cp-test_ha-555577_ha-555577-m02.txt"
helpers_test.go:557: (dbg) Run:  out/minikube-linux-amd64 -p ha-555577 cp ha-555577:/home/docker/cp-test.txt ha-555577-m03:/home/docker/cp-test_ha-555577_ha-555577-m03.txt
helpers_test.go:535: (dbg) Run:  out/minikube-linux-amd64 -p ha-555577 ssh -n ha-555577 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:535: (dbg) Run:  out/minikube-linux-amd64 -p ha-555577 ssh -n ha-555577-m03 "sudo cat /home/docker/cp-test_ha-555577_ha-555577-m03.txt"
helpers_test.go:557: (dbg) Run:  out/minikube-linux-amd64 -p ha-555577 cp ha-555577:/home/docker/cp-test.txt ha-555577-m04:/home/docker/cp-test_ha-555577_ha-555577-m04.txt
helpers_test.go:535: (dbg) Run:  out/minikube-linux-amd64 -p ha-555577 ssh -n ha-555577 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:535: (dbg) Run:  out/minikube-linux-amd64 -p ha-555577 ssh -n ha-555577-m04 "sudo cat /home/docker/cp-test_ha-555577_ha-555577-m04.txt"
helpers_test.go:557: (dbg) Run:  out/minikube-linux-amd64 -p ha-555577 cp testdata/cp-test.txt ha-555577-m02:/home/docker/cp-test.txt
helpers_test.go:535: (dbg) Run:  out/minikube-linux-amd64 -p ha-555577 ssh -n ha-555577-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:557: (dbg) Run:  out/minikube-linux-amd64 -p ha-555577 cp ha-555577-m02:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile1438342887/001/cp-test_ha-555577-m02.txt
helpers_test.go:535: (dbg) Run:  out/minikube-linux-amd64 -p ha-555577 ssh -n ha-555577-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:557: (dbg) Run:  out/minikube-linux-amd64 -p ha-555577 cp ha-555577-m02:/home/docker/cp-test.txt ha-555577:/home/docker/cp-test_ha-555577-m02_ha-555577.txt
helpers_test.go:535: (dbg) Run:  out/minikube-linux-amd64 -p ha-555577 ssh -n ha-555577-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:535: (dbg) Run:  out/minikube-linux-amd64 -p ha-555577 ssh -n ha-555577 "sudo cat /home/docker/cp-test_ha-555577-m02_ha-555577.txt"
helpers_test.go:557: (dbg) Run:  out/minikube-linux-amd64 -p ha-555577 cp ha-555577-m02:/home/docker/cp-test.txt ha-555577-m03:/home/docker/cp-test_ha-555577-m02_ha-555577-m03.txt
helpers_test.go:535: (dbg) Run:  out/minikube-linux-amd64 -p ha-555577 ssh -n ha-555577-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:535: (dbg) Run:  out/minikube-linux-amd64 -p ha-555577 ssh -n ha-555577-m03 "sudo cat /home/docker/cp-test_ha-555577-m02_ha-555577-m03.txt"
helpers_test.go:557: (dbg) Run:  out/minikube-linux-amd64 -p ha-555577 cp ha-555577-m02:/home/docker/cp-test.txt ha-555577-m04:/home/docker/cp-test_ha-555577-m02_ha-555577-m04.txt
helpers_test.go:535: (dbg) Run:  out/minikube-linux-amd64 -p ha-555577 ssh -n ha-555577-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:535: (dbg) Run:  out/minikube-linux-amd64 -p ha-555577 ssh -n ha-555577-m04 "sudo cat /home/docker/cp-test_ha-555577-m02_ha-555577-m04.txt"
helpers_test.go:557: (dbg) Run:  out/minikube-linux-amd64 -p ha-555577 cp testdata/cp-test.txt ha-555577-m03:/home/docker/cp-test.txt
helpers_test.go:535: (dbg) Run:  out/minikube-linux-amd64 -p ha-555577 ssh -n ha-555577-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:557: (dbg) Run:  out/minikube-linux-amd64 -p ha-555577 cp ha-555577-m03:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile1438342887/001/cp-test_ha-555577-m03.txt
helpers_test.go:535: (dbg) Run:  out/minikube-linux-amd64 -p ha-555577 ssh -n ha-555577-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:557: (dbg) Run:  out/minikube-linux-amd64 -p ha-555577 cp ha-555577-m03:/home/docker/cp-test.txt ha-555577:/home/docker/cp-test_ha-555577-m03_ha-555577.txt
helpers_test.go:535: (dbg) Run:  out/minikube-linux-amd64 -p ha-555577 ssh -n ha-555577-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:535: (dbg) Run:  out/minikube-linux-amd64 -p ha-555577 ssh -n ha-555577 "sudo cat /home/docker/cp-test_ha-555577-m03_ha-555577.txt"
helpers_test.go:557: (dbg) Run:  out/minikube-linux-amd64 -p ha-555577 cp ha-555577-m03:/home/docker/cp-test.txt ha-555577-m02:/home/docker/cp-test_ha-555577-m03_ha-555577-m02.txt
helpers_test.go:535: (dbg) Run:  out/minikube-linux-amd64 -p ha-555577 ssh -n ha-555577-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:535: (dbg) Run:  out/minikube-linux-amd64 -p ha-555577 ssh -n ha-555577-m02 "sudo cat /home/docker/cp-test_ha-555577-m03_ha-555577-m02.txt"
helpers_test.go:557: (dbg) Run:  out/minikube-linux-amd64 -p ha-555577 cp ha-555577-m03:/home/docker/cp-test.txt ha-555577-m04:/home/docker/cp-test_ha-555577-m03_ha-555577-m04.txt
helpers_test.go:535: (dbg) Run:  out/minikube-linux-amd64 -p ha-555577 ssh -n ha-555577-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:535: (dbg) Run:  out/minikube-linux-amd64 -p ha-555577 ssh -n ha-555577-m04 "sudo cat /home/docker/cp-test_ha-555577-m03_ha-555577-m04.txt"
helpers_test.go:557: (dbg) Run:  out/minikube-linux-amd64 -p ha-555577 cp testdata/cp-test.txt ha-555577-m04:/home/docker/cp-test.txt
helpers_test.go:535: (dbg) Run:  out/minikube-linux-amd64 -p ha-555577 ssh -n ha-555577-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:557: (dbg) Run:  out/minikube-linux-amd64 -p ha-555577 cp ha-555577-m04:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile1438342887/001/cp-test_ha-555577-m04.txt
helpers_test.go:535: (dbg) Run:  out/minikube-linux-amd64 -p ha-555577 ssh -n ha-555577-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:557: (dbg) Run:  out/minikube-linux-amd64 -p ha-555577 cp ha-555577-m04:/home/docker/cp-test.txt ha-555577:/home/docker/cp-test_ha-555577-m04_ha-555577.txt
helpers_test.go:535: (dbg) Run:  out/minikube-linux-amd64 -p ha-555577 ssh -n ha-555577-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:535: (dbg) Run:  out/minikube-linux-amd64 -p ha-555577 ssh -n ha-555577 "sudo cat /home/docker/cp-test_ha-555577-m04_ha-555577.txt"
helpers_test.go:557: (dbg) Run:  out/minikube-linux-amd64 -p ha-555577 cp ha-555577-m04:/home/docker/cp-test.txt ha-555577-m02:/home/docker/cp-test_ha-555577-m04_ha-555577-m02.txt
helpers_test.go:535: (dbg) Run:  out/minikube-linux-amd64 -p ha-555577 ssh -n ha-555577-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:535: (dbg) Run:  out/minikube-linux-amd64 -p ha-555577 ssh -n ha-555577-m02 "sudo cat /home/docker/cp-test_ha-555577-m04_ha-555577-m02.txt"
helpers_test.go:557: (dbg) Run:  out/minikube-linux-amd64 -p ha-555577 cp ha-555577-m04:/home/docker/cp-test.txt ha-555577-m03:/home/docker/cp-test_ha-555577-m04_ha-555577-m03.txt
helpers_test.go:535: (dbg) Run:  out/minikube-linux-amd64 -p ha-555577 ssh -n ha-555577-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:535: (dbg) Run:  out/minikube-linux-amd64 -p ha-555577 ssh -n ha-555577-m03 "sudo cat /home/docker/cp-test_ha-555577-m04_ha-555577-m03.txt"
--- PASS: TestMultiControlPlane/serial/CopyFile (15.05s)
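CopyFile pushes testdata/cp-test.txt to every node and then copies it between every pair of nodes, verifying each hop with ssh + sudo cat. One representative hop from the matrix above:

	$ out/minikube-linux-amd64 -p ha-555577 cp testdata/cp-test.txt ha-555577:/home/docker/cp-test.txt
	$ out/minikube-linux-amd64 -p ha-555577 cp ha-555577:/home/docker/cp-test.txt ha-555577-m02:/home/docker/cp-test_ha-555577_ha-555577-m02.txt
	$ out/minikube-linux-amd64 -p ha-555577 ssh -n ha-555577-m02 "sudo cat /home/docker/cp-test_ha-555577_ha-555577-m02.txt"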

                                                
                                    
TestMultiControlPlane/serial/StopSecondaryNode (11.36s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/StopSecondaryNode
ha_test.go:363: (dbg) Run:  out/minikube-linux-amd64 -p ha-555577 node stop m02 -v=7 --alsologtostderr
ha_test.go:363: (dbg) Done: out/minikube-linux-amd64 -p ha-555577 node stop m02 -v=7 --alsologtostderr: (10.730653237s)
ha_test.go:369: (dbg) Run:  out/minikube-linux-amd64 -p ha-555577 status -v=7 --alsologtostderr
ha_test.go:369: (dbg) Non-zero exit: out/minikube-linux-amd64 -p ha-555577 status -v=7 --alsologtostderr: exit status 7 (630.538869ms)

                                                
                                                
-- stdout --
	ha-555577
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-555577-m02
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	ha-555577-m03
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-555577-m04
	type: Worker
	host: Running
	kubelet: Running
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0831 22:26:00.702654  104995 out.go:345] Setting OutFile to fd 1 ...
	I0831 22:26:00.702916  104995 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0831 22:26:00.702926  104995 out.go:358] Setting ErrFile to fd 2...
	I0831 22:26:00.702930  104995 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0831 22:26:00.703126  104995 root.go:338] Updating PATH: /home/jenkins/minikube-integration/18943-12963/.minikube/bin
	I0831 22:26:00.703274  104995 out.go:352] Setting JSON to false
	I0831 22:26:00.703298  104995 mustload.go:65] Loading cluster: ha-555577
	I0831 22:26:00.703400  104995 notify.go:220] Checking for updates...
	I0831 22:26:00.703660  104995 config.go:182] Loaded profile config "ha-555577": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.31.0
	I0831 22:26:00.703679  104995 status.go:255] checking status of ha-555577 ...
	I0831 22:26:00.704102  104995 cli_runner.go:164] Run: docker container inspect ha-555577 --format={{.State.Status}}
	I0831 22:26:00.721748  104995 status.go:330] ha-555577 host status = "Running" (err=<nil>)
	I0831 22:26:00.721788  104995 host.go:66] Checking if "ha-555577" exists ...
	I0831 22:26:00.722050  104995 cli_runner.go:164] Run: docker container inspect -f "{{with (index .NetworkSettings.Networks "ha-555577")}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-555577
	I0831 22:26:00.739678  104995 host.go:66] Checking if "ha-555577" exists ...
	I0831 22:26:00.739914  104995 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0831 22:26:00.739964  104995 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-555577
	I0831 22:26:00.757403  104995 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32783 SSHKeyPath:/home/jenkins/minikube-integration/18943-12963/.minikube/machines/ha-555577/id_rsa Username:docker}
	I0831 22:26:00.847455  104995 ssh_runner.go:195] Run: systemctl --version
	I0831 22:26:00.851402  104995 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0831 22:26:00.861780  104995 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0831 22:26:00.911121  104995 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:4 ContainersRunning:3 ContainersPaused:0 ContainersStopped:1 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:54 OomKillDisable:true NGoroutines:72 SystemTime:2024-08-31 22:26:00.90194299 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1067-gcp OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33647943680 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent Labels:[] ExperimentalBuild:false ServerVersion:27.2.0 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:472731909fa34bd7bc9c087e4c27943f9835f111 Expected:472731909fa34bd7bc9c087e4c27943f9835f111} RuncCommit:{ID:v1.1.13-0-g58aa920 Expected:v1.1.13-0-g58aa920} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.16.2] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.29.2] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I0831 22:26:00.911674  104995 kubeconfig.go:125] found "ha-555577" server: "https://192.168.49.254:8443"
	I0831 22:26:00.911704  104995 api_server.go:166] Checking apiserver status ...
	I0831 22:26:00.911742  104995 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0831 22:26:00.922960  104995 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/2395/cgroup
	I0831 22:26:00.931547  104995 api_server.go:182] apiserver freezer: "3:freezer:/docker/02eb01b91089676545122cf0ae89e12897ccce112bf814cf19016a54ca9f6086/kubepods/burstable/pod1ae00368cffe48c7f793a4d6306c7340/9a8a7fdbb04171c724533f194ade5f53f80c3701d64cacc0d86766278e2bf39f"
	I0831 22:26:00.931619  104995 ssh_runner.go:195] Run: sudo cat /sys/fs/cgroup/freezer/docker/02eb01b91089676545122cf0ae89e12897ccce112bf814cf19016a54ca9f6086/kubepods/burstable/pod1ae00368cffe48c7f793a4d6306c7340/9a8a7fdbb04171c724533f194ade5f53f80c3701d64cacc0d86766278e2bf39f/freezer.state
	I0831 22:26:00.939233  104995 api_server.go:204] freezer state: "THAWED"
	I0831 22:26:00.939262  104995 api_server.go:253] Checking apiserver healthz at https://192.168.49.254:8443/healthz ...
	I0831 22:26:00.942867  104995 api_server.go:279] https://192.168.49.254:8443/healthz returned 200:
	ok
	I0831 22:26:00.942887  104995 status.go:422] ha-555577 apiserver status = Running (err=<nil>)
	I0831 22:26:00.942897  104995 status.go:257] ha-555577 status: &{Name:ha-555577 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0831 22:26:00.942915  104995 status.go:255] checking status of ha-555577-m02 ...
	I0831 22:26:00.943137  104995 cli_runner.go:164] Run: docker container inspect ha-555577-m02 --format={{.State.Status}}
	I0831 22:26:00.960771  104995 status.go:330] ha-555577-m02 host status = "Stopped" (err=<nil>)
	I0831 22:26:00.960792  104995 status.go:343] host is not running, skipping remaining checks
	I0831 22:26:00.960800  104995 status.go:257] ha-555577-m02 status: &{Name:ha-555577-m02 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0831 22:26:00.960826  104995 status.go:255] checking status of ha-555577-m03 ...
	I0831 22:26:00.961124  104995 cli_runner.go:164] Run: docker container inspect ha-555577-m03 --format={{.State.Status}}
	I0831 22:26:00.978192  104995 status.go:330] ha-555577-m03 host status = "Running" (err=<nil>)
	I0831 22:26:00.978216  104995 host.go:66] Checking if "ha-555577-m03" exists ...
	I0831 22:26:00.978487  104995 cli_runner.go:164] Run: docker container inspect -f "{{with (index .NetworkSettings.Networks "ha-555577")}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-555577-m03
	I0831 22:26:00.995408  104995 host.go:66] Checking if "ha-555577-m03" exists ...
	I0831 22:26:00.995826  104995 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0831 22:26:00.995865  104995 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-555577-m03
	I0831 22:26:01.013584  104995 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32793 SSHKeyPath:/home/jenkins/minikube-integration/18943-12963/.minikube/machines/ha-555577-m03/id_rsa Username:docker}
	I0831 22:26:01.098031  104995 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0831 22:26:01.108935  104995 kubeconfig.go:125] found "ha-555577" server: "https://192.168.49.254:8443"
	I0831 22:26:01.108967  104995 api_server.go:166] Checking apiserver status ...
	I0831 22:26:01.109005  104995 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0831 22:26:01.118917  104995 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/2245/cgroup
	I0831 22:26:01.127200  104995 api_server.go:182] apiserver freezer: "3:freezer:/docker/4d86ed5ffcdfc94ab9c092ffc76116e6677d0703fe52391023c185770277068b/kubepods/burstable/pod2daea0b91c4d401455d703f64b867d78/62dcd58bbdb6fef360c31e6eaa19211485a117c8c683711a2035d2766c2cfb43"
	I0831 22:26:01.127285  104995 ssh_runner.go:195] Run: sudo cat /sys/fs/cgroup/freezer/docker/4d86ed5ffcdfc94ab9c092ffc76116e6677d0703fe52391023c185770277068b/kubepods/burstable/pod2daea0b91c4d401455d703f64b867d78/62dcd58bbdb6fef360c31e6eaa19211485a117c8c683711a2035d2766c2cfb43/freezer.state
	I0831 22:26:01.134533  104995 api_server.go:204] freezer state: "THAWED"
	I0831 22:26:01.134559  104995 api_server.go:253] Checking apiserver healthz at https://192.168.49.254:8443/healthz ...
	I0831 22:26:01.138045  104995 api_server.go:279] https://192.168.49.254:8443/healthz returned 200:
	ok
	I0831 22:26:01.138070  104995 status.go:422] ha-555577-m03 apiserver status = Running (err=<nil>)
	I0831 22:26:01.138081  104995 status.go:257] ha-555577-m03 status: &{Name:ha-555577-m03 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0831 22:26:01.138098  104995 status.go:255] checking status of ha-555577-m04 ...
	I0831 22:26:01.138314  104995 cli_runner.go:164] Run: docker container inspect ha-555577-m04 --format={{.State.Status}}
	I0831 22:26:01.155037  104995 status.go:330] ha-555577-m04 host status = "Running" (err=<nil>)
	I0831 22:26:01.155059  104995 host.go:66] Checking if "ha-555577-m04" exists ...
	I0831 22:26:01.155386  104995 cli_runner.go:164] Run: docker container inspect -f "{{with (index .NetworkSettings.Networks "ha-555577")}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-555577-m04
	I0831 22:26:01.178170  104995 host.go:66] Checking if "ha-555577-m04" exists ...
	I0831 22:26:01.178404  104995 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0831 22:26:01.178443  104995 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-555577-m04
	I0831 22:26:01.195791  104995 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32798 SSHKeyPath:/home/jenkins/minikube-integration/18943-12963/.minikube/machines/ha-555577-m04/id_rsa Username:docker}
	I0831 22:26:01.281847  104995 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0831 22:26:01.291858  104995 status.go:257] ha-555577-m04 status: &{Name:ha-555577-m04 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
--- PASS: TestMultiControlPlane/serial/StopSecondaryNode (11.36s)
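
The stderr block above captures the full per-node status probe: for each control-plane node, minikube locates the kube-apiserver process, reads its cgroup freezer state to confirm the container is thawed, and only then hits /healthz through the load-balancer address. A minimal shell sketch of the same sequence, run on a cluster node (curl stands in for status.go's Go HTTP client; the 192.168.49.254:8443 endpoint is the address from this run):

	# Find the newest kube-apiserver process, as the ssh_runner lines above do
	pid=$(sudo pgrep -xnf 'kube-apiserver.*minikube.*')
	# Resolve its freezer cgroup and confirm the container is not frozen
	cg=$(sudo grep -E '^[0-9]+:freezer:' "/proc/${pid}/cgroup" | cut -d: -f3)
	sudo cat "/sys/fs/cgroup/freezer${cg}/freezer.state"   # expect: THAWED
	# Only a THAWED apiserver is probed for health
	curl -sk https://192.168.49.254:8443/healthz           # expect: ok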

                                                
                                    
TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop (0.46s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop
ha_test.go:390: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop (0.46s)

                                                
                                    
TestMultiControlPlane/serial/RestartSecondaryNode (21.46s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/RestartSecondaryNode
ha_test.go:420: (dbg) Run:  out/minikube-linux-amd64 -p ha-555577 node start m02 -v=7 --alsologtostderr
E0831 22:26:13.273357   19777 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/18943-12963/.minikube/profiles/addons-062019/client.crt: no such file or directory" logger="UnhandledError"
ha_test.go:420: (dbg) Done: out/minikube-linux-amd64 -p ha-555577 node start m02 -v=7 --alsologtostderr: (20.140367551s)
ha_test.go:428: (dbg) Run:  out/minikube-linux-amd64 -p ha-555577 status -v=7 --alsologtostderr
ha_test.go:428: (dbg) Done: out/minikube-linux-amd64 -p ha-555577 status -v=7 --alsologtostderr: (1.236668297s)
ha_test.go:448: (dbg) Run:  kubectl get nodes
--- PASS: TestMultiControlPlane/serial/RestartSecondaryNode (21.46s)

                                                
                                    
TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart (3.32s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart
ha_test.go:281: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
ha_test.go:281: (dbg) Done: out/minikube-linux-amd64 profile list --output json: (3.31731685s)
--- PASS: TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart (3.32s)

                                                
                                    
TestMultiControlPlane/serial/RestartClusterKeepsNodes (211.8s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/RestartClusterKeepsNodes
ha_test.go:456: (dbg) Run:  out/minikube-linux-amd64 node list -p ha-555577 -v=7 --alsologtostderr
ha_test.go:462: (dbg) Run:  out/minikube-linux-amd64 stop -p ha-555577 -v=7 --alsologtostderr
ha_test.go:462: (dbg) Done: out/minikube-linux-amd64 stop -p ha-555577 -v=7 --alsologtostderr: (33.642682095s)
ha_test.go:467: (dbg) Run:  out/minikube-linux-amd64 start -p ha-555577 --wait=true -v=7 --alsologtostderr
E0831 22:27:35.195162   19777 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/18943-12963/.minikube/profiles/addons-062019/client.crt: no such file or directory" logger="UnhandledError"
E0831 22:27:40.468942   19777 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/18943-12963/.minikube/profiles/functional-369865/client.crt: no such file or directory" logger="UnhandledError"
E0831 22:27:40.475280   19777 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/18943-12963/.minikube/profiles/functional-369865/client.crt: no such file or directory" logger="UnhandledError"
E0831 22:27:40.486605   19777 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/18943-12963/.minikube/profiles/functional-369865/client.crt: no such file or directory" logger="UnhandledError"
E0831 22:27:40.507959   19777 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/18943-12963/.minikube/profiles/functional-369865/client.crt: no such file or directory" logger="UnhandledError"
E0831 22:27:40.549320   19777 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/18943-12963/.minikube/profiles/functional-369865/client.crt: no such file or directory" logger="UnhandledError"
E0831 22:27:40.630704   19777 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/18943-12963/.minikube/profiles/functional-369865/client.crt: no such file or directory" logger="UnhandledError"
E0831 22:27:40.792953   19777 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/18943-12963/.minikube/profiles/functional-369865/client.crt: no such file or directory" logger="UnhandledError"
E0831 22:27:41.114702   19777 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/18943-12963/.minikube/profiles/functional-369865/client.crt: no such file or directory" logger="UnhandledError"
E0831 22:27:41.756871   19777 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/18943-12963/.minikube/profiles/functional-369865/client.crt: no such file or directory" logger="UnhandledError"
E0831 22:27:43.039117   19777 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/18943-12963/.minikube/profiles/functional-369865/client.crt: no such file or directory" logger="UnhandledError"
E0831 22:27:45.600591   19777 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/18943-12963/.minikube/profiles/functional-369865/client.crt: no such file or directory" logger="UnhandledError"
E0831 22:27:50.722775   19777 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/18943-12963/.minikube/profiles/functional-369865/client.crt: no such file or directory" logger="UnhandledError"
E0831 22:28:00.964570   19777 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/18943-12963/.minikube/profiles/functional-369865/client.crt: no such file or directory" logger="UnhandledError"
E0831 22:28:21.446870   19777 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/18943-12963/.minikube/profiles/functional-369865/client.crt: no such file or directory" logger="UnhandledError"
E0831 22:29:02.409080   19777 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/18943-12963/.minikube/profiles/functional-369865/client.crt: no such file or directory" logger="UnhandledError"
E0831 22:29:51.335397   19777 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/18943-12963/.minikube/profiles/addons-062019/client.crt: no such file or directory" logger="UnhandledError"
ha_test.go:467: (dbg) Done: out/minikube-linux-amd64 start -p ha-555577 --wait=true -v=7 --alsologtostderr: (2m58.07190043s)
ha_test.go:472: (dbg) Run:  out/minikube-linux-amd64 node list -p ha-555577
--- PASS: TestMultiControlPlane/serial/RestartClusterKeepsNodes (211.80s)

                                                
                                    
TestMultiControlPlane/serial/DeleteSecondaryNode (9.24s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/DeleteSecondaryNode
ha_test.go:487: (dbg) Run:  out/minikube-linux-amd64 -p ha-555577 node delete m03 -v=7 --alsologtostderr
ha_test.go:487: (dbg) Done: out/minikube-linux-amd64 -p ha-555577 node delete m03 -v=7 --alsologtostderr: (8.510848059s)
ha_test.go:493: (dbg) Run:  out/minikube-linux-amd64 -p ha-555577 status -v=7 --alsologtostderr
ha_test.go:511: (dbg) Run:  kubectl get nodes
ha_test.go:519: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiControlPlane/serial/DeleteSecondaryNode (9.24s)
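
The Ready assertion at ha_test.go:519 is a plain kubectl go-template that prints one True/False per node; the same template as logged above, shown here without the extra quoting the test harness adds:

	kubectl get nodes -o go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'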

                                                
                                    
TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete (0.45s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete
ha_test.go:390: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete (0.45s)

                                                
                                    
TestMultiControlPlane/serial/StopCluster (32.47s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/StopCluster
ha_test.go:531: (dbg) Run:  out/minikube-linux-amd64 -p ha-555577 stop -v=7 --alsologtostderr
E0831 22:30:19.036672   19777 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/18943-12963/.minikube/profiles/addons-062019/client.crt: no such file or directory" logger="UnhandledError"
E0831 22:30:24.330427   19777 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/18943-12963/.minikube/profiles/functional-369865/client.crt: no such file or directory" logger="UnhandledError"
ha_test.go:531: (dbg) Done: out/minikube-linux-amd64 -p ha-555577 stop -v=7 --alsologtostderr: (32.376800379s)
ha_test.go:537: (dbg) Run:  out/minikube-linux-amd64 -p ha-555577 status -v=7 --alsologtostderr
ha_test.go:537: (dbg) Non-zero exit: out/minikube-linux-amd64 -p ha-555577 status -v=7 --alsologtostderr: exit status 7 (96.642997ms)

                                                
                                                
-- stdout --
	ha-555577
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	ha-555577-m02
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	ha-555577-m04
	type: Worker
	host: Stopped
	kubelet: Stopped
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0831 22:30:40.440356  135316 out.go:345] Setting OutFile to fd 1 ...
	I0831 22:30:40.440559  135316 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0831 22:30:40.440570  135316 out.go:358] Setting ErrFile to fd 2...
	I0831 22:30:40.440574  135316 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0831 22:30:40.440786  135316 root.go:338] Updating PATH: /home/jenkins/minikube-integration/18943-12963/.minikube/bin
	I0831 22:30:40.440982  135316 out.go:352] Setting JSON to false
	I0831 22:30:40.441009  135316 mustload.go:65] Loading cluster: ha-555577
	I0831 22:30:40.441043  135316 notify.go:220] Checking for updates...
	I0831 22:30:40.441492  135316 config.go:182] Loaded profile config "ha-555577": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.31.0
	I0831 22:30:40.441508  135316 status.go:255] checking status of ha-555577 ...
	I0831 22:30:40.441981  135316 cli_runner.go:164] Run: docker container inspect ha-555577 --format={{.State.Status}}
	I0831 22:30:40.460756  135316 status.go:330] ha-555577 host status = "Stopped" (err=<nil>)
	I0831 22:30:40.460778  135316 status.go:343] host is not running, skipping remaining checks
	I0831 22:30:40.460785  135316 status.go:257] ha-555577 status: &{Name:ha-555577 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0831 22:30:40.460807  135316 status.go:255] checking status of ha-555577-m02 ...
	I0831 22:30:40.461033  135316 cli_runner.go:164] Run: docker container inspect ha-555577-m02 --format={{.State.Status}}
	I0831 22:30:40.477096  135316 status.go:330] ha-555577-m02 host status = "Stopped" (err=<nil>)
	I0831 22:30:40.477130  135316 status.go:343] host is not running, skipping remaining checks
	I0831 22:30:40.477140  135316 status.go:257] ha-555577-m02 status: &{Name:ha-555577-m02 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0831 22:30:40.477167  135316 status.go:255] checking status of ha-555577-m04 ...
	I0831 22:30:40.477438  135316 cli_runner.go:164] Run: docker container inspect ha-555577-m04 --format={{.State.Status}}
	I0831 22:30:40.493577  135316 status.go:330] ha-555577-m04 host status = "Stopped" (err=<nil>)
	I0831 22:30:40.493597  135316 status.go:343] host is not running, skipping remaining checks
	I0831 22:30:40.493603  135316 status.go:257] ha-555577-m04 status: &{Name:ha-555577-m04 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
--- PASS: TestMultiControlPlane/serial/StopCluster (32.47s)
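
Note that minikube status signals a degraded cluster through its exit code (exit status 7 above, with every node stopped) while still printing the per-node report, so scripted checks should branch on the code rather than parse stdout. A sketch, assuming the binary path and profile from this run:

	out/minikube-linux-amd64 -p ha-555577 status >/dev/null 2>&1
	rc=$?
	# 0 = all nodes running; 7 is what this run returned with the cluster stopped
	[ "$rc" -eq 0 ] || echo "cluster not fully running (exit ${rc})"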

                                                
                                    
TestMultiControlPlane/serial/RestartCluster (85.03s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/RestartCluster
ha_test.go:560: (dbg) Run:  out/minikube-linux-amd64 start -p ha-555577 --wait=true -v=7 --alsologtostderr --driver=docker  --container-runtime=docker
ha_test.go:560: (dbg) Done: out/minikube-linux-amd64 start -p ha-555577 --wait=true -v=7 --alsologtostderr --driver=docker  --container-runtime=docker: (1m24.299038571s)
ha_test.go:566: (dbg) Run:  out/minikube-linux-amd64 -p ha-555577 status -v=7 --alsologtostderr
ha_test.go:584: (dbg) Run:  kubectl get nodes
ha_test.go:592: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiControlPlane/serial/RestartCluster (85.03s)

                                                
                                    
TestMultiControlPlane/serial/DegradedAfterClusterRestart (0.44s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/DegradedAfterClusterRestart
ha_test.go:390: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiControlPlane/serial/DegradedAfterClusterRestart (0.44s)

                                                
                                    
TestMultiControlPlane/serial/AddSecondaryNode (34.78s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/AddSecondaryNode
ha_test.go:605: (dbg) Run:  out/minikube-linux-amd64 node add -p ha-555577 --control-plane -v=7 --alsologtostderr
ha_test.go:605: (dbg) Done: out/minikube-linux-amd64 node add -p ha-555577 --control-plane -v=7 --alsologtostderr: (33.994643794s)
ha_test.go:611: (dbg) Run:  out/minikube-linux-amd64 -p ha-555577 status -v=7 --alsologtostderr
E0831 22:32:40.468729   19777 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/18943-12963/.minikube/profiles/functional-369865/client.crt: no such file or directory" logger="UnhandledError"
--- PASS: TestMultiControlPlane/serial/AddSecondaryNode (34.78s)

                                                
                                    
TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd (0.61s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd
ha_test.go:281: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd (0.61s)

                                                
                                    
TestImageBuild/serial/Setup (21.51s)

                                                
                                                
=== RUN   TestImageBuild/serial/Setup
image_test.go:69: (dbg) Run:  out/minikube-linux-amd64 start -p image-011830 --driver=docker  --container-runtime=docker
E0831 22:33:08.171890   19777 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/18943-12963/.minikube/profiles/functional-369865/client.crt: no such file or directory" logger="UnhandledError"
image_test.go:69: (dbg) Done: out/minikube-linux-amd64 start -p image-011830 --driver=docker  --container-runtime=docker: (21.512335055s)
--- PASS: TestImageBuild/serial/Setup (21.51s)

                                                
                                    
TestImageBuild/serial/NormalBuild (2.68s)

                                                
                                                
=== RUN   TestImageBuild/serial/NormalBuild
image_test.go:78: (dbg) Run:  out/minikube-linux-amd64 image build -t aaa:latest ./testdata/image-build/test-normal -p image-011830
image_test.go:78: (dbg) Done: out/minikube-linux-amd64 image build -t aaa:latest ./testdata/image-build/test-normal -p image-011830: (2.675562193s)
--- PASS: TestImageBuild/serial/NormalBuild (2.68s)

                                                
                                    
TestImageBuild/serial/BuildWithBuildArg (0.97s)

                                                
                                                
=== RUN   TestImageBuild/serial/BuildWithBuildArg
image_test.go:99: (dbg) Run:  out/minikube-linux-amd64 image build -t aaa:latest --build-opt=build-arg=ENV_A=test_env_str --build-opt=no-cache ./testdata/image-build/test-arg -p image-011830
--- PASS: TestImageBuild/serial/BuildWithBuildArg (0.97s)

                                                
                                    
TestImageBuild/serial/BuildWithDockerIgnore (0.77s)

                                                
                                                
=== RUN   TestImageBuild/serial/BuildWithDockerIgnore
image_test.go:133: (dbg) Run:  out/minikube-linux-amd64 image build -t aaa:latest ./testdata/image-build/test-normal --build-opt=no-cache -p image-011830
--- PASS: TestImageBuild/serial/BuildWithDockerIgnore (0.77s)

                                                
                                    
TestImageBuild/serial/BuildWithSpecifiedDockerfile (0.69s)

                                                
                                                
=== RUN   TestImageBuild/serial/BuildWithSpecifiedDockerfile
image_test.go:88: (dbg) Run:  out/minikube-linux-amd64 image build -t aaa:latest -f inner/Dockerfile ./testdata/image-build/test-f -p image-011830
--- PASS: TestImageBuild/serial/BuildWithSpecifiedDockerfile (0.69s)

                                                
                                    
TestJSONOutput/start/Command (38.72s)

                                                
                                                
=== RUN   TestJSONOutput/start/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-amd64 start -p json-output-080067 --output=json --user=testUser --memory=2200 --wait=true --driver=docker  --container-runtime=docker
json_output_test.go:63: (dbg) Done: out/minikube-linux-amd64 start -p json-output-080067 --output=json --user=testUser --memory=2200 --wait=true --driver=docker  --container-runtime=docker: (38.723577959s)
--- PASS: TestJSONOutput/start/Command (38.72s)

                                                
                                    
TestJSONOutput/start/Audit (0s)

                                                
                                                
=== RUN   TestJSONOutput/start/Audit
--- PASS: TestJSONOutput/start/Audit (0.00s)

                                                
                                    
TestJSONOutput/start/parallel/DistinctCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/start/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/start/parallel/DistinctCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/start/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/start/parallel/DistinctCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/start/parallel/IncreasingCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/start/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/start/parallel/IncreasingCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/start/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/start/parallel/IncreasingCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/pause/Command (0.53s)

                                                
                                                
=== RUN   TestJSONOutput/pause/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-amd64 pause -p json-output-080067 --output=json --user=testUser
--- PASS: TestJSONOutput/pause/Command (0.53s)

                                                
                                    
TestJSONOutput/pause/Audit (0s)

                                                
                                                
=== RUN   TestJSONOutput/pause/Audit
--- PASS: TestJSONOutput/pause/Audit (0.00s)

                                                
                                    
TestJSONOutput/pause/parallel/DistinctCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/pause/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/pause/parallel/DistinctCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/pause/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/pause/parallel/DistinctCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/pause/parallel/IncreasingCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/pause/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/pause/parallel/IncreasingCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/pause/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/pause/parallel/IncreasingCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/unpause/Command (0.41s)

                                                
                                                
=== RUN   TestJSONOutput/unpause/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-amd64 unpause -p json-output-080067 --output=json --user=testUser
--- PASS: TestJSONOutput/unpause/Command (0.41s)

                                                
                                    
TestJSONOutput/unpause/Audit (0s)

                                                
                                                
=== RUN   TestJSONOutput/unpause/Audit
--- PASS: TestJSONOutput/unpause/Audit (0.00s)

                                                
                                    
TestJSONOutput/unpause/parallel/DistinctCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/unpause/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/unpause/parallel/DistinctCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/unpause/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/unpause/parallel/DistinctCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/unpause/parallel/IncreasingCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/unpause/parallel/IncreasingCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/unpause/parallel/IncreasingCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/stop/Command (5.72s)

                                                
                                                
=== RUN   TestJSONOutput/stop/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-amd64 stop -p json-output-080067 --output=json --user=testUser
json_output_test.go:63: (dbg) Done: out/minikube-linux-amd64 stop -p json-output-080067 --output=json --user=testUser: (5.717898673s)
--- PASS: TestJSONOutput/stop/Command (5.72s)

                                                
                                    
TestJSONOutput/stop/Audit (0s)

                                                
                                                
=== RUN   TestJSONOutput/stop/Audit
--- PASS: TestJSONOutput/stop/Audit (0.00s)

                                                
                                    
TestJSONOutput/stop/parallel/DistinctCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/stop/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/stop/parallel/DistinctCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/stop/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/stop/parallel/DistinctCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/stop/parallel/IncreasingCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/stop/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/stop/parallel/IncreasingCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/stop/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/stop/parallel/IncreasingCurrentSteps (0.00s)

                                                
                                    
TestErrorJSONOutput (0.19s)

                                                
                                                
=== RUN   TestErrorJSONOutput
json_output_test.go:160: (dbg) Run:  out/minikube-linux-amd64 start -p json-output-error-091781 --memory=2200 --output=json --wait=true --driver=fail
json_output_test.go:160: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p json-output-error-091781 --memory=2200 --output=json --wait=true --driver=fail: exit status 56 (58.734685ms)

                                                
                                                
-- stdout --
	{"specversion":"1.0","id":"657ced45-d0d2-4cc4-83d6-c42b4dc7714e","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"0","message":"[json-output-error-091781] minikube v1.33.1 on Ubuntu 20.04 (kvm/amd64)","name":"Initial Minikube Setup","totalsteps":"19"}}
	{"specversion":"1.0","id":"b8f60e42-2cbc-49a5-8475-f3a5891954de","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_LOCATION=18943"}}
	{"specversion":"1.0","id":"7389ae15-e69b-4a22-aa5e-02946bb3f081","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true"}}
	{"specversion":"1.0","id":"b86e03eb-eed7-46d9-b52b-6c0b37af6145","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"KUBECONFIG=/home/jenkins/minikube-integration/18943-12963/kubeconfig"}}
	{"specversion":"1.0","id":"25ebd822-9d6a-4f02-b2d8-b3233084132e","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_HOME=/home/jenkins/minikube-integration/18943-12963/.minikube"}}
	{"specversion":"1.0","id":"19ff653a-8cfb-4bbf-845c-43602aca287b","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_BIN=out/minikube-linux-amd64"}}
	{"specversion":"1.0","id":"b2272d7c-c615-450e-a7e3-755c2687ad55","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_FORCE_SYSTEMD="}}
	{"specversion":"1.0","id":"8682cf61-2419-4f95-8a79-5e57521f191f","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"advice":"","exitcode":"56","issues":"","message":"The driver 'fail' is not supported on linux/amd64","name":"DRV_UNSUPPORTED_OS","url":""}}

                                                
                                                
-- /stdout --
helpers_test.go:176: Cleaning up "json-output-error-091781" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-linux-amd64 delete -p json-output-error-091781
--- PASS: TestErrorJSONOutput (0.19s)
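
Each line of the --output=json stream above is a CloudEvents envelope (specversion, type, data), which makes failures easy to filter mechanically. A sketch using jq, which is an assumption about host tooling and not part of the test; the profile name and expected error are taken from this run:

	out/minikube-linux-amd64 start -p json-output-error-091781 --memory=2200 --output=json --driver=fail \
	  | jq -r 'select(.type == "io.k8s.sigs.minikube.error") | .data.name + ": " + .data.message'
	# DRV_UNSUPPORTED_OS: The driver 'fail' is not supported on linux/amd64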

                                                
                                    
TestKicCustomNetwork/create_custom_network (23.3s)

                                                
                                                
=== RUN   TestKicCustomNetwork/create_custom_network
kic_custom_network_test.go:57: (dbg) Run:  out/minikube-linux-amd64 start -p docker-network-099273 --network=
kic_custom_network_test.go:57: (dbg) Done: out/minikube-linux-amd64 start -p docker-network-099273 --network=: (21.327173702s)
kic_custom_network_test.go:150: (dbg) Run:  docker network ls --format {{.Name}}
helpers_test.go:176: Cleaning up "docker-network-099273" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-linux-amd64 delete -p docker-network-099273
helpers_test.go:179: (dbg) Done: out/minikube-linux-amd64 delete -p docker-network-099273: (1.955563041s)
--- PASS: TestKicCustomNetwork/create_custom_network (23.30s)

                                                
                                    
TestKicCustomNetwork/use_default_bridge_network (25.68s)

                                                
                                                
=== RUN   TestKicCustomNetwork/use_default_bridge_network
kic_custom_network_test.go:57: (dbg) Run:  out/minikube-linux-amd64 start -p docker-network-203953 --network=bridge
E0831 22:34:51.335044   19777 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/18943-12963/.minikube/profiles/addons-062019/client.crt: no such file or directory" logger="UnhandledError"
kic_custom_network_test.go:57: (dbg) Done: out/minikube-linux-amd64 start -p docker-network-203953 --network=bridge: (23.849339472s)
kic_custom_network_test.go:150: (dbg) Run:  docker network ls --format {{.Name}}
helpers_test.go:176: Cleaning up "docker-network-203953" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-linux-amd64 delete -p docker-network-203953
helpers_test.go:179: (dbg) Done: out/minikube-linux-amd64 delete -p docker-network-203953: (1.814551564s)
--- PASS: TestKicCustomNetwork/use_default_bridge_network (25.68s)

                                                
                                    
TestKicExistingNetwork (22.47s)

                                                
                                                
=== RUN   TestKicExistingNetwork
kic_custom_network_test.go:150: (dbg) Run:  docker network ls --format {{.Name}}
kic_custom_network_test.go:93: (dbg) Run:  out/minikube-linux-amd64 start -p existing-network-804796 --network=existing-network
kic_custom_network_test.go:93: (dbg) Done: out/minikube-linux-amd64 start -p existing-network-804796 --network=existing-network: (20.829032664s)
helpers_test.go:176: Cleaning up "existing-network-804796" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-linux-amd64 delete -p existing-network-804796
helpers_test.go:179: (dbg) Done: out/minikube-linux-amd64 delete -p existing-network-804796: (1.503717074s)
--- PASS: TestKicExistingNetwork (22.47s)

                                                
                                    
TestKicCustomSubnet (22.46s)

                                                
                                                
=== RUN   TestKicCustomSubnet
kic_custom_network_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p custom-subnet-323578 --subnet=192.168.60.0/24
kic_custom_network_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p custom-subnet-323578 --subnet=192.168.60.0/24: (20.408822368s)
kic_custom_network_test.go:161: (dbg) Run:  docker network inspect custom-subnet-323578 --format "{{(index .IPAM.Config 0).Subnet}}"
helpers_test.go:176: Cleaning up "custom-subnet-323578" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-linux-amd64 delete -p custom-subnet-323578
helpers_test.go:179: (dbg) Done: out/minikube-linux-amd64 delete -p custom-subnet-323578: (2.034427017s)
--- PASS: TestKicCustomSubnet (22.46s)
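
The subnet assertion reduces to the docker network inspect at kic_custom_network_test.go:161; the same check can be replayed by hand against the profile from this run:

	docker network inspect custom-subnet-323578 \
	  --format '{{(index .IPAM.Config 0).Subnet}}'   # expect: 192.168.60.0/24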

                                                
                                    
TestKicStaticIP (25.84s)

                                                
                                                
=== RUN   TestKicStaticIP
kic_custom_network_test.go:132: (dbg) Run:  out/minikube-linux-amd64 start -p static-ip-192482 --static-ip=192.168.200.200
kic_custom_network_test.go:132: (dbg) Done: out/minikube-linux-amd64 start -p static-ip-192482 --static-ip=192.168.200.200: (23.740341175s)
kic_custom_network_test.go:138: (dbg) Run:  out/minikube-linux-amd64 -p static-ip-192482 ip
helpers_test.go:176: Cleaning up "static-ip-192482" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-linux-amd64 delete -p static-ip-192482
helpers_test.go:179: (dbg) Done: out/minikube-linux-amd64 delete -p static-ip-192482: (1.979282627s)
--- PASS: TestKicStaticIP (25.84s)

                                                
                                    
TestMainNoArgs (0.04s)

                                                
                                                
=== RUN   TestMainNoArgs
main_test.go:68: (dbg) Run:  out/minikube-linux-amd64
--- PASS: TestMainNoArgs (0.04s)

                                                
                                    
TestMinikubeProfile (53.17s)

                                                
                                                
=== RUN   TestMinikubeProfile
minikube_profile_test.go:44: (dbg) Run:  out/minikube-linux-amd64 start -p first-637821 --driver=docker  --container-runtime=docker
minikube_profile_test.go:44: (dbg) Done: out/minikube-linux-amd64 start -p first-637821 --driver=docker  --container-runtime=docker: (24.227375161s)
minikube_profile_test.go:44: (dbg) Run:  out/minikube-linux-amd64 start -p second-641304 --driver=docker  --container-runtime=docker
minikube_profile_test.go:44: (dbg) Done: out/minikube-linux-amd64 start -p second-641304 --driver=docker  --container-runtime=docker: (24.029302113s)
minikube_profile_test.go:51: (dbg) Run:  out/minikube-linux-amd64 profile first-637821
minikube_profile_test.go:55: (dbg) Run:  out/minikube-linux-amd64 profile list -ojson
minikube_profile_test.go:51: (dbg) Run:  out/minikube-linux-amd64 profile second-641304
minikube_profile_test.go:55: (dbg) Run:  out/minikube-linux-amd64 profile list -ojson
helpers_test.go:176: Cleaning up "second-641304" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-linux-amd64 delete -p second-641304
helpers_test.go:179: (dbg) Done: out/minikube-linux-amd64 delete -p second-641304: (1.925388212s)
helpers_test.go:176: Cleaning up "first-637821" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-linux-amd64 delete -p first-637821
helpers_test.go:179: (dbg) Done: out/minikube-linux-amd64 delete -p first-637821: (1.99177482s)
--- PASS: TestMinikubeProfile (53.17s)

                                                
                                    
TestMountStart/serial/StartWithMountFirst (7s)

                                                
                                                
=== RUN   TestMountStart/serial/StartWithMountFirst
mount_start_test.go:98: (dbg) Run:  out/minikube-linux-amd64 start -p mount-start-1-225989 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=docker  --container-runtime=docker
mount_start_test.go:98: (dbg) Done: out/minikube-linux-amd64 start -p mount-start-1-225989 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=docker  --container-runtime=docker: (5.997564098s)
--- PASS: TestMountStart/serial/StartWithMountFirst (7.00s)

                                                
                                    
TestMountStart/serial/VerifyMountFirst (0.23s)

                                                
                                                
=== RUN   TestMountStart/serial/VerifyMountFirst
mount_start_test.go:114: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-1-225989 ssh -- ls /minikube-host
--- PASS: TestMountStart/serial/VerifyMountFirst (0.23s)

                                                
                                    
TestMountStart/serial/StartWithMountSecond (7.23s)

                                                
                                                
=== RUN   TestMountStart/serial/StartWithMountSecond
mount_start_test.go:98: (dbg) Run:  out/minikube-linux-amd64 start -p mount-start-2-237303 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46465 --mount-uid 0 --no-kubernetes --driver=docker  --container-runtime=docker
mount_start_test.go:98: (dbg) Done: out/minikube-linux-amd64 start -p mount-start-2-237303 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46465 --mount-uid 0 --no-kubernetes --driver=docker  --container-runtime=docker: (6.233112551s)
--- PASS: TestMountStart/serial/StartWithMountSecond (7.23s)

                                                
                                    
TestMountStart/serial/VerifyMountSecond (0.23s)

                                                
                                                
=== RUN   TestMountStart/serial/VerifyMountSecond
mount_start_test.go:114: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-237303 ssh -- ls /minikube-host
--- PASS: TestMountStart/serial/VerifyMountSecond (0.23s)

                                                
                                    
TestMountStart/serial/DeleteFirst (1.44s)

                                                
                                                
=== RUN   TestMountStart/serial/DeleteFirst
pause_test.go:132: (dbg) Run:  out/minikube-linux-amd64 delete -p mount-start-1-225989 --alsologtostderr -v=5
pause_test.go:132: (dbg) Done: out/minikube-linux-amd64 delete -p mount-start-1-225989 --alsologtostderr -v=5: (1.442682997s)
--- PASS: TestMountStart/serial/DeleteFirst (1.44s)

                                                
                                    
TestMountStart/serial/VerifyMountPostDelete (0.23s)

                                                
                                                
=== RUN   TestMountStart/serial/VerifyMountPostDelete
mount_start_test.go:114: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-237303 ssh -- ls /minikube-host
--- PASS: TestMountStart/serial/VerifyMountPostDelete (0.23s)

                                                
                                    
TestMountStart/serial/Stop (1.16s)

                                                
                                                
=== RUN   TestMountStart/serial/Stop
mount_start_test.go:155: (dbg) Run:  out/minikube-linux-amd64 stop -p mount-start-2-237303
mount_start_test.go:155: (dbg) Done: out/minikube-linux-amd64 stop -p mount-start-2-237303: (1.164691051s)
--- PASS: TestMountStart/serial/Stop (1.16s)

                                                
                                    
TestMountStart/serial/RestartStopped (8.62s)

                                                
                                                
=== RUN   TestMountStart/serial/RestartStopped
mount_start_test.go:166: (dbg) Run:  out/minikube-linux-amd64 start -p mount-start-2-237303
mount_start_test.go:166: (dbg) Done: out/minikube-linux-amd64 start -p mount-start-2-237303: (7.622859041s)
--- PASS: TestMountStart/serial/RestartStopped (8.62s)

                                                
                                    
TestMountStart/serial/VerifyMountPostStop (0.23s)

                                                
                                                
=== RUN   TestMountStart/serial/VerifyMountPostStop
mount_start_test.go:114: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-237303 ssh -- ls /minikube-host
--- PASS: TestMountStart/serial/VerifyMountPostStop (0.23s)

                                                
                                    
TestContainerIPsMultiNetwork/serial/CreateExtnet (0.06s)

                                                
                                                
=== RUN   TestContainerIPsMultiNetwork/serial/CreateExtnet
multinetwork_test.go:99: (dbg) Run:  docker network create network-extnet-931399
multinetwork_test.go:104: external network network-extnet-931399 created
--- PASS: TestContainerIPsMultiNetwork/serial/CreateExtnet (0.06s)

                                                
                                    
TestContainerIPsMultiNetwork/serial/FreshStart (61.2s)

                                                
                                                
=== RUN   TestContainerIPsMultiNetwork/serial/FreshStart
multinetwork_test.go:148: (dbg) Run:  out/minikube-linux-amd64 start -p extnet-923330 --memory=2048 --install-addons=false --wait=all --driver=docker  --container-runtime=docker
E0831 22:37:40.469367   19777 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/18943-12963/.minikube/profiles/functional-369865/client.crt: no such file or directory" logger="UnhandledError"
multinetwork_test.go:148: (dbg) Done: out/minikube-linux-amd64 start -p extnet-923330 --memory=2048 --install-addons=false --wait=all --driver=docker  --container-runtime=docker: (1m1.177601553s)
multinetwork_test.go:161: cluster extnet-923330 started with address 192.168.67.2/
--- PASS: TestContainerIPsMultiNetwork/serial/FreshStart (61.20s)

                                                
                                    
TestContainerIPsMultiNetwork/serial/ConnectExtnet (0.11s)

                                                
                                                
=== RUN   TestContainerIPsMultiNetwork/serial/ConnectExtnet
multinetwork_test.go:113: (dbg) Run:  docker network connect network-extnet-931399 extnet-923330
multinetwork_test.go:126: cluster extnet-923330 was attached to network network-extnet-931399 with address 172.18.0.2/
--- PASS: TestContainerIPsMultiNetwork/serial/ConnectExtnet (0.11s)

                                                
                                    
TestContainerIPsMultiNetwork/serial/Stop (10.9s)

                                                
                                                
=== RUN   TestContainerIPsMultiNetwork/serial/Stop
multinetwork_test.go:170: (dbg) Run:  out/minikube-linux-amd64 stop -p extnet-923330 --alsologtostderr -v=5
multinetwork_test.go:170: (dbg) Done: out/minikube-linux-amd64 stop -p extnet-923330 --alsologtostderr -v=5: (10.897202313s)
--- PASS: TestContainerIPsMultiNetwork/serial/Stop (10.90s)

                                                
                                    
TestContainerIPsMultiNetwork/serial/VerifyStatus (0.12s)

                                                
                                                
=== RUN   TestContainerIPsMultiNetwork/serial/VerifyStatus
helpers_test.go:700: (dbg) Run:  out/minikube-linux-amd64 status -p extnet-923330 --output=json --layout=cluster
helpers_test.go:700: (dbg) Non-zero exit: out/minikube-linux-amd64 status -p extnet-923330 --output=json --layout=cluster: exit status 7 (120.867857ms)

                                                
                                                
-- stdout --
	{"Name":"extnet-923330","StatusCode":405,"StatusName":"Stopped","Step":"Done","StepDetail":"* 1 node stopped.","BinaryVersion":"v1.33.1","Components":{"kubeconfig":{"Name":"kubeconfig","StatusCode":405,"StatusName":"Stopped"}},"Nodes":[{"Name":"extnet-923330","StatusCode":405,"StatusName":"Stopped","Components":{"apiserver":{"Name":"apiserver","StatusCode":405,"StatusName":"Stopped"},"kubelet":{"Name":"kubelet","StatusCode":405,"StatusName":"Stopped"}}}]}

                                                
                                                
-- /stdout --
--- PASS: TestContainerIPsMultiNetwork/serial/VerifyStatus (0.12s)
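
With --layout=cluster the status JSON nests per-node component states under Nodes, so the stopped state asserted here can be read back with jq (field names are taken from the payload above; jq itself is assumed host tooling, and the status command still exits 7 while the node is stopped):

	out/minikube-linux-amd64 status -p extnet-923330 --output=json --layout=cluster \
	  | jq -r '.Nodes[] | .Name + " " + .StatusName'   # extnet-923330 Stopped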

                                                
                                    
TestContainerIPsMultiNetwork/serial/Start (12.19s)

                                                
                                                
=== RUN   TestContainerIPsMultiNetwork/serial/Start
multinetwork_test.go:181: (dbg) Run:  out/minikube-linux-amd64 start -p extnet-923330 --alsologtostderr -v=5
multinetwork_test.go:181: (dbg) Done: out/minikube-linux-amd64 start -p extnet-923330 --alsologtostderr -v=5: (12.156490303s)
--- PASS: TestContainerIPsMultiNetwork/serial/Start (12.19s)

                                                
                                    
TestContainerIPsMultiNetwork/serial/VerifyNetworks (0.02s)

                                                
                                                
=== RUN   TestContainerIPsMultiNetwork/serial/VerifyNetworks
multinetwork_test.go:225: (dbg) Run:  docker inspect extnet-923330
--- PASS: TestContainerIPsMultiNetwork/serial/VerifyNetworks (0.02s)
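
The VerifyNetworks step inspects the container and checks that its NetworkSettings.Networks map still holds both the cluster network and the manually attached one. A hand-run equivalent (the template is a sketch, not the test's own; addresses are those logged earlier in this suite and may be reassigned after the stop/start cycle):

	docker inspect extnet-923330 --format \
	  '{{range $name, $net := .NetworkSettings.Networks}}{{$name}} {{$net.IPAddress}}{{"\n"}}{{end}}'
	# extnet-923330          192.168.67.2
	# network-extnet-931399  172.18.0.2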

                                                
                                    
TestContainerIPsMultiNetwork/serial/Delete (2.27s)

                                                
                                                
=== RUN   TestContainerIPsMultiNetwork/serial/Delete
multinetwork_test.go:253: (dbg) Run:  out/minikube-linux-amd64 delete -p extnet-923330 --alsologtostderr -v=5
multinetwork_test.go:253: (dbg) Done: out/minikube-linux-amd64 delete -p extnet-923330 --alsologtostderr -v=5: (2.26862457s)
--- PASS: TestContainerIPsMultiNetwork/serial/Delete (2.27s)

                                                
                                    
TestContainerIPsMultiNetwork/serial/DeleteExtnet (0.11s)

                                                
                                                
=== RUN   TestContainerIPsMultiNetwork/serial/DeleteExtnet
multinetwork_test.go:136: (dbg) Run:  docker network rm network-extnet-931399
multinetwork_test.go:140: external network network-extnet-931399 deleted
--- PASS: TestContainerIPsMultiNetwork/serial/DeleteExtnet (0.11s)

                                                
                                    
TestContainerIPsMultiNetwork/serial/VerifyDeletedResources (0.1s)

                                                
                                                
=== RUN   TestContainerIPsMultiNetwork/serial/VerifyDeletedResources
multinetwork_test.go:263: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
multinetwork_test.go:289: (dbg) Run:  docker ps -a
multinetwork_test.go:294: (dbg) Run:  docker volume inspect extnet-923330
multinetwork_test.go:294: (dbg) Non-zero exit: docker volume inspect extnet-923330: exit status 1 (15.001597ms)

                                                
                                                
-- stdout --
	[]

                                                
                                                
-- /stdout --
** stderr ** 
	Error response from daemon: get extnet-923330: no such volume

                                                
                                                
** /stderr **
multinetwork_test.go:299: (dbg) Run:  docker network ls
--- PASS: TestContainerIPsMultiNetwork/serial/VerifyDeletedResources (0.10s)

                                                
                                    
TestMultiNode/serial/FreshStart2Nodes (73.54s)

                                                
                                                
=== RUN   TestMultiNode/serial/FreshStart2Nodes
multinode_test.go:96: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-271919 --wait=true --memory=2200 --nodes=2 -v=8 --alsologtostderr --driver=docker  --container-runtime=docker
E0831 22:39:51.335079   19777 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/18943-12963/.minikube/profiles/addons-062019/client.crt: no such file or directory" logger="UnhandledError"
multinode_test.go:96: (dbg) Done: out/minikube-linux-amd64 start -p multinode-271919 --wait=true --memory=2200 --nodes=2 -v=8 --alsologtostderr --driver=docker  --container-runtime=docker: (1m13.064578073s)
multinode_test.go:102: (dbg) Run:  out/minikube-linux-amd64 -p multinode-271919 status --alsologtostderr
--- PASS: TestMultiNode/serial/FreshStart2Nodes (73.54s)

                                                
                                    
TestMultiNode/serial/DeployApp2Nodes (38.69s)

                                                
                                                
=== RUN   TestMultiNode/serial/DeployApp2Nodes
multinode_test.go:493: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-271919 -- apply -f ./testdata/multinodes/multinode-pod-dns-test.yaml
multinode_test.go:498: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-271919 -- rollout status deployment/busybox
multinode_test.go:498: (dbg) Done: out/minikube-linux-amd64 kubectl -p multinode-271919 -- rollout status deployment/busybox: (3.912025526s)
multinode_test.go:505: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-271919 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:514: expected 2 Pod IPs but got 1 (may be temporary), output: "\n-- stdout --\n\t'10.244.0.3'\n\n-- /stdout --"
multinode_test.go:505: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-271919 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:514: expected 2 Pod IPs but got 1 (may be temporary), output: "\n-- stdout --\n\t'10.244.0.3'\n\n-- /stdout --"
multinode_test.go:505: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-271919 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:514: expected 2 Pod IPs but got 1 (may be temporary), output: "\n-- stdout --\n\t'10.244.0.3'\n\n-- /stdout --"
multinode_test.go:505: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-271919 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:514: expected 2 Pod IPs but got 1 (may be temporary), output: "\n-- stdout --\n\t'10.244.0.3'\n\n-- /stdout --"
multinode_test.go:505: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-271919 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:514: expected 2 Pod IPs but got 1 (may be temporary), output: "\n-- stdout --\n\t'10.244.0.3'\n\n-- /stdout --"
multinode_test.go:505: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-271919 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:514: expected 2 Pod IPs but got 1 (may be temporary), output: "\n-- stdout --\n\t'10.244.0.3'\n\n-- /stdout --"
multinode_test.go:505: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-271919 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:514: expected 2 Pod IPs but got 1 (may be temporary), output: "\n-- stdout --\n\t'10.244.0.3'\n\n-- /stdout --"
multinode_test.go:505: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-271919 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:528: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-271919 -- get pods -o jsonpath='{.items[*].metadata.name}'
multinode_test.go:536: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-271919 -- exec busybox-7dff88458-6l46j -- nslookup kubernetes.io
multinode_test.go:536: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-271919 -- exec busybox-7dff88458-lmln9 -- nslookup kubernetes.io
multinode_test.go:546: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-271919 -- exec busybox-7dff88458-6l46j -- nslookup kubernetes.default
multinode_test.go:546: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-271919 -- exec busybox-7dff88458-lmln9 -- nslookup kubernetes.default
multinode_test.go:554: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-271919 -- exec busybox-7dff88458-6l46j -- nslookup kubernetes.default.svc.cluster.local
multinode_test.go:554: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-271919 -- exec busybox-7dff88458-lmln9 -- nslookup kubernetes.default.svc.cluster.local
--- PASS: TestMultiNode/serial/DeployApp2Nodes (38.69s)
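
The repeated "expected 2 Pod IPs but got 1 (may be temporary)" lines above are a poll-until-ready loop, not failures: the test re-queries the pod IPs until both busybox replicas have been scheduled. A rough shell equivalent of that retry, assuming a placeholder profile name and a fixed sleep where the real test uses its own backoff:

	for i in $(seq 1 10); do
	  ips=$(minikube kubectl -p multinode-demo -- get pods \
	        -o jsonpath='{.items[*].status.podIP}')
	  if [ "$(echo "$ips" | wc -w)" -eq 2 ]; then break; fi
	  sleep 2
	done
	echo "pod IPs: $ips"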

                                                
                                    
TestMultiNode/serial/PingHostFrom2Pods (0.68s)

                                                
                                                
=== RUN   TestMultiNode/serial/PingHostFrom2Pods
multinode_test.go:564: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-271919 -- get pods -o jsonpath='{.items[*].metadata.name}'
multinode_test.go:572: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-271919 -- exec busybox-7dff88458-6l46j -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
multinode_test.go:583: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-271919 -- exec busybox-7dff88458-6l46j -- sh -c "ping -c 1 192.168.67.1"
multinode_test.go:572: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-271919 -- exec busybox-7dff88458-lmln9 -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
multinode_test.go:583: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-271919 -- exec busybox-7dff88458-lmln9 -- sh -c "ping -c 1 192.168.67.1"
--- PASS: TestMultiNode/serial/PingHostFrom2Pods (0.68s)
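
The pipeline run inside each pod extracts the host gateway IP from busybox's nslookup output: line 5 of that output carries the resolved address, and field 3 of the line is the IP itself, which is then pinged. Sketch of the extraction, with <busybox-pod> as a placeholder pod name:

	kubectl exec <busybox-pod> -- sh -c \
	  "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"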

                                                
                                    
TestMultiNode/serial/AddNode (18.39s)

                                                
                                                
=== RUN   TestMultiNode/serial/AddNode
multinode_test.go:121: (dbg) Run:  out/minikube-linux-amd64 node add -p multinode-271919 -v 3 --alsologtostderr
multinode_test.go:121: (dbg) Done: out/minikube-linux-amd64 node add -p multinode-271919 -v 3 --alsologtostderr: (17.731200907s)
multinode_test.go:127: (dbg) Run:  out/minikube-linux-amd64 -p multinode-271919 status --alsologtostderr
--- PASS: TestMultiNode/serial/AddNode (18.39s)

                                                
                                    
TestMultiNode/serial/MultiNodeLabels (0.07s)

                                                
                                                
=== RUN   TestMultiNode/serial/MultiNodeLabels
multinode_test.go:221: (dbg) Run:  kubectl --context multinode-271919 get nodes -o "jsonpath=[{range .items[*]}{.metadata.labels},{end}]"
--- PASS: TestMultiNode/serial/MultiNodeLabels (0.07s)

                                                
                                    
TestMultiNode/serial/ProfileList (0.31s)

                                                
                                                
=== RUN   TestMultiNode/serial/ProfileList
multinode_test.go:143: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiNode/serial/ProfileList (0.31s)

                                                
                                    
TestMultiNode/serial/CopyFile (8.56s)

                                                
                                                
=== RUN   TestMultiNode/serial/CopyFile
multinode_test.go:184: (dbg) Run:  out/minikube-linux-amd64 -p multinode-271919 status --output json --alsologtostderr
helpers_test.go:557: (dbg) Run:  out/minikube-linux-amd64 -p multinode-271919 cp testdata/cp-test.txt multinode-271919:/home/docker/cp-test.txt
helpers_test.go:535: (dbg) Run:  out/minikube-linux-amd64 -p multinode-271919 ssh -n multinode-271919 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:557: (dbg) Run:  out/minikube-linux-amd64 -p multinode-271919 cp multinode-271919:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile1007050359/001/cp-test_multinode-271919.txt
helpers_test.go:535: (dbg) Run:  out/minikube-linux-amd64 -p multinode-271919 ssh -n multinode-271919 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:557: (dbg) Run:  out/minikube-linux-amd64 -p multinode-271919 cp multinode-271919:/home/docker/cp-test.txt multinode-271919-m02:/home/docker/cp-test_multinode-271919_multinode-271919-m02.txt
helpers_test.go:535: (dbg) Run:  out/minikube-linux-amd64 -p multinode-271919 ssh -n multinode-271919 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:535: (dbg) Run:  out/minikube-linux-amd64 -p multinode-271919 ssh -n multinode-271919-m02 "sudo cat /home/docker/cp-test_multinode-271919_multinode-271919-m02.txt"
helpers_test.go:557: (dbg) Run:  out/minikube-linux-amd64 -p multinode-271919 cp multinode-271919:/home/docker/cp-test.txt multinode-271919-m03:/home/docker/cp-test_multinode-271919_multinode-271919-m03.txt
helpers_test.go:535: (dbg) Run:  out/minikube-linux-amd64 -p multinode-271919 ssh -n multinode-271919 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:535: (dbg) Run:  out/minikube-linux-amd64 -p multinode-271919 ssh -n multinode-271919-m03 "sudo cat /home/docker/cp-test_multinode-271919_multinode-271919-m03.txt"
helpers_test.go:557: (dbg) Run:  out/minikube-linux-amd64 -p multinode-271919 cp testdata/cp-test.txt multinode-271919-m02:/home/docker/cp-test.txt
helpers_test.go:535: (dbg) Run:  out/minikube-linux-amd64 -p multinode-271919 ssh -n multinode-271919-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:557: (dbg) Run:  out/minikube-linux-amd64 -p multinode-271919 cp multinode-271919-m02:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile1007050359/001/cp-test_multinode-271919-m02.txt
helpers_test.go:535: (dbg) Run:  out/minikube-linux-amd64 -p multinode-271919 ssh -n multinode-271919-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:557: (dbg) Run:  out/minikube-linux-amd64 -p multinode-271919 cp multinode-271919-m02:/home/docker/cp-test.txt multinode-271919:/home/docker/cp-test_multinode-271919-m02_multinode-271919.txt
helpers_test.go:535: (dbg) Run:  out/minikube-linux-amd64 -p multinode-271919 ssh -n multinode-271919-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:535: (dbg) Run:  out/minikube-linux-amd64 -p multinode-271919 ssh -n multinode-271919 "sudo cat /home/docker/cp-test_multinode-271919-m02_multinode-271919.txt"
helpers_test.go:557: (dbg) Run:  out/minikube-linux-amd64 -p multinode-271919 cp multinode-271919-m02:/home/docker/cp-test.txt multinode-271919-m03:/home/docker/cp-test_multinode-271919-m02_multinode-271919-m03.txt
helpers_test.go:535: (dbg) Run:  out/minikube-linux-amd64 -p multinode-271919 ssh -n multinode-271919-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:535: (dbg) Run:  out/minikube-linux-amd64 -p multinode-271919 ssh -n multinode-271919-m03 "sudo cat /home/docker/cp-test_multinode-271919-m02_multinode-271919-m03.txt"
helpers_test.go:557: (dbg) Run:  out/minikube-linux-amd64 -p multinode-271919 cp testdata/cp-test.txt multinode-271919-m03:/home/docker/cp-test.txt
helpers_test.go:535: (dbg) Run:  out/minikube-linux-amd64 -p multinode-271919 ssh -n multinode-271919-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:557: (dbg) Run:  out/minikube-linux-amd64 -p multinode-271919 cp multinode-271919-m03:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile1007050359/001/cp-test_multinode-271919-m03.txt
helpers_test.go:535: (dbg) Run:  out/minikube-linux-amd64 -p multinode-271919 ssh -n multinode-271919-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:557: (dbg) Run:  out/minikube-linux-amd64 -p multinode-271919 cp multinode-271919-m03:/home/docker/cp-test.txt multinode-271919:/home/docker/cp-test_multinode-271919-m03_multinode-271919.txt
helpers_test.go:535: (dbg) Run:  out/minikube-linux-amd64 -p multinode-271919 ssh -n multinode-271919-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:535: (dbg) Run:  out/minikube-linux-amd64 -p multinode-271919 ssh -n multinode-271919 "sudo cat /home/docker/cp-test_multinode-271919-m03_multinode-271919.txt"
helpers_test.go:557: (dbg) Run:  out/minikube-linux-amd64 -p multinode-271919 cp multinode-271919-m03:/home/docker/cp-test.txt multinode-271919-m02:/home/docker/cp-test_multinode-271919-m03_multinode-271919-m02.txt
helpers_test.go:535: (dbg) Run:  out/minikube-linux-amd64 -p multinode-271919 ssh -n multinode-271919-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:535: (dbg) Run:  out/minikube-linux-amd64 -p multinode-271919 ssh -n multinode-271919-m02 "sudo cat /home/docker/cp-test_multinode-271919-m03_multinode-271919-m02.txt"
--- PASS: TestMultiNode/serial/CopyFile (8.56s)
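
Every (source, destination) pair in the CopyFile matrix above is the same two-step copy-then-verify pattern, iterated across all three machines. One iteration, with placeholder names:

	minikube -p multinode-demo cp testdata/cp-test.txt multinode-demo:/home/docker/cp-test.txt
	minikube -p multinode-demo ssh -n multinode-demo "sudo cat /home/docker/cp-test.txt"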

                                                
                                    
TestMultiNode/serial/StopNode (2.04s)

                                                
                                                
=== RUN   TestMultiNode/serial/StopNode
multinode_test.go:248: (dbg) Run:  out/minikube-linux-amd64 -p multinode-271919 node stop m03
E0831 22:41:14.398848   19777 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/18943-12963/.minikube/profiles/addons-062019/client.crt: no such file or directory" logger="UnhandledError"
multinode_test.go:248: (dbg) Done: out/minikube-linux-amd64 -p multinode-271919 node stop m03: (1.168543691s)
multinode_test.go:254: (dbg) Run:  out/minikube-linux-amd64 -p multinode-271919 status
multinode_test.go:254: (dbg) Non-zero exit: out/minikube-linux-amd64 -p multinode-271919 status: exit status 7 (426.256055ms)

                                                
                                                
-- stdout --
	multinode-271919
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	multinode-271919-m02
	type: Worker
	host: Running
	kubelet: Running
	
	multinode-271919-m03
	type: Worker
	host: Stopped
	kubelet: Stopped
	

                                                
                                                
-- /stdout --
multinode_test.go:261: (dbg) Run:  out/minikube-linux-amd64 -p multinode-271919 status --alsologtostderr
multinode_test.go:261: (dbg) Non-zero exit: out/minikube-linux-amd64 -p multinode-271919 status --alsologtostderr: exit status 7 (440.983909ms)

                                                
                                                
-- stdout --
	multinode-271919
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	multinode-271919-m02
	type: Worker
	host: Running
	kubelet: Running
	
	multinode-271919-m03
	type: Worker
	host: Stopped
	kubelet: Stopped
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0831 22:41:14.946315  231641 out.go:345] Setting OutFile to fd 1 ...
	I0831 22:41:14.946415  231641 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0831 22:41:14.946422  231641 out.go:358] Setting ErrFile to fd 2...
	I0831 22:41:14.946426  231641 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0831 22:41:14.946585  231641 root.go:338] Updating PATH: /home/jenkins/minikube-integration/18943-12963/.minikube/bin
	I0831 22:41:14.946738  231641 out.go:352] Setting JSON to false
	I0831 22:41:14.946764  231641 mustload.go:65] Loading cluster: multinode-271919
	I0831 22:41:14.946876  231641 notify.go:220] Checking for updates...
	I0831 22:41:14.947143  231641 config.go:182] Loaded profile config "multinode-271919": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.31.0
	I0831 22:41:14.947159  231641 status.go:255] checking status of multinode-271919 ...
	I0831 22:41:14.947601  231641 cli_runner.go:164] Run: docker container inspect multinode-271919 --format={{.State.Status}}
	I0831 22:41:14.966510  231641 status.go:330] multinode-271919 host status = "Running" (err=<nil>)
	I0831 22:41:14.966536  231641 host.go:66] Checking if "multinode-271919" exists ...
	I0831 22:41:14.966754  231641 cli_runner.go:164] Run: docker container inspect -f "{{with (index .NetworkSettings.Networks "multinode-271919")}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-271919
	I0831 22:41:14.983040  231641 host.go:66] Checking if "multinode-271919" exists ...
	I0831 22:41:14.983340  231641 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0831 22:41:14.983399  231641 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-271919
	I0831 22:41:14.999695  231641 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32930 SSHKeyPath:/home/jenkins/minikube-integration/18943-12963/.minikube/machines/multinode-271919/id_rsa Username:docker}
	I0831 22:41:15.085878  231641 ssh_runner.go:195] Run: systemctl --version
	I0831 22:41:15.089720  231641 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0831 22:41:15.099673  231641 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0831 22:41:15.145793  231641 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:3 ContainersRunning:2 ContainersPaused:0 ContainersStopped:1 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:42 OomKillDisable:true NGoroutines:62 SystemTime:2024-08-31 22:41:15.136598152 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1067-gcp OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33647943680 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent Labels:[] ExperimentalBuild:false ServerVersion:27.2.0 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:472731909fa34bd7bc9c087e4c27943f9835f111 Expected:472731909fa34bd7bc9c087e4c27943f9835f111} RuncCommit:{ID:v1.1.13-0-g58aa920 Expected:v1.1.13-0-g58aa920} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.16.2] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.29.2] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I0831 22:41:15.146402  231641 kubeconfig.go:125] found "multinode-271919" server: "https://192.168.67.2:8443"
	I0831 22:41:15.146442  231641 api_server.go:166] Checking apiserver status ...
	I0831 22:41:15.146496  231641 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0831 22:41:15.157015  231641 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/2390/cgroup
	I0831 22:41:15.165361  231641 api_server.go:182] apiserver freezer: "3:freezer:/docker/445b1a5fdf969567922bd49e2b0c25f59ec0d9033d7e7f12b47fd3b09c58ae3a/kubepods/burstable/pod2a751c9ea0d84d6e4b0db0418fa99c43/3e5d6cd397e25873982cce88bf6988dd37659944591308a49c19491c787eb3c0"
	I0831 22:41:15.165430  231641 ssh_runner.go:195] Run: sudo cat /sys/fs/cgroup/freezer/docker/445b1a5fdf969567922bd49e2b0c25f59ec0d9033d7e7f12b47fd3b09c58ae3a/kubepods/burstable/pod2a751c9ea0d84d6e4b0db0418fa99c43/3e5d6cd397e25873982cce88bf6988dd37659944591308a49c19491c787eb3c0/freezer.state
	I0831 22:41:15.173339  231641 api_server.go:204] freezer state: "THAWED"
	I0831 22:41:15.173367  231641 api_server.go:253] Checking apiserver healthz at https://192.168.67.2:8443/healthz ...
	I0831 22:41:15.178369  231641 api_server.go:279] https://192.168.67.2:8443/healthz returned 200:
	ok
	I0831 22:41:15.178397  231641 status.go:422] multinode-271919 apiserver status = Running (err=<nil>)
	I0831 22:41:15.178410  231641 status.go:257] multinode-271919 status: &{Name:multinode-271919 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0831 22:41:15.178431  231641 status.go:255] checking status of multinode-271919-m02 ...
	I0831 22:41:15.178745  231641 cli_runner.go:164] Run: docker container inspect multinode-271919-m02 --format={{.State.Status}}
	I0831 22:41:15.195989  231641 status.go:330] multinode-271919-m02 host status = "Running" (err=<nil>)
	I0831 22:41:15.196010  231641 host.go:66] Checking if "multinode-271919-m02" exists ...
	I0831 22:41:15.196323  231641 cli_runner.go:164] Run: docker container inspect -f "{{with (index .NetworkSettings.Networks "multinode-271919")}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-271919-m02
	I0831 22:41:15.213151  231641 host.go:66] Checking if "multinode-271919-m02" exists ...
	I0831 22:41:15.213463  231641 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0831 22:41:15.213502  231641 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-271919-m02
	I0831 22:41:15.229908  231641 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32935 SSHKeyPath:/home/jenkins/minikube-integration/18943-12963/.minikube/machines/multinode-271919-m02/id_rsa Username:docker}
	I0831 22:41:15.317882  231641 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0831 22:41:15.328265  231641 status.go:257] multinode-271919-m02 status: &{Name:multinode-271919-m02 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}
	I0831 22:41:15.328304  231641 status.go:255] checking status of multinode-271919-m03 ...
	I0831 22:41:15.328624  231641 cli_runner.go:164] Run: docker container inspect multinode-271919-m03 --format={{.State.Status}}
	I0831 22:41:15.345773  231641 status.go:330] multinode-271919-m03 host status = "Stopped" (err=<nil>)
	I0831 22:41:15.345793  231641 status.go:343] host is not running, skipping remaining checks
	I0831 22:41:15.345806  231641 status.go:257] multinode-271919-m03 status: &{Name:multinode-271919-m03 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
--- PASS: TestMultiNode/serial/StopNode (2.04s)
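
The two non-zero exits above are expected: status returns exit code 7 when any node in the profile is stopped. A sketch of handling that in a wrapper script; the degraded/ok split is an assumption about local handling, not part of the test itself:

	rc=0
	minikube -p multinode-demo status || rc=$?
	if [ "$rc" -eq 0 ]; then
	  echo "all nodes running"
	elif [ "$rc" -eq 7 ]; then
	  echo "degraded: at least one node is stopped"
	fi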

                                                
                                    
TestMultiNode/serial/StartAfterStop (9.68s)

                                                
                                                
=== RUN   TestMultiNode/serial/StartAfterStop
multinode_test.go:282: (dbg) Run:  out/minikube-linux-amd64 -p multinode-271919 node start m03 -v=7 --alsologtostderr
multinode_test.go:282: (dbg) Done: out/minikube-linux-amd64 -p multinode-271919 node start m03 -v=7 --alsologtostderr: (9.037772861s)
multinode_test.go:290: (dbg) Run:  out/minikube-linux-amd64 -p multinode-271919 status -v=7 --alsologtostderr
multinode_test.go:306: (dbg) Run:  kubectl get nodes
--- PASS: TestMultiNode/serial/StartAfterStop (9.68s)

                                                
                                    
TestMultiNode/serial/RestartKeepsNodes (111.54s)

                                                
                                                
=== RUN   TestMultiNode/serial/RestartKeepsNodes
multinode_test.go:314: (dbg) Run:  out/minikube-linux-amd64 node list -p multinode-271919
multinode_test.go:321: (dbg) Run:  out/minikube-linux-amd64 stop -p multinode-271919
multinode_test.go:321: (dbg) Done: out/minikube-linux-amd64 stop -p multinode-271919: (22.225562412s)
multinode_test.go:326: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-271919 --wait=true -v=8 --alsologtostderr
E0831 22:42:40.469057   19777 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/18943-12963/.minikube/profiles/functional-369865/client.crt: no such file or directory" logger="UnhandledError"
multinode_test.go:326: (dbg) Done: out/minikube-linux-amd64 start -p multinode-271919 --wait=true -v=8 --alsologtostderr: (1m29.222213069s)
multinode_test.go:331: (dbg) Run:  out/minikube-linux-amd64 node list -p multinode-271919
--- PASS: TestMultiNode/serial/RestartKeepsNodes (111.54s)
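
The invariant this test checks is that a full stop/start cycle preserves the node set: the node list captured before the restart must match the list afterwards. A minimal sketch, assuming a placeholder profile:

	before=$(minikube node list -p multinode-demo)
	minikube stop -p multinode-demo
	minikube start -p multinode-demo --wait=true
	after=$(minikube node list -p multinode-demo)
	[ "$before" = "$after" ] && echo "node set preserved"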

                                                
                                    
TestMultiNode/serial/DeleteNode (5.10s)

                                                
                                                
=== RUN   TestMultiNode/serial/DeleteNode
multinode_test.go:416: (dbg) Run:  out/minikube-linux-amd64 -p multinode-271919 node delete m03
multinode_test.go:416: (dbg) Done: out/minikube-linux-amd64 -p multinode-271919 node delete m03: (4.568581035s)
multinode_test.go:422: (dbg) Run:  out/minikube-linux-amd64 -p multinode-271919 status --alsologtostderr
multinode_test.go:436: (dbg) Run:  kubectl get nodes
multinode_test.go:444: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiNode/serial/DeleteNode (5.10s)

                                                
                                    
TestMultiNode/serial/StopMultiNode (21.39s)

                                                
                                                
=== RUN   TestMultiNode/serial/StopMultiNode
multinode_test.go:345: (dbg) Run:  out/minikube-linux-amd64 -p multinode-271919 stop
multinode_test.go:345: (dbg) Done: out/minikube-linux-amd64 -p multinode-271919 stop: (21.217444264s)
multinode_test.go:351: (dbg) Run:  out/minikube-linux-amd64 -p multinode-271919 status
multinode_test.go:351: (dbg) Non-zero exit: out/minikube-linux-amd64 -p multinode-271919 status: exit status 7 (89.267747ms)

                                                
                                                
-- stdout --
	multinode-271919
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	multinode-271919-m02
	type: Worker
	host: Stopped
	kubelet: Stopped
	

                                                
                                                
-- /stdout --
multinode_test.go:358: (dbg) Run:  out/minikube-linux-amd64 -p multinode-271919 status --alsologtostderr
multinode_test.go:358: (dbg) Non-zero exit: out/minikube-linux-amd64 -p multinode-271919 status --alsologtostderr: exit status 7 (78.112075ms)

                                                
                                                
-- stdout --
	multinode-271919
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	multinode-271919-m02
	type: Worker
	host: Stopped
	kubelet: Stopped
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0831 22:43:43.006850  247132 out.go:345] Setting OutFile to fd 1 ...
	I0831 22:43:43.006939  247132 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0831 22:43:43.006946  247132 out.go:358] Setting ErrFile to fd 2...
	I0831 22:43:43.006950  247132 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0831 22:43:43.007127  247132 root.go:338] Updating PATH: /home/jenkins/minikube-integration/18943-12963/.minikube/bin
	I0831 22:43:43.007270  247132 out.go:352] Setting JSON to false
	I0831 22:43:43.007295  247132 mustload.go:65] Loading cluster: multinode-271919
	I0831 22:43:43.007400  247132 notify.go:220] Checking for updates...
	I0831 22:43:43.007672  247132 config.go:182] Loaded profile config "multinode-271919": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.31.0
	I0831 22:43:43.007686  247132 status.go:255] checking status of multinode-271919 ...
	I0831 22:43:43.008040  247132 cli_runner.go:164] Run: docker container inspect multinode-271919 --format={{.State.Status}}
	I0831 22:43:43.025104  247132 status.go:330] multinode-271919 host status = "Stopped" (err=<nil>)
	I0831 22:43:43.025146  247132 status.go:343] host is not running, skipping remaining checks
	I0831 22:43:43.025160  247132 status.go:257] multinode-271919 status: &{Name:multinode-271919 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0831 22:43:43.025196  247132 status.go:255] checking status of multinode-271919-m02 ...
	I0831 22:43:43.025590  247132 cli_runner.go:164] Run: docker container inspect multinode-271919-m02 --format={{.State.Status}}
	I0831 22:43:43.041187  247132 status.go:330] multinode-271919-m02 host status = "Stopped" (err=<nil>)
	I0831 22:43:43.041241  247132 status.go:343] host is not running, skipping remaining checks
	I0831 22:43:43.041249  247132 status.go:257] multinode-271919-m02 status: &{Name:multinode-271919-m02 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
--- PASS: TestMultiNode/serial/StopMultiNode (21.39s)

                                                
                                    
TestMultiNode/serial/RestartMultiNode (53.68s)

                                                
                                                
=== RUN   TestMultiNode/serial/RestartMultiNode
multinode_test.go:376: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-271919 --wait=true -v=8 --alsologtostderr --driver=docker  --container-runtime=docker
E0831 22:44:03.533896   19777 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/18943-12963/.minikube/profiles/functional-369865/client.crt: no such file or directory" logger="UnhandledError"
multinode_test.go:376: (dbg) Done: out/minikube-linux-amd64 start -p multinode-271919 --wait=true -v=8 --alsologtostderr --driver=docker  --container-runtime=docker: (53.140499876s)
multinode_test.go:382: (dbg) Run:  out/minikube-linux-amd64 -p multinode-271919 status --alsologtostderr
multinode_test.go:396: (dbg) Run:  kubectl get nodes
multinode_test.go:404: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiNode/serial/RestartMultiNode (53.68s)

                                                
                                    
TestMultiNode/serial/ValidateNameConflict (26.06s)

                                                
                                                
=== RUN   TestMultiNode/serial/ValidateNameConflict
multinode_test.go:455: (dbg) Run:  out/minikube-linux-amd64 node list -p multinode-271919
multinode_test.go:464: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-271919-m02 --driver=docker  --container-runtime=docker
multinode_test.go:464: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p multinode-271919-m02 --driver=docker  --container-runtime=docker: exit status 14 (61.29366ms)

                                                
                                                
-- stdout --
	* [multinode-271919-m02] minikube v1.33.1 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=18943
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/18943-12963/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/18943-12963/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	! Profile name 'multinode-271919-m02' is duplicated with machine name 'multinode-271919-m02' in profile 'multinode-271919'
	X Exiting due to MK_USAGE: Profile name should be unique

                                                
                                                
** /stderr **
multinode_test.go:472: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-271919-m03 --driver=docker  --container-runtime=docker
E0831 22:44:51.334657   19777 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/18943-12963/.minikube/profiles/addons-062019/client.crt: no such file or directory" logger="UnhandledError"
multinode_test.go:472: (dbg) Done: out/minikube-linux-amd64 start -p multinode-271919-m03 --driver=docker  --container-runtime=docker: (23.73963674s)
multinode_test.go:479: (dbg) Run:  out/minikube-linux-amd64 node add -p multinode-271919
multinode_test.go:479: (dbg) Non-zero exit: out/minikube-linux-amd64 node add -p multinode-271919: exit status 80 (256.850128ms)

                                                
                                                
-- stdout --
	* Adding node m03 to cluster multinode-271919 as [worker]
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to GUEST_NODE_ADD: failed to add node: Node multinode-271919-m03 already exists in multinode-271919-m03 profile
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_node_040ea7097fd6ed71e65be9a474587f81f0ccd21d_0.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
multinode_test.go:484: (dbg) Run:  out/minikube-linux-amd64 delete -p multinode-271919-m03
multinode_test.go:484: (dbg) Done: out/minikube-linux-amd64 delete -p multinode-271919-m03: (1.960805554s)
--- PASS: TestMultiNode/serial/ValidateNameConflict (26.06s)
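
Both rejections above are deliberate: a new profile whose name collides with an existing cluster's machine name ("<profile>-m02") fails with exit 14 (MK_USAGE), and `node add` refuses a node name already owned by another profile with exit 80 (GUEST_NODE_ADD). Sketch of triggering the first case, names being placeholders:

	minikube start -p multinode-demo-m02 --driver=docker
	echo "exit: $?"   # expected 14 when multinode-demo already owns an m02 machine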

                                                
                                    
TestPreload (136.70s)

                                                
                                                
=== RUN   TestPreload
preload_test.go:44: (dbg) Run:  out/minikube-linux-amd64 start -p test-preload-517728 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=docker --kubernetes-version=v1.24.4
preload_test.go:44: (dbg) Done: out/minikube-linux-amd64 start -p test-preload-517728 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=docker --kubernetes-version=v1.24.4: (1m31.740660964s)
preload_test.go:52: (dbg) Run:  out/minikube-linux-amd64 -p test-preload-517728 image pull gcr.io/k8s-minikube/busybox
preload_test.go:52: (dbg) Done: out/minikube-linux-amd64 -p test-preload-517728 image pull gcr.io/k8s-minikube/busybox: (1.836357117s)
preload_test.go:58: (dbg) Run:  out/minikube-linux-amd64 stop -p test-preload-517728
preload_test.go:58: (dbg) Done: out/minikube-linux-amd64 stop -p test-preload-517728: (10.61737275s)
preload_test.go:66: (dbg) Run:  out/minikube-linux-amd64 start -p test-preload-517728 --memory=2200 --alsologtostderr -v=1 --wait=true --driver=docker  --container-runtime=docker
preload_test.go:66: (dbg) Done: out/minikube-linux-amd64 start -p test-preload-517728 --memory=2200 --alsologtostderr -v=1 --wait=true --driver=docker  --container-runtime=docker: (30.198166815s)
preload_test.go:71: (dbg) Run:  out/minikube-linux-amd64 -p test-preload-517728 image list
helpers_test.go:176: Cleaning up "test-preload-517728" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-linux-amd64 delete -p test-preload-517728
helpers_test.go:179: (dbg) Done: out/minikube-linux-amd64 delete -p test-preload-517728: (2.117962055s)
--- PASS: TestPreload (136.70s)
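
TestPreload's sequence verifies that an image pulled into a cluster started with --preload=false survives a stop/start that switches over to the preloaded tarball. Condensed sketch with a placeholder profile name:

	minikube start -p preload-demo --preload=false --kubernetes-version=v1.24.4 --driver=docker
	minikube -p preload-demo image pull gcr.io/k8s-minikube/busybox
	minikube stop -p preload-demo
	minikube start -p preload-demo --driver=docker
	minikube -p preload-demo image list | grep busybox   # image should still be listed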

                                                
                                    
TestScheduledStopUnix (97.01s)

                                                
                                                
=== RUN   TestScheduledStopUnix
scheduled_stop_test.go:128: (dbg) Run:  out/minikube-linux-amd64 start -p scheduled-stop-304213 --memory=2048 --driver=docker  --container-runtime=docker
E0831 22:47:40.469384   19777 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/18943-12963/.minikube/profiles/functional-369865/client.crt: no such file or directory" logger="UnhandledError"
scheduled_stop_test.go:128: (dbg) Done: out/minikube-linux-amd64 start -p scheduled-stop-304213 --memory=2048 --driver=docker  --container-runtime=docker: (24.212953139s)
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-amd64 stop -p scheduled-stop-304213 --schedule 5m
scheduled_stop_test.go:191: (dbg) Run:  out/minikube-linux-amd64 status --format={{.TimeToStop}} -p scheduled-stop-304213 -n scheduled-stop-304213
scheduled_stop_test.go:169: signal error was:  <nil>
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-amd64 stop -p scheduled-stop-304213 --schedule 15s
scheduled_stop_test.go:169: signal error was:  os: process already finished
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-amd64 stop -p scheduled-stop-304213 --cancel-scheduled
scheduled_stop_test.go:176: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p scheduled-stop-304213 -n scheduled-stop-304213
scheduled_stop_test.go:205: (dbg) Run:  out/minikube-linux-amd64 status -p scheduled-stop-304213
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-amd64 stop -p scheduled-stop-304213 --schedule 15s
scheduled_stop_test.go:169: signal error was:  os: process already finished
scheduled_stop_test.go:205: (dbg) Run:  out/minikube-linux-amd64 status -p scheduled-stop-304213
scheduled_stop_test.go:205: (dbg) Non-zero exit: out/minikube-linux-amd64 status -p scheduled-stop-304213: exit status 7 (59.988017ms)

                                                
                                                
-- stdout --
	scheduled-stop-304213
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

                                                
                                                
-- /stdout --
scheduled_stop_test.go:176: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p scheduled-stop-304213 -n scheduled-stop-304213
scheduled_stop_test.go:176: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p scheduled-stop-304213 -n scheduled-stop-304213: exit status 7 (56.369029ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
scheduled_stop_test.go:176: status error: exit status 7 (may be ok)
helpers_test.go:176: Cleaning up "scheduled-stop-304213" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-linux-amd64 delete -p scheduled-stop-304213
helpers_test.go:179: (dbg) Done: out/minikube-linux-amd64 delete -p scheduled-stop-304213: (1.594300421s)
--- PASS: TestScheduledStopUnix (97.01s)
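
The schedule/cancel/re-arm sequence exercised above reduces to a few commands; a sketch with a placeholder profile, where the final exit 7 confirms the host actually stopped:

	minikube stop -p sched-demo --schedule 5m          # arm a 5-minute stop
	minikube stop -p sched-demo --cancel-scheduled     # disarm it
	minikube stop -p sched-demo --schedule 15s         # re-arm with a short timer
	sleep 20
	minikube status -p sched-demo                      # exits 7 once the host is stopped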

                                                
                                    
TestSkaffold (99.98s)

                                                
                                                
=== RUN   TestSkaffold
skaffold_test.go:59: (dbg) Run:  /tmp/skaffold.exe2053560733 version
skaffold_test.go:63: skaffold version: v2.13.2
skaffold_test.go:66: (dbg) Run:  out/minikube-linux-amd64 start -p skaffold-503448 --memory=2600 --driver=docker  --container-runtime=docker
skaffold_test.go:66: (dbg) Done: out/minikube-linux-amd64 start -p skaffold-503448 --memory=2600 --driver=docker  --container-runtime=docker: (20.6516719s)
skaffold_test.go:86: copying out/minikube-linux-amd64 to /home/jenkins/workspace/Docker_Linux_integration/out/minikube
skaffold_test.go:105: (dbg) Run:  /tmp/skaffold.exe2053560733 run --minikube-profile skaffold-503448 --kube-context skaffold-503448 --status-check=true --port-forward=false --interactive=false
E0831 22:49:51.335123   19777 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/18943-12963/.minikube/profiles/addons-062019/client.crt: no such file or directory" logger="UnhandledError"
skaffold_test.go:105: (dbg) Done: /tmp/skaffold.exe2053560733 run --minikube-profile skaffold-503448 --kube-context skaffold-503448 --status-check=true --port-forward=false --interactive=false: (1m2.89405086s)
skaffold_test.go:111: (dbg) TestSkaffold: waiting 1m0s for pods matching "app=leeroy-app" in namespace "default" ...
helpers_test.go:345: "leeroy-app-7bdd65f4fb-qdz6t" [2812d380-ea7e-44a8-ba45-72a5a64d03be] Running
skaffold_test.go:111: (dbg) TestSkaffold: app=leeroy-app healthy within 6.003804604s
skaffold_test.go:114: (dbg) TestSkaffold: waiting 1m0s for pods matching "app=leeroy-web" in namespace "default" ...
helpers_test.go:345: "leeroy-web-65ff9788d6-82l8q" [77a1b0f5-3bf0-4cf2-87d0-c2227cd0a651] Running
skaffold_test.go:114: (dbg) TestSkaffold: app=leeroy-web healthy within 5.003531164s
helpers_test.go:176: Cleaning up "skaffold-503448" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-linux-amd64 delete -p skaffold-503448
helpers_test.go:179: (dbg) Done: out/minikube-linux-amd64 delete -p skaffold-503448: (2.671778371s)
--- PASS: TestSkaffold (99.98s)
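
The skaffold flow pins both the minikube profile and the kube context to the same cluster before deploying, which is what keeps the build and the status check on the cluster under test. Sketch, assuming skaffold is on PATH rather than at the temp path the harness uses:

	minikube start -p skaffold-demo --memory=2600 --driver=docker
	skaffold run --minikube-profile skaffold-demo --kube-context skaffold-demo \
	  --status-check=true --port-forward=false --interactive=false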

                                                
                                    
TestInsufficientStorage (12.31s)

                                                
                                                
=== RUN   TestInsufficientStorage
status_test.go:50: (dbg) Run:  out/minikube-linux-amd64 start -p insufficient-storage-203287 --memory=2048 --output=json --wait=true --driver=docker  --container-runtime=docker
status_test.go:50: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p insufficient-storage-203287 --memory=2048 --output=json --wait=true --driver=docker  --container-runtime=docker: exit status 26 (10.185495751s)

                                                
                                                
-- stdout --
	{"specversion":"1.0","id":"b7ba33ca-dfee-4fec-b270-80db107e9869","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"0","message":"[insufficient-storage-203287] minikube v1.33.1 on Ubuntu 20.04 (kvm/amd64)","name":"Initial Minikube Setup","totalsteps":"19"}}
	{"specversion":"1.0","id":"c7483f92-d96f-455a-8a93-ccbbd364fbc6","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_LOCATION=18943"}}
	{"specversion":"1.0","id":"18172ffe-aaaf-4f3d-8b69-98c2d873a966","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true"}}
	{"specversion":"1.0","id":"8df2232a-ba21-46c9-8705-7fbd8ea0e962","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"KUBECONFIG=/home/jenkins/minikube-integration/18943-12963/kubeconfig"}}
	{"specversion":"1.0","id":"f568f349-50f7-43ed-adbf-398131c7a763","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_HOME=/home/jenkins/minikube-integration/18943-12963/.minikube"}}
	{"specversion":"1.0","id":"ead554f3-cb1d-4a89-a819-317dea7c1d89","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_BIN=out/minikube-linux-amd64"}}
	{"specversion":"1.0","id":"35c1d884-491a-4ce9-9ec1-ed8d63567c6d","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_FORCE_SYSTEMD="}}
	{"specversion":"1.0","id":"8b1403f7-6319-4223-9d85-72d2e1442200","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_TEST_STORAGE_CAPACITY=100"}}
	{"specversion":"1.0","id":"f9cf4c53-ef16-40ef-b0a3-4209e8ea9b70","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_TEST_AVAILABLE_STORAGE=19"}}
	{"specversion":"1.0","id":"1e3e0fe6-2b0c-47e2-ad35-ced5d5eaca68","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"1","message":"Using the docker driver based on user configuration","name":"Selecting Driver","totalsteps":"19"}}
	{"specversion":"1.0","id":"cc4a38c2-711f-48d0-9ee7-b1b02a2249b2","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"Using Docker driver with root privileges"}}
	{"specversion":"1.0","id":"35862593-00d7-474b-9ed5-933477db303e","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"3","message":"Starting \"insufficient-storage-203287\" primary control-plane node in \"insufficient-storage-203287\" cluster","name":"Starting Node","totalsteps":"19"}}
	{"specversion":"1.0","id":"394ba8d7-aed6-40fb-a756-9dc9d2e1fdaf","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"5","message":"Pulling base image v0.0.44-1724862063-19530 ...","name":"Pulling Base Image","totalsteps":"19"}}
	{"specversion":"1.0","id":"04e5dfd1-a708-4392-8119-928329aadd1f","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"8","message":"Creating docker container (CPUs=2, Memory=2048MB) ...","name":"Creating Container","totalsteps":"19"}}
	{"specversion":"1.0","id":"1a61b054-0aee-46b5-aa69-227e159e7725","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"advice":"Try one or more of the following to free up space on the device:\n\t\n\t\t\t1. Run \"docker system prune\" to remove unused Docker data (optionally with \"-a\")\n\t\t\t2. Increase the storage allocated to Docker for Desktop by clicking on:\n\t\t\t\tDocker icon \u003e Preferences \u003e Resources \u003e Disk Image Size\n\t\t\t3. Run \"minikube ssh -- docker system prune\" if using the Docker container runtime","exitcode":"26","issues":"https://github.com/kubernetes/minikube/issues/9024","message":"Docker is out of disk space! (/var is at 100% of capacity). You can pass '--force' to skip this check.","name":"RSRC_DOCKER_STORAGE","url":""}}

                                                
                                                
-- /stdout --
helpers_test.go:700: (dbg) Run:  out/minikube-linux-amd64 status -p insufficient-storage-203287 --output=json --layout=cluster
helpers_test.go:700: (dbg) Non-zero exit: out/minikube-linux-amd64 status -p insufficient-storage-203287 --output=json --layout=cluster: exit status 7 (249.179265ms)

                                                
                                                
-- stdout --
	{"Name":"insufficient-storage-203287","StatusCode":507,"StatusName":"InsufficientStorage","StatusDetail":"/var is almost out of disk space","Step":"Creating Container","StepDetail":"Creating docker container (CPUs=2, Memory=2048MB) ...","BinaryVersion":"v1.33.1","Components":{"kubeconfig":{"Name":"kubeconfig","StatusCode":500,"StatusName":"Error"}},"Nodes":[{"Name":"insufficient-storage-203287","StatusCode":507,"StatusName":"InsufficientStorage","Components":{"apiserver":{"Name":"apiserver","StatusCode":405,"StatusName":"Stopped"},"kubelet":{"Name":"kubelet","StatusCode":405,"StatusName":"Stopped"}}}]}

                                                
                                                
-- /stdout --
** stderr ** 
	E0831 22:50:50.621105  286918 status.go:417] kubeconfig endpoint: get endpoint: "insufficient-storage-203287" does not appear in /home/jenkins/minikube-integration/18943-12963/kubeconfig

                                                
                                                
** /stderr **
helpers_test.go:700: (dbg) Run:  out/minikube-linux-amd64 status -p insufficient-storage-203287 --output=json --layout=cluster
helpers_test.go:700: (dbg) Non-zero exit: out/minikube-linux-amd64 status -p insufficient-storage-203287 --output=json --layout=cluster: exit status 7 (252.481243ms)

                                                
                                                
-- stdout --
	{"Name":"insufficient-storage-203287","StatusCode":507,"StatusName":"InsufficientStorage","StatusDetail":"/var is almost out of disk space","BinaryVersion":"v1.33.1","Components":{"kubeconfig":{"Name":"kubeconfig","StatusCode":500,"StatusName":"Error"}},"Nodes":[{"Name":"insufficient-storage-203287","StatusCode":507,"StatusName":"InsufficientStorage","Components":{"apiserver":{"Name":"apiserver","StatusCode":405,"StatusName":"Stopped"},"kubelet":{"Name":"kubelet","StatusCode":405,"StatusName":"Stopped"}}}]}

                                                
                                                
-- /stdout --
** stderr ** 
	E0831 22:50:50.874002  287018 status.go:417] kubeconfig endpoint: get endpoint: "insufficient-storage-203287" does not appear in /home/jenkins/minikube-integration/18943-12963/kubeconfig
	E0831 22:50:50.883453  287018 status.go:560] unable to read event log: stat: stat /home/jenkins/minikube-integration/18943-12963/.minikube/profiles/insufficient-storage-203287/events.json: no such file or directory

                                                
                                                
** /stderr **
helpers_test.go:176: Cleaning up "insufficient-storage-203287" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-linux-amd64 delete -p insufficient-storage-203287
helpers_test.go:179: (dbg) Done: out/minikube-linux-amd64 delete -p insufficient-storage-203287: (1.619821659s)
--- PASS: TestInsufficientStorage (12.31s)
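
The MINIKUBE_TEST_STORAGE_CAPACITY and MINIKUBE_TEST_AVAILABLE_STORAGE values echoed in the JSON events above are test-only overrides that fake a nearly-full disk, so start aborts with exit 26 (RSRC_DOCKER_STORAGE) before creating the cluster. Sketch of reproducing the check, profile name being a placeholder:

	export MINIKUBE_TEST_STORAGE_CAPACITY=100
	export MINIKUBE_TEST_AVAILABLE_STORAGE=19
	minikube start -p storage-demo --output=json --driver=docker
	echo "exit: $?"   # expected 26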

                                                
                                    
TestRunningBinaryUpgrade (82.95s)

                                                
                                                
=== RUN   TestRunningBinaryUpgrade
=== PAUSE TestRunningBinaryUpgrade

                                                
                                                

                                                
                                                
=== CONT  TestRunningBinaryUpgrade
version_upgrade_test.go:120: (dbg) Run:  /tmp/minikube-v1.26.0.3206949781 start -p running-upgrade-039413 --memory=2200 --vm-driver=docker  --container-runtime=docker
E0831 22:52:40.469335   19777 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/18943-12963/.minikube/profiles/functional-369865/client.crt: no such file or directory" logger="UnhandledError"
version_upgrade_test.go:120: (dbg) Done: /tmp/minikube-v1.26.0.3206949781 start -p running-upgrade-039413 --memory=2200 --vm-driver=docker  --container-runtime=docker: (50.137589349s)
version_upgrade_test.go:130: (dbg) Run:  out/minikube-linux-amd64 start -p running-upgrade-039413 --memory=2200 --alsologtostderr -v=1 --driver=docker  --container-runtime=docker
version_upgrade_test.go:130: (dbg) Done: out/minikube-linux-amd64 start -p running-upgrade-039413 --memory=2200 --alsologtostderr -v=1 --driver=docker  --container-runtime=docker: (28.469659555s)
helpers_test.go:176: Cleaning up "running-upgrade-039413" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-linux-amd64 delete -p running-upgrade-039413
helpers_test.go:179: (dbg) Done: out/minikube-linux-amd64 delete -p running-upgrade-039413: (2.120589974s)
--- PASS: TestRunningBinaryUpgrade (82.95s)

                                                
                                    
TestKubernetesUpgrade (335.47s)

                                                
                                                
=== RUN   TestKubernetesUpgrade
=== PAUSE TestKubernetesUpgrade

                                                
                                                

                                                
                                                
=== CONT  TestKubernetesUpgrade
version_upgrade_test.go:222: (dbg) Run:  out/minikube-linux-amd64 start -p kubernetes-upgrade-202216 --memory=2200 --kubernetes-version=v1.20.0 --alsologtostderr -v=1 --driver=docker  --container-runtime=docker
version_upgrade_test.go:222: (dbg) Done: out/minikube-linux-amd64 start -p kubernetes-upgrade-202216 --memory=2200 --kubernetes-version=v1.20.0 --alsologtostderr -v=1 --driver=docker  --container-runtime=docker: (35.671551786s)
version_upgrade_test.go:227: (dbg) Run:  out/minikube-linux-amd64 stop -p kubernetes-upgrade-202216
version_upgrade_test.go:227: (dbg) Done: out/minikube-linux-amd64 stop -p kubernetes-upgrade-202216: (1.207572307s)
version_upgrade_test.go:232: (dbg) Run:  out/minikube-linux-amd64 -p kubernetes-upgrade-202216 status --format={{.Host}}
version_upgrade_test.go:232: (dbg) Non-zero exit: out/minikube-linux-amd64 -p kubernetes-upgrade-202216 status --format={{.Host}}: exit status 7 (65.477927ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
version_upgrade_test.go:234: status error: exit status 7 (may be ok)
version_upgrade_test.go:243: (dbg) Run:  out/minikube-linux-amd64 start -p kubernetes-upgrade-202216 --memory=2200 --kubernetes-version=v1.31.0 --alsologtostderr -v=1 --driver=docker  --container-runtime=docker
version_upgrade_test.go:243: (dbg) Done: out/minikube-linux-amd64 start -p kubernetes-upgrade-202216 --memory=2200 --kubernetes-version=v1.31.0 --alsologtostderr -v=1 --driver=docker  --container-runtime=docker: (4m36.913457218s)
version_upgrade_test.go:248: (dbg) Run:  kubectl --context kubernetes-upgrade-202216 version --output=json
version_upgrade_test.go:267: Attempting to downgrade Kubernetes (should fail)
version_upgrade_test.go:269: (dbg) Run:  out/minikube-linux-amd64 start -p kubernetes-upgrade-202216 --memory=2200 --kubernetes-version=v1.20.0 --driver=docker  --container-runtime=docker
version_upgrade_test.go:269: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p kubernetes-upgrade-202216 --memory=2200 --kubernetes-version=v1.20.0 --driver=docker  --container-runtime=docker: exit status 106 (132.487081ms)

                                                
                                                
-- stdout --
	* [kubernetes-upgrade-202216] minikube v1.33.1 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=18943
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/18943-12963/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/18943-12963/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to K8S_DOWNGRADE_UNSUPPORTED: Unable to safely downgrade existing Kubernetes v1.31.0 cluster to v1.20.0
	* Suggestion: 
	
	    1) Recreate the cluster with Kubernetes 1.20.0, by running:
	    
	    minikube delete -p kubernetes-upgrade-202216
	    minikube start -p kubernetes-upgrade-202216 --kubernetes-version=v1.20.0
	    
	    2) Create a second cluster with Kubernetes 1.20.0, by running:
	    
	    minikube start -p kubernetes-upgrade-2022162 --kubernetes-version=v1.20.0
	    
	    3) Use the existing cluster at version Kubernetes 1.31.0, by running:
	    
	    minikube start -p kubernetes-upgrade-202216 --kubernetes-version=v1.31.0
	    

                                                
                                                
** /stderr **
version_upgrade_test.go:273: Attempting restart after unsuccessful downgrade
version_upgrade_test.go:275: (dbg) Run:  out/minikube-linux-amd64 start -p kubernetes-upgrade-202216 --memory=2200 --kubernetes-version=v1.31.0 --alsologtostderr -v=1 --driver=docker  --container-runtime=docker
E0831 22:56:07.482423   19777 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/18943-12963/.minikube/profiles/skaffold-503448/client.crt: no such file or directory" logger="UnhandledError"
version_upgrade_test.go:275: (dbg) Done: out/minikube-linux-amd64 start -p kubernetes-upgrade-202216 --memory=2200 --kubernetes-version=v1.31.0 --alsologtostderr -v=1 --driver=docker  --container-runtime=docker: (19.122172586s)
helpers_test.go:176: Cleaning up "kubernetes-upgrade-202216" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-linux-amd64 delete -p kubernetes-upgrade-202216
helpers_test.go:179: (dbg) Done: out/minikube-linux-amd64 delete -p kubernetes-upgrade-202216: (2.2816522s)
--- PASS: TestKubernetesUpgrade (335.47s)
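
The downgrade attempt above is asserted purely through minikube's exit code: a refused in-place downgrade exits 106 (K8S_DOWNGRADE_UNSUPPORTED) before touching the cluster. A minimal standalone sketch of that check, assuming only a minikube binary on PATH (profile name and flags taken from the log; this is not the test's actual source):

package main

import (
	"errors"
	"fmt"
	"os/exec"
)

func main() {
	// Attempt the same in-place downgrade the test performs; minikube is
	// expected to refuse it and exit with status 106.
	cmd := exec.Command("minikube", "start", "-p", "kubernetes-upgrade-202216",
		"--memory=2200", "--kubernetes-version=v1.20.0", "--driver=docker")
	err := cmd.Run()

	var exitErr *exec.ExitError
	if errors.As(err, &exitErr) && exitErr.ExitCode() == 106 {
		fmt.Println("downgrade rejected as expected (exit 106)")
		return
	}
	fmt.Println("unexpected result:", err)
}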

TestMissingContainerUpgrade (195.32s)

=== RUN   TestMissingContainerUpgrade
=== PAUSE TestMissingContainerUpgrade
=== CONT  TestMissingContainerUpgrade
version_upgrade_test.go:309: (dbg) Run:  /tmp/minikube-v1.26.0.3823928113 start -p missing-upgrade-013246 --memory=2200 --driver=docker  --container-runtime=docker
version_upgrade_test.go:309: (dbg) Done: /tmp/minikube-v1.26.0.3823928113 start -p missing-upgrade-013246 --memory=2200 --driver=docker  --container-runtime=docker: (2m8.898687108s)
version_upgrade_test.go:318: (dbg) Run:  docker stop missing-upgrade-013246
version_upgrade_test.go:318: (dbg) Done: docker stop missing-upgrade-013246: (13.737579161s)
version_upgrade_test.go:323: (dbg) Run:  docker rm missing-upgrade-013246
version_upgrade_test.go:329: (dbg) Run:  out/minikube-linux-amd64 start -p missing-upgrade-013246 --memory=2200 --alsologtostderr -v=1 --driver=docker  --container-runtime=docker
version_upgrade_test.go:329: (dbg) Done: out/minikube-linux-amd64 start -p missing-upgrade-013246 --memory=2200 --alsologtostderr -v=1 --driver=docker  --container-runtime=docker: (47.984329314s)
helpers_test.go:176: Cleaning up "missing-upgrade-013246" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-linux-amd64 delete -p missing-upgrade-013246
helpers_test.go:179: (dbg) Done: out/minikube-linux-amd64 delete -p missing-upgrade-013246: (2.139178683s)
--- PASS: TestMissingContainerUpgrade (195.32s)

TestStoppedBinaryUpgrade/Setup (2.48s)

=== RUN   TestStoppedBinaryUpgrade/Setup
--- PASS: TestStoppedBinaryUpgrade/Setup (2.48s)

TestStoppedBinaryUpgrade/Upgrade (153.8s)

=== RUN   TestStoppedBinaryUpgrade/Upgrade
version_upgrade_test.go:183: (dbg) Run:  /tmp/minikube-v1.26.0.2559754345 start -p stopped-upgrade-283752 --memory=2200 --vm-driver=docker  --container-runtime=docker
version_upgrade_test.go:183: (dbg) Done: /tmp/minikube-v1.26.0.2559754345 start -p stopped-upgrade-283752 --memory=2200 --vm-driver=docker  --container-runtime=docker: (1m59.411540936s)
version_upgrade_test.go:192: (dbg) Run:  /tmp/minikube-v1.26.0.2559754345 -p stopped-upgrade-283752 stop
version_upgrade_test.go:192: (dbg) Done: /tmp/minikube-v1.26.0.2559754345 -p stopped-upgrade-283752 stop: (11.201929615s)
version_upgrade_test.go:198: (dbg) Run:  out/minikube-linux-amd64 start -p stopped-upgrade-283752 --memory=2200 --alsologtostderr -v=1 --driver=docker  --container-runtime=docker
version_upgrade_test.go:198: (dbg) Done: out/minikube-linux-amd64 start -p stopped-upgrade-283752 --memory=2200 --alsologtostderr -v=1 --driver=docker  --container-runtime=docker: (23.186993235s)
--- PASS: TestStoppedBinaryUpgrade/Upgrade (153.80s)

TestStoppedBinaryUpgrade/MinikubeLogs (1.08s)

=== RUN   TestStoppedBinaryUpgrade/MinikubeLogs
version_upgrade_test.go:206: (dbg) Run:  out/minikube-linux-amd64 logs -p stopped-upgrade-283752
version_upgrade_test.go:206: (dbg) Done: out/minikube-linux-amd64 logs -p stopped-upgrade-283752: (1.07601525s)
--- PASS: TestStoppedBinaryUpgrade/MinikubeLogs (1.08s)

TestPause/serial/Start (65.97s)

=== RUN   TestPause/serial/Start
pause_test.go:80: (dbg) Run:  out/minikube-linux-amd64 start -p pause-299845 --memory=2048 --install-addons=false --wait=all --driver=docker  --container-runtime=docker
pause_test.go:80: (dbg) Done: out/minikube-linux-amd64 start -p pause-299845 --memory=2048 --install-addons=false --wait=all --driver=docker  --container-runtime=docker: (1m5.968453811s)
--- PASS: TestPause/serial/Start (65.97s)

TestNoKubernetes/serial/StartNoK8sWithVersion (0.07s)

=== RUN   TestNoKubernetes/serial/StartNoK8sWithVersion
no_kubernetes_test.go:83: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-453282 --no-kubernetes --kubernetes-version=1.20 --driver=docker  --container-runtime=docker
no_kubernetes_test.go:83: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p NoKubernetes-453282 --no-kubernetes --kubernetes-version=1.20 --driver=docker  --container-runtime=docker: exit status 14 (67.745412ms)
-- stdout --
	* [NoKubernetes-453282] minikube v1.33.1 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=18943
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/18943-12963/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/18943-12963/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
-- /stdout --
** stderr ** 
	X Exiting due to MK_USAGE: cannot specify --kubernetes-version with --no-kubernetes,
	to unset a global config run:
	
	$ minikube config unset kubernetes-version
** /stderr **
--- PASS: TestNoKubernetes/serial/StartNoK8sWithVersion (0.07s)

TestNoKubernetes/serial/StartWithK8s (28.5s)

=== RUN   TestNoKubernetes/serial/StartWithK8s
no_kubernetes_test.go:95: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-453282 --driver=docker  --container-runtime=docker
no_kubernetes_test.go:95: (dbg) Done: out/minikube-linux-amd64 start -p NoKubernetes-453282 --driver=docker  --container-runtime=docker: (28.203346902s)
no_kubernetes_test.go:200: (dbg) Run:  out/minikube-linux-amd64 -p NoKubernetes-453282 status -o json
--- PASS: TestNoKubernetes/serial/StartWithK8s (28.50s)

TestNoKubernetes/serial/StartWithStopK8s (16.63s)

=== RUN   TestNoKubernetes/serial/StartWithStopK8s
no_kubernetes_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-453282 --no-kubernetes --driver=docker  --container-runtime=docker
no_kubernetes_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p NoKubernetes-453282 --no-kubernetes --driver=docker  --container-runtime=docker: (14.57368661s)
no_kubernetes_test.go:200: (dbg) Run:  out/minikube-linux-amd64 -p NoKubernetes-453282 status -o json
no_kubernetes_test.go:200: (dbg) Non-zero exit: out/minikube-linux-amd64 -p NoKubernetes-453282 status -o json: exit status 2 (282.63545ms)
-- stdout --
	{"Name":"NoKubernetes-453282","Host":"Running","Kubelet":"Stopped","APIServer":"Stopped","Kubeconfig":"Configured","Worker":false}
-- /stdout --
no_kubernetes_test.go:124: (dbg) Run:  out/minikube-linux-amd64 delete -p NoKubernetes-453282
no_kubernetes_test.go:124: (dbg) Done: out/minikube-linux-amd64 delete -p NoKubernetes-453282: (1.772699222s)
--- PASS: TestNoKubernetes/serial/StartWithStopK8s (16.63s)
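
The status JSON above is the heart of this test: the host container keeps running while the Kubernetes components report Stopped. A minimal sketch of decoding that output, with the struct fields inferred from this single log line rather than from minikube's source:

package main

import (
	"encoding/json"
	"fmt"
)

// Status mirrors only the fields visible in the `status -o json` line above;
// the field set is an inference from the log, not minikube's actual type.
type Status struct {
	Name       string
	Host       string
	Kubelet    string
	APIServer  string
	Kubeconfig string
	Worker     bool
}

func main() {
	raw := `{"Name":"NoKubernetes-453282","Host":"Running","Kubelet":"Stopped","APIServer":"Stopped","Kubeconfig":"Configured","Worker":false}`
	var s Status
	if err := json.Unmarshal([]byte(raw), &s); err != nil {
		panic(err)
	}
	// The "no kubernetes" state the test expects: host up, components down.
	fmt.Println(s.Host == "Running" && s.Kubelet == "Stopped" && s.APIServer == "Stopped")
}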

TestNoKubernetes/serial/Start (8.36s)

=== RUN   TestNoKubernetes/serial/Start
no_kubernetes_test.go:136: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-453282 --no-kubernetes --driver=docker  --container-runtime=docker
no_kubernetes_test.go:136: (dbg) Done: out/minikube-linux-amd64 start -p NoKubernetes-453282 --no-kubernetes --driver=docker  --container-runtime=docker: (8.363455263s)
--- PASS: TestNoKubernetes/serial/Start (8.36s)

TestNoKubernetes/serial/VerifyK8sNotRunning (0.28s)

=== RUN   TestNoKubernetes/serial/VerifyK8sNotRunning
no_kubernetes_test.go:147: (dbg) Run:  out/minikube-linux-amd64 ssh -p NoKubernetes-453282 "sudo systemctl is-active --quiet service kubelet"
no_kubernetes_test.go:147: (dbg) Non-zero exit: out/minikube-linux-amd64 ssh -p NoKubernetes-453282 "sudo systemctl is-active --quiet service kubelet": exit status 1 (276.392292ms)
** stderr **
	ssh: Process exited with status 3
** /stderr **
--- PASS: TestNoKubernetes/serial/VerifyK8sNotRunning (0.28s)
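
This check carries its answer in the exit status alone: `systemctl is-active --quiet` prints nothing and exits 0 only when the unit is active, and the status 3 propagated through ssh here matches the conventional "program is not running" code. A sketch of the same assertion, assuming direct ssh access to the node (the address below is a hypothetical stand-in for what `minikube ssh` resolves):

package main

import (
	"fmt"
	"os/exec"
)

func main() {
	// Hypothetical node address; the real test tunnels through `minikube ssh`.
	node := "docker@127.0.0.1"

	// --quiet suppresses output, so the exit status is the whole answer.
	cmd := exec.Command("ssh", node, "sudo systemctl is-active --quiet service kubelet")
	if err := cmd.Run(); err != nil {
		fmt.Println("kubelet is not running, as the test expects:", err)
		return
	}
	fmt.Println("kubelet is active; the no-kubernetes check would fail")
}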

TestNoKubernetes/serial/ProfileList (1.48s)

=== RUN   TestNoKubernetes/serial/ProfileList
no_kubernetes_test.go:169: (dbg) Run:  out/minikube-linux-amd64 profile list
no_kubernetes_test.go:179: (dbg) Run:  out/minikube-linux-amd64 profile list --output=json
--- PASS: TestNoKubernetes/serial/ProfileList (1.48s)

TestNoKubernetes/serial/Stop (1.18s)

=== RUN   TestNoKubernetes/serial/Stop
no_kubernetes_test.go:158: (dbg) Run:  out/minikube-linux-amd64 stop -p NoKubernetes-453282
no_kubernetes_test.go:158: (dbg) Done: out/minikube-linux-amd64 stop -p NoKubernetes-453282: (1.179020251s)
--- PASS: TestNoKubernetes/serial/Stop (1.18s)

TestNoKubernetes/serial/StartNoArgs (7.49s)

=== RUN   TestNoKubernetes/serial/StartNoArgs
no_kubernetes_test.go:191: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-453282 --driver=docker  --container-runtime=docker
no_kubernetes_test.go:191: (dbg) Done: out/minikube-linux-amd64 start -p NoKubernetes-453282 --driver=docker  --container-runtime=docker: (7.491102204s)
--- PASS: TestNoKubernetes/serial/StartNoArgs (7.49s)

TestNoKubernetes/serial/VerifyK8sNotRunningSecond (0.24s)

=== RUN   TestNoKubernetes/serial/VerifyK8sNotRunningSecond
no_kubernetes_test.go:147: (dbg) Run:  out/minikube-linux-amd64 ssh -p NoKubernetes-453282 "sudo systemctl is-active --quiet service kubelet"
no_kubernetes_test.go:147: (dbg) Non-zero exit: out/minikube-linux-amd64 ssh -p NoKubernetes-453282 "sudo systemctl is-active --quiet service kubelet": exit status 1 (242.030505ms)
** stderr **
	ssh: Process exited with status 3
** /stderr **
--- PASS: TestNoKubernetes/serial/VerifyK8sNotRunningSecond (0.24s)

TestPause/serial/SecondStartNoReconfiguration (36.5s)

=== RUN   TestPause/serial/SecondStartNoReconfiguration
pause_test.go:92: (dbg) Run:  out/minikube-linux-amd64 start -p pause-299845 --alsologtostderr -v=1 --driver=docker  --container-runtime=docker
pause_test.go:92: (dbg) Done: out/minikube-linux-amd64 start -p pause-299845 --alsologtostderr -v=1 --driver=docker  --container-runtime=docker: (36.491073031s)
--- PASS: TestPause/serial/SecondStartNoReconfiguration (36.50s)

TestPause/serial/Pause (1.02s)

=== RUN   TestPause/serial/Pause
pause_test.go:110: (dbg) Run:  out/minikube-linux-amd64 pause -p pause-299845 --alsologtostderr -v=5
pause_test.go:110: (dbg) Done: out/minikube-linux-amd64 pause -p pause-299845 --alsologtostderr -v=5: (1.024660039s)
--- PASS: TestPause/serial/Pause (1.02s)

TestPause/serial/VerifyStatus (0.27s)

=== RUN   TestPause/serial/VerifyStatus
helpers_test.go:700: (dbg) Run:  out/minikube-linux-amd64 status -p pause-299845 --output=json --layout=cluster
helpers_test.go:700: (dbg) Non-zero exit: out/minikube-linux-amd64 status -p pause-299845 --output=json --layout=cluster: exit status 2 (267.323897ms)
-- stdout --
	{"Name":"pause-299845","StatusCode":418,"StatusName":"Paused","Step":"Done","StepDetail":"* Paused 12 containers in: kube-system, kubernetes-dashboard, storage-gluster, istio-operator","BinaryVersion":"v1.33.1","Components":{"kubeconfig":{"Name":"kubeconfig","StatusCode":200,"StatusName":"OK"}},"Nodes":[{"Name":"pause-299845","StatusCode":200,"StatusName":"OK","Components":{"apiserver":{"Name":"apiserver","StatusCode":418,"StatusName":"Paused"},"kubelet":{"Name":"kubelet","StatusCode":405,"StatusName":"Stopped"}}}]}
-- /stdout --
--- PASS: TestPause/serial/VerifyStatus (0.27s)
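
The `--layout=cluster` output above encodes state as HTTP-like status codes (200 OK, 405 Stopped, 418 Paused), and the command's non-zero exit mirrors the paused state. A sketch that decodes just the fields this test looks at; the struct shape is an assumption reconstructed from the log line, and the sample JSON is trimmed to those fields:

package main

import (
	"encoding/json"
	"fmt"
)

// ClusterState models only the fields visible in the log line above.
type ClusterState struct {
	Name       string
	StatusCode int
	StatusName string
	Nodes      []struct {
		Name       string
		Components map[string]struct {
			StatusCode int
			StatusName string
		}
	}
}

func main() {
	// Trimmed version of the logged status; 418 marks paused components.
	raw := `{"Name":"pause-299845","StatusCode":418,"StatusName":"Paused","Nodes":[{"Name":"pause-299845","Components":{"apiserver":{"StatusCode":418,"StatusName":"Paused"},"kubelet":{"StatusCode":405,"StatusName":"Stopped"}}}]}`
	var cs ClusterState
	if err := json.Unmarshal([]byte(raw), &cs); err != nil {
		panic(err)
	}
	fmt.Println(cs.StatusName, cs.Nodes[0].Components["kubelet"].StatusName)
}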

TestPause/serial/Unpause (0.57s)

=== RUN   TestPause/serial/Unpause
pause_test.go:121: (dbg) Run:  out/minikube-linux-amd64 unpause -p pause-299845 --alsologtostderr -v=5
--- PASS: TestPause/serial/Unpause (0.57s)

TestPause/serial/PauseAgain (0.66s)

=== RUN   TestPause/serial/PauseAgain
pause_test.go:110: (dbg) Run:  out/minikube-linux-amd64 pause -p pause-299845 --alsologtostderr -v=5
--- PASS: TestPause/serial/PauseAgain (0.66s)

TestPause/serial/DeletePaused (2.14s)

=== RUN   TestPause/serial/DeletePaused
pause_test.go:132: (dbg) Run:  out/minikube-linux-amd64 delete -p pause-299845 --alsologtostderr -v=5
pause_test.go:132: (dbg) Done: out/minikube-linux-amd64 delete -p pause-299845 --alsologtostderr -v=5: (2.142733971s)
--- PASS: TestPause/serial/DeletePaused (2.14s)

TestPause/serial/VerifyDeletedResources (0.5s)

=== RUN   TestPause/serial/VerifyDeletedResources
pause_test.go:142: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
pause_test.go:168: (dbg) Run:  docker ps -a
pause_test.go:173: (dbg) Run:  docker volume inspect pause-299845
pause_test.go:173: (dbg) Non-zero exit: docker volume inspect pause-299845: exit status 1 (19.232163ms)
-- stdout --
	[]
-- /stdout --
** stderr **
	Error response from daemon: get pause-299845: no such volume
** /stderr **
pause_test.go:178: (dbg) Run:  docker network ls
--- PASS: TestPause/serial/VerifyDeletedResources (0.50s)

TestStartStop/group/old-k8s-version/serial/FirstStart (131.93s)

=== RUN   TestStartStop/group/old-k8s-version/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-linux-amd64 start -p old-k8s-version-969280 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=docker --kubernetes-version=v1.20.0
E0831 22:55:26.506679   19777 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/18943-12963/.minikube/profiles/skaffold-503448/client.crt: no such file or directory" logger="UnhandledError"
E0831 22:55:26.513032   19777 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/18943-12963/.minikube/profiles/skaffold-503448/client.crt: no such file or directory" logger="UnhandledError"
E0831 22:55:26.524370   19777 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/18943-12963/.minikube/profiles/skaffold-503448/client.crt: no such file or directory" logger="UnhandledError"
E0831 22:55:26.545675   19777 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/18943-12963/.minikube/profiles/skaffold-503448/client.crt: no such file or directory" logger="UnhandledError"
E0831 22:55:26.587989   19777 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/18943-12963/.minikube/profiles/skaffold-503448/client.crt: no such file or directory" logger="UnhandledError"
E0831 22:55:26.669369   19777 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/18943-12963/.minikube/profiles/skaffold-503448/client.crt: no such file or directory" logger="UnhandledError"
E0831 22:55:26.830687   19777 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/18943-12963/.minikube/profiles/skaffold-503448/client.crt: no such file or directory" logger="UnhandledError"
E0831 22:55:27.151931   19777 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/18943-12963/.minikube/profiles/skaffold-503448/client.crt: no such file or directory" logger="UnhandledError"
E0831 22:55:27.793849   19777 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/18943-12963/.minikube/profiles/skaffold-503448/client.crt: no such file or directory" logger="UnhandledError"
E0831 22:55:29.075383   19777 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/18943-12963/.minikube/profiles/skaffold-503448/client.crt: no such file or directory" logger="UnhandledError"
start_stop_delete_test.go:186: (dbg) Done: out/minikube-linux-amd64 start -p old-k8s-version-969280 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=docker --kubernetes-version=v1.20.0: (2m11.926802758s)
--- PASS: TestStartStop/group/old-k8s-version/serial/FirstStart (131.93s)

TestStartStop/group/no-preload/serial/FirstStart (67.12s)

=== RUN   TestStartStop/group/no-preload/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-linux-amd64 start -p no-preload-919347 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=docker --kubernetes-version=v1.31.0
E0831 22:55:47.000723   19777 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/18943-12963/.minikube/profiles/skaffold-503448/client.crt: no such file or directory" logger="UnhandledError"
start_stop_delete_test.go:186: (dbg) Done: out/minikube-linux-amd64 start -p no-preload-919347 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=docker --kubernetes-version=v1.31.0: (1m7.120645608s)
--- PASS: TestStartStop/group/no-preload/serial/FirstStart (67.12s)

TestStartStop/group/default-k8s-diff-port/serial/FirstStart (60.78s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-linux-amd64 start -p default-k8s-diff-port-324128 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=docker --kubernetes-version=v1.31.0
E0831 22:56:48.444019   19777 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/18943-12963/.minikube/profiles/skaffold-503448/client.crt: no such file or directory" logger="UnhandledError"
start_stop_delete_test.go:186: (dbg) Done: out/minikube-linux-amd64 start -p default-k8s-diff-port-324128 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=docker --kubernetes-version=v1.31.0: (1m0.780067753s)
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/FirstStart (60.78s)

TestStartStop/group/no-preload/serial/DeployApp (8.24s)

=== RUN   TestStartStop/group/no-preload/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context no-preload-919347 create -f testdata/busybox.yaml
start_stop_delete_test.go:196: (dbg) TestStartStop/group/no-preload/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:345: "busybox" [386c3816-8ecd-4471-b309-e821413b01c6] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:345: "busybox" [386c3816-8ecd-4471-b309-e821413b01c6] Running
start_stop_delete_test.go:196: (dbg) TestStartStop/group/no-preload/serial/DeployApp: integration-test=busybox healthy within 8.004841605s
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context no-preload-919347 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/no-preload/serial/DeployApp (8.24s)
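
The DeployApp step is a create-then-poll pattern: apply a manifest, wait for the labelled pod to become Ready, then exec into it. Outside the test harness the same flow can be driven with plain kubectl; a sketch using the names from the log (the helper below is illustrative, not part of the harness):

package main

import (
	"fmt"
	"os/exec"
)

// run invokes kubectl against the cluster context used in this test.
func run(args ...string) error {
	cmd := exec.Command("kubectl", append([]string{"--context", "no-preload-919347"}, args...)...)
	out, err := cmd.CombinedOutput()
	fmt.Printf("%s", out)
	return err
}

func main() {
	// Mirror the three logged steps: create, wait for readiness, check ulimit.
	steps := [][]string{
		{"create", "-f", "testdata/busybox.yaml"},
		{"wait", "--for=condition=Ready", "pod", "-l", "integration-test=busybox", "--timeout=8m"},
		{"exec", "busybox", "--", "/bin/sh", "-c", "ulimit -n"},
	}
	for _, step := range steps {
		if err := run(step...); err != nil {
			panic(err)
		}
	}
}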

TestStartStop/group/no-preload/serial/EnableAddonWhileActive (0.78s)

=== RUN   TestStartStop/group/no-preload/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p no-preload-919347 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context no-preload-919347 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/no-preload/serial/EnableAddonWhileActive (0.78s)

TestStartStop/group/no-preload/serial/Stop (10.79s)

=== RUN   TestStartStop/group/no-preload/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-linux-amd64 stop -p no-preload-919347 --alsologtostderr -v=3
start_stop_delete_test.go:228: (dbg) Done: out/minikube-linux-amd64 stop -p no-preload-919347 --alsologtostderr -v=3: (10.785569577s)
--- PASS: TestStartStop/group/no-preload/serial/Stop (10.79s)

TestStartStop/group/no-preload/serial/EnableAddonAfterStop (0.2s)

=== RUN   TestStartStop/group/no-preload/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-919347 -n no-preload-919347
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-919347 -n no-preload-919347: exit status 7 (111.609299ms)
-- stdout --
	Stopped
-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p no-preload-919347 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/no-preload/serial/EnableAddonAfterStop (0.20s)

TestStartStop/group/no-preload/serial/SecondStart (262.63s)

=== RUN   TestStartStop/group/no-preload/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-linux-amd64 start -p no-preload-919347 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=docker --kubernetes-version=v1.31.0
start_stop_delete_test.go:256: (dbg) Done: out/minikube-linux-amd64 start -p no-preload-919347 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=docker --kubernetes-version=v1.31.0: (4m22.297220343s)
start_stop_delete_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-919347 -n no-preload-919347
--- PASS: TestStartStop/group/no-preload/serial/SecondStart (262.63s)

TestStartStop/group/default-k8s-diff-port/serial/DeployApp (9.27s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context default-k8s-diff-port-324128 create -f testdata/busybox.yaml
start_stop_delete_test.go:196: (dbg) TestStartStop/group/default-k8s-diff-port/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:345: "busybox" [66ec3f91-d1b0-4445-a1f6-001a1f76b9b9] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:345: "busybox" [66ec3f91-d1b0-4445-a1f6-001a1f76b9b9] Running
start_stop_delete_test.go:196: (dbg) TestStartStop/group/default-k8s-diff-port/serial/DeployApp: integration-test=busybox healthy within 9.003686564s
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context default-k8s-diff-port-324128 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/DeployApp (9.27s)

TestStartStop/group/old-k8s-version/serial/DeployApp (10.42s)

=== RUN   TestStartStop/group/old-k8s-version/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context old-k8s-version-969280 create -f testdata/busybox.yaml
start_stop_delete_test.go:196: (dbg) TestStartStop/group/old-k8s-version/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:345: "busybox" [1d2c68d9-d18d-4538-851b-17ebb21d1844] Pending
helpers_test.go:345: "busybox" [1d2c68d9-d18d-4538-851b-17ebb21d1844] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:345: "busybox" [1d2c68d9-d18d-4538-851b-17ebb21d1844] Running
start_stop_delete_test.go:196: (dbg) TestStartStop/group/old-k8s-version/serial/DeployApp: integration-test=busybox healthy within 10.003137178s
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context old-k8s-version-969280 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/old-k8s-version/serial/DeployApp (10.42s)

TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive (0.73s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p default-k8s-diff-port-324128 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context default-k8s-diff-port-324128 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive (0.73s)

TestStartStop/group/default-k8s-diff-port/serial/Stop (10.65s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-linux-amd64 stop -p default-k8s-diff-port-324128 --alsologtostderr -v=3
E0831 22:57:40.468619   19777 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/18943-12963/.minikube/profiles/functional-369865/client.crt: no such file or directory" logger="UnhandledError"
start_stop_delete_test.go:228: (dbg) Done: out/minikube-linux-amd64 stop -p default-k8s-diff-port-324128 --alsologtostderr -v=3: (10.649224337s)
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/Stop (10.65s)

TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive (0.74s)

=== RUN   TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p old-k8s-version-969280 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context old-k8s-version-969280 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive (0.74s)

TestStartStop/group/old-k8s-version/serial/Stop (10.67s)

=== RUN   TestStartStop/group/old-k8s-version/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-linux-amd64 stop -p old-k8s-version-969280 --alsologtostderr -v=3
start_stop_delete_test.go:228: (dbg) Done: out/minikube-linux-amd64 stop -p old-k8s-version-969280 --alsologtostderr -v=3: (10.671441106s)
--- PASS: TestStartStop/group/old-k8s-version/serial/Stop (10.67s)

TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop (0.18s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-324128 -n default-k8s-diff-port-324128
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-324128 -n default-k8s-diff-port-324128: exit status 7 (92.944821ms)
-- stdout --
	Stopped
-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p default-k8s-diff-port-324128 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop (0.18s)

TestStartStop/group/default-k8s-diff-port/serial/SecondStart (263.12s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-linux-amd64 start -p default-k8s-diff-port-324128 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=docker --kubernetes-version=v1.31.0
start_stop_delete_test.go:256: (dbg) Done: out/minikube-linux-amd64 start -p default-k8s-diff-port-324128 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=docker --kubernetes-version=v1.31.0: (4m22.822702067s)
start_stop_delete_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-324128 -n default-k8s-diff-port-324128
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/SecondStart (263.12s)

TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop (0.16s)

=== RUN   TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-969280 -n old-k8s-version-969280
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-969280 -n old-k8s-version-969280: exit status 7 (61.451001ms)
-- stdout --
	Stopped
-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p old-k8s-version-969280 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop (0.16s)

TestStartStop/group/old-k8s-version/serial/SecondStart (140.99s)

=== RUN   TestStartStop/group/old-k8s-version/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-linux-amd64 start -p old-k8s-version-969280 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=docker --kubernetes-version=v1.20.0
E0831 22:57:54.400550   19777 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/18943-12963/.minikube/profiles/addons-062019/client.crt: no such file or directory" logger="UnhandledError"
E0831 22:58:10.365969   19777 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/18943-12963/.minikube/profiles/skaffold-503448/client.crt: no such file or directory" logger="UnhandledError"
start_stop_delete_test.go:256: (dbg) Done: out/minikube-linux-amd64 start -p old-k8s-version-969280 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=docker --kubernetes-version=v1.20.0: (2m20.640085174s)
start_stop_delete_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-969280 -n old-k8s-version-969280
--- PASS: TestStartStop/group/old-k8s-version/serial/SecondStart (140.99s)

TestStartStop/group/newest-cni/serial/FirstStart (26.06s)

=== RUN   TestStartStop/group/newest-cni/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-linux-amd64 start -p newest-cni-331813 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=docker --kubernetes-version=v1.31.0
start_stop_delete_test.go:186: (dbg) Done: out/minikube-linux-amd64 start -p newest-cni-331813 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=docker --kubernetes-version=v1.31.0: (26.061913996s)
--- PASS: TestStartStop/group/newest-cni/serial/FirstStart (26.06s)

TestStartStop/group/newest-cni/serial/DeployApp (0s)

=== RUN   TestStartStop/group/newest-cni/serial/DeployApp
--- PASS: TestStartStop/group/newest-cni/serial/DeployApp (0.00s)

TestStartStop/group/newest-cni/serial/EnableAddonWhileActive (0.85s)

=== RUN   TestStartStop/group/newest-cni/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p newest-cni-331813 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:211: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/EnableAddonWhileActive (0.85s)

TestStartStop/group/newest-cni/serial/Stop (10.68s)

=== RUN   TestStartStop/group/newest-cni/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-linux-amd64 stop -p newest-cni-331813 --alsologtostderr -v=3
start_stop_delete_test.go:228: (dbg) Done: out/minikube-linux-amd64 stop -p newest-cni-331813 --alsologtostderr -v=3: (10.67709591s)
--- PASS: TestStartStop/group/newest-cni/serial/Stop (10.68s)

TestStartStop/group/newest-cni/serial/EnableAddonAfterStop (0.19s)

=== RUN   TestStartStop/group/newest-cni/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p newest-cni-331813 -n newest-cni-331813
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p newest-cni-331813 -n newest-cni-331813: exit status 7 (92.946892ms)
-- stdout --
	Stopped
-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p newest-cni-331813 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/newest-cni/serial/EnableAddonAfterStop (0.19s)

TestStartStop/group/newest-cni/serial/SecondStart (13.44s)

=== RUN   TestStartStop/group/newest-cni/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-linux-amd64 start -p newest-cni-331813 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=docker --kubernetes-version=v1.31.0
start_stop_delete_test.go:256: (dbg) Done: out/minikube-linux-amd64 start -p newest-cni-331813 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=docker --kubernetes-version=v1.31.0: (12.884428027s)
start_stop_delete_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p newest-cni-331813 -n newest-cni-331813
--- PASS: TestStartStop/group/newest-cni/serial/SecondStart (13.44s)

TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop (0s)

=== RUN   TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop
start_stop_delete_test.go:273: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop (0.00s)

TestStartStop/group/newest-cni/serial/AddonExistsAfterStop (0s)

=== RUN   TestStartStop/group/newest-cni/serial/AddonExistsAfterStop
start_stop_delete_test.go:284: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/AddonExistsAfterStop (0.00s)

TestStartStop/group/newest-cni/serial/VerifyKubernetesImages (0.21s)

=== RUN   TestStartStop/group/newest-cni/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-linux-amd64 -p newest-cni-331813 image list --format=json
--- PASS: TestStartStop/group/newest-cni/serial/VerifyKubernetesImages (0.21s)

TestStartStop/group/newest-cni/serial/Pause (2.54s)

=== RUN   TestStartStop/group/newest-cni/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 pause -p newest-cni-331813 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p newest-cni-331813 -n newest-cni-331813
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p newest-cni-331813 -n newest-cni-331813: exit status 2 (268.003442ms)
-- stdout --
	Paused
-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p newest-cni-331813 -n newest-cni-331813
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Kubelet}} -p newest-cni-331813 -n newest-cni-331813: exit status 2 (272.004911ms)
-- stdout --
	Stopped
-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 unpause -p newest-cni-331813 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p newest-cni-331813 -n newest-cni-331813
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p newest-cni-331813 -n newest-cni-331813
--- PASS: TestStartStop/group/newest-cni/serial/Pause (2.54s)

TestStartStop/group/embed-certs/serial/FirstStart (69.16s)

=== RUN   TestStartStop/group/embed-certs/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-linux-amd64 start -p embed-certs-176311 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=docker --kubernetes-version=v1.31.0
E0831 22:59:51.334713   19777 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/18943-12963/.minikube/profiles/addons-062019/client.crt: no such file or directory" logger="UnhandledError"
start_stop_delete_test.go:186: (dbg) Done: out/minikube-linux-amd64 start -p embed-certs-176311 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=docker --kubernetes-version=v1.31.0: (1m9.164179451s)
--- PASS: TestStartStop/group/embed-certs/serial/FirstStart (69.16s)

TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop (6.01s)

=== RUN   TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop
start_stop_delete_test.go:274: (dbg) TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:345: "kubernetes-dashboard-cd95d586-h7gsp" [a1c5d15f-088e-4371-8ac0-4b60e2e3bbec] Running
start_stop_delete_test.go:274: (dbg) TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 6.003761982s
--- PASS: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop (6.01s)

TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop (5.07s)

=== RUN   TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop
start_stop_delete_test.go:287: (dbg) TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:345: "kubernetes-dashboard-cd95d586-h7gsp" [a1c5d15f-088e-4371-8ac0-4b60e2e3bbec] Running
start_stop_delete_test.go:287: (dbg) TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.003745366s
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context old-k8s-version-969280 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop (5.07s)

TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages (0.2s)

=== RUN   TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-linux-amd64 -p old-k8s-version-969280 image list --format=json
start_stop_delete_test.go:304: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
--- PASS: TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages (0.20s)

TestStartStop/group/old-k8s-version/serial/Pause (2.32s)

=== RUN   TestStartStop/group/old-k8s-version/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 pause -p old-k8s-version-969280 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-969280 -n old-k8s-version-969280
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-969280 -n old-k8s-version-969280: exit status 2 (277.523126ms)
-- stdout --
	Paused
-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p old-k8s-version-969280 -n old-k8s-version-969280
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Kubelet}} -p old-k8s-version-969280 -n old-k8s-version-969280: exit status 2 (295.976918ms)
-- stdout --
	Stopped
-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 unpause -p old-k8s-version-969280 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-969280 -n old-k8s-version-969280
E0831 23:00:26.506506   19777 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/18943-12963/.minikube/profiles/skaffold-503448/client.crt: no such file or directory" logger="UnhandledError"
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p old-k8s-version-969280 -n old-k8s-version-969280
--- PASS: TestStartStop/group/old-k8s-version/serial/Pause (2.32s)

TestNetworkPlugins/group/auto/Start (68.32s)

=== RUN   TestNetworkPlugins/group/auto/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p auto-252109 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --driver=docker  --container-runtime=docker
E0831 23:00:43.535355   19777 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/18943-12963/.minikube/profiles/functional-369865/client.crt: no such file or directory" logger="UnhandledError"
E0831 23:00:54.207316   19777 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/18943-12963/.minikube/profiles/skaffold-503448/client.crt: no such file or directory" logger="UnhandledError"
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p auto-252109 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --driver=docker  --container-runtime=docker: (1m8.32221075s)
--- PASS: TestNetworkPlugins/group/auto/Start (68.32s)

TestStartStop/group/embed-certs/serial/DeployApp (9.24s)

=== RUN   TestStartStop/group/embed-certs/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context embed-certs-176311 create -f testdata/busybox.yaml
start_stop_delete_test.go:196: (dbg) TestStartStop/group/embed-certs/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:345: "busybox" [78a6f06e-9b31-4392-866c-293fe96d70cf] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:345: "busybox" [78a6f06e-9b31-4392-866c-293fe96d70cf] Running
start_stop_delete_test.go:196: (dbg) TestStartStop/group/embed-certs/serial/DeployApp: integration-test=busybox healthy within 9.002717355s
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context embed-certs-176311 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/embed-certs/serial/DeployApp (9.24s)

TestStartStop/group/embed-certs/serial/EnableAddonWhileActive (0.82s)

=== RUN   TestStartStop/group/embed-certs/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p embed-certs-176311 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context embed-certs-176311 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/embed-certs/serial/EnableAddonWhileActive (0.82s)

TestStartStop/group/embed-certs/serial/Stop (10.77s)

=== RUN   TestStartStop/group/embed-certs/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-linux-amd64 stop -p embed-certs-176311 --alsologtostderr -v=3
start_stop_delete_test.go:228: (dbg) Done: out/minikube-linux-amd64 stop -p embed-certs-176311 --alsologtostderr -v=3: (10.774580201s)
--- PASS: TestStartStop/group/embed-certs/serial/Stop (10.77s)

TestStartStop/group/embed-certs/serial/EnableAddonAfterStop (0.21s)

=== RUN   TestStartStop/group/embed-certs/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-176311 -n embed-certs-176311
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-176311 -n embed-certs-176311: exit status 7 (116.891049ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p embed-certs-176311 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/embed-certs/serial/EnableAddonAfterStop (0.21s)

                                                
                                    
x
+
TestStartStop/group/embed-certs/serial/SecondStart (262.45s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-linux-amd64 start -p embed-certs-176311 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=docker --kubernetes-version=v1.31.0
start_stop_delete_test.go:256: (dbg) Done: out/minikube-linux-amd64 start -p embed-certs-176311 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=docker --kubernetes-version=v1.31.0: (4m22.148499494s)
start_stop_delete_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-176311 -n embed-certs-176311
--- PASS: TestStartStop/group/embed-certs/serial/SecondStart (262.45s)
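Note: SecondStart re-runs minikube start against the stopped profile with the same flags, verifying the cluster comes back healthy after a stop/start cycle. The --embed-certs flag inlines the client certificates into kubeconfig rather than referencing cert files on disk. A quick way to confirm the embedding (a sketch, not part of the test):

	# client-certificate-data is populated only when certs are embedded
	kubectl config view --raw -o jsonpath='{.users[?(@.name=="embed-certs-176311")].user.client-certificate-data}' | head -c 16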

                                                
                                    
TestStartStop/group/no-preload/serial/UserAppExistsAfterStop (6.01s)

=== RUN   TestStartStop/group/no-preload/serial/UserAppExistsAfterStop
start_stop_delete_test.go:274: (dbg) TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:345: "kubernetes-dashboard-695b96c756-ntf25" [d11c3986-d5a0-455d-87fd-c867f5455ff6] Running
start_stop_delete_test.go:274: (dbg) TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 6.004213802s
--- PASS: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop (6.01s)

TestStartStop/group/no-preload/serial/AddonExistsAfterStop (5.07s)

=== RUN   TestStartStop/group/no-preload/serial/AddonExistsAfterStop
start_stop_delete_test.go:287: (dbg) TestStartStop/group/no-preload/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:345: "kubernetes-dashboard-695b96c756-ntf25" [d11c3986-d5a0-455d-87fd-c867f5455ff6] Running
start_stop_delete_test.go:287: (dbg) TestStartStop/group/no-preload/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.003469316s
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context no-preload-919347 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/no-preload/serial/AddonExistsAfterStop (5.07s)

TestNetworkPlugins/group/auto/KubeletFlags (0.3s)

=== RUN   TestNetworkPlugins/group/auto/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p auto-252109 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/auto/KubeletFlags (0.30s)

TestNetworkPlugins/group/auto/NetCatPod (10.21s)

=== RUN   TestNetworkPlugins/group/auto/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context auto-252109 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/auto/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:345: "netcat-6fc964789b-btjvt" [d9d547c5-d685-4f38-b442-68feb8b91f3f] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:345: "netcat-6fc964789b-btjvt" [d9d547c5-d685-4f38-b442-68feb8b91f3f] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/auto/NetCatPod: app=netcat healthy within 10.004111789s
--- PASS: TestNetworkPlugins/group/auto/NetCatPod (10.21s)

TestStartStop/group/no-preload/serial/VerifyKubernetesImages (0.19s)

=== RUN   TestStartStop/group/no-preload/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-linux-amd64 -p no-preload-919347 image list --format=json
start_stop_delete_test.go:304: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
--- PASS: TestStartStop/group/no-preload/serial/VerifyKubernetesImages (0.19s)

TestStartStop/group/no-preload/serial/Pause (2.26s)

=== RUN   TestStartStop/group/no-preload/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 pause -p no-preload-919347 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p no-preload-919347 -n no-preload-919347
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p no-preload-919347 -n no-preload-919347: exit status 2 (277.151479ms)

-- stdout --
	Paused

-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p no-preload-919347 -n no-preload-919347
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Kubelet}} -p no-preload-919347 -n no-preload-919347: exit status 2 (266.280637ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 unpause -p no-preload-919347 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p no-preload-919347 -n no-preload-919347
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p no-preload-919347 -n no-preload-919347
--- PASS: TestStartStop/group/no-preload/serial/Pause (2.26s)
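Note: the two non-zero exits above are expected. Pausing a profile freezes the apiserver and stops the kubelet, so minikube status reports APIServer=Paused and Kubelet=Stopped with a non-zero exit code, which the test explicitly tolerates ("status error: exit status 2 (may be ok)"). The same cycle by hand, using the profile from this run:

	out/minikube-linux-amd64 pause -p no-preload-919347
	out/minikube-linux-amd64 status --format={{.APIServer}} -p no-preload-919347   # prints Paused, exits non-zero
	out/minikube-linux-amd64 unpause -p no-preload-919347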

                                                
                                    
TestNetworkPlugins/group/kindnet/Start (56s)

=== RUN   TestNetworkPlugins/group/kindnet/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p kindnet-252109 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=kindnet --driver=docker  --container-runtime=docker
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p kindnet-252109 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=kindnet --driver=docker  --container-runtime=docker: (55.99561466s)
--- PASS: TestNetworkPlugins/group/kindnet/Start (56.00s)

TestNetworkPlugins/group/auto/DNS (0.16s)

=== RUN   TestNetworkPlugins/group/auto/DNS
net_test.go:175: (dbg) Run:  kubectl --context auto-252109 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/auto/DNS (0.16s)

TestNetworkPlugins/group/auto/Localhost (0.13s)

=== RUN   TestNetworkPlugins/group/auto/Localhost
net_test.go:194: (dbg) Run:  kubectl --context auto-252109 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/auto/Localhost (0.13s)

TestNetworkPlugins/group/auto/HairPin (0.12s)

=== RUN   TestNetworkPlugins/group/auto/HairPin
net_test.go:264: (dbg) Run:  kubectl --context auto-252109 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/auto/HairPin (0.12s)
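Note: Localhost and HairPin both probe from inside the deployed netcat pod with nc in zero-I/O scan mode (-z), a five-second timeout (-w 5), and a five-second interval (-i 5). The only difference is the target: Localhost dials the localhost name, while HairPin dials the pod's own Service name ("netcat"), checking that traffic can loop back through the Service VIP to the originating pod:

	# localhost reachability
	kubectl --context auto-252109 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
	# hairpin: pod -> its own Service -> itself
	kubectl --context auto-252109 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"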

                                                
                                    
TestNetworkPlugins/group/calico/Start (55.35s)

=== RUN   TestNetworkPlugins/group/calico/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p calico-252109 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=calico --driver=docker  --container-runtime=docker
E0831 23:02:09.656434   19777 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/18943-12963/.minikube/profiles/no-preload-919347/client.crt: no such file or directory" logger="UnhandledError"
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p calico-252109 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=calico --driver=docker  --container-runtime=docker: (55.348150089s)
--- PASS: TestNetworkPlugins/group/calico/Start (55.35s)

TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop (6s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop
start_stop_delete_test.go:274: (dbg) TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:345: "kubernetes-dashboard-695b96c756-j5t67" [df06bc90-b63e-4229-a891-6a3daef93d29] Running
start_stop_delete_test.go:274: (dbg) TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 6.003183389s
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop (6.00s)

TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop (5.12s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop
start_stop_delete_test.go:287: (dbg) TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:345: "kubernetes-dashboard-695b96c756-j5t67" [df06bc90-b63e-4229-a891-6a3daef93d29] Running
start_stop_delete_test.go:287: (dbg) TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.003243093s
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context default-k8s-diff-port-324128 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop (5.12s)

TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages (0.25s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-linux-amd64 -p default-k8s-diff-port-324128 image list --format=json
start_stop_delete_test.go:304: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages (0.25s)

TestStartStop/group/default-k8s-diff-port/serial/Pause (3.06s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 pause -p default-k8s-diff-port-324128 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p default-k8s-diff-port-324128 -n default-k8s-diff-port-324128
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p default-k8s-diff-port-324128 -n default-k8s-diff-port-324128: exit status 2 (366.004215ms)

-- stdout --
	Paused

-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p default-k8s-diff-port-324128 -n default-k8s-diff-port-324128
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Kubelet}} -p default-k8s-diff-port-324128 -n default-k8s-diff-port-324128: exit status 2 (403.67064ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 unpause -p default-k8s-diff-port-324128 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p default-k8s-diff-port-324128 -n default-k8s-diff-port-324128
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p default-k8s-diff-port-324128 -n default-k8s-diff-port-324128
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/Pause (3.06s)

TestNetworkPlugins/group/custom-flannel/Start (48.8s)

=== RUN   TestNetworkPlugins/group/custom-flannel/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p custom-flannel-252109 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=testdata/kube-flannel.yaml --driver=docker  --container-runtime=docker
E0831 23:02:30.137765   19777 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/18943-12963/.minikube/profiles/no-preload-919347/client.crt: no such file or directory" logger="UnhandledError"
E0831 23:02:30.797240   19777 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/18943-12963/.minikube/profiles/old-k8s-version-969280/client.crt: no such file or directory" logger="UnhandledError"
E0831 23:02:30.803611   19777 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/18943-12963/.minikube/profiles/old-k8s-version-969280/client.crt: no such file or directory" logger="UnhandledError"
E0831 23:02:30.814994   19777 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/18943-12963/.minikube/profiles/old-k8s-version-969280/client.crt: no such file or directory" logger="UnhandledError"
E0831 23:02:30.836355   19777 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/18943-12963/.minikube/profiles/old-k8s-version-969280/client.crt: no such file or directory" logger="UnhandledError"
E0831 23:02:30.877795   19777 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/18943-12963/.minikube/profiles/old-k8s-version-969280/client.crt: no such file or directory" logger="UnhandledError"
E0831 23:02:30.959382   19777 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/18943-12963/.minikube/profiles/old-k8s-version-969280/client.crt: no such file or directory" logger="UnhandledError"
E0831 23:02:31.121293   19777 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/18943-12963/.minikube/profiles/old-k8s-version-969280/client.crt: no such file or directory" logger="UnhandledError"
E0831 23:02:31.443069   19777 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/18943-12963/.minikube/profiles/old-k8s-version-969280/client.crt: no such file or directory" logger="UnhandledError"
E0831 23:02:32.084976   19777 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/18943-12963/.minikube/profiles/old-k8s-version-969280/client.crt: no such file or directory" logger="UnhandledError"
E0831 23:02:33.366572   19777 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/18943-12963/.minikube/profiles/old-k8s-version-969280/client.crt: no such file or directory" logger="UnhandledError"
E0831 23:02:35.928056   19777 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/18943-12963/.minikube/profiles/old-k8s-version-969280/client.crt: no such file or directory" logger="UnhandledError"
E0831 23:02:40.469106   19777 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/18943-12963/.minikube/profiles/functional-369865/client.crt: no such file or directory" logger="UnhandledError"
E0831 23:02:41.049364   19777 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/18943-12963/.minikube/profiles/old-k8s-version-969280/client.crt: no such file or directory" logger="UnhandledError"
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p custom-flannel-252109 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=testdata/kube-flannel.yaml --driver=docker  --container-runtime=docker: (48.79593116s)
--- PASS: TestNetworkPlugins/group/custom-flannel/Start (48.80s)
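Note: unlike the built-in --cni values used elsewhere in this run (kindnet, calico, flannel, bridge, false), the custom-flannel job passes a path to a CNI manifest, which minikube applies to the cluster after start. The same mechanism works for any CNI YAML, e.g. (hypothetical path):

	out/minikube-linux-amd64 start -p custom-cni-demo --driver=docker --container-runtime=docker --cni=/path/to/my-cni.yaml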

                                                
                                    
TestNetworkPlugins/group/kindnet/ControllerPod (6.01s)

=== RUN   TestNetworkPlugins/group/kindnet/ControllerPod
net_test.go:120: (dbg) TestNetworkPlugins/group/kindnet/ControllerPod: waiting 10m0s for pods matching "app=kindnet" in namespace "kube-system" ...
helpers_test.go:345: "kindnet-c5jk2" [d583f6df-81f0-4533-b480-a44623b6d5cb] Running
net_test.go:120: (dbg) TestNetworkPlugins/group/kindnet/ControllerPod: app=kindnet healthy within 6.006260322s
--- PASS: TestNetworkPlugins/group/kindnet/ControllerPod (6.01s)

TestNetworkPlugins/group/kindnet/KubeletFlags (0.35s)

=== RUN   TestNetworkPlugins/group/kindnet/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p kindnet-252109 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/kindnet/KubeletFlags (0.35s)

TestNetworkPlugins/group/kindnet/NetCatPod (10.25s)

=== RUN   TestNetworkPlugins/group/kindnet/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context kindnet-252109 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/kindnet/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:345: "netcat-6fc964789b-vztnq" [4ca7e792-cc26-49bc-9bc7-f6600e6f977e] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
E0831 23:02:51.291514   19777 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/18943-12963/.minikube/profiles/old-k8s-version-969280/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:345: "netcat-6fc964789b-vztnq" [4ca7e792-cc26-49bc-9bc7-f6600e6f977e] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/kindnet/NetCatPod: app=netcat healthy within 10.003776031s
--- PASS: TestNetworkPlugins/group/kindnet/NetCatPod (10.25s)

TestNetworkPlugins/group/kindnet/DNS (0.13s)

=== RUN   TestNetworkPlugins/group/kindnet/DNS
net_test.go:175: (dbg) Run:  kubectl --context kindnet-252109 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/kindnet/DNS (0.13s)

TestNetworkPlugins/group/kindnet/Localhost (0.13s)

=== RUN   TestNetworkPlugins/group/kindnet/Localhost
net_test.go:194: (dbg) Run:  kubectl --context kindnet-252109 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/kindnet/Localhost (0.13s)

TestNetworkPlugins/group/kindnet/HairPin (0.13s)

=== RUN   TestNetworkPlugins/group/kindnet/HairPin
net_test.go:264: (dbg) Run:  kubectl --context kindnet-252109 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/kindnet/HairPin (0.13s)

TestNetworkPlugins/group/calico/ControllerPod (6.01s)

=== RUN   TestNetworkPlugins/group/calico/ControllerPod
net_test.go:120: (dbg) TestNetworkPlugins/group/calico/ControllerPod: waiting 10m0s for pods matching "k8s-app=calico-node" in namespace "kube-system" ...
helpers_test.go:345: "calico-node-kjr64" [a1616c25-eda2-40e0-ae5f-a2ff82a88b9d] Running
net_test.go:120: (dbg) TestNetworkPlugins/group/calico/ControllerPod: k8s-app=calico-node healthy within 6.005184332s
--- PASS: TestNetworkPlugins/group/calico/ControllerPod (6.01s)

TestNetworkPlugins/group/calico/KubeletFlags (0.35s)

=== RUN   TestNetworkPlugins/group/calico/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p calico-252109 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/calico/KubeletFlags (0.35s)

TestNetworkPlugins/group/calico/NetCatPod (10.24s)

=== RUN   TestNetworkPlugins/group/calico/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context calico-252109 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/calico/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:345: "netcat-6fc964789b-gcs84" [b073afae-ef2c-4c33-99c1-d4b73ee5c9b4] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
E0831 23:03:11.099541   19777 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/18943-12963/.minikube/profiles/no-preload-919347/client.crt: no such file or directory" logger="UnhandledError"
E0831 23:03:11.772974   19777 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/18943-12963/.minikube/profiles/old-k8s-version-969280/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:345: "netcat-6fc964789b-gcs84" [b073afae-ef2c-4c33-99c1-d4b73ee5c9b4] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/calico/NetCatPod: app=netcat healthy within 10.004728948s
--- PASS: TestNetworkPlugins/group/calico/NetCatPod (10.24s)

TestNetworkPlugins/group/custom-flannel/KubeletFlags (0.29s)

=== RUN   TestNetworkPlugins/group/custom-flannel/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p custom-flannel-252109 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/custom-flannel/KubeletFlags (0.29s)

TestNetworkPlugins/group/custom-flannel/NetCatPod (10.2s)

=== RUN   TestNetworkPlugins/group/custom-flannel/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context custom-flannel-252109 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/custom-flannel/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:345: "netcat-6fc964789b-94jdp" [5b65c1ed-247d-46e6-a4ce-8b7f913f40c9] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:345: "netcat-6fc964789b-94jdp" [5b65c1ed-247d-46e6-a4ce-8b7f913f40c9] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/custom-flannel/NetCatPod: app=netcat healthy within 10.003981301s
--- PASS: TestNetworkPlugins/group/custom-flannel/NetCatPod (10.20s)

TestNetworkPlugins/group/calico/DNS (0.15s)

=== RUN   TestNetworkPlugins/group/calico/DNS
net_test.go:175: (dbg) Run:  kubectl --context calico-252109 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/calico/DNS (0.15s)

TestNetworkPlugins/group/calico/Localhost (0.11s)

=== RUN   TestNetworkPlugins/group/calico/Localhost
net_test.go:194: (dbg) Run:  kubectl --context calico-252109 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/calico/Localhost (0.11s)

TestNetworkPlugins/group/calico/HairPin (0.11s)

=== RUN   TestNetworkPlugins/group/calico/HairPin
net_test.go:264: (dbg) Run:  kubectl --context calico-252109 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/calico/HairPin (0.11s)

TestNetworkPlugins/group/false/Start (42.61s)

=== RUN   TestNetworkPlugins/group/false/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p false-252109 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=false --driver=docker  --container-runtime=docker
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p false-252109 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=false --driver=docker  --container-runtime=docker: (42.607136533s)
--- PASS: TestNetworkPlugins/group/false/Start (42.61s)

TestNetworkPlugins/group/custom-flannel/DNS (0.14s)

=== RUN   TestNetworkPlugins/group/custom-flannel/DNS
net_test.go:175: (dbg) Run:  kubectl --context custom-flannel-252109 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/custom-flannel/DNS (0.14s)

TestNetworkPlugins/group/custom-flannel/Localhost (0.11s)

=== RUN   TestNetworkPlugins/group/custom-flannel/Localhost
net_test.go:194: (dbg) Run:  kubectl --context custom-flannel-252109 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/custom-flannel/Localhost (0.11s)

TestNetworkPlugins/group/custom-flannel/HairPin (0.14s)

=== RUN   TestNetworkPlugins/group/custom-flannel/HairPin
net_test.go:264: (dbg) Run:  kubectl --context custom-flannel-252109 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/custom-flannel/HairPin (0.14s)

TestNetworkPlugins/group/enable-default-cni/Start (36.52s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p enable-default-cni-252109 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --enable-default-cni=true --driver=docker  --container-runtime=docker
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p enable-default-cni-252109 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --enable-default-cni=true --driver=docker  --container-runtime=docker: (36.519232819s)
--- PASS: TestNetworkPlugins/group/enable-default-cni/Start (36.52s)

TestNetworkPlugins/group/flannel/Start (44.82s)

=== RUN   TestNetworkPlugins/group/flannel/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p flannel-252109 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=flannel --driver=docker  --container-runtime=docker
E0831 23:03:52.734456   19777 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/18943-12963/.minikube/profiles/old-k8s-version-969280/client.crt: no such file or directory" logger="UnhandledError"
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p flannel-252109 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=flannel --driver=docker  --container-runtime=docker: (44.823316468s)
--- PASS: TestNetworkPlugins/group/flannel/Start (44.82s)

TestNetworkPlugins/group/false/KubeletFlags (0.26s)

=== RUN   TestNetworkPlugins/group/false/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p false-252109 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/false/KubeletFlags (0.26s)

TestNetworkPlugins/group/false/NetCatPod (10.2s)

=== RUN   TestNetworkPlugins/group/false/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context false-252109 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/false/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:345: "netcat-6fc964789b-9dswj" [f20036af-1c5c-4796-ba2d-b78920d3a34a] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:345: "netcat-6fc964789b-9dswj" [f20036af-1c5c-4796-ba2d-b78920d3a34a] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/false/NetCatPod: app=netcat healthy within 10.003987057s
--- PASS: TestNetworkPlugins/group/false/NetCatPod (10.20s)

TestNetworkPlugins/group/false/DNS (21.05s)

=== RUN   TestNetworkPlugins/group/false/DNS
net_test.go:175: (dbg) Run:  kubectl --context false-252109 exec deployment/netcat -- nslookup kubernetes.default
net_test.go:175: (dbg) Non-zero exit: kubectl --context false-252109 exec deployment/netcat -- nslookup kubernetes.default: exit status 1 (15.127491085s)

-- stdout --
	;; connection timed out; no servers could be reached
	
	

-- /stdout --
** stderr **
	command terminated with exit code 1

** /stderr **
net_test.go:175: (dbg) Run:  kubectl --context false-252109 exec deployment/netcat -- nslookup kubernetes.default
E0831 23:04:33.021493   19777 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/18943-12963/.minikube/profiles/no-preload-919347/client.crt: no such file or directory" logger="UnhandledError"
net_test.go:175: (dbg) Done: kubectl --context false-252109 exec deployment/netcat -- nslookup kubernetes.default: (5.141595689s)
--- PASS: TestNetworkPlugins/group/false/DNS (21.05s)
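Note: the first nslookup above timed out ("no servers could be reached") and the test only passed because net_test.go retries the probe; the second attempt resolved after about 5s. With --cni=false, DNS for a freshly started pod can plausibly lag until kube-dns endpoints settle, so a transient first failure here is unsurprising (an inference; the log itself shows only the timeout and the successful retry). A rough shell equivalent of the retry:

	for i in 1 2 3; do
	  kubectl --context false-252109 exec deployment/netcat -- nslookup kubernetes.default && break
	  sleep 5
	done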

                                                
                                    
TestNetworkPlugins/group/enable-default-cni/KubeletFlags (0.25s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p enable-default-cni-252109 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/enable-default-cni/KubeletFlags (0.25s)

TestNetworkPlugins/group/enable-default-cni/NetCatPod (9.17s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context enable-default-cni-252109 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/enable-default-cni/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:345: "netcat-6fc964789b-gn852" [3ff747c5-0e8e-4d46-9baa-d9dfe5cb8528] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:345: "netcat-6fc964789b-gn852" [3ff747c5-0e8e-4d46-9baa-d9dfe5cb8528] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/enable-default-cni/NetCatPod: app=netcat healthy within 9.004372095s
--- PASS: TestNetworkPlugins/group/enable-default-cni/NetCatPod (9.17s)

TestNetworkPlugins/group/enable-default-cni/DNS (0.15s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/DNS
net_test.go:175: (dbg) Run:  kubectl --context enable-default-cni-252109 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/enable-default-cni/DNS (0.15s)

TestNetworkPlugins/group/enable-default-cni/Localhost (0.12s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/Localhost
net_test.go:194: (dbg) Run:  kubectl --context enable-default-cni-252109 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/enable-default-cni/Localhost (0.12s)

TestNetworkPlugins/group/enable-default-cni/HairPin (0.14s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/HairPin
net_test.go:264: (dbg) Run:  kubectl --context enable-default-cni-252109 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/enable-default-cni/HairPin (0.14s)

TestNetworkPlugins/group/false/Localhost (0.12s)

=== RUN   TestNetworkPlugins/group/false/Localhost
net_test.go:194: (dbg) Run:  kubectl --context false-252109 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/false/Localhost (0.12s)

TestNetworkPlugins/group/false/HairPin (0.11s)

=== RUN   TestNetworkPlugins/group/false/HairPin
net_test.go:264: (dbg) Run:  kubectl --context false-252109 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/false/HairPin (0.11s)

TestNetworkPlugins/group/flannel/ControllerPod (6.01s)

=== RUN   TestNetworkPlugins/group/flannel/ControllerPod
net_test.go:120: (dbg) TestNetworkPlugins/group/flannel/ControllerPod: waiting 10m0s for pods matching "app=flannel" in namespace "kube-flannel" ...
helpers_test.go:345: "kube-flannel-ds-wc8wl" [f5137e37-ec05-4bda-bb40-31d7114c4fb0] Running
net_test.go:120: (dbg) TestNetworkPlugins/group/flannel/ControllerPod: app=flannel healthy within 6.005128382s
--- PASS: TestNetworkPlugins/group/flannel/ControllerPod (6.01s)

TestNetworkPlugins/group/flannel/KubeletFlags (0.3s)

=== RUN   TestNetworkPlugins/group/flannel/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p flannel-252109 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/flannel/KubeletFlags (0.30s)

TestNetworkPlugins/group/flannel/NetCatPod (9.18s)

=== RUN   TestNetworkPlugins/group/flannel/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context flannel-252109 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/flannel/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:345: "netcat-6fc964789b-lwm6c" [5d63e600-b741-4aa1-84a3-969a41ce6a5e] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:345: "netcat-6fc964789b-lwm6c" [5d63e600-b741-4aa1-84a3-969a41ce6a5e] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/flannel/NetCatPod: app=netcat healthy within 9.003297043s
--- PASS: TestNetworkPlugins/group/flannel/NetCatPod (9.18s)

TestNetworkPlugins/group/bridge/Start (39s)

=== RUN   TestNetworkPlugins/group/bridge/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p bridge-252109 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=bridge --driver=docker  --container-runtime=docker
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p bridge-252109 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=bridge --driver=docker  --container-runtime=docker: (39.003633568s)
--- PASS: TestNetworkPlugins/group/bridge/Start (39.00s)

TestNetworkPlugins/group/flannel/DNS (0.16s)

=== RUN   TestNetworkPlugins/group/flannel/DNS
net_test.go:175: (dbg) Run:  kubectl --context flannel-252109 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/flannel/DNS (0.16s)

TestNetworkPlugins/group/flannel/Localhost (0.14s)

=== RUN   TestNetworkPlugins/group/flannel/Localhost
net_test.go:194: (dbg) Run:  kubectl --context flannel-252109 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/flannel/Localhost (0.14s)

TestNetworkPlugins/group/flannel/HairPin (0.12s)

=== RUN   TestNetworkPlugins/group/flannel/HairPin
net_test.go:264: (dbg) Run:  kubectl --context flannel-252109 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/flannel/HairPin (0.12s)

TestNetworkPlugins/group/kubenet/Start (34.46s)

=== RUN   TestNetworkPlugins/group/kubenet/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p kubenet-252109 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --network-plugin=kubenet --driver=docker  --container-runtime=docker
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p kubenet-252109 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --network-plugin=kubenet --driver=docker  --container-runtime=docker: (34.458794974s)
--- PASS: TestNetworkPlugins/group/kubenet/Start (34.46s)

TestNetworkPlugins/group/bridge/KubeletFlags (0.28s)

=== RUN   TestNetworkPlugins/group/bridge/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p bridge-252109 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/bridge/KubeletFlags (0.28s)

TestNetworkPlugins/group/bridge/NetCatPod (9.19s)

=== RUN   TestNetworkPlugins/group/bridge/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context bridge-252109 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/bridge/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:345: "netcat-6fc964789b-8cqvj" [699286e8-d602-4ba5-b7fa-b3d12f37808b] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:345: "netcat-6fc964789b-8cqvj" [699286e8-d602-4ba5-b7fa-b3d12f37808b] Running
E0831 23:05:26.506815   19777 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/18943-12963/.minikube/profiles/skaffold-503448/client.crt: no such file or directory" logger="UnhandledError"
net_test.go:163: (dbg) TestNetworkPlugins/group/bridge/NetCatPod: app=netcat healthy within 9.003777999s
--- PASS: TestNetworkPlugins/group/bridge/NetCatPod (9.19s)

TestNetworkPlugins/group/kubenet/KubeletFlags (0.25s)

=== RUN   TestNetworkPlugins/group/kubenet/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p kubenet-252109 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/kubenet/KubeletFlags (0.25s)

TestNetworkPlugins/group/kubenet/NetCatPod (10.17s)

=== RUN   TestNetworkPlugins/group/kubenet/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context kubenet-252109 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/kubenet/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:345: "netcat-6fc964789b-xzd6z" [2f3533a8-89c8-4b29-948f-37897b04d09c] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:345: "netcat-6fc964789b-xzd6z" [2f3533a8-89c8-4b29-948f-37897b04d09c] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/kubenet/NetCatPod: app=netcat healthy within 10.003482783s
--- PASS: TestNetworkPlugins/group/kubenet/NetCatPod (10.17s)

TestNetworkPlugins/group/bridge/DNS (21.69s)

=== RUN   TestNetworkPlugins/group/bridge/DNS
net_test.go:175: (dbg) Run:  kubectl --context bridge-252109 exec deployment/netcat -- nslookup kubernetes.default
net_test.go:175: (dbg) Non-zero exit: kubectl --context bridge-252109 exec deployment/netcat -- nslookup kubernetes.default: exit status 1 (15.125601039s)

-- stdout --
	;; connection timed out; no servers could be reached
	
	

-- /stdout --
** stderr **
	command terminated with exit code 1

** /stderr **
net_test.go:175: (dbg) Run:  kubectl --context bridge-252109 exec deployment/netcat -- nslookup kubernetes.default
net_test.go:175: (dbg) Done: kubectl --context bridge-252109 exec deployment/netcat -- nslookup kubernetes.default: (5.126454708s)
--- PASS: TestNetworkPlugins/group/bridge/DNS (21.69s)

TestNetworkPlugins/group/kubenet/DNS (0.13s)

=== RUN   TestNetworkPlugins/group/kubenet/DNS
net_test.go:175: (dbg) Run:  kubectl --context kubenet-252109 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/kubenet/DNS (0.13s)

TestNetworkPlugins/group/kubenet/Localhost (0.1s)

=== RUN   TestNetworkPlugins/group/kubenet/Localhost
net_test.go:194: (dbg) Run:  kubectl --context kubenet-252109 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/kubenet/Localhost (0.10s)

TestNetworkPlugins/group/kubenet/HairPin (0.1s)

=== RUN   TestNetworkPlugins/group/kubenet/HairPin
net_test.go:264: (dbg) Run:  kubectl --context kubenet-252109 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/kubenet/HairPin (0.10s)

TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop (6s)

=== RUN   TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop
start_stop_delete_test.go:274: (dbg) TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:345: "kubernetes-dashboard-695b96c756-h8ltl" [41fd8b2f-d528-4bda-bbf5-71a8aa3a2aca] Running
start_stop_delete_test.go:274: (dbg) TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 6.003347975s
--- PASS: TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop (6.00s)

TestStartStop/group/embed-certs/serial/AddonExistsAfterStop (5.08s)

=== RUN   TestStartStop/group/embed-certs/serial/AddonExistsAfterStop
start_stop_delete_test.go:287: (dbg) TestStartStop/group/embed-certs/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:345: "kubernetes-dashboard-695b96c756-h8ltl" [41fd8b2f-d528-4bda-bbf5-71a8aa3a2aca] Running
start_stop_delete_test.go:287: (dbg) TestStartStop/group/embed-certs/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.004370525s
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context embed-certs-176311 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/embed-certs/serial/AddonExistsAfterStop (5.08s)

TestStartStop/group/embed-certs/serial/VerifyKubernetesImages (0.2s)

=== RUN   TestStartStop/group/embed-certs/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-linux-amd64 -p embed-certs-176311 image list --format=json
start_stop_delete_test.go:304: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
--- PASS: TestStartStop/group/embed-certs/serial/VerifyKubernetesImages (0.20s)

TestStartStop/group/embed-certs/serial/Pause (2.44s)

=== RUN   TestStartStop/group/embed-certs/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 pause -p embed-certs-176311 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p embed-certs-176311 -n embed-certs-176311
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p embed-certs-176311 -n embed-certs-176311: exit status 2 (287.876818ms)

-- stdout --
	Paused

-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p embed-certs-176311 -n embed-certs-176311
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Kubelet}} -p embed-certs-176311 -n embed-certs-176311: exit status 2 (307.138366ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 unpause -p embed-certs-176311 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p embed-certs-176311 -n embed-certs-176311
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p embed-certs-176311 -n embed-certs-176311
--- PASS: TestStartStop/group/embed-certs/serial/Pause (2.44s)
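
The two "exit status 2 (may be ok)" results above are expected: minikube status exits non-zero whenever a component is not in the Running state, so a paused profile reporting Paused/Stopped yields status 2. The sequence the test drives can be replayed by hand against the same profile:

    out/minikube-linux-amd64 pause -p embed-certs-176311
    out/minikube-linux-amd64 status --format={{.APIServer}} -p embed-certs-176311   # "Paused", exit 2
    out/minikube-linux-amd64 unpause -p embed-certs-176311
    out/minikube-linux-amd64 status --format={{.APIServer}} -p embed-certs-176311   # "Running", exit 0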

TestNetworkPlugins/group/bridge/Localhost (0.13s)

=== RUN   TestNetworkPlugins/group/bridge/Localhost
net_test.go:194: (dbg) Run:  kubectl --context bridge-252109 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/bridge/Localhost (0.13s)

TestNetworkPlugins/group/bridge/HairPin (0.12s)

=== RUN   TestNetworkPlugins/group/bridge/HairPin
net_test.go:264: (dbg) Run:  kubectl --context bridge-252109 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/bridge/HairPin (0.12s)

Test skip (20/353)

TestDownloadOnly/v1.20.0/cached-images (0s)

=== RUN   TestDownloadOnly/v1.20.0/cached-images
aaa_download_only_test.go:129: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.20.0/cached-images (0.00s)

TestDownloadOnly/v1.20.0/binaries (0s)

=== RUN   TestDownloadOnly/v1.20.0/binaries
aaa_download_only_test.go:151: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.20.0/binaries (0.00s)

TestDownloadOnly/v1.20.0/kubectl (0s)

=== RUN   TestDownloadOnly/v1.20.0/kubectl
aaa_download_only_test.go:167: Test for darwin and windows
--- SKIP: TestDownloadOnly/v1.20.0/kubectl (0.00s)

TestDownloadOnly/v1.31.0/cached-images (0s)

=== RUN   TestDownloadOnly/v1.31.0/cached-images
aaa_download_only_test.go:129: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.31.0/cached-images (0.00s)

TestDownloadOnly/v1.31.0/binaries (0s)

=== RUN   TestDownloadOnly/v1.31.0/binaries
aaa_download_only_test.go:151: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.31.0/binaries (0.00s)

TestDownloadOnly/v1.31.0/kubectl (0s)

=== RUN   TestDownloadOnly/v1.31.0/kubectl
aaa_download_only_test.go:167: Test for darwin and windows
--- SKIP: TestDownloadOnly/v1.31.0/kubectl (0.00s)

TestAddons/parallel/Olm (0s)

=== RUN   TestAddons/parallel/Olm
=== PAUSE TestAddons/parallel/Olm

=== CONT  TestAddons/parallel/Olm
addons_test.go:500: Skipping OLM addon test until https://github.com/operator-framework/operator-lifecycle-manager/issues/2534 is resolved
--- SKIP: TestAddons/parallel/Olm (0.00s)

TestDockerEnvContainerd (0s)

=== RUN   TestDockerEnvContainerd
docker_test.go:170: running with docker true linux amd64
docker_test.go:172: skipping: TestDockerEnvContainerd can only be run with the containerd runtime on Docker driver
--- SKIP: TestDockerEnvContainerd (0.00s)

TestHyperKitDriverInstallOrUpdate (0s)

=== RUN   TestHyperKitDriverInstallOrUpdate
driver_install_or_update_test.go:105: Skip if not darwin.
--- SKIP: TestHyperKitDriverInstallOrUpdate (0.00s)

TestHyperkitDriverSkipUpgrade (0s)

=== RUN   TestHyperkitDriverSkipUpgrade
driver_install_or_update_test.go:169: Skip if not darwin.
--- SKIP: TestHyperkitDriverSkipUpgrade (0.00s)

TestFunctional/parallel/PodmanEnv (0s)

=== RUN   TestFunctional/parallel/PodmanEnv
=== PAUSE TestFunctional/parallel/PodmanEnv

=== CONT  TestFunctional/parallel/PodmanEnv
functional_test.go:550: only validate podman env with docker container runtime, currently testing docker
--- SKIP: TestFunctional/parallel/PodmanEnv (0.00s)

TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig (0s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig
functional_test_tunnel_test.go:99: DNS forwarding is only supported for Hyperkit on Darwin, skipping test DNS forwarding
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig (0.00s)

TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil
functional_test_tunnel_test.go:99: DNS forwarding is only supported for Hyperkit on Darwin, skipping test DNS forwarding
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0.00s)

TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS (0s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS
functional_test_tunnel_test.go:99: DNS forwarding is only supported for Hyperkit on Darwin, skipping test DNS forwarding
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS (0.00s)

TestGvisorAddon (0s)

=== RUN   TestGvisorAddon
gvisor_addon_test.go:34: skipping test because --gvisor=false
--- SKIP: TestGvisorAddon (0.00s)

TestImageBuild/serial/validateImageBuildWithBuildEnv (0s)

=== RUN   TestImageBuild/serial/validateImageBuildWithBuildEnv
image_test.go:114: skipping due to https://github.com/kubernetes/minikube/issues/12431
--- SKIP: TestImageBuild/serial/validateImageBuildWithBuildEnv (0.00s)

TestChangeNoneUser (0s)

=== RUN   TestChangeNoneUser
none_test.go:38: Test requires none driver and SUDO_USER env to not be empty
--- SKIP: TestChangeNoneUser (0.00s)

TestScheduledStopWindows (0s)

=== RUN   TestScheduledStopWindows
scheduled_stop_test.go:42: test only runs on windows
--- SKIP: TestScheduledStopWindows (0.00s)

TestStartStop/group/disable-driver-mounts (0.14s)

=== RUN   TestStartStop/group/disable-driver-mounts
=== PAUSE TestStartStop/group/disable-driver-mounts

=== CONT  TestStartStop/group/disable-driver-mounts
start_stop_delete_test.go:103: skipping TestStartStop/group/disable-driver-mounts - only runs on virtualbox
helpers_test.go:176: Cleaning up "disable-driver-mounts-193052" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-linux-amd64 delete -p disable-driver-mounts-193052
--- SKIP: TestStartStop/group/disable-driver-mounts (0.14s)

TestNetworkPlugins/group/cilium (3.08s)

=== RUN   TestNetworkPlugins/group/cilium
net_test.go:102: Skipping the test as it's interfering with other tests and is outdated
panic.go:626: 
----------------------- debugLogs start: cilium-252109 [pass: true] --------------------------------
>>> netcat: nslookup kubernetes.default:
Error in configuration: context was not found for specified context: cilium-252109

>>> netcat: nslookup debug kubernetes.default a-records:
Error in configuration: context was not found for specified context: cilium-252109

>>> netcat: dig search kubernetes.default:
Error in configuration: context was not found for specified context: cilium-252109

>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local udp/53:
Error in configuration: context was not found for specified context: cilium-252109

>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local tcp/53:
Error in configuration: context was not found for specified context: cilium-252109

>>> netcat: nc 10.96.0.10 udp/53:
Error in configuration: context was not found for specified context: cilium-252109

>>> netcat: nc 10.96.0.10 tcp/53:
Error in configuration: context was not found for specified context: cilium-252109

>>> netcat: /etc/nsswitch.conf:
Error in configuration: context was not found for specified context: cilium-252109

>>> netcat: /etc/hosts:
Error in configuration: context was not found for specified context: cilium-252109

>>> netcat: /etc/resolv.conf:
Error in configuration: context was not found for specified context: cilium-252109

>>> host: /etc/nsswitch.conf:
* Profile "cilium-252109" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-252109"

>>> host: /etc/hosts:
* Profile "cilium-252109" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-252109"

>>> host: /etc/resolv.conf:
* Profile "cilium-252109" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-252109"

>>> k8s: nodes, services, endpoints, daemon sets, deployments and pods, :
Error in configuration: context was not found for specified context: cilium-252109

>>> host: crictl pods:
* Profile "cilium-252109" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-252109"

>>> host: crictl containers:
* Profile "cilium-252109" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-252109"

>>> k8s: describe netcat deployment:
error: context "cilium-252109" does not exist

>>> k8s: describe netcat pod(s):
error: context "cilium-252109" does not exist

>>> k8s: netcat logs:
error: context "cilium-252109" does not exist

>>> k8s: describe coredns deployment:
error: context "cilium-252109" does not exist

>>> k8s: describe coredns pods:
error: context "cilium-252109" does not exist

>>> k8s: coredns logs:
error: context "cilium-252109" does not exist

>>> k8s: describe api server pod(s):
error: context "cilium-252109" does not exist

>>> k8s: api server logs:
error: context "cilium-252109" does not exist

>>> host: /etc/cni:
* Profile "cilium-252109" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-252109"

>>> host: ip a s:
* Profile "cilium-252109" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-252109"

>>> host: ip r s:
* Profile "cilium-252109" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-252109"

>>> host: iptables-save:
* Profile "cilium-252109" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-252109"

>>> host: iptables table nat:
* Profile "cilium-252109" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-252109"

>>> k8s: describe cilium daemon set:
Error in configuration: context was not found for specified context: cilium-252109

>>> k8s: describe cilium daemon set pod(s):
Error in configuration: context was not found for specified context: cilium-252109

>>> k8s: cilium daemon set container(s) logs (current):
error: context "cilium-252109" does not exist

>>> k8s: cilium daemon set container(s) logs (previous):
error: context "cilium-252109" does not exist

>>> k8s: describe cilium deployment:
Error in configuration: context was not found for specified context: cilium-252109

>>> k8s: describe cilium deployment pod(s):
Error in configuration: context was not found for specified context: cilium-252109

>>> k8s: cilium deployment container(s) logs (current):
error: context "cilium-252109" does not exist

>>> k8s: cilium deployment container(s) logs (previous):
error: context "cilium-252109" does not exist

>>> k8s: describe kube-proxy daemon set:
error: context "cilium-252109" does not exist

>>> k8s: describe kube-proxy pod(s):
error: context "cilium-252109" does not exist

>>> k8s: kube-proxy logs:
error: context "cilium-252109" does not exist

>>> host: kubelet daemon status:
* Profile "cilium-252109" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-252109"

>>> host: kubelet daemon config:
* Profile "cilium-252109" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-252109"

>>> k8s: kubelet logs:
* Profile "cilium-252109" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-252109"

>>> host: /etc/kubernetes/kubelet.conf:
* Profile "cilium-252109" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-252109"

>>> host: /var/lib/kubelet/config.yaml:
* Profile "cilium-252109" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-252109"

>>> k8s: kubectl config:
apiVersion: v1
clusters:
- cluster:
    certificate-authority: /home/jenkins/minikube-integration/18943-12963/.minikube/ca.crt
    extensions:
    - extension:
        last-update: Sat, 31 Aug 2024 22:53:58 UTC
        provider: minikube.sigs.k8s.io
        version: v1.33.1
      name: cluster_info
    server: https://192.168.85.2:8443
  name: NoKubernetes-453282
- cluster:
    certificate-authority: /home/jenkins/minikube-integration/18943-12963/.minikube/ca.crt
    extensions:
    - extension:
        last-update: Sat, 31 Aug 2024 22:51:46 UTC
        provider: minikube.sigs.k8s.io
        version: v1.33.1
      name: cluster_info
    server: https://192.168.76.2:8443
  name: kubernetes-upgrade-202216
- cluster:
    certificate-authority: /home/jenkins/minikube-integration/18943-12963/.minikube/ca.crt
    extensions:
    - extension:
        last-update: Sat, 31 Aug 2024 22:54:03 UTC
        provider: minikube.sigs.k8s.io
        version: v1.33.1
      name: cluster_info
    server: https://192.168.103.2:8443
  name: pause-299845
contexts:
- context:
    cluster: NoKubernetes-453282
    extensions:
    - extension:
        last-update: Sat, 31 Aug 2024 22:53:58 UTC
        provider: minikube.sigs.k8s.io
        version: v1.33.1
      name: context_info
    namespace: default
    user: NoKubernetes-453282
  name: NoKubernetes-453282
- context:
    cluster: kubernetes-upgrade-202216
    user: kubernetes-upgrade-202216
  name: kubernetes-upgrade-202216
- context:
    cluster: pause-299845
    extensions:
    - extension:
        last-update: Sat, 31 Aug 2024 22:54:03 UTC
        provider: minikube.sigs.k8s.io
        version: v1.33.1
      name: context_info
    namespace: default
    user: pause-299845
  name: pause-299845
current-context: ""
kind: Config
preferences: {}
users:
- name: NoKubernetes-453282
  user:
    client-certificate: /home/jenkins/minikube-integration/18943-12963/.minikube/profiles/NoKubernetes-453282/client.crt
    client-key: /home/jenkins/minikube-integration/18943-12963/.minikube/profiles/NoKubernetes-453282/client.key
- name: kubernetes-upgrade-202216
  user:
    client-certificate: /home/jenkins/minikube-integration/18943-12963/.minikube/profiles/kubernetes-upgrade-202216/client.crt
    client-key: /home/jenkins/minikube-integration/18943-12963/.minikube/profiles/kubernetes-upgrade-202216/client.key
- name: pause-299845
  user:
    client-certificate: /home/jenkins/minikube-integration/18943-12963/.minikube/profiles/pause-299845/client.crt
    client-key: /home/jenkins/minikube-integration/18943-12963/.minikube/profiles/pause-299845/client.key
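
No cilium-252109 entry appears anywhere in the dump above, which matches the "context was not found" errors throughout this section: the profile was skipped before a cluster was created, so no kubeconfig entry was ever written. The contexts that do exist can be exercised directly; for example:

    kubectl config use-context pause-299845      # select a default (current-context is "" above)
    kubectl --context pause-299845 get pods -A   # or address a context per invocation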

>>> k8s: cms:
Error in configuration: context was not found for specified context: cilium-252109

>>> host: docker daemon status:
* Profile "cilium-252109" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-252109"

>>> host: docker daemon config:
* Profile "cilium-252109" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-252109"

>>> host: /etc/docker/daemon.json:
* Profile "cilium-252109" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-252109"

>>> host: docker system info:
* Profile "cilium-252109" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-252109"

>>> host: cri-docker daemon status:
* Profile "cilium-252109" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-252109"

>>> host: cri-docker daemon config:
* Profile "cilium-252109" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-252109"

>>> host: /etc/systemd/system/cri-docker.service.d/10-cni.conf:
* Profile "cilium-252109" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-252109"

>>> host: /usr/lib/systemd/system/cri-docker.service:
* Profile "cilium-252109" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-252109"

>>> host: cri-dockerd version:
* Profile "cilium-252109" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-252109"

>>> host: containerd daemon status:
* Profile "cilium-252109" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-252109"

>>> host: containerd daemon config:
* Profile "cilium-252109" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-252109"

>>> host: /lib/systemd/system/containerd.service:
* Profile "cilium-252109" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-252109"

>>> host: /etc/containerd/config.toml:
* Profile "cilium-252109" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-252109"

>>> host: containerd config dump:
* Profile "cilium-252109" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-252109"

>>> host: crio daemon status:
* Profile "cilium-252109" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-252109"

>>> host: crio daemon config:
* Profile "cilium-252109" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-252109"

>>> host: /etc/crio:
* Profile "cilium-252109" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-252109"

>>> host: crio config:
* Profile "cilium-252109" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-252109"

----------------------- debugLogs end: cilium-252109 [took: 2.935711207s] --------------------------------
helpers_test.go:176: Cleaning up "cilium-252109" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-linux-amd64 delete -p cilium-252109
--- SKIP: TestNetworkPlugins/group/cilium (3.08s)
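
Profiles left behind by skipped or interrupted runs, such as the cilium-252109 cleanup above, can also be cleared in bulk; a sketch using minikube's own flags:

    out/minikube-linux-amd64 delete --all --purge   # remove every profile and the local .minikube cache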
