Test Report: Docker_Linux 19616

ead8b21730629246ae204938704f78710656bdeb:2024-09-12:36186

Failed tests (1/343)

| Order | Failed test                  | Duration |
|-------|------------------------------|----------|
| 33    | TestAddons/parallel/Registry | 73.39s   |
TestAddons/parallel/Registry (73.39s)
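To rerun only this test locally, the standard Go test filter applies. A minimal sketch, assuming minikube's usual integration-test layout (test/integration) and a prebuilt out/minikube-linux-amd64; project-specific flags such as driver selection may also be needed:

	# hypothetical local rerun of the single failed subtest, verbose output
	go test ./test/integration -run "TestAddons/parallel/Registry" -timeout 30m -v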

=== RUN   TestAddons/parallel/Registry
=== PAUSE TestAddons/parallel/Registry

=== CONT  TestAddons/parallel/Registry
addons_test.go:332: registry stabilized in 2.964117ms
addons_test.go:334: (dbg) TestAddons/parallel/Registry: waiting 6m0s for pods matching "actual-registry=true" in namespace "kube-system" ...
helpers_test.go:344: "registry-66c9cd494c-mdbsb" [6646693e-e468-4f8c-a209-9f028e31da67] Running
addons_test.go:334: (dbg) TestAddons/parallel/Registry: actual-registry=true healthy within 6.002723714s
addons_test.go:337: (dbg) TestAddons/parallel/Registry: waiting 10m0s for pods matching "registry-proxy=true" in namespace "kube-system" ...
helpers_test.go:344: "registry-proxy-fjxbz" [6340cd55-7e16-4315-8b01-5e879a2b0d76] Running
addons_test.go:337: (dbg) TestAddons/parallel/Registry: registry-proxy=true healthy within 5.002740614s
addons_test.go:342: (dbg) Run:  kubectl --context addons-207808 delete po -l run=registry-test --now
addons_test.go:347: (dbg) Run:  kubectl --context addons-207808 run --rm registry-test --restart=Never --image=gcr.io/k8s-minikube/busybox -it -- sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local"
addons_test.go:347: (dbg) Non-zero exit: kubectl --context addons-207808 run --rm registry-test --restart=Never --image=gcr.io/k8s-minikube/busybox -it -- sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local": exit status 1 (1m0.077011015s)

-- stdout --
	pod "registry-test" deleted

-- /stdout --
** stderr ** 
	error: timed out waiting for the condition

** /stderr **
addons_test.go:349: failed to hit registry.kube-system.svc.cluster.local. args "kubectl --context addons-207808 run --rm registry-test --restart=Never --image=gcr.io/k8s-minikube/busybox -it -- sh -c \"wget --spider -S http://registry.kube-system.svc.cluster.local\"" failed: exit status 1
addons_test.go:353: expected curl response be "HTTP/1.1 200", but got *pod "registry-test" deleted
*
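The in-cluster probe that timed out can be replayed by hand against the same cluster; this is the test's own kubectl command, with only the pod name (registry-test-manual) swapped in as a hypothetical stand-in:

	# re-issue the failed in-cluster registry check from a throwaway busybox pod
	kubectl --context addons-207808 run --rm registry-test-manual --restart=Never \
	  --image=gcr.io/k8s-minikube/busybox -it -- \
	  sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local"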
addons_test.go:361: (dbg) Run:  out/minikube-linux-amd64 -p addons-207808 ip
2024/09/12 21:42:50 [DEBUG] GET http://192.168.49.2:5000
addons_test.go:390: (dbg) Run:  out/minikube-linux-amd64 -p addons-207808 addons disable registry --alsologtostderr -v=1
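The test also probes the registry from outside the cluster via the node IP (the DEBUG GET above). A manual equivalent, assuming curl is available on the host; a healthy registry should answer the root path with HTTP 200:

	# query the registry addon through the minikube node IP (hypothetical manual check)
	NODE_IP=$(out/minikube-linux-amd64 -p addons-207808 ip)   # 192.168.49.2 in this run
	curl -sI "http://${NODE_IP}:5000"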
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestAddons/parallel/Registry]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect addons-207808
helpers_test.go:235: (dbg) docker inspect addons-207808:

-- stdout --
	[
	    {
	        "Id": "46d5993b8529191d590b4bd4995f87689a3729efa0f26a999bb5e8711add198d",
	        "Created": "2024-09-12T21:29:47.830700097Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 14638,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2024-09-12T21:29:47.964008126Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:1e046fff9d873d0625e7bcc757c3514a16d475711d13430b9690fa498decc3a8",
	        "ResolvConfPath": "/var/lib/docker/containers/46d5993b8529191d590b4bd4995f87689a3729efa0f26a999bb5e8711add198d/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/46d5993b8529191d590b4bd4995f87689a3729efa0f26a999bb5e8711add198d/hostname",
	        "HostsPath": "/var/lib/docker/containers/46d5993b8529191d590b4bd4995f87689a3729efa0f26a999bb5e8711add198d/hosts",
	        "LogPath": "/var/lib/docker/containers/46d5993b8529191d590b4bd4995f87689a3729efa0f26a999bb5e8711add198d/46d5993b8529191d590b4bd4995f87689a3729efa0f26a999bb5e8711add198d-json.log",
	        "Name": "/addons-207808",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "addons-207808:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {
	                    "max-size": "100m"
	                }
	            },
	            "NetworkMode": "addons-207808",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 4194304000,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 8388608000,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "LowerDir": "/var/lib/docker/overlay2/bebd4167dc6ae140c935d93fbe82abd9f6c2bfed6d6220fa9d55349c6b1adb29-init/diff:/var/lib/docker/overlay2/a3952d20b945774e14a25a8bf698b00862be22019b42328b7689b583b03e6963/diff",
	                "MergedDir": "/var/lib/docker/overlay2/bebd4167dc6ae140c935d93fbe82abd9f6c2bfed6d6220fa9d55349c6b1adb29/merged",
	                "UpperDir": "/var/lib/docker/overlay2/bebd4167dc6ae140c935d93fbe82abd9f6c2bfed6d6220fa9d55349c6b1adb29/diff",
	                "WorkDir": "/var/lib/docker/overlay2/bebd4167dc6ae140c935d93fbe82abd9f6c2bfed6d6220fa9d55349c6b1adb29/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "addons-207808",
	                "Source": "/var/lib/docker/volumes/addons-207808/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "addons-207808",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726156396-19616@sha256:66b06a42534e914a5c8ad765d7508a93a34031939ec9a6b3a818ef0a444ff889",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "addons-207808",
	                "name.minikube.sigs.k8s.io": "addons-207808",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "35eee3aee247cc6527c8c38a2e2b7b31a1f5ae32ee73cd312dc8e8bce4c2d597",
	            "SandboxKey": "/var/run/docker/netns/35eee3aee247",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32768"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32769"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32772"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32770"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32771"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "addons-207808": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.49.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "02:42:c0:a8:31:02",
	                    "DriverOpts": null,
	                    "NetworkID": "3b892a7412044c1e7d75238359e45f33a1841401b47fb996d4dffacf20c04e0d",
	                    "EndpointID": "aa2e652f07156f27cf455f58bcf8f6618dbcc4f2b6ddfbf833de1512c5c596bc",
	                    "Gateway": "192.168.49.1",
	                    "IPAddress": "192.168.49.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "addons-207808",
	                        "46d5993b8529"
	                    ]
	                }
	            }
	        }
	    }
	]

-- /stdout --
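The inspect output above shows the node container running, with container port 5000/tcp published on the host at 127.0.0.1:32770. The same fact can be pulled out directly with a Go-template query, using the same --format idiom the harness itself applies to 22/tcp later in this log:

	# hypothetical targeted query for the registry port mapping (expect 32770)
	docker inspect addons-207808 --format '{{(index (index .NetworkSettings.Ports "5000/tcp") 0).HostPort}}'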
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p addons-207808 -n addons-207808
helpers_test.go:244: <<< TestAddons/parallel/Registry FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestAddons/parallel/Registry]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p addons-207808 logs -n 25
helpers_test.go:252: TestAddons/parallel/Registry logs: 
-- stdout --
	
	==> Audit <==
	|---------|---------------------------------------------------------------------------------------------|------------------------|---------|---------|---------------------|---------------------|
	| Command |                                            Args                                             |        Profile         |  User   | Version |     Start Time      |      End Time       |
	|---------|---------------------------------------------------------------------------------------------|------------------------|---------|---------|---------------------|---------------------|
	| delete  | -p download-docker-093887                                                                   | download-docker-093887 | jenkins | v1.34.0 | 12 Sep 24 21:29 UTC | 12 Sep 24 21:29 UTC |
	| start   | --download-only -p                                                                          | binary-mirror-374984   | jenkins | v1.34.0 | 12 Sep 24 21:29 UTC |                     |
	|         | binary-mirror-374984                                                                        |                        |         |         |                     |                     |
	|         | --alsologtostderr                                                                           |                        |         |         |                     |                     |
	|         | --binary-mirror                                                                             |                        |         |         |                     |                     |
	|         | http://127.0.0.1:41283                                                                      |                        |         |         |                     |                     |
	|         | --driver=docker                                                                             |                        |         |         |                     |                     |
	|         | --container-runtime=docker                                                                  |                        |         |         |                     |                     |
	| delete  | -p binary-mirror-374984                                                                     | binary-mirror-374984   | jenkins | v1.34.0 | 12 Sep 24 21:29 UTC | 12 Sep 24 21:29 UTC |
	| addons  | enable dashboard -p                                                                         | addons-207808          | jenkins | v1.34.0 | 12 Sep 24 21:29 UTC |                     |
	|         | addons-207808                                                                               |                        |         |         |                     |                     |
	| addons  | disable dashboard -p                                                                        | addons-207808          | jenkins | v1.34.0 | 12 Sep 24 21:29 UTC |                     |
	|         | addons-207808                                                                               |                        |         |         |                     |                     |
	| start   | -p addons-207808 --wait=true                                                                | addons-207808          | jenkins | v1.34.0 | 12 Sep 24 21:29 UTC | 12 Sep 24 21:32 UTC |
	|         | --memory=4000 --alsologtostderr                                                             |                        |         |         |                     |                     |
	|         | --addons=registry                                                                           |                        |         |         |                     |                     |
	|         | --addons=metrics-server                                                                     |                        |         |         |                     |                     |
	|         | --addons=volumesnapshots                                                                    |                        |         |         |                     |                     |
	|         | --addons=csi-hostpath-driver                                                                |                        |         |         |                     |                     |
	|         | --addons=gcp-auth                                                                           |                        |         |         |                     |                     |
	|         | --addons=cloud-spanner                                                                      |                        |         |         |                     |                     |
	|         | --addons=inspektor-gadget                                                                   |                        |         |         |                     |                     |
	|         | --addons=storage-provisioner-rancher                                                        |                        |         |         |                     |                     |
	|         | --addons=nvidia-device-plugin                                                               |                        |         |         |                     |                     |
	|         | --addons=yakd --addons=volcano                                                              |                        |         |         |                     |                     |
	|         | --driver=docker                                                                             |                        |         |         |                     |                     |
	|         | --container-runtime=docker                                                                  |                        |         |         |                     |                     |
	|         | --addons=ingress                                                                            |                        |         |         |                     |                     |
	|         | --addons=ingress-dns                                                                        |                        |         |         |                     |                     |
	|         | --addons=helm-tiller                                                                        |                        |         |         |                     |                     |
	| addons  | addons-207808 addons disable                                                                | addons-207808          | jenkins | v1.34.0 | 12 Sep 24 21:33 UTC | 12 Sep 24 21:33 UTC |
	|         | volcano --alsologtostderr -v=1                                                              |                        |         |         |                     |                     |
	| addons  | addons-207808 addons disable                                                                | addons-207808          | jenkins | v1.34.0 | 12 Sep 24 21:41 UTC | 12 Sep 24 21:41 UTC |
	|         | yakd --alsologtostderr -v=1                                                                 |                        |         |         |                     |                     |
	| addons  | addons-207808 addons disable                                                                | addons-207808          | jenkins | v1.34.0 | 12 Sep 24 21:41 UTC | 12 Sep 24 21:41 UTC |
	|         | helm-tiller --alsologtostderr                                                               |                        |         |         |                     |                     |
	|         | -v=1                                                                                        |                        |         |         |                     |                     |
	| addons  | disable nvidia-device-plugin                                                                | addons-207808          | jenkins | v1.34.0 | 12 Sep 24 21:41 UTC | 12 Sep 24 21:41 UTC |
	|         | -p addons-207808                                                                            |                        |         |         |                     |                     |
	| ssh     | addons-207808 ssh cat                                                                       | addons-207808          | jenkins | v1.34.0 | 12 Sep 24 21:42 UTC | 12 Sep 24 21:42 UTC |
	|         | /opt/local-path-provisioner/pvc-b1ba2409-c488-4cdf-b0b8-4d252d606c73_default_test-pvc/file1 |                        |         |         |                     |                     |
	| addons  | disable cloud-spanner -p                                                                    | addons-207808          | jenkins | v1.34.0 | 12 Sep 24 21:42 UTC | 12 Sep 24 21:42 UTC |
	|         | addons-207808                                                                               |                        |         |         |                     |                     |
	| addons  | addons-207808 addons disable                                                                | addons-207808          | jenkins | v1.34.0 | 12 Sep 24 21:42 UTC | 12 Sep 24 21:42 UTC |
	|         | storage-provisioner-rancher                                                                 |                        |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                        |         |         |                     |                     |
	| addons  | enable headlamp                                                                             | addons-207808          | jenkins | v1.34.0 | 12 Sep 24 21:42 UTC | 12 Sep 24 21:42 UTC |
	|         | -p addons-207808                                                                            |                        |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                        |         |         |                     |                     |
	| addons  | addons-207808 addons disable                                                                | addons-207808          | jenkins | v1.34.0 | 12 Sep 24 21:42 UTC | 12 Sep 24 21:42 UTC |
	|         | headlamp --alsologtostderr                                                                  |                        |         |         |                     |                     |
	|         | -v=1                                                                                        |                        |         |         |                     |                     |
	| addons  | addons-207808 addons                                                                        | addons-207808          | jenkins | v1.34.0 | 12 Sep 24 21:42 UTC | 12 Sep 24 21:42 UTC |
	|         | disable csi-hostpath-driver                                                                 |                        |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                        |         |         |                     |                     |
	| addons  | addons-207808 addons                                                                        | addons-207808          | jenkins | v1.34.0 | 12 Sep 24 21:42 UTC | 12 Sep 24 21:42 UTC |
	|         | disable volumesnapshots                                                                     |                        |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                        |         |         |                     |                     |
	| addons  | disable inspektor-gadget -p                                                                 | addons-207808          | jenkins | v1.34.0 | 12 Sep 24 21:42 UTC | 12 Sep 24 21:42 UTC |
	|         | addons-207808                                                                               |                        |         |         |                     |                     |
	| addons  | addons-207808 addons                                                                        | addons-207808          | jenkins | v1.34.0 | 12 Sep 24 21:42 UTC | 12 Sep 24 21:42 UTC |
	|         | disable metrics-server                                                                      |                        |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                        |         |         |                     |                     |
	| ssh     | addons-207808 ssh curl -s                                                                   | addons-207808          | jenkins | v1.34.0 | 12 Sep 24 21:42 UTC | 12 Sep 24 21:42 UTC |
	|         | http://127.0.0.1/ -H 'Host:                                                                 |                        |         |         |                     |                     |
	|         | nginx.example.com'                                                                          |                        |         |         |                     |                     |
	| ip      | addons-207808 ip                                                                            | addons-207808          | jenkins | v1.34.0 | 12 Sep 24 21:42 UTC | 12 Sep 24 21:42 UTC |
	| addons  | addons-207808 addons disable                                                                | addons-207808          | jenkins | v1.34.0 | 12 Sep 24 21:42 UTC | 12 Sep 24 21:42 UTC |
	|         | ingress-dns --alsologtostderr                                                               |                        |         |         |                     |                     |
	|         | -v=1                                                                                        |                        |         |         |                     |                     |
	| addons  | addons-207808 addons disable                                                                | addons-207808          | jenkins | v1.34.0 | 12 Sep 24 21:42 UTC | 12 Sep 24 21:42 UTC |
	|         | ingress --alsologtostderr -v=1                                                              |                        |         |         |                     |                     |
	| ip      | addons-207808 ip                                                                            | addons-207808          | jenkins | v1.34.0 | 12 Sep 24 21:42 UTC | 12 Sep 24 21:42 UTC |
	| addons  | addons-207808 addons disable                                                                | addons-207808          | jenkins | v1.34.0 | 12 Sep 24 21:42 UTC | 12 Sep 24 21:42 UTC |
	|         | registry --alsologtostderr                                                                  |                        |         |         |                     |                     |
	|         | -v=1                                                                                        |                        |         |         |                     |                     |
	|---------|---------------------------------------------------------------------------------------------|------------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/09/12 21:29:26
	Running on machine: ubuntu-20-agent
	Binary: Built with gc go1.22.5 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0912 21:29:26.510487   13904 out.go:345] Setting OutFile to fd 1 ...
	I0912 21:29:26.510713   13904 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0912 21:29:26.510720   13904 out.go:358] Setting ErrFile to fd 2...
	I0912 21:29:26.510725   13904 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0912 21:29:26.510891   13904 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19616-5723/.minikube/bin
	I0912 21:29:26.511484   13904 out.go:352] Setting JSON to false
	I0912 21:29:26.512268   13904 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent","uptime":709,"bootTime":1726175857,"procs":172,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1067-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0912 21:29:26.512325   13904 start.go:139] virtualization: kvm guest
	I0912 21:29:26.514462   13904 out.go:177] * [addons-207808] minikube v1.34.0 on Ubuntu 20.04 (kvm/amd64)
	I0912 21:29:26.515855   13904 notify.go:220] Checking for updates...
	I0912 21:29:26.515874   13904 out.go:177]   - MINIKUBE_LOCATION=19616
	I0912 21:29:26.517167   13904 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0912 21:29:26.518513   13904 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19616-5723/kubeconfig
	I0912 21:29:26.519810   13904 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19616-5723/.minikube
	I0912 21:29:26.521203   13904 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0912 21:29:26.522515   13904 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0912 21:29:26.523896   13904 driver.go:394] Setting default libvirt URI to qemu:///system
	I0912 21:29:26.545225   13904 docker.go:123] docker version: linux-27.2.1:Docker Engine - Community
	I0912 21:29:26.545330   13904 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0912 21:29:26.593830   13904 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:26 OomKillDisable:true NGoroutines:45 SystemTime:2024-09-12 21:29:26.58497424 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1067-gcp OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33647939584 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent Labels:[] ExperimentalBuild:false ServerVersion:27.2.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:7f7fdf5fed64eb6a7caf99b3e12efcf9d60e311c Expected:7f7fdf5fed64eb6a7caf99b3e12efcf9d60e311c} RuncCommit:{ID:v1.1.14-0-g2c9f560 Expected:v1.1.14-0-g2c9f560} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.16.2] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.29.2] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I0912 21:29:26.593946   13904 docker.go:318] overlay module found
	I0912 21:29:26.595723   13904 out.go:177] * Using the docker driver based on user configuration
	I0912 21:29:26.597020   13904 start.go:297] selected driver: docker
	I0912 21:29:26.597041   13904 start.go:901] validating driver "docker" against <nil>
	I0912 21:29:26.597054   13904 start.go:912] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0912 21:29:26.597805   13904 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0912 21:29:26.642540   13904 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:26 OomKillDisable:true NGoroutines:45 SystemTime:2024-09-12 21:29:26.633767514 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1067-gcp OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33647939584 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent Labels:[] ExperimentalBuild:false ServerVersion:27.2.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:7f7fdf5fed64eb6a7caf99b3e12efcf9d60e311c Expected:7f7fdf5fed64eb6a7caf99b3e12efcf9d60e311c} RuncCommit:{ID:v1.1.14-0-g2c9f560 Expected:v1.1.14-0-g2c9f560} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.16.2] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.29.2] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I0912 21:29:26.642737   13904 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0912 21:29:26.642942   13904 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0912 21:29:26.644840   13904 out.go:177] * Using Docker driver with root privileges
	I0912 21:29:26.646178   13904 cni.go:84] Creating CNI manager for ""
	I0912 21:29:26.646198   13904 cni.go:158] "docker" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0912 21:29:26.646208   13904 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0912 21:29:26.646267   13904 start.go:340] cluster config:
	{Name:addons-207808 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726156396-19616@sha256:66b06a42534e914a5c8ad765d7508a93a34031939ec9a6b3a818ef0a444ff889 Memory:4000 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:addons-207808 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0912 21:29:26.647568   13904 out.go:177] * Starting "addons-207808" primary control-plane node in "addons-207808" cluster
	I0912 21:29:26.648798   13904 cache.go:121] Beginning downloading kic base image for docker with docker
	I0912 21:29:26.650134   13904 out.go:177] * Pulling base image v0.0.45-1726156396-19616 ...
	I0912 21:29:26.651478   13904 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime docker
	I0912 21:29:26.651500   13904 image.go:79] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726156396-19616@sha256:66b06a42534e914a5c8ad765d7508a93a34031939ec9a6b3a818ef0a444ff889 in local docker daemon
	I0912 21:29:26.651509   13904 preload.go:146] Found local preload: /home/jenkins/minikube-integration/19616-5723/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-docker-overlay2-amd64.tar.lz4
	I0912 21:29:26.651521   13904 cache.go:56] Caching tarball of preloaded images
	I0912 21:29:26.651629   13904 preload.go:172] Found /home/jenkins/minikube-integration/19616-5723/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I0912 21:29:26.651647   13904 cache.go:59] Finished verifying existence of preloaded tar for v1.31.1 on docker
	I0912 21:29:26.651967   13904 profile.go:143] Saving config to /home/jenkins/minikube-integration/19616-5723/.minikube/profiles/addons-207808/config.json ...
	I0912 21:29:26.651990   13904 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19616-5723/.minikube/profiles/addons-207808/config.json: {Name:mk8481da6b54576dc871eb043aa2d3b29d139204 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0912 21:29:26.667151   13904 cache.go:149] Downloading gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726156396-19616@sha256:66b06a42534e914a5c8ad765d7508a93a34031939ec9a6b3a818ef0a444ff889 to local cache
	I0912 21:29:26.667254   13904 image.go:63] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726156396-19616@sha256:66b06a42534e914a5c8ad765d7508a93a34031939ec9a6b3a818ef0a444ff889 in local cache directory
	I0912 21:29:26.667269   13904 image.go:66] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726156396-19616@sha256:66b06a42534e914a5c8ad765d7508a93a34031939ec9a6b3a818ef0a444ff889 in local cache directory, skipping pull
	I0912 21:29:26.667273   13904 image.go:135] gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726156396-19616@sha256:66b06a42534e914a5c8ad765d7508a93a34031939ec9a6b3a818ef0a444ff889 exists in cache, skipping pull
	I0912 21:29:26.667281   13904 cache.go:152] successfully saved gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726156396-19616@sha256:66b06a42534e914a5c8ad765d7508a93a34031939ec9a6b3a818ef0a444ff889 as a tarball
	I0912 21:29:26.667288   13904 cache.go:162] Loading gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726156396-19616@sha256:66b06a42534e914a5c8ad765d7508a93a34031939ec9a6b3a818ef0a444ff889 from local cache
	I0912 21:29:38.593119   13904 cache.go:164] successfully loaded and using gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726156396-19616@sha256:66b06a42534e914a5c8ad765d7508a93a34031939ec9a6b3a818ef0a444ff889 from cached tarball
	I0912 21:29:38.593164   13904 cache.go:194] Successfully downloaded all kic artifacts
	I0912 21:29:38.593202   13904 start.go:360] acquireMachinesLock for addons-207808: {Name:mk9e28e5e398d3a60a4034b6150283157ca43597 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0912 21:29:38.593295   13904 start.go:364] duration metric: took 74.623µs to acquireMachinesLock for "addons-207808"
	I0912 21:29:38.593319   13904 start.go:93] Provisioning new machine with config: &{Name:addons-207808 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726156396-19616@sha256:66b06a42534e914a5c8ad765d7508a93a34031939ec9a6b3a818ef0a444ff889 Memory:4000 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:addons-207808 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0912 21:29:38.593403   13904 start.go:125] createHost starting for "" (driver="docker")
	I0912 21:29:38.595297   13904 out.go:235] * Creating docker container (CPUs=2, Memory=4000MB) ...
	I0912 21:29:38.595532   13904 start.go:159] libmachine.API.Create for "addons-207808" (driver="docker")
	I0912 21:29:38.595561   13904 client.go:168] LocalClient.Create starting
	I0912 21:29:38.595647   13904 main.go:141] libmachine: Creating CA: /home/jenkins/minikube-integration/19616-5723/.minikube/certs/ca.pem
	I0912 21:29:38.763880   13904 main.go:141] libmachine: Creating client certificate: /home/jenkins/minikube-integration/19616-5723/.minikube/certs/cert.pem
	I0912 21:29:38.831665   13904 cli_runner.go:164] Run: docker network inspect addons-207808 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W0912 21:29:38.846747   13904 cli_runner.go:211] docker network inspect addons-207808 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I0912 21:29:38.846829   13904 network_create.go:284] running [docker network inspect addons-207808] to gather additional debugging logs...
	I0912 21:29:38.846853   13904 cli_runner.go:164] Run: docker network inspect addons-207808
	W0912 21:29:38.861333   13904 cli_runner.go:211] docker network inspect addons-207808 returned with exit code 1
	I0912 21:29:38.861366   13904 network_create.go:287] error running [docker network inspect addons-207808]: docker network inspect addons-207808: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network addons-207808 not found
	I0912 21:29:38.861391   13904 network_create.go:289] output of [docker network inspect addons-207808]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network addons-207808 not found
	
	** /stderr **
	I0912 21:29:38.861543   13904 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0912 21:29:38.876787   13904 network.go:206] using free private subnet 192.168.49.0/24: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc0018c9d90}
	I0912 21:29:38.876828   13904 network_create.go:124] attempt to create docker network addons-207808 192.168.49.0/24 with gateway 192.168.49.1 and MTU of 1500 ...
	I0912 21:29:38.876879   13904 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.49.0/24 --gateway=192.168.49.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=addons-207808 addons-207808
	I0912 21:29:38.933636   13904 network_create.go:108] docker network addons-207808 192.168.49.0/24 created
	I0912 21:29:38.933669   13904 kic.go:121] calculated static IP "192.168.49.2" for the "addons-207808" container
	I0912 21:29:38.933732   13904 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I0912 21:29:38.949074   13904 cli_runner.go:164] Run: docker volume create addons-207808 --label name.minikube.sigs.k8s.io=addons-207808 --label created_by.minikube.sigs.k8s.io=true
	I0912 21:29:38.965269   13904 oci.go:103] Successfully created a docker volume addons-207808
	I0912 21:29:38.965365   13904 cli_runner.go:164] Run: docker run --rm --name addons-207808-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=addons-207808 --entrypoint /usr/bin/test -v addons-207808:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726156396-19616@sha256:66b06a42534e914a5c8ad765d7508a93a34031939ec9a6b3a818ef0a444ff889 -d /var/lib
	I0912 21:29:43.776870   13904 cli_runner.go:217] Completed: docker run --rm --name addons-207808-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=addons-207808 --entrypoint /usr/bin/test -v addons-207808:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726156396-19616@sha256:66b06a42534e914a5c8ad765d7508a93a34031939ec9a6b3a818ef0a444ff889 -d /var/lib: (4.81145982s)
	I0912 21:29:43.776894   13904 oci.go:107] Successfully prepared a docker volume addons-207808
	I0912 21:29:43.776909   13904 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime docker
	I0912 21:29:43.776926   13904 kic.go:194] Starting extracting preloaded images to volume ...
	I0912 21:29:43.776981   13904 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/19616-5723/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-docker-overlay2-amd64.tar.lz4:/preloaded.tar:ro -v addons-207808:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726156396-19616@sha256:66b06a42534e914a5c8ad765d7508a93a34031939ec9a6b3a818ef0a444ff889 -I lz4 -xf /preloaded.tar -C /extractDir
	I0912 21:29:47.766700   13904 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/19616-5723/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-docker-overlay2-amd64.tar.lz4:/preloaded.tar:ro -v addons-207808:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726156396-19616@sha256:66b06a42534e914a5c8ad765d7508a93a34031939ec9a6b3a818ef0a444ff889 -I lz4 -xf /preloaded.tar -C /extractDir: (3.989677319s)
	I0912 21:29:47.766733   13904 kic.go:203] duration metric: took 3.989803597s to extract preloaded images to volume ...
	W0912 21:29:47.766871   13904 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	I0912 21:29:47.766986   13904 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I0912 21:29:47.815776   13904 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname addons-207808 --name addons-207808 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=addons-207808 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=addons-207808 --network addons-207808 --ip 192.168.49.2 --volume addons-207808:/var --security-opt apparmor=unconfined --memory=4000mb --cpus=2 -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726156396-19616@sha256:66b06a42534e914a5c8ad765d7508a93a34031939ec9a6b3a818ef0a444ff889
	I0912 21:29:48.121715   13904 cli_runner.go:164] Run: docker container inspect addons-207808 --format={{.State.Running}}
	I0912 21:29:48.140013   13904 cli_runner.go:164] Run: docker container inspect addons-207808 --format={{.State.Status}}
	I0912 21:29:48.158669   13904 cli_runner.go:164] Run: docker exec addons-207808 stat /var/lib/dpkg/alternatives/iptables
	I0912 21:29:48.199657   13904 oci.go:144] the created container "addons-207808" has a running status.
	I0912 21:29:48.199687   13904 kic.go:225] Creating ssh key for kic: /home/jenkins/minikube-integration/19616-5723/.minikube/machines/addons-207808/id_rsa...
	I0912 21:29:48.379535   13904 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/19616-5723/.minikube/machines/addons-207808/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I0912 21:29:48.403622   13904 cli_runner.go:164] Run: docker container inspect addons-207808 --format={{.State.Status}}
	I0912 21:29:48.421551   13904 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I0912 21:29:48.421571   13904 kic_runner.go:114] Args: [docker exec --privileged addons-207808 chown docker:docker /home/docker/.ssh/authorized_keys]
	I0912 21:29:48.541642   13904 cli_runner.go:164] Run: docker container inspect addons-207808 --format={{.State.Status}}
	I0912 21:29:48.564279   13904 machine.go:93] provisionDockerMachine start ...
	I0912 21:29:48.564351   13904 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-207808
	I0912 21:29:48.584724   13904 main.go:141] libmachine: Using SSH client type: native
	I0912 21:29:48.584930   13904 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8375c0] 0x83a320 <nil>  [] 0s} 127.0.0.1 32768 <nil> <nil>}
	I0912 21:29:48.584945   13904 main.go:141] libmachine: About to run SSH command:
	hostname
	I0912 21:29:48.742155   13904 main.go:141] libmachine: SSH cmd err, output: <nil>: addons-207808
	
	I0912 21:29:48.742178   13904 ubuntu.go:169] provisioning hostname "addons-207808"
	I0912 21:29:48.742223   13904 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-207808
	I0912 21:29:48.759557   13904 main.go:141] libmachine: Using SSH client type: native
	I0912 21:29:48.759761   13904 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8375c0] 0x83a320 <nil>  [] 0s} 127.0.0.1 32768 <nil> <nil>}
	I0912 21:29:48.759780   13904 main.go:141] libmachine: About to run SSH command:
	sudo hostname addons-207808 && echo "addons-207808" | sudo tee /etc/hostname
	I0912 21:29:48.884324   13904 main.go:141] libmachine: SSH cmd err, output: <nil>: addons-207808
	
	I0912 21:29:48.884389   13904 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-207808
	I0912 21:29:48.900874   13904 main.go:141] libmachine: Using SSH client type: native
	I0912 21:29:48.901036   13904 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8375c0] 0x83a320 <nil>  [] 0s} 127.0.0.1 32768 <nil> <nil>}
	I0912 21:29:48.901052   13904 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\saddons-207808' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 addons-207808/g' /etc/hosts;
				else 
					echo '127.0.1.1 addons-207808' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0912 21:29:49.014583   13904 main.go:141] libmachine: SSH cmd err, output: <nil>: 
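The SSH script above is a guard-then-edit: it leaves /etc/hosts alone when a line already ends in the hostname, rewrites an existing 127.0.1.1 entry if one is present, and appends otherwise. A minimal Go sketch of the append path only (hypothetical standalone program; minikube runs the shell version over SSH, and the sed branch that rewrites an existing 127.0.1.1 line is omitted here):

	// ensurehosts.go - simplified take on the grep/tee pipeline above.
	package main

	import (
		"fmt"
		"os"
		"strings"
	)

	// ensureHostsEntry appends "127.0.1.1 <name>" unless some line already
	// ends with the hostname (mirroring grep -xq '.*\s<name>').
	func ensureHostsEntry(path, name string) error {
		data, err := os.ReadFile(path)
		if err != nil {
			return err
		}
		for _, line := range strings.Split(string(data), "\n") {
			fields := strings.Fields(line)
			if len(fields) >= 2 && fields[len(fields)-1] == name {
				return nil // entry already present, nothing to do
			}
		}
		f, err := os.OpenFile(path, os.O_APPEND|os.O_WRONLY, 0644)
		if err != nil {
			return err
		}
		defer f.Close()
		_, err = fmt.Fprintf(f, "127.0.1.1 %s\n", name)
		return err
	}

	func main() {
		if err := ensureHostsEntry("/etc/hosts", "addons-207808"); err != nil {
			fmt.Fprintln(os.Stderr, err)
			os.Exit(1)
		}
	}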
	I0912 21:29:49.014608   13904 ubuntu.go:175] set auth options {CertDir:/home/jenkins/minikube-integration/19616-5723/.minikube CaCertPath:/home/jenkins/minikube-integration/19616-5723/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19616-5723/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19616-5723/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19616-5723/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19616-5723/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19616-5723/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19616-5723/.minikube}
	I0912 21:29:49.014628   13904 ubuntu.go:177] setting up certificates
	I0912 21:29:49.014641   13904 provision.go:84] configureAuth start
	I0912 21:29:49.014702   13904 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" addons-207808
	I0912 21:29:49.030162   13904 provision.go:143] copyHostCerts
	I0912 21:29:49.030236   13904 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19616-5723/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19616-5723/.minikube/ca.pem (1078 bytes)
	I0912 21:29:49.030362   13904 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19616-5723/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19616-5723/.minikube/cert.pem (1123 bytes)
	I0912 21:29:49.030427   13904 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19616-5723/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19616-5723/.minikube/key.pem (1679 bytes)
	I0912 21:29:49.030476   13904 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19616-5723/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19616-5723/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19616-5723/.minikube/certs/ca-key.pem org=jenkins.addons-207808 san=[127.0.0.1 192.168.49.2 addons-207808 localhost minikube]
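The server cert generated here is signed by the minikube CA and carries the SANs listed in the log line (127.0.0.1, 192.168.49.2, addons-207808, localhost, minikube). A self-contained crypto/x509 sketch of the same idea, with a throwaway self-signed CA standing in for .minikube/certs/ca.pem (illustrative only, not minikube's provisioner code; error handling elided for brevity):

	// certsketch.go - sign a server cert with IP and DNS SANs against a CA.
	package main

	import (
		"crypto/rand"
		"crypto/rsa"
		"crypto/x509"
		"crypto/x509/pkix"
		"encoding/pem"
		"math/big"
		"net"
		"os"
		"time"
	)

	func main() {
		// Throwaway CA in place of the on-disk ca.pem/ca-key.pem pair.
		caKey, _ := rsa.GenerateKey(rand.Reader, 2048)
		caTmpl := &x509.Certificate{
			SerialNumber:          big.NewInt(1),
			Subject:               pkix.Name{CommonName: "minikubeCA"},
			NotBefore:             time.Now(),
			NotAfter:              time.Now().AddDate(10, 0, 0),
			IsCA:                  true,
			KeyUsage:              x509.KeyUsageCertSign,
			BasicConstraintsValid: true,
		}
		caDER, _ := x509.CreateCertificate(rand.Reader, caTmpl, caTmpl, &caKey.PublicKey, caKey)
		caCert, _ := x509.ParseCertificate(caDER)

		// Server cert carries the SANs from the log line above.
		srvKey, _ := rsa.GenerateKey(rand.Reader, 2048)
		srvTmpl := &x509.Certificate{
			SerialNumber: big.NewInt(2),
			Subject:      pkix.Name{Organization: []string{"jenkins.addons-207808"}},
			NotBefore:    time.Now(),
			NotAfter:     time.Now().AddDate(1, 0, 0),
			IPAddresses:  []net.IP{net.ParseIP("127.0.0.1"), net.ParseIP("192.168.49.2")},
			DNSNames:     []string{"addons-207808", "localhost", "minikube"},
			KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
			ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
		}
		srvDER, _ := x509.CreateCertificate(rand.Reader, srvTmpl, caCert, &srvKey.PublicKey, caKey)
		pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: srvDER})
	}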
	I0912 21:29:49.145040   13904 provision.go:177] copyRemoteCerts
	I0912 21:29:49.145092   13904 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0912 21:29:49.145142   13904 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-207808
	I0912 21:29:49.162150   13904 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/19616-5723/.minikube/machines/addons-207808/id_rsa Username:docker}
	I0912 21:29:49.247117   13904 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19616-5723/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I0912 21:29:49.268085   13904 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19616-5723/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0912 21:29:49.288579   13904 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19616-5723/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0912 21:29:49.309083   13904 provision.go:87] duration metric: took 294.430185ms to configureAuth
	I0912 21:29:49.309106   13904 ubuntu.go:193] setting minikube options for container-runtime
	I0912 21:29:49.309283   13904 config.go:182] Loaded profile config "addons-207808": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I0912 21:29:49.309348   13904 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-207808
	I0912 21:29:49.325771   13904 main.go:141] libmachine: Using SSH client type: native
	I0912 21:29:49.325985   13904 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8375c0] 0x83a320 <nil>  [] 0s} 127.0.0.1 32768 <nil> <nil>}
	I0912 21:29:49.325999   13904 main.go:141] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I0912 21:29:49.442984   13904 main.go:141] libmachine: SSH cmd err, output: <nil>: overlay
	
	I0912 21:29:49.443007   13904 ubuntu.go:71] root file system type: overlay
	I0912 21:29:49.443140   13904 provision.go:314] Updating docker unit: /lib/systemd/system/docker.service ...
	I0912 21:29:49.443201   13904 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-207808
	I0912 21:29:49.459989   13904 main.go:141] libmachine: Using SSH client type: native
	I0912 21:29:49.460158   13904 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8375c0] 0x83a320 <nil>  [] 0s} 127.0.0.1 32768 <nil> <nil>}
	I0912 21:29:49.460215   13904 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %s "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	BindsTo=containerd.service
	After=network-online.target firewalld.service containerd.service
	Wants=network-online.target
	Requires=docker.socket
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I0912 21:29:49.589434   13904 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	BindsTo=containerd.service
	After=network-online.target firewalld.service containerd.service
	Wants=network-online.target
	Requires=docker.socket
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I0912 21:29:49.589503   13904 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-207808
	I0912 21:29:49.605811   13904 main.go:141] libmachine: Using SSH client type: native
	I0912 21:29:49.605974   13904 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8375c0] 0x83a320 <nil>  [] 0s} 127.0.0.1 32768 <nil> <nil>}
	I0912 21:29:49.605991   13904 main.go:141] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I0912 21:29:50.274178   13904 main.go:141] libmachine: SSH cmd err, output: <nil>: --- /lib/systemd/system/docker.service	2024-09-06 12:06:41.000000000 +0000
	+++ /lib/systemd/system/docker.service.new	2024-09-12 21:29:49.584594789 +0000
	@@ -1,46 +1,49 @@
	 [Unit]
	 Description=Docker Application Container Engine
	 Documentation=https://docs.docker.com
	-After=network-online.target docker.socket firewalld.service containerd.service time-set.target
	-Wants=network-online.target containerd.service
	+BindsTo=containerd.service
	+After=network-online.target firewalld.service containerd.service
	+Wants=network-online.target
	 Requires=docker.socket
	+StartLimitBurst=3
	+StartLimitIntervalSec=60
	 
	 [Service]
	 Type=notify
	-# the default is not to use systemd for cgroups because the delegate issues still
	-# exists and systemd currently does not support the cgroup feature set required
	-# for containers run by docker
	-ExecStart=/usr/bin/dockerd -H fd:// --containerd=/run/containerd/containerd.sock
	-ExecReload=/bin/kill -s HUP $MAINPID
	-TimeoutStartSec=0
	-RestartSec=2
	-Restart=always
	+Restart=on-failure
	 
	-# Note that StartLimit* options were moved from "Service" to "Unit" in systemd 229.
	-# Both the old, and new location are accepted by systemd 229 and up, so using the old location
	-# to make them work for either version of systemd.
	-StartLimitBurst=3
	 
	-# Note that StartLimitInterval was renamed to StartLimitIntervalSec in systemd 230.
	-# Both the old, and new name are accepted by systemd 230 and up, so using the old name to make
	-# this option work for either version of systemd.
	-StartLimitInterval=60s
	+
	+# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	+# The base configuration already specifies an 'ExecStart=...' command. The first directive
	+# here is to clear out that command inherited from the base configuration. Without this,
	+# the command from the base configuration and the command specified here are treated as
	+# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	+# will catch this invalid input and refuse to start the service with an error like:
	+#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	+
	+# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	+# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	+ExecStart=
	+ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	+ExecReload=/bin/kill -s HUP $MAINPID
	 
	 # Having non-zero Limit*s causes performance problems due to accounting overhead
	 # in the kernel. We recommend using cgroups to do container-local accounting.
	+LimitNOFILE=infinity
	 LimitNPROC=infinity
	 LimitCORE=infinity
	 
	-# Comment TasksMax if your systemd version does not support it.
	-# Only systemd 226 and above support this option.
	+# Uncomment TasksMax if your systemd version supports it.
	+# Only systemd 226 and above support this version.
	 TasksMax=infinity
	+TimeoutStartSec=0
	 
	 # set delegate yes so that systemd does not reset the cgroups of docker containers
	 Delegate=yes
	 
	 # kill only the docker process, not all processes in the cgroup
	 KillMode=process
	-OOMScoreAdjust=-500
	 
	 [Install]
	 WantedBy=multi-user.target
	Synchronizing state of docker.service with SysV service script with /lib/systemd/systemd-sysv-install.
	Executing: /lib/systemd/systemd-sysv-install enable docker
	
	I0912 21:29:50.274204   13904 machine.go:96] duration metric: took 1.709904611s to provisionDockerMachine
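The `diff -u ... || { mv ...; systemctl ... }` one-liner above makes the unit update idempotent: docker is only re-enabled and restarted when the rendered docker.service actually differs from the installed one, which is why the full diff appears here on first boot. A compare-then-swap sketch of the same pattern (paths and service name copied from the log; not minikube's actual code):

	// unitswap.go - replace a systemd unit and bounce the service only on change.
	package main

	import (
		"bytes"
		"fmt"
		"os"
		"os/exec"
	)

	func syncUnit(installed, candidate, service string) error {
		old, _ := os.ReadFile(installed) // a missing unit reads as empty
		want, err := os.ReadFile(candidate)
		if err != nil {
			return err
		}
		if bytes.Equal(old, want) {
			return nil // unchanged: skip the disruptive restart
		}
		if err := os.Rename(candidate, installed); err != nil {
			return err
		}
		// daemon-reload, enable, restart - mirroring the systemctl sequence.
		for _, args := range [][]string{
			{"systemctl", "daemon-reload"},
			{"systemctl", "enable", service},
			{"systemctl", "restart", service},
		} {
			if out, err := exec.Command(args[0], args[1:]...).CombinedOutput(); err != nil {
				return fmt.Errorf("%v: %v: %s", args, err, out)
			}
		}
		return nil
	}

	func main() {
		if err := syncUnit("/lib/systemd/system/docker.service",
			"/lib/systemd/system/docker.service.new", "docker"); err != nil {
			fmt.Fprintln(os.Stderr, err)
			os.Exit(1)
		}
	}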
	I0912 21:29:50.274215   13904 client.go:171] duration metric: took 11.678649015s to LocalClient.Create
	I0912 21:29:50.274233   13904 start.go:167] duration metric: took 11.67870124s to libmachine.API.Create "addons-207808"
	I0912 21:29:50.274242   13904 start.go:293] postStartSetup for "addons-207808" (driver="docker")
	I0912 21:29:50.274256   13904 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0912 21:29:50.274311   13904 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0912 21:29:50.274358   13904 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-207808
	I0912 21:29:50.290564   13904 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/19616-5723/.minikube/machines/addons-207808/id_rsa Username:docker}
	I0912 21:29:50.375413   13904 ssh_runner.go:195] Run: cat /etc/os-release
	I0912 21:29:50.378379   13904 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I0912 21:29:50.378407   13904 main.go:141] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
	I0912 21:29:50.378421   13904 main.go:141] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
	I0912 21:29:50.378427   13904 info.go:137] Remote host: Ubuntu 22.04.4 LTS
	I0912 21:29:50.378437   13904 filesync.go:126] Scanning /home/jenkins/minikube-integration/19616-5723/.minikube/addons for local assets ...
	I0912 21:29:50.378495   13904 filesync.go:126] Scanning /home/jenkins/minikube-integration/19616-5723/.minikube/files for local assets ...
	I0912 21:29:50.378518   13904 start.go:296] duration metric: took 104.268207ms for postStartSetup
	I0912 21:29:50.378764   13904 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" addons-207808
	I0912 21:29:50.395475   13904 profile.go:143] Saving config to /home/jenkins/minikube-integration/19616-5723/.minikube/profiles/addons-207808/config.json ...
	I0912 21:29:50.395733   13904 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0912 21:29:50.395783   13904 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-207808
	I0912 21:29:50.411956   13904 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/19616-5723/.minikube/machines/addons-207808/id_rsa Username:docker}
	I0912 21:29:50.495377   13904 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0912 21:29:50.499285   13904 start.go:128] duration metric: took 11.905870064s to createHost
	I0912 21:29:50.499309   13904 start.go:83] releasing machines lock for "addons-207808", held for 11.906003779s
	I0912 21:29:50.499360   13904 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" addons-207808
	I0912 21:29:50.515299   13904 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0912 21:29:50.515300   13904 ssh_runner.go:195] Run: cat /version.json
	I0912 21:29:50.515421   13904 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-207808
	I0912 21:29:50.515432   13904 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-207808
	I0912 21:29:50.532492   13904 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/19616-5723/.minikube/machines/addons-207808/id_rsa Username:docker}
	I0912 21:29:50.533009   13904 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/19616-5723/.minikube/machines/addons-207808/id_rsa Username:docker}
	I0912 21:29:50.614349   13904 ssh_runner.go:195] Run: systemctl --version
	I0912 21:29:50.617972   13904 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I0912 21:29:50.688351   13904 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *loopback.conf* -not -name *.mk_disabled -exec sh -c "grep -q loopback {} && ( grep -q name {} || sudo sed -i '/"type": "loopback"/i \ \ \ \ "name": "loopback",' {} ) && sudo sed -i 's|"cniVersion": ".*"|"cniVersion": "1.0.0"|g' {}" ;
	I0912 21:29:50.710654   13904 cni.go:230] loopback cni configuration patched: "/etc/cni/net.d/*loopback.conf*" found
	I0912 21:29:50.710720   13904 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0912 21:29:50.734788   13904 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist, /etc/cni/net.d/100-crio-bridge.conf] bridge cni config(s)
	I0912 21:29:50.734818   13904 start.go:495] detecting cgroup driver to use...
	I0912 21:29:50.734851   13904 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I0912 21:29:50.734977   13904 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0912 21:29:50.748693   13904 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.10"|' /etc/containerd/config.toml"
	I0912 21:29:50.756900   13904 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0912 21:29:50.765069   13904 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I0912 21:29:50.765115   13904 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I0912 21:29:50.773490   13904 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0912 21:29:50.781907   13904 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0912 21:29:50.790212   13904 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0912 21:29:50.798514   13904 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0912 21:29:50.806773   13904 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I0912 21:29:50.815185   13904 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I0912 21:29:50.823612   13904 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
	I0912 21:29:50.832105   13904 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0912 21:29:50.839580   13904 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0912 21:29:50.846569   13904 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0912 21:29:50.919523   13904 ssh_runner.go:195] Run: sudo systemctl restart containerd
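Each sed invocation above rewrites one key of /etc/containerd/config.toml in place; for example, forcing SystemdCgroup = false selects the cgroupfs driver detected earlier. A Go sketch of one such line rewrite, operating on a local copy (a robust tool would parse the TOML rather than regex it):

	// tomlpatch.go - sed-style rewrite of SystemdCgroup in a config.toml copy.
	package main

	import (
		"fmt"
		"os"
		"regexp"
	)

	func main() {
		path := "config.toml" // illustrative local copy, not the live file
		data, err := os.ReadFile(path)
		if err != nil {
			fmt.Fprintln(os.Stderr, err)
			os.Exit(1)
		}
		// Equivalent of: sed -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g'
		re := regexp.MustCompile(`(?m)^( *)SystemdCgroup = .*$`)
		out := re.ReplaceAll(data, []byte("${1}SystemdCgroup = false"))
		if err := os.WriteFile(path, out, 0644); err != nil {
			fmt.Fprintln(os.Stderr, err)
			os.Exit(1)
		}
	}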
	I0912 21:29:51.007514   13904 start.go:495] detecting cgroup driver to use...
	I0912 21:29:51.007563   13904 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I0912 21:29:51.007611   13904 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I0912 21:29:51.018304   13904 cruntime.go:279] skipping containerd shutdown because we are bound to it
	I0912 21:29:51.018388   13904 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0912 21:29:51.029082   13904 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0912 21:29:51.044620   13904 ssh_runner.go:195] Run: which cri-dockerd
	I0912 21:29:51.047768   13904 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I0912 21:29:51.056517   13904 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (190 bytes)
	I0912 21:29:51.073334   13904 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I0912 21:29:51.167663   13904 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I0912 21:29:51.272366   13904 docker.go:574] configuring docker to use "cgroupfs" as cgroup driver...
	I0912 21:29:51.272516   13904 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (130 bytes)
	I0912 21:29:51.288980   13904 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0912 21:29:51.364323   13904 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0912 21:29:51.603677   13904 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.socket
	I0912 21:29:51.613966   13904 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0912 21:29:51.624184   13904 ssh_runner.go:195] Run: sudo systemctl unmask cri-docker.socket
	I0912 21:29:51.707746   13904 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I0912 21:29:51.779576   13904 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0912 21:29:51.856088   13904 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.socket
	I0912 21:29:51.867916   13904 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0912 21:29:51.877609   13904 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0912 21:29:51.948974   13904 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.service
	I0912 21:29:52.006021   13904 start.go:542] Will wait 60s for socket path /var/run/cri-dockerd.sock
	I0912 21:29:52.006104   13904 ssh_runner.go:195] Run: stat /var/run/cri-dockerd.sock
	I0912 21:29:52.009331   13904 start.go:563] Will wait 60s for crictl version
	I0912 21:29:52.009375   13904 ssh_runner.go:195] Run: which crictl
	I0912 21:29:52.012359   13904 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0912 21:29:52.044115   13904 start.go:579] Version:  0.1.0
	RuntimeName:  docker
	RuntimeVersion:  27.2.1
	RuntimeApiVersion:  v1
	I0912 21:29:52.044179   13904 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0912 21:29:52.066830   13904 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0912 21:29:52.091460   13904 out.go:235] * Preparing Kubernetes v1.31.1 on Docker 27.2.1 ...
	I0912 21:29:52.091522   13904 cli_runner.go:164] Run: docker network inspect addons-207808 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
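The --format template above turns `docker network inspect` output into a single JSON object (Name, Driver, Subnet, Gateway, MTU, ContainerIPs). A trimmed-down sketch that runs a simplified version of that template and decodes it (the MTU conditional and container loop from the real template are omitted):

	// netinfo.go - decode a JSON-shaped `docker network inspect` template.
	package main

	import (
		"encoding/json"
		"fmt"
		"os/exec"
	)

	type netInfo struct {
		Name    string `json:"Name"`
		Driver  string `json:"Driver"`
		Subnet  string `json:"Subnet"`
		Gateway string `json:"Gateway"`
	}

	func main() {
		format := `{"Name":"{{.Name}}","Driver":"{{.Driver}}",` +
			`"Subnet":"{{range .IPAM.Config}}{{.Subnet}}{{end}}",` +
			`"Gateway":"{{range .IPAM.Config}}{{.Gateway}}{{end}}"}`
		out, err := exec.Command("docker", "network", "inspect",
			"--format", format, "addons-207808").Output()
		if err != nil {
			fmt.Println(err)
			return
		}
		var ni netInfo
		if err := json.Unmarshal(out, &ni); err != nil {
			fmt.Println(err)
			return
		}
		fmt.Printf("%+v\n", ni)
	}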
	I0912 21:29:52.106808   13904 ssh_runner.go:195] Run: grep 192.168.49.1	host.minikube.internal$ /etc/hosts
	I0912 21:29:52.109983   13904 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.49.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0912 21:29:52.119381   13904 kubeadm.go:883] updating cluster {Name:addons-207808 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726156396-19616@sha256:66b06a42534e914a5c8ad765d7508a93a34031939ec9a6b3a818ef0a444ff889 Memory:4000 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:addons-207808 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0912 21:29:52.119489   13904 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime docker
	I0912 21:29:52.119535   13904 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0912 21:29:52.138148   13904 docker.go:685] Got preloaded images: -- stdout --
	registry.k8s.io/kube-apiserver:v1.31.1
	registry.k8s.io/kube-scheduler:v1.31.1
	registry.k8s.io/kube-controller-manager:v1.31.1
	registry.k8s.io/kube-proxy:v1.31.1
	registry.k8s.io/coredns/coredns:v1.11.3
	registry.k8s.io/etcd:3.5.15-0
	registry.k8s.io/pause:3.10
	gcr.io/k8s-minikube/storage-provisioner:v5
	
	-- /stdout --
	I0912 21:29:52.138169   13904 docker.go:615] Images already preloaded, skipping extraction
	I0912 21:29:52.138225   13904 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0912 21:29:52.155730   13904 docker.go:685] Got preloaded images: -- stdout --
	registry.k8s.io/kube-apiserver:v1.31.1
	registry.k8s.io/kube-scheduler:v1.31.1
	registry.k8s.io/kube-controller-manager:v1.31.1
	registry.k8s.io/kube-proxy:v1.31.1
	registry.k8s.io/coredns/coredns:v1.11.3
	registry.k8s.io/etcd:3.5.15-0
	registry.k8s.io/pause:3.10
	gcr.io/k8s-minikube/storage-provisioner:v5
	
	-- /stdout --
	I0912 21:29:52.155769   13904 cache_images.go:84] Images are preloaded, skipping loading
	I0912 21:29:52.155787   13904 kubeadm.go:934] updating node { 192.168.49.2 8443 v1.31.1 docker true true} ...
	I0912 21:29:52.155901   13904 kubeadm.go:946] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=addons-207808 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.49.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.1 ClusterName:addons-207808 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0912 21:29:52.155965   13904 ssh_runner.go:195] Run: docker info --format {{.CgroupDriver}}
	I0912 21:29:52.198408   13904 cni.go:84] Creating CNI manager for ""
	I0912 21:29:52.198432   13904 cni.go:158] "docker" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0912 21:29:52.198442   13904 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0912 21:29:52.198463   13904 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.49.2 APIServerPort:8443 KubernetesVersion:v1.31.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:addons-207808 NodeName:addons-207808 DNSDomain:cluster.local CRISocket:/var/run/cri-dockerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.49.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.49.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/cri-dockerd.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0912 21:29:52.198618   13904 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.49.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/cri-dockerd.sock
	  name: "addons-207808"
	  kubeletExtraArgs:
	    node-ip: 192.168.49.2
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.49.2"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.31.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/cri-dockerd.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
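The generated kubeadm config above is four YAML documents separated by --- (InitConfiguration, ClusterConfiguration, KubeletConfiguration, KubeProxyConfiguration), written to /var/tmp/minikube/kubeadm.yaml.new below before kubeadm init consumes it. A stdlib-only sketch that splits such a file and reports each document's kind, as a cheap offline sanity check (hypothetical helper, not part of minikube):

	// kinds.go - list the `kind:` of each YAML document in a kubeadm config.
	package main

	import (
		"fmt"
		"os"
		"strings"
	)

	func main() {
		data, err := os.ReadFile("kubeadm.yaml") // illustrative path
		if err != nil {
			fmt.Fprintln(os.Stderr, err)
			os.Exit(1)
		}
		for i, doc := range strings.Split(string(data), "\n---\n") {
			for _, line := range strings.Split(doc, "\n") {
				if strings.HasPrefix(line, "kind: ") {
					fmt.Printf("document %d: %s\n", i, strings.TrimPrefix(line, "kind: "))
				}
			}
		}
	}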
	
	I0912 21:29:52.198679   13904 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.1
	I0912 21:29:52.206712   13904 binaries.go:44] Found k8s binaries, skipping transfer
	I0912 21:29:52.206771   13904 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0912 21:29:52.214487   13904 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (312 bytes)
	I0912 21:29:52.230090   13904 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0912 21:29:52.245611   13904 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2155 bytes)
	I0912 21:29:52.261035   13904 ssh_runner.go:195] Run: grep 192.168.49.2	control-plane.minikube.internal$ /etc/hosts
	I0912 21:29:52.264243   13904 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.49.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0912 21:29:52.273857   13904 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0912 21:29:52.345655   13904 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0912 21:29:52.357921   13904 certs.go:68] Setting up /home/jenkins/minikube-integration/19616-5723/.minikube/profiles/addons-207808 for IP: 192.168.49.2
	I0912 21:29:52.357939   13904 certs.go:194] generating shared ca certs ...
	I0912 21:29:52.357951   13904 certs.go:226] acquiring lock for ca certs: {Name:mk9f28859b4d312e5b4155554040e74e885f9892 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0912 21:29:52.358065   13904 certs.go:240] generating "minikubeCA" ca cert: /home/jenkins/minikube-integration/19616-5723/.minikube/ca.key
	I0912 21:29:52.501309   13904 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19616-5723/.minikube/ca.crt ...
	I0912 21:29:52.501337   13904 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19616-5723/.minikube/ca.crt: {Name:mkc56f678fc592b8474ef4912f787bfdfc458c02 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0912 21:29:52.501524   13904 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19616-5723/.minikube/ca.key ...
	I0912 21:29:52.501538   13904 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19616-5723/.minikube/ca.key: {Name:mk60dc58220fcaaac43a0d3a605359feeb5f6cdf Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0912 21:29:52.501638   13904 certs.go:240] generating "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19616-5723/.minikube/proxy-client-ca.key
	I0912 21:29:52.670143   13904 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19616-5723/.minikube/proxy-client-ca.crt ...
	I0912 21:29:52.670172   13904 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19616-5723/.minikube/proxy-client-ca.crt: {Name:mk2698372e86c4c57b679a30ca4eb2ad1efc1cf2 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0912 21:29:52.670324   13904 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19616-5723/.minikube/proxy-client-ca.key ...
	I0912 21:29:52.670335   13904 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19616-5723/.minikube/proxy-client-ca.key: {Name:mk345935fc27c646eeea4f4259dab61db43551ab Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0912 21:29:52.670404   13904 certs.go:256] generating profile certs ...
	I0912 21:29:52.670455   13904 certs.go:363] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/19616-5723/.minikube/profiles/addons-207808/client.key
	I0912 21:29:52.670468   13904 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19616-5723/.minikube/profiles/addons-207808/client.crt with IP's: []
	I0912 21:29:52.871873   13904 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19616-5723/.minikube/profiles/addons-207808/client.crt ...
	I0912 21:29:52.871900   13904 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19616-5723/.minikube/profiles/addons-207808/client.crt: {Name:mkd3c9ab3428fe5906101cb663ee83957d0b60a3 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0912 21:29:52.872053   13904 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19616-5723/.minikube/profiles/addons-207808/client.key ...
	I0912 21:29:52.872059   13904 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19616-5723/.minikube/profiles/addons-207808/client.key: {Name:mk198c3732cf2d990e0944a2c7e8c85da4e92354 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0912 21:29:52.872123   13904 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/19616-5723/.minikube/profiles/addons-207808/apiserver.key.5b4b0a4c
	I0912 21:29:52.872141   13904 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19616-5723/.minikube/profiles/addons-207808/apiserver.crt.5b4b0a4c with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.49.2]
	I0912 21:29:52.969625   13904 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19616-5723/.minikube/profiles/addons-207808/apiserver.crt.5b4b0a4c ...
	I0912 21:29:52.969650   13904 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19616-5723/.minikube/profiles/addons-207808/apiserver.crt.5b4b0a4c: {Name:mkd4dbc4603877a34cb4e1c2a8ef90a8dbad8496 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0912 21:29:52.969789   13904 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19616-5723/.minikube/profiles/addons-207808/apiserver.key.5b4b0a4c ...
	I0912 21:29:52.969801   13904 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19616-5723/.minikube/profiles/addons-207808/apiserver.key.5b4b0a4c: {Name:mke39ebe334ed0c02224e9f7deb725dde52a4531 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0912 21:29:52.969865   13904 certs.go:381] copying /home/jenkins/minikube-integration/19616-5723/.minikube/profiles/addons-207808/apiserver.crt.5b4b0a4c -> /home/jenkins/minikube-integration/19616-5723/.minikube/profiles/addons-207808/apiserver.crt
	I0912 21:29:52.969930   13904 certs.go:385] copying /home/jenkins/minikube-integration/19616-5723/.minikube/profiles/addons-207808/apiserver.key.5b4b0a4c -> /home/jenkins/minikube-integration/19616-5723/.minikube/profiles/addons-207808/apiserver.key
	I0912 21:29:52.969975   13904 certs.go:363] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/19616-5723/.minikube/profiles/addons-207808/proxy-client.key
	I0912 21:29:52.969991   13904 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19616-5723/.minikube/profiles/addons-207808/proxy-client.crt with IP's: []
	I0912 21:29:53.257722   13904 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19616-5723/.minikube/profiles/addons-207808/proxy-client.crt ...
	I0912 21:29:53.257753   13904 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19616-5723/.minikube/profiles/addons-207808/proxy-client.crt: {Name:mkc6b51a9c8668213ba863cb3ab92fb9cefaabe9 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0912 21:29:53.257907   13904 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19616-5723/.minikube/profiles/addons-207808/proxy-client.key ...
	I0912 21:29:53.257916   13904 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19616-5723/.minikube/profiles/addons-207808/proxy-client.key: {Name:mkbe34557ed5d6f016b9d98e11742b207903fcfa Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0912 21:29:53.258071   13904 certs.go:484] found cert: /home/jenkins/minikube-integration/19616-5723/.minikube/certs/ca-key.pem (1679 bytes)
	I0912 21:29:53.258108   13904 certs.go:484] found cert: /home/jenkins/minikube-integration/19616-5723/.minikube/certs/ca.pem (1078 bytes)
	I0912 21:29:53.258131   13904 certs.go:484] found cert: /home/jenkins/minikube-integration/19616-5723/.minikube/certs/cert.pem (1123 bytes)
	I0912 21:29:53.258156   13904 certs.go:484] found cert: /home/jenkins/minikube-integration/19616-5723/.minikube/certs/key.pem (1679 bytes)
	I0912 21:29:53.258747   13904 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19616-5723/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0912 21:29:53.280799   13904 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19616-5723/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0912 21:29:53.301562   13904 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19616-5723/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0912 21:29:53.322358   13904 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19616-5723/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0912 21:29:53.343178   13904 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19616-5723/.minikube/profiles/addons-207808/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1419 bytes)
	I0912 21:29:53.363766   13904 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19616-5723/.minikube/profiles/addons-207808/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0912 21:29:53.384457   13904 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19616-5723/.minikube/profiles/addons-207808/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0912 21:29:53.405466   13904 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19616-5723/.minikube/profiles/addons-207808/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0912 21:29:53.425876   13904 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19616-5723/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0912 21:29:53.446060   13904 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0912 21:29:53.461087   13904 ssh_runner.go:195] Run: openssl version
	I0912 21:29:53.465974   13904 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0912 21:29:53.474743   13904 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0912 21:29:53.477855   13904 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Sep 12 21:29 /usr/share/ca-certificates/minikubeCA.pem
	I0912 21:29:53.477898   13904 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0912 21:29:53.484015   13904 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
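The b5213941.0 link name is OpenSSL's subject hash of minikubeCA.pem; OpenSSL-based clients locate trusted CAs in /etc/ssl/certs by that hash. A sketch reproducing the two commands above (paths copied from the log; it needs the same root privileges as the sudo command it mirrors):

	// cahash.go - symlink <subject-hash>.0 to the CA, like `ln -fs` above.
	package main

	import (
		"fmt"
		"os"
		"os/exec"
		"path/filepath"
		"strings"
	)

	func main() {
		ca := "/usr/share/ca-certificates/minikubeCA.pem"
		out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", ca).Output()
		if err != nil {
			fmt.Fprintln(os.Stderr, err)
			os.Exit(1)
		}
		hash := strings.TrimSpace(string(out)) // e.g. b5213941
		link := filepath.Join("/etc/ssl/certs", hash+".0")
		os.Remove(link) // drop any stale link, like the -f in `ln -fs`
		if err := os.Symlink(ca, link); err != nil {
			fmt.Fprintln(os.Stderr, err)
			os.Exit(1)
		}
	}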
	I0912 21:29:53.492117   13904 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0912 21:29:53.494928   13904 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0912 21:29:53.494991   13904 kubeadm.go:392] StartCluster: {Name:addons-207808 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726156396-19616@sha256:66b06a42534e914a5c8ad765d7508a93a34031939ec9a6b3a818ef0a444ff889 Memory:4000 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:addons-207808 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0912 21:29:53.495093   13904 ssh_runner.go:195] Run: docker ps --filter status=paused --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I0912 21:29:53.511274   13904 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0912 21:29:53.519103   13904 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0912 21:29:53.526797   13904 kubeadm.go:214] ignoring SystemVerification for kubeadm because of docker driver
	I0912 21:29:53.526844   13904 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0912 21:29:53.534352   13904 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0912 21:29:53.534369   13904 kubeadm.go:157] found existing configuration files:
	
	I0912 21:29:53.534411   13904 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0912 21:29:53.542031   13904 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0912 21:29:53.542085   13904 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0912 21:29:53.549542   13904 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0912 21:29:53.557004   13904 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0912 21:29:53.557059   13904 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0912 21:29:53.564348   13904 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0912 21:29:53.571907   13904 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0912 21:29:53.571955   13904 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0912 21:29:53.579217   13904 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0912 21:29:53.586635   13904 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0912 21:29:53.586682   13904 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0912 21:29:53.593751   13904 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I0912 21:29:53.627081   13904 kubeadm.go:310] [init] Using Kubernetes version: v1.31.1
	I0912 21:29:53.627128   13904 kubeadm.go:310] [preflight] Running pre-flight checks
	I0912 21:29:53.644324   13904 kubeadm.go:310] [preflight] The system verification failed. Printing the output from the verification:
	I0912 21:29:53.644399   13904 kubeadm.go:310] KERNEL_VERSION: 5.15.0-1067-gcp
	I0912 21:29:53.644464   13904 kubeadm.go:310] OS: Linux
	I0912 21:29:53.644521   13904 kubeadm.go:310] CGROUPS_CPU: enabled
	I0912 21:29:53.644560   13904 kubeadm.go:310] CGROUPS_CPUACCT: enabled
	I0912 21:29:53.644604   13904 kubeadm.go:310] CGROUPS_CPUSET: enabled
	I0912 21:29:53.644642   13904 kubeadm.go:310] CGROUPS_DEVICES: enabled
	I0912 21:29:53.644707   13904 kubeadm.go:310] CGROUPS_FREEZER: enabled
	I0912 21:29:53.644780   13904 kubeadm.go:310] CGROUPS_MEMORY: enabled
	I0912 21:29:53.644854   13904 kubeadm.go:310] CGROUPS_PIDS: enabled
	I0912 21:29:53.644906   13904 kubeadm.go:310] CGROUPS_HUGETLB: enabled
	I0912 21:29:53.644946   13904 kubeadm.go:310] CGROUPS_BLKIO: enabled
	I0912 21:29:53.691442   13904 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0912 21:29:53.691539   13904 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0912 21:29:53.691638   13904 kubeadm.go:310] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I0912 21:29:53.704053   13904 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0912 21:29:53.707645   13904 out.go:235]   - Generating certificates and keys ...
	I0912 21:29:53.707764   13904 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0912 21:29:53.707867   13904 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0912 21:29:53.845730   13904 kubeadm.go:310] [certs] Generating "apiserver-kubelet-client" certificate and key
	I0912 21:29:53.968295   13904 kubeadm.go:310] [certs] Generating "front-proxy-ca" certificate and key
	I0912 21:29:54.108860   13904 kubeadm.go:310] [certs] Generating "front-proxy-client" certificate and key
	I0912 21:29:54.281685   13904 kubeadm.go:310] [certs] Generating "etcd/ca" certificate and key
	I0912 21:29:54.377426   13904 kubeadm.go:310] [certs] Generating "etcd/server" certificate and key
	I0912 21:29:54.377599   13904 kubeadm.go:310] [certs] etcd/server serving cert is signed for DNS names [addons-207808 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
	I0912 21:29:54.659094   13904 kubeadm.go:310] [certs] Generating "etcd/peer" certificate and key
	I0912 21:29:54.659249   13904 kubeadm.go:310] [certs] etcd/peer serving cert is signed for DNS names [addons-207808 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
	I0912 21:29:54.815300   13904 kubeadm.go:310] [certs] Generating "etcd/healthcheck-client" certificate and key
	I0912 21:29:55.195621   13904 kubeadm.go:310] [certs] Generating "apiserver-etcd-client" certificate and key
	I0912 21:29:55.313792   13904 kubeadm.go:310] [certs] Generating "sa" key and public key
	I0912 21:29:55.313989   13904 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0912 21:29:55.494291   13904 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0912 21:29:55.689984   13904 kubeadm.go:310] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I0912 21:29:55.767039   13904 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0912 21:29:55.953890   13904 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0912 21:29:56.108313   13904 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0912 21:29:56.108749   13904 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0912 21:29:56.111100   13904 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0912 21:29:56.113198   13904 out.go:235]   - Booting up control plane ...
	I0912 21:29:56.113342   13904 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0912 21:29:56.113520   13904 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0912 21:29:56.113665   13904 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0912 21:29:56.122675   13904 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0912 21:29:56.127746   13904 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0912 21:29:56.127807   13904 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0912 21:29:56.212312   13904 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I0912 21:29:56.212411   13904 kubeadm.go:310] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I0912 21:29:56.714192   13904 kubeadm.go:310] [kubelet-check] The kubelet is healthy after 501.893561ms
	I0912 21:29:56.714305   13904 kubeadm.go:310] [api-check] Waiting for a healthy API server. This can take up to 4m0s
	I0912 21:30:01.215979   13904 kubeadm.go:310] [api-check] The API server is healthy after 4.501781101s
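
The [kubelet-check] and [api-check] phases above poll local health endpoints until they answer HTTP 200 or the 4m0s budget expires. A minimal Go sketch of that kind of wait loop, using the kubelet healthz URL and timeout taken from the log; illustrative only, not kubeadm's actual implementation:

package main

import (
	"fmt"
	"net/http"
	"time"
)

// waitHealthy polls url until it answers 200 OK or the timeout elapses.
func waitHealthy(url string, timeout time.Duration) error {
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		resp, err := http.Get(url)
		if err == nil {
			resp.Body.Close()
			if resp.StatusCode == http.StatusOK {
				return nil // endpoint reports healthy
			}
		}
		time.Sleep(500 * time.Millisecond) // roughly the cadence visible in the log
	}
	return fmt.Errorf("%s not healthy after %s", url, timeout)
}

func main() {
	// kubelet healthz endpoint and 4m0s budget, both from the log lines above
	if err := waitHealthy("http://127.0.0.1:10248/healthz", 4*time.Minute); err != nil {
		fmt.Println(err)
	}
}
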
	I0912 21:30:01.226470   13904 kubeadm.go:310] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0912 21:30:01.237092   13904 kubeadm.go:310] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0912 21:30:01.256086   13904 kubeadm.go:310] [upload-certs] Skipping phase. Please see --upload-certs
	I0912 21:30:01.256326   13904 kubeadm.go:310] [mark-control-plane] Marking the node addons-207808 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0912 21:30:01.263531   13904 kubeadm.go:310] [bootstrap-token] Using token: suugsy.pqb0hml2cfmbk8du
	I0912 21:30:01.265390   13904 out.go:235]   - Configuring RBAC rules ...
	I0912 21:30:01.265534   13904 kubeadm.go:310] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0912 21:30:01.268482   13904 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0912 21:30:01.275132   13904 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0912 21:30:01.277538   13904 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0912 21:30:01.279902   13904 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0912 21:30:01.282433   13904 kubeadm.go:310] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0912 21:30:01.621452   13904 kubeadm.go:310] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0912 21:30:02.058185   13904 kubeadm.go:310] [addons] Applied essential addon: CoreDNS
	I0912 21:30:02.621139   13904 kubeadm.go:310] [addons] Applied essential addon: kube-proxy
	I0912 21:30:02.621937   13904 kubeadm.go:310] 
	I0912 21:30:02.622029   13904 kubeadm.go:310] Your Kubernetes control-plane has initialized successfully!
	I0912 21:30:02.622040   13904 kubeadm.go:310] 
	I0912 21:30:02.622146   13904 kubeadm.go:310] To start using your cluster, you need to run the following as a regular user:
	I0912 21:30:02.622157   13904 kubeadm.go:310] 
	I0912 21:30:02.622198   13904 kubeadm.go:310]   mkdir -p $HOME/.kube
	I0912 21:30:02.622288   13904 kubeadm.go:310]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0912 21:30:02.622363   13904 kubeadm.go:310]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0912 21:30:02.622373   13904 kubeadm.go:310] 
	I0912 21:30:02.622467   13904 kubeadm.go:310] Alternatively, if you are the root user, you can run:
	I0912 21:30:02.622487   13904 kubeadm.go:310] 
	I0912 21:30:02.622560   13904 kubeadm.go:310]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0912 21:30:02.622570   13904 kubeadm.go:310] 
	I0912 21:30:02.622647   13904 kubeadm.go:310] You should now deploy a pod network to the cluster.
	I0912 21:30:02.622773   13904 kubeadm.go:310] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0912 21:30:02.622876   13904 kubeadm.go:310]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0912 21:30:02.622889   13904 kubeadm.go:310] 
	I0912 21:30:02.623030   13904 kubeadm.go:310] You can now join any number of control-plane nodes by copying certificate authorities
	I0912 21:30:02.623143   13904 kubeadm.go:310] and service account keys on each node and then running the following as root:
	I0912 21:30:02.623153   13904 kubeadm.go:310] 
	I0912 21:30:02.623293   13904 kubeadm.go:310]   kubeadm join control-plane.minikube.internal:8443 --token suugsy.pqb0hml2cfmbk8du \
	I0912 21:30:02.623454   13904 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:995bc7e43594bb0c046233ae8059535de7d0e8faaa285ff35d9af3858a82acc4 \
	I0912 21:30:02.623490   13904 kubeadm.go:310] 	--control-plane 
	I0912 21:30:02.623502   13904 kubeadm.go:310] 
	I0912 21:30:02.623630   13904 kubeadm.go:310] Then you can join any number of worker nodes by running the following on each as root:
	I0912 21:30:02.623639   13904 kubeadm.go:310] 
	I0912 21:30:02.623742   13904 kubeadm.go:310] kubeadm join control-plane.minikube.internal:8443 --token suugsy.pqb0hml2cfmbk8du \
	I0912 21:30:02.623878   13904 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:995bc7e43594bb0c046233ae8059535de7d0e8faaa285ff35d9af3858a82acc4 
	I0912 21:30:02.625887   13904 kubeadm.go:310] W0912 21:29:53.624636    1918 common.go:101] your configuration file uses a deprecated API spec: "kubeadm.k8s.io/v1beta3" (kind: "ClusterConfiguration"). Please use 'kubeadm config migrate --old-config old.yaml --new-config new.yaml', which will write the new, similar spec using a newer API version.
	I0912 21:30:02.626205   13904 kubeadm.go:310] W0912 21:29:53.625270    1918 common.go:101] your configuration file uses a deprecated API spec: "kubeadm.k8s.io/v1beta3" (kind: "InitConfiguration"). Please use 'kubeadm config migrate --old-config old.yaml --new-config new.yaml', which will write the new, similar spec using a newer API version.
	I0912 21:30:02.626451   13904 kubeadm.go:310] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1067-gcp\n", err: exit status 1
	I0912 21:30:02.626595   13904 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0912 21:30:02.626624   13904 cni.go:84] Creating CNI manager for ""
	I0912 21:30:02.626648   13904 cni.go:158] "docker" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0912 21:30:02.628300   13904 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0912 21:30:02.629430   13904 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0912 21:30:02.638511   13904 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
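
The scp line above drops a 496-byte bridge conflist into /etc/cni/net.d, the directory kubelet scans for CNI network configs. The sketch below writes a config of that shape to the same path; the log does not show minikube's actual file contents, so the JSON here is a generic bridge/host-local example, not the real 496-byte payload:

package main

import "os"

// A generic bridge CNI conflist. The real /etc/cni/net.d/1-k8s.conflist
// written above is not shown in the log, so this payload is illustrative.
const conflist = `{
  "cniVersion": "0.3.1",
  "name": "bridge",
  "plugins": [
    {
      "type": "bridge",
      "bridge": "bridge",
      "isDefaultGateway": true,
      "ipam": {"type": "host-local", "subnet": "10.244.0.0/16"}
    }
  ]
}`

func main() {
	// 0644 so kubelet and the CNI plugins can read the config.
	if err := os.WriteFile("/etc/cni/net.d/1-k8s.conflist", []byte(conflist), 0o644); err != nil {
		panic(err)
	}
}
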
	I0912 21:30:02.655207   13904 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0912 21:30:02.655289   13904 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0912 21:30:02.655336   13904 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes addons-207808 minikube.k8s.io/updated_at=2024_09_12T21_30_02_0700 minikube.k8s.io/version=v1.34.0 minikube.k8s.io/commit=f6bc674a17941874d4e5b792b09c1791d30622b8 minikube.k8s.io/name=addons-207808 minikube.k8s.io/primary=true
	I0912 21:30:02.737003   13904 ops.go:34] apiserver oom_adj: -16
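
The oom_adj probe above reads /proc/<pid>/oom_adj for the kube-apiserver process; the reported -16 tells the kernel's OOM killer to strongly prefer sacrificing other processes first. A hypothetical Go equivalent of the shell one-liner in the log:

package main

import (
	"fmt"
	"os"
	"os/exec"
	"strings"
)

func main() {
	// Equivalent of `cat /proc/$(pgrep kube-apiserver)/oom_adj` from the log.
	out, err := exec.Command("pgrep", "kube-apiserver").Output()
	if err != nil {
		panic(err)
	}
	pid := strings.Fields(string(out))[0] // first matching PID is enough for a sketch
	raw, err := os.ReadFile("/proc/" + pid + "/oom_adj")
	if err != nil {
		panic(err)
	}
	fmt.Printf("apiserver oom_adj: %s", raw) // the run above printed -16
}
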
	I0912 21:30:02.737033   13904 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0912 21:30:03.237125   13904 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0912 21:30:03.737319   13904 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0912 21:30:04.237387   13904 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0912 21:30:04.737103   13904 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0912 21:30:05.237237   13904 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0912 21:30:05.737702   13904 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0912 21:30:06.237275   13904 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0912 21:30:06.737490   13904 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0912 21:30:07.237069   13904 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0912 21:30:07.738042   13904 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0912 21:30:07.806460   13904 kubeadm.go:1113] duration metric: took 5.151221309s to wait for elevateKubeSystemPrivileges
	I0912 21:30:07.806494   13904 kubeadm.go:394] duration metric: took 14.311502028s to StartCluster
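
The eleven "kubectl get sa default" runs above, spaced about 500ms apart, are a readiness gate: the minikube-rbac clusterrolebinding created at 21:30:02 only takes effect once the default ServiceAccount exists, so minikube polls for it (5.15s in this run). A sketch of that poll, reusing the binary path and flags from the log; the loop itself is illustrative, not minikube's code:

package main

import (
	"fmt"
	"os/exec"
	"time"
)

func main() {
	kubectl := "/var/lib/minikube/binaries/v1.31.1/kubectl"
	for {
		// Same command the log repeats; exits 0 once the ServiceAccount exists.
		err := exec.Command("sudo", kubectl, "get", "sa", "default",
			"--kubeconfig=/var/lib/minikube/kubeconfig").Run()
		if err == nil {
			fmt.Println("default ServiceAccount is ready")
			return
		}
		time.Sleep(500 * time.Millisecond) // matches the ~500ms spacing above
	}
}
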
	I0912 21:30:07.806515   13904 settings.go:142] acquiring lock: {Name:mk2d37c2f531fa16878dd10abfcfc5daf090ef07 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0912 21:30:07.806623   13904 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/19616-5723/kubeconfig
	I0912 21:30:07.806954   13904 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19616-5723/kubeconfig: {Name:mk37c718bc544b1cff45c15afa951be50347f04b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0912 21:30:07.807200   13904 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0912 21:30:07.807209   13904 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0912 21:30:07.807267   13904 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:true csi-hostpath-driver:true dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:true gvisor:false headlamp:false helm-tiller:true inaccel:false ingress:true ingress-dns:true inspektor-gadget:true istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:true nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:true registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:true volcano:true volumesnapshots:true yakd:true]
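
Everything from here to the first kubectl apply is driven by that toEnable map: each key with value true gets its own Setting/Checking/installing sequence, which is why the following lines interleave. A sketch of reducing such a map to the sorted list of addons to configure (names abbreviated; illustrative only):

package main

import (
	"fmt"
	"sort"
)

func main() {
	// Abbreviated version of the toEnable map logged above.
	toEnable := map[string]bool{
		"registry": true, "ingress": true, "volcano": true,
		"dashboard": false, "ambassador": false, "metrics-server": true,
	}
	var enabled []string
	for name, on := range toEnable {
		if on {
			enabled = append(enabled, name)
		}
	}
	sort.Strings(enabled) // deterministic order for logging
	fmt.Println(enabled)  // [ingress metrics-server registry volcano]
}
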
	I0912 21:30:07.807379   13904 addons.go:69] Setting yakd=true in profile "addons-207808"
	I0912 21:30:07.807406   13904 addons.go:234] Setting addon yakd=true in "addons-207808"
	I0912 21:30:07.807408   13904 config.go:182] Loaded profile config "addons-207808": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I0912 21:30:07.807437   13904 host.go:66] Checking if "addons-207808" exists ...
	I0912 21:30:07.807467   13904 addons.go:69] Setting inspektor-gadget=true in profile "addons-207808"
	I0912 21:30:07.807493   13904 addons.go:234] Setting addon inspektor-gadget=true in "addons-207808"
	I0912 21:30:07.807524   13904 host.go:66] Checking if "addons-207808" exists ...
	I0912 21:30:07.807692   13904 addons.go:69] Setting gcp-auth=true in profile "addons-207808"
	I0912 21:30:07.807732   13904 mustload.go:65] Loading cluster: addons-207808
	I0912 21:30:07.807918   13904 config.go:182] Loaded profile config "addons-207808": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I0912 21:30:07.807938   13904 cli_runner.go:164] Run: docker container inspect addons-207808 --format={{.State.Status}}
	I0912 21:30:07.807972   13904 cli_runner.go:164] Run: docker container inspect addons-207808 --format={{.State.Status}}
	I0912 21:30:07.808091   13904 addons.go:69] Setting csi-hostpath-driver=true in profile "addons-207808"
	I0912 21:30:07.808155   13904 addons.go:234] Setting addon csi-hostpath-driver=true in "addons-207808"
	I0912 21:30:07.808149   13904 addons.go:69] Setting default-storageclass=true in profile "addons-207808"
	I0912 21:30:07.808186   13904 cli_runner.go:164] Run: docker container inspect addons-207808 --format={{.State.Status}}
	I0912 21:30:07.808192   13904 addons.go:69] Setting cloud-spanner=true in profile "addons-207808"
	I0912 21:30:07.808199   13904 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "addons-207808"
	I0912 21:30:07.808217   13904 addons.go:234] Setting addon cloud-spanner=true in "addons-207808"
	I0912 21:30:07.808270   13904 host.go:66] Checking if "addons-207808" exists ...
	I0912 21:30:07.808489   13904 addons.go:69] Setting helm-tiller=true in profile "addons-207808"
	I0912 21:30:07.808516   13904 cli_runner.go:164] Run: docker container inspect addons-207808 --format={{.State.Status}}
	I0912 21:30:07.808526   13904 addons.go:69] Setting ingress=true in profile "addons-207808"
	I0912 21:30:07.808544   13904 addons.go:234] Setting addon ingress=true in "addons-207808"
	I0912 21:30:07.808574   13904 host.go:66] Checking if "addons-207808" exists ...
	I0912 21:30:07.808709   13904 cli_runner.go:164] Run: docker container inspect addons-207808 --format={{.State.Status}}
	I0912 21:30:07.808186   13904 host.go:66] Checking if "addons-207808" exists ...
	I0912 21:30:07.808975   13904 cli_runner.go:164] Run: docker container inspect addons-207808 --format={{.State.Status}}
	I0912 21:30:07.809176   13904 addons.go:69] Setting ingress-dns=true in profile "addons-207808"
	I0912 21:30:07.809206   13904 addons.go:234] Setting addon ingress-dns=true in "addons-207808"
	I0912 21:30:07.809248   13904 host.go:66] Checking if "addons-207808" exists ...
	I0912 21:30:07.809250   13904 cli_runner.go:164] Run: docker container inspect addons-207808 --format={{.State.Status}}
	I0912 21:30:07.809693   13904 cli_runner.go:164] Run: docker container inspect addons-207808 --format={{.State.Status}}
	I0912 21:30:07.811129   13904 addons.go:69] Setting storage-provisioner-rancher=true in profile "addons-207808"
	I0912 21:30:07.812528   13904 out.go:177] * Verifying Kubernetes components...
	I0912 21:30:07.814051   13904 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0912 21:30:07.814167   13904 addons.go:69] Setting volumesnapshots=true in profile "addons-207808"
	I0912 21:30:07.814206   13904 addons.go:234] Setting addon volumesnapshots=true in "addons-207808"
	I0912 21:30:07.814245   13904 host.go:66] Checking if "addons-207808" exists ...
	I0912 21:30:07.814723   13904 cli_runner.go:164] Run: docker container inspect addons-207808 --format={{.State.Status}}
	I0912 21:30:07.817195   13904 addons.go:69] Setting volcano=true in profile "addons-207808"
	I0912 21:30:07.817380   13904 addons.go:234] Setting addon volcano=true in "addons-207808"
	I0912 21:30:07.817432   13904 host.go:66] Checking if "addons-207808" exists ...
	I0912 21:30:07.817922   13904 cli_runner.go:164] Run: docker container inspect addons-207808 --format={{.State.Status}}
	I0912 21:30:07.829619   13904 addons_storage_classes.go:33] enableOrDisableStorageClasses storage-provisioner-rancher=true on "addons-207808"
	I0912 21:30:07.829795   13904 addons.go:69] Setting nvidia-device-plugin=true in profile "addons-207808"
	I0912 21:30:07.808517   13904 addons.go:234] Setting addon helm-tiller=true in "addons-207808"
	I0912 21:30:07.829851   13904 host.go:66] Checking if "addons-207808" exists ...
	I0912 21:30:07.830506   13904 addons.go:69] Setting metrics-server=true in profile "addons-207808"
	I0912 21:30:07.830532   13904 cli_runner.go:164] Run: docker container inspect addons-207808 --format={{.State.Status}}
	I0912 21:30:07.830555   13904 addons.go:234] Setting addon metrics-server=true in "addons-207808"
	I0912 21:30:07.830594   13904 host.go:66] Checking if "addons-207808" exists ...
	I0912 21:30:07.830617   13904 addons.go:69] Setting registry=true in profile "addons-207808"
	I0912 21:30:07.830698   13904 addons.go:234] Setting addon nvidia-device-plugin=true in "addons-207808"
	I0912 21:30:07.830733   13904 host.go:66] Checking if "addons-207808" exists ...
	I0912 21:30:07.831277   13904 cli_runner.go:164] Run: docker container inspect addons-207808 --format={{.State.Status}}
	I0912 21:30:07.831703   13904 addons.go:69] Setting storage-provisioner=true in profile "addons-207808"
	I0912 21:30:07.831733   13904 addons.go:234] Setting addon storage-provisioner=true in "addons-207808"
	I0912 21:30:07.831752   13904 addons.go:234] Setting addon registry=true in "addons-207808"
	I0912 21:30:07.831762   13904 host.go:66] Checking if "addons-207808" exists ...
	I0912 21:30:07.831795   13904 host.go:66] Checking if "addons-207808" exists ...
	I0912 21:30:07.832268   13904 cli_runner.go:164] Run: docker container inspect addons-207808 --format={{.State.Status}}
	I0912 21:30:07.832340   13904 cli_runner.go:164] Run: docker container inspect addons-207808 --format={{.State.Status}}
	I0912 21:30:07.832880   13904 cli_runner.go:164] Run: docker container inspect addons-207808 --format={{.State.Status}}
	I0912 21:30:07.851950   13904 host.go:66] Checking if "addons-207808" exists ...
	I0912 21:30:07.852360   13904 out.go:177]   - Using image docker.io/marcnuri/yakd:0.0.5
	I0912 21:30:07.853095   13904 out.go:177]   - Using image gcr.io/k8s-minikube/minikube-ingress-dns:0.0.3
	I0912 21:30:07.853802   13904 addons.go:431] installing /etc/kubernetes/addons/yakd-ns.yaml
	I0912 21:30:07.853828   13904 ssh_runner.go:362] scp yakd/yakd-ns.yaml --> /etc/kubernetes/addons/yakd-ns.yaml (171 bytes)
	I0912 21:30:07.853933   13904 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-207808
	I0912 21:30:07.855587   13904 addons.go:431] installing /etc/kubernetes/addons/ingress-dns-pod.yaml
	I0912 21:30:07.855605   13904 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-dns-pod.yaml (2442 bytes)
	I0912 21:30:07.855653   13904 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-207808
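
The "docker container inspect -f" template repeated throughout this section resolves which host port Docker mapped to the container's 22/tcp; the sshutil lines then dial that port on 127.0.0.1 (32768 for the whole run). A sketch of the same lookup from Go, illustrative only:

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

func main() {
	// Same Go template the log passes to `docker container inspect -f`.
	format := `{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}`
	out, err := exec.Command("docker", "container", "inspect", "-f", format, "addons-207808").Output()
	if err != nil {
		panic(err)
	}
	fmt.Println("ssh host port:", strings.TrimSpace(string(out))) // 32768 in this run
}
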
	I0912 21:30:07.868159   13904 out.go:177]   - Using image ghcr.io/inspektor-gadget/inspektor-gadget:v0.32.0
	I0912 21:30:07.868587   13904 cli_runner.go:164] Run: docker container inspect addons-207808 --format={{.State.Status}}
	I0912 21:30:07.869883   13904 addons.go:234] Setting addon default-storageclass=true in "addons-207808"
	I0912 21:30:07.869925   13904 host.go:66] Checking if "addons-207808" exists ...
	I0912 21:30:07.870783   13904 cli_runner.go:164] Run: docker container inspect addons-207808 --format={{.State.Status}}
	I0912 21:30:07.879959   13904 addons.go:431] installing /etc/kubernetes/addons/ig-namespace.yaml
	I0912 21:30:07.879981   13904 ssh_runner.go:362] scp inspektor-gadget/ig-namespace.yaml --> /etc/kubernetes/addons/ig-namespace.yaml (55 bytes)
	I0912 21:30:07.880072   13904 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-207808
	I0912 21:30:07.884159   13904 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-provisioner:v3.3.0
	I0912 21:30:07.884323   13904 out.go:177]   - Using image gcr.io/cloud-spanner-emulator/emulator:1.5.23
	I0912 21:30:07.891436   13904 out.go:177]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.4.3
	I0912 21:30:07.891689   13904 addons.go:431] installing /etc/kubernetes/addons/deployment.yaml
	I0912 21:30:07.891702   13904 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/deployment.yaml (1004 bytes)
	I0912 21:30:07.891767   13904 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-207808
	I0912 21:30:07.893205   13904 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-attacher:v4.0.0
	I0912 21:30:07.894838   13904 out.go:177]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.4.3
	I0912 21:30:07.894947   13904 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-external-health-monitor-controller:v0.7.0
	I0912 21:30:07.896649   13904 out.go:177]   - Using image registry.k8s.io/ingress-nginx/controller:v1.11.2
	I0912 21:30:07.897942   13904 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-node-driver-registrar:v2.6.0
	I0912 21:30:07.898308   13904 addons.go:431] installing /etc/kubernetes/addons/ingress-deploy.yaml
	I0912 21:30:07.898325   13904 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-deploy.yaml (16078 bytes)
	I0912 21:30:07.898386   13904 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-207808
	I0912 21:30:07.900735   13904 out.go:177]   - Using image registry.k8s.io/sig-storage/hostpathplugin:v1.9.0
	I0912 21:30:07.902421   13904 out.go:177]   - Using image registry.k8s.io/sig-storage/livenessprobe:v2.8.0
	I0912 21:30:07.903913   13904 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-resizer:v1.6.0
	I0912 21:30:07.905154   13904 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-snapshotter:v6.1.0
	I0912 21:30:07.906381   13904 addons.go:431] installing /etc/kubernetes/addons/rbac-external-attacher.yaml
	I0912 21:30:07.906397   13904 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-attacher.yaml --> /etc/kubernetes/addons/rbac-external-attacher.yaml (3073 bytes)
	I0912 21:30:07.906457   13904 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-207808
	I0912 21:30:07.906667   13904 out.go:177]   - Using image registry.k8s.io/metrics-server/metrics-server:v0.7.2
	I0912 21:30:07.908223   13904 addons.go:431] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0912 21:30:07.908247   13904 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0912 21:30:07.908305   13904 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-207808
	I0912 21:30:07.915100   13904 out.go:177]   - Using image docker.io/volcanosh/vc-webhook-manager:v1.9.0
	I0912 21:30:07.917854   13904 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/19616-5723/.minikube/machines/addons-207808/id_rsa Username:docker}
	I0912 21:30:07.920461   13904 out.go:177]   - Using image registry.k8s.io/sig-storage/snapshot-controller:v6.1.0
	I0912 21:30:07.920666   13904 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0912 21:30:07.922545   13904 out.go:177]   - Using image docker.io/volcanosh/vc-controller-manager:v1.9.0
	I0912 21:30:07.924911   13904 out.go:177]   - Using image docker.io/volcanosh/vc-scheduler:v1.9.0
	I0912 21:30:07.925166   13904 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0912 21:30:07.925185   13904 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0912 21:30:07.925242   13904 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-207808
	I0912 21:30:07.925788   13904 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml
	I0912 21:30:07.925805   13904 ssh_runner.go:362] scp volumesnapshots/csi-hostpath-snapshotclass.yaml --> /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml (934 bytes)
	I0912 21:30:07.925855   13904 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-207808
	I0912 21:30:07.927237   13904 addons.go:431] installing /etc/kubernetes/addons/volcano-deployment.yaml
	I0912 21:30:07.927257   13904 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/volcano-deployment.yaml (434001 bytes)
	I0912 21:30:07.927306   13904 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-207808
	I0912 21:30:07.928536   13904 out.go:177]   - Using image ghcr.io/helm/tiller:v2.17.0
	I0912 21:30:07.929634   13904 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/19616-5723/.minikube/machines/addons-207808/id_rsa Username:docker}
	I0912 21:30:07.929635   13904 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/19616-5723/.minikube/machines/addons-207808/id_rsa Username:docker}
	I0912 21:30:07.929895   13904 addons.go:431] installing /etc/kubernetes/addons/helm-tiller-dp.yaml
	I0912 21:30:07.929986   13904 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/helm-tiller-dp.yaml (2422 bytes)
	I0912 21:30:07.930086   13904 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-207808
	I0912 21:30:07.935643   13904 out.go:177]   - Using image nvcr.io/nvidia/k8s-device-plugin:v0.16.2
	I0912 21:30:07.936974   13904 addons.go:431] installing /etc/kubernetes/addons/nvidia-device-plugin.yaml
	I0912 21:30:07.936994   13904 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/nvidia-device-plugin.yaml (1966 bytes)
	I0912 21:30:07.937053   13904 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-207808
	I0912 21:30:07.943752   13904 addons.go:234] Setting addon storage-provisioner-rancher=true in "addons-207808"
	I0912 21:30:07.943794   13904 host.go:66] Checking if "addons-207808" exists ...
	I0912 21:30:07.944227   13904 cli_runner.go:164] Run: docker container inspect addons-207808 --format={{.State.Status}}
	I0912 21:30:07.946885   13904 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/19616-5723/.minikube/machines/addons-207808/id_rsa Username:docker}
	I0912 21:30:07.961235   13904 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/19616-5723/.minikube/machines/addons-207808/id_rsa Username:docker}
	I0912 21:30:07.972598   13904 out.go:177]   - Using image docker.io/registry:2.8.3
	I0912 21:30:07.972648   13904 out.go:177]   - Using image docker.io/rancher/local-path-provisioner:v0.0.22
	I0912 21:30:07.974476   13904 out.go:177]   - Using image docker.io/busybox:stable
	I0912 21:30:07.974576   13904 out.go:177]   - Using image gcr.io/k8s-minikube/kube-registry-proxy:0.0.6
	I0912 21:30:07.975772   13904 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner-rancher.yaml
	I0912 21:30:07.975812   13904 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner-rancher.yaml (3113 bytes)
	I0912 21:30:07.975869   13904 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-207808
	I0912 21:30:07.976263   13904 addons.go:431] installing /etc/kubernetes/addons/registry-rc.yaml
	I0912 21:30:07.976277   13904 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-rc.yaml (860 bytes)
	I0912 21:30:07.976362   13904 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-207808
	I0912 21:30:07.985521   13904 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/19616-5723/.minikube/machines/addons-207808/id_rsa Username:docker}
	I0912 21:30:07.988313   13904 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/19616-5723/.minikube/machines/addons-207808/id_rsa Username:docker}
	I0912 21:30:07.989934   13904 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/19616-5723/.minikube/machines/addons-207808/id_rsa Username:docker}
	I0912 21:30:07.993205   13904 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I0912 21:30:07.993222   13904 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0912 21:30:07.993265   13904 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-207808
	I0912 21:30:08.001114   13904 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/19616-5723/.minikube/machines/addons-207808/id_rsa Username:docker}
	I0912 21:30:08.002108   13904 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/19616-5723/.minikube/machines/addons-207808/id_rsa Username:docker}
	I0912 21:30:08.007068   13904 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/19616-5723/.minikube/machines/addons-207808/id_rsa Username:docker}
	I0912 21:30:08.007215   13904 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/19616-5723/.minikube/machines/addons-207808/id_rsa Username:docker}
	I0912 21:30:08.007827   13904 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/19616-5723/.minikube/machines/addons-207808/id_rsa Username:docker}
	I0912 21:30:08.009392   13904 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/19616-5723/.minikube/machines/addons-207808/id_rsa Username:docker}
	I0912 21:30:08.016546   13904 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/19616-5723/.minikube/machines/addons-207808/id_rsa Username:docker}
	I0912 21:30:08.356145   13904 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml
	I0912 21:30:08.357522   13904 addons.go:431] installing /etc/kubernetes/addons/yakd-sa.yaml
	I0912 21:30:08.357546   13904 ssh_runner.go:362] scp yakd/yakd-sa.yaml --> /etc/kubernetes/addons/yakd-sa.yaml (247 bytes)
	I0912 21:30:08.432767   13904 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0912 21:30:08.432812   13904 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.49.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
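
The long pipeline above rewrites CoreDNS's Corefile in place: the first sed expression inserts, just before the "forward . /etc/resolv.conf" line, a hosts stanza that answers host.minikube.internal with the container gateway address; the second adds a "log" directive; and the edited ConfigMap is pushed back with "kubectl replace -f -". Reassembled from the sed script, the injected stanza is:

        hosts {
           192.168.49.1 host.minikube.internal
           fallthrough
        }
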
	I0912 21:30:08.436296   13904 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/deployment.yaml
	I0912 21:30:08.442005   13904 addons.go:431] installing /etc/kubernetes/addons/rbac-hostpath.yaml
	I0912 21:30:08.442083   13904 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-hostpath.yaml --> /etc/kubernetes/addons/rbac-hostpath.yaml (4266 bytes)
	I0912 21:30:08.536005   13904 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml
	I0912 21:30:08.551124   13904 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/volcano-deployment.yaml
	I0912 21:30:08.555937   13904 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/nvidia-device-plugin.yaml
	I0912 21:30:08.632488   13904 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0912 21:30:08.742765   13904 addons.go:431] installing /etc/kubernetes/addons/helm-tiller-rbac.yaml
	I0912 21:30:08.742854   13904 ssh_runner.go:362] scp helm-tiller/helm-tiller-rbac.yaml --> /etc/kubernetes/addons/helm-tiller-rbac.yaml (1188 bytes)
	I0912 21:30:08.744572   13904 addons.go:431] installing /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml
	I0912 21:30:08.744638   13904 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-health-monitor-controller.yaml --> /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml (3038 bytes)
	I0912 21:30:08.832315   13904 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0912 21:30:08.834470   13904 addons.go:431] installing /etc/kubernetes/addons/yakd-crb.yaml
	I0912 21:30:08.834499   13904 ssh_runner.go:362] scp yakd/yakd-crb.yaml --> /etc/kubernetes/addons/yakd-crb.yaml (422 bytes)
	I0912 21:30:08.835582   13904 addons.go:431] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml
	I0912 21:30:08.835659   13904 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshotclasses.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml (6471 bytes)
	I0912 21:30:08.838599   13904 addons.go:431] installing /etc/kubernetes/addons/ig-serviceaccount.yaml
	I0912 21:30:08.838616   13904 ssh_runner.go:362] scp inspektor-gadget/ig-serviceaccount.yaml --> /etc/kubernetes/addons/ig-serviceaccount.yaml (80 bytes)
	I0912 21:30:08.852405   13904 addons.go:431] installing /etc/kubernetes/addons/registry-svc.yaml
	I0912 21:30:08.852486   13904 ssh_runner.go:362] scp registry/registry-svc.yaml --> /etc/kubernetes/addons/registry-svc.yaml (398 bytes)
	I0912 21:30:08.853062   13904 addons.go:431] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0912 21:30:08.853124   13904 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1907 bytes)
	I0912 21:30:09.036409   13904 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner-rancher.yaml
	I0912 21:30:09.042999   13904 addons.go:431] installing /etc/kubernetes/addons/helm-tiller-svc.yaml
	I0912 21:30:09.043087   13904 ssh_runner.go:362] scp helm-tiller/helm-tiller-svc.yaml --> /etc/kubernetes/addons/helm-tiller-svc.yaml (951 bytes)
	I0912 21:30:09.052930   13904 addons.go:431] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml
	I0912 21:30:09.053018   13904 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshotcontents.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml (23126 bytes)
	I0912 21:30:09.231669   13904 addons.go:431] installing /etc/kubernetes/addons/rbac-external-provisioner.yaml
	I0912 21:30:09.231701   13904 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-provisioner.yaml --> /etc/kubernetes/addons/rbac-external-provisioner.yaml (4442 bytes)
	I0912 21:30:09.236060   13904 addons.go:431] installing /etc/kubernetes/addons/ig-role.yaml
	I0912 21:30:09.236087   13904 ssh_runner.go:362] scp inspektor-gadget/ig-role.yaml --> /etc/kubernetes/addons/ig-role.yaml (210 bytes)
	I0912 21:30:09.242132   13904 addons.go:431] installing /etc/kubernetes/addons/yakd-svc.yaml
	I0912 21:30:09.242214   13904 ssh_runner.go:362] scp yakd/yakd-svc.yaml --> /etc/kubernetes/addons/yakd-svc.yaml (412 bytes)
	I0912 21:30:09.345938   13904 addons.go:431] installing /etc/kubernetes/addons/registry-proxy.yaml
	I0912 21:30:09.346049   13904 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-proxy.yaml (947 bytes)
	I0912 21:30:09.352811   13904 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/helm-tiller-dp.yaml -f /etc/kubernetes/addons/helm-tiller-rbac.yaml -f /etc/kubernetes/addons/helm-tiller-svc.yaml
	I0912 21:30:09.434642   13904 addons.go:431] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0912 21:30:09.434682   13904 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0912 21:30:09.447579   13904 addons.go:431] installing /etc/kubernetes/addons/yakd-dp.yaml
	I0912 21:30:09.447660   13904 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/yakd-dp.yaml (2017 bytes)
	I0912 21:30:09.640196   13904 addons.go:431] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml
	I0912 21:30:09.640226   13904 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshots.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml (19582 bytes)
	I0912 21:30:09.650073   13904 addons.go:431] installing /etc/kubernetes/addons/ig-rolebinding.yaml
	I0912 21:30:09.650169   13904 ssh_runner.go:362] scp inspektor-gadget/ig-rolebinding.yaml --> /etc/kubernetes/addons/ig-rolebinding.yaml (244 bytes)
	I0912 21:30:09.733822   13904 addons.go:431] installing /etc/kubernetes/addons/rbac-external-resizer.yaml
	I0912 21:30:09.733916   13904 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-resizer.yaml --> /etc/kubernetes/addons/rbac-external-resizer.yaml (2943 bytes)
	I0912 21:30:09.751079   13904 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml
	I0912 21:30:09.934582   13904 addons.go:431] installing /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml
	I0912 21:30:09.934615   13904 ssh_runner.go:362] scp volumesnapshots/rbac-volume-snapshot-controller.yaml --> /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml (3545 bytes)
	I0912 21:30:10.041680   13904 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/yakd-ns.yaml -f /etc/kubernetes/addons/yakd-sa.yaml -f /etc/kubernetes/addons/yakd-crb.yaml -f /etc/kubernetes/addons/yakd-svc.yaml -f /etc/kubernetes/addons/yakd-dp.yaml
	I0912 21:30:10.142656   13904 addons.go:431] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0912 21:30:10.142746   13904 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0912 21:30:10.335641   13904 addons.go:431] installing /etc/kubernetes/addons/rbac-external-snapshotter.yaml
	I0912 21:30:10.335738   13904 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-snapshotter.yaml --> /etc/kubernetes/addons/rbac-external-snapshotter.yaml (3149 bytes)
	I0912 21:30:10.538666   13904 addons.go:431] installing /etc/kubernetes/addons/ig-clusterrole.yaml
	I0912 21:30:10.538749   13904 ssh_runner.go:362] scp inspektor-gadget/ig-clusterrole.yaml --> /etc/kubernetes/addons/ig-clusterrole.yaml (1485 bytes)
	I0912 21:30:10.732748   13904 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0912 21:30:10.733037   13904 addons.go:431] installing /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I0912 21:30:10.733157   13904 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml (1475 bytes)
	I0912 21:30:10.737671   13904 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-attacher.yaml
	I0912 21:30:10.737744   13904 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-attacher.yaml (2143 bytes)
	I0912 21:30:11.047414   13904 addons.go:431] installing /etc/kubernetes/addons/ig-clusterrolebinding.yaml
	I0912 21:30:11.047452   13904 ssh_runner.go:362] scp inspektor-gadget/ig-clusterrolebinding.yaml --> /etc/kubernetes/addons/ig-clusterrolebinding.yaml (274 bytes)
	I0912 21:30:11.132893   13904 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml
	I0912 21:30:11.132936   13904 ssh_runner.go:362] scp csi-hostpath-driver/deploy/csi-hostpath-driverinfo.yaml --> /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml (1274 bytes)
	I0912 21:30:11.140665   13904 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I0912 21:30:11.152927   13904 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml: (2.796737958s)
	I0912 21:30:11.153096   13904 ssh_runner.go:235] Completed: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.49.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -": (2.720264145s)
	I0912 21:30:11.153139   13904 start.go:971] {"host.minikube.internal": 192.168.49.1} host record injected into CoreDNS's ConfigMap
	I0912 21:30:11.154425   13904 ssh_runner.go:235] Completed: sudo systemctl start kubelet: (2.721624502s)
	I0912 21:30:11.155560   13904 node_ready.go:35] waiting up to 6m0s for node "addons-207808" to be "Ready" ...
	I0912 21:30:11.154540   13904 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/deployment.yaml: (2.718153623s)
	I0912 21:30:11.238150   13904 node_ready.go:49] node "addons-207808" has status "Ready":"True"
	I0912 21:30:11.238235   13904 node_ready.go:38] duration metric: took 82.56405ms for node "addons-207808" to be "Ready" ...
	I0912 21:30:11.238261   13904 pod_ready.go:36] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0912 21:30:11.259345   13904 pod_ready.go:79] waiting up to 6m0s for pod "coredns-7c65d6cfc9-7whgg" in "kube-system" namespace to be "Ready" ...
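
The pod_ready waits above and below amount to polling each pod's Ready condition until it reports "True". A sketch of the same check via kubectl's standard jsonpath output, using the pod name from this run; the loop is illustrative, not minikube's code:

package main

import (
	"fmt"
	"os/exec"
	"strings"
	"time"
)

func main() {
	// Select the status of the pod's Ready condition.
	jsonpath := `jsonpath={.status.conditions[?(@.type=="Ready")].status}`
	for {
		out, _ := exec.Command("kubectl", "--context", "addons-207808",
			"-n", "kube-system", "get", "pod", "coredns-7c65d6cfc9-7whgg",
			"-o", jsonpath).Output()
		if strings.TrimSpace(string(out)) == "True" {
			fmt.Println("pod is Ready")
			return
		}
		time.Sleep(2 * time.Second)
	}
}
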
	I0912 21:30:11.552386   13904 addons.go:431] installing /etc/kubernetes/addons/ig-crd.yaml
	I0912 21:30:11.552475   13904 ssh_runner.go:362] scp inspektor-gadget/ig-crd.yaml --> /etc/kubernetes/addons/ig-crd.yaml (5216 bytes)
	I0912 21:30:11.733035   13904 kapi.go:214] "coredns" deployment in "kube-system" namespace and "addons-207808" context rescaled to 1 replicas
	I0912 21:30:11.848740   13904 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-plugin.yaml
	I0912 21:30:11.848767   13904 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-plugin.yaml (8201 bytes)
	I0912 21:30:12.344774   13904 addons.go:431] installing /etc/kubernetes/addons/ig-daemonset.yaml
	I0912 21:30:12.344862   13904 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-daemonset.yaml (7735 bytes)
	I0912 21:30:12.538814   13904 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-resizer.yaml
	I0912 21:30:12.538885   13904 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-resizer.yaml (2191 bytes)
	I0912 21:30:12.738265   13904 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/ig-namespace.yaml -f /etc/kubernetes/addons/ig-serviceaccount.yaml -f /etc/kubernetes/addons/ig-role.yaml -f /etc/kubernetes/addons/ig-rolebinding.yaml -f /etc/kubernetes/addons/ig-clusterrole.yaml -f /etc/kubernetes/addons/ig-clusterrolebinding.yaml -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-daemonset.yaml
	I0912 21:30:12.844695   13904 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
	I0912 21:30:12.844800   13904 ssh_runner.go:362] scp csi-hostpath-driver/deploy/csi-hostpath-storageclass.yaml --> /etc/kubernetes/addons/csi-hostpath-storageclass.yaml (846 bytes)
	I0912 21:30:13.337022   13904 pod_ready.go:93] pod "coredns-7c65d6cfc9-7whgg" in "kube-system" namespace has status "Ready":"True"
	I0912 21:30:13.337115   13904 pod_ready.go:82] duration metric: took 2.07769498s for pod "coredns-7c65d6cfc9-7whgg" in "kube-system" namespace to be "Ready" ...
	I0912 21:30:13.337140   13904 pod_ready.go:79] waiting up to 6m0s for pod "coredns-7c65d6cfc9-nqb66" in "kube-system" namespace to be "Ready" ...
	I0912 21:30:13.451623   13904 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
	I0912 21:30:14.344145   13904 pod_ready.go:93] pod "coredns-7c65d6cfc9-nqb66" in "kube-system" namespace has status "Ready":"True"
	I0912 21:30:14.344408   13904 pod_ready.go:82] duration metric: took 1.007247965s for pod "coredns-7c65d6cfc9-nqb66" in "kube-system" namespace to be "Ready" ...
	I0912 21:30:14.344443   13904 pod_ready.go:79] waiting up to 6m0s for pod "etcd-addons-207808" in "kube-system" namespace to be "Ready" ...
	I0912 21:30:14.938318   13904 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_application_credentials.json (162 bytes)
	I0912 21:30:14.938490   13904 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-207808
	I0912 21:30:14.965422   13904 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/19616-5723/.minikube/machines/addons-207808/id_rsa Username:docker}
	I0912 21:30:15.545132   13904 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_cloud_project (12 bytes)
	I0912 21:30:15.748464   13904 addons.go:234] Setting addon gcp-auth=true in "addons-207808"
	I0912 21:30:15.748538   13904 host.go:66] Checking if "addons-207808" exists ...
	I0912 21:30:15.749071   13904 cli_runner.go:164] Run: docker container inspect addons-207808 --format={{.State.Status}}
	I0912 21:30:15.767098   13904 ssh_runner.go:195] Run: cat /var/lib/minikube/google_application_credentials.json
	I0912 21:30:15.767154   13904 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-207808
	I0912 21:30:15.782591   13904 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/19616-5723/.minikube/machines/addons-207808/id_rsa Username:docker}
	I0912 21:30:16.352296   13904 pod_ready.go:93] pod "etcd-addons-207808" in "kube-system" namespace has status "Ready":"True"
	I0912 21:30:16.352320   13904 pod_ready.go:82] duration metric: took 2.007858822s for pod "etcd-addons-207808" in "kube-system" namespace to be "Ready" ...
	I0912 21:30:16.352333   13904 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-addons-207808" in "kube-system" namespace to be "Ready" ...
	I0912 21:30:16.939200   13904 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml: (8.403098741s)
	I0912 21:30:16.939239   13904 addons.go:475] Verifying addon ingress=true in "addons-207808"
	I0912 21:30:16.940721   13904 out.go:177] * Verifying ingress addon...
	I0912 21:30:16.943465   13904 kapi.go:75] Waiting for pod with label "app.kubernetes.io/name=ingress-nginx" in ns "ingress-nginx" ...
	I0912 21:30:16.950514   13904 kapi.go:86] Found 3 Pods for label selector app.kubernetes.io/name=ingress-nginx
	I0912 21:30:16.950543   13904 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0912 21:30:17.439238   13904 pod_ready.go:93] pod "kube-apiserver-addons-207808" in "kube-system" namespace has status "Ready":"True"
	I0912 21:30:17.439324   13904 pod_ready.go:82] duration metric: took 1.08698066s for pod "kube-apiserver-addons-207808" in "kube-system" namespace to be "Ready" ...
	I0912 21:30:17.439351   13904 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-addons-207808" in "kube-system" namespace to be "Ready" ...
	I0912 21:30:17.451657   13904 pod_ready.go:93] pod "kube-controller-manager-addons-207808" in "kube-system" namespace has status "Ready":"True"
	I0912 21:30:17.451712   13904 pod_ready.go:82] duration metric: took 12.312718ms for pod "kube-controller-manager-addons-207808" in "kube-system" namespace to be "Ready" ...
	I0912 21:30:17.451734   13904 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-2xmvv" in "kube-system" namespace to be "Ready" ...
	I0912 21:30:17.453840   13904 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0912 21:30:17.457782   13904 pod_ready.go:93] pod "kube-proxy-2xmvv" in "kube-system" namespace has status "Ready":"True"
	I0912 21:30:17.457801   13904 pod_ready.go:82] duration metric: took 6.053061ms for pod "kube-proxy-2xmvv" in "kube-system" namespace to be "Ready" ...
	I0912 21:30:17.457821   13904 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-addons-207808" in "kube-system" namespace to be "Ready" ...
	I0912 21:30:17.545776   13904 pod_ready.go:93] pod "kube-scheduler-addons-207808" in "kube-system" namespace has status "Ready":"True"
	I0912 21:30:17.545801   13904 pod_ready.go:82] duration metric: took 87.972704ms for pod "kube-scheduler-addons-207808" in "kube-system" namespace to be "Ready" ...
	I0912 21:30:17.545824   13904 pod_ready.go:79] waiting up to 6m0s for pod "nvidia-device-plugin-daemonset-mc6cs" in "kube-system" namespace to be "Ready" ...
	I0912 21:30:17.949464   13904 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0912 21:30:18.447795   13904 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0912 21:30:18.948000   13904 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0912 21:30:19.448900   13904 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0912 21:30:19.644304   13904 pod_ready.go:103] pod "nvidia-device-plugin-daemonset-mc6cs" in "kube-system" namespace has status "Ready":"False"
	I0912 21:30:19.949266   13904 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0912 21:30:20.149915   13904 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/volcano-deployment.yaml: (11.598727194s)
	I0912 21:30:20.150005   13904 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/nvidia-device-plugin.yaml: (11.594042737s)
	I0912 21:30:20.150072   13904 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (11.517497989s)
	I0912 21:30:20.150347   13904 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (11.317948181s)
	I0912 21:30:20.150462   13904 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner-rancher.yaml: (11.114026793s)
	I0912 21:30:20.150695   13904 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/helm-tiller-dp.yaml -f /etc/kubernetes/addons/helm-tiller-rbac.yaml -f /etc/kubernetes/addons/helm-tiller-svc.yaml: (10.797802569s)
	I0912 21:30:20.150772   13904 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml: (10.399665414s)
	I0912 21:30:20.150817   13904 addons.go:475] Verifying addon registry=true in "addons-207808"
	I0912 21:30:20.151243   13904 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/yakd-ns.yaml -f /etc/kubernetes/addons/yakd-sa.yaml -f /etc/kubernetes/addons/yakd-crb.yaml -f /etc/kubernetes/addons/yakd-svc.yaml -f /etc/kubernetes/addons/yakd-dp.yaml: (10.109515385s)
	I0912 21:30:20.151527   13904 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (9.010765773s)
	W0912 21:30:20.151564   13904 addons.go:457] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
	stdout:
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
	serviceaccount/snapshot-controller created
	clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
	clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
	role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	deployment.apps/snapshot-controller created
	
	stderr:
	error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
	ensure CRDs are installed first
	I0912 21:30:20.151584   13904 retry.go:31] will retry after 155.618092ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
	stdout:
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
	serviceaccount/snapshot-controller created
	clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
	clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
	role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	deployment.apps/snapshot-controller created
	
	stderr:
	error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
	ensure CRDs are installed first
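
	The failure above is a CRD establishment race: the same `kubectl apply` batch submits a VolumeSnapshotClass custom resource alongside the CRDs that define it, so the first pass dies with "no matches for kind ... ensure CRDs are installed first" until the API server has registered the new types. The retry helper then re-runs the batch (at 21:30:20.308108 below, with `--force`), and the second attempt completes about 2.4s later. A minimal sketch of that retry shape in Go, assuming `kubectl` on PATH — `applyWithRetry`, the attempt count, and the backoff values are illustrative, not minikube's actual code:

	package addons

	import (
		"fmt"
		"os/exec"
		"time"
	)

	// applyWithRetry re-runs `kubectl apply -f ...` until it succeeds or the
	// attempts run out, doubling the pause between tries. This mirrors the
	// retry.go behaviour in the log: the apply fails while the CRDs are being
	// established, then succeeds on a later pass.
	func applyWithRetry(files []string, attempts int) error {
		args := []string{"apply"}
		for _, f := range files {
			args = append(args, "-f", f)
		}
		backoff := 150 * time.Millisecond
		var lastErr error
		for i := 0; i < attempts; i++ {
			out, err := exec.Command("kubectl", args...).CombinedOutput()
			if err == nil {
				return nil
			}
			lastErr = fmt.Errorf("apply failed: %w: %s", err, out)
			time.Sleep(backoff)
			backoff *= 2
		}
		return lastErr
	}
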
	I0912 21:30:20.151695   13904 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/ig-namespace.yaml -f /etc/kubernetes/addons/ig-serviceaccount.yaml -f /etc/kubernetes/addons/ig-role.yaml -f /etc/kubernetes/addons/ig-rolebinding.yaml -f /etc/kubernetes/addons/ig-clusterrole.yaml -f /etc/kubernetes/addons/ig-clusterrolebinding.yaml -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-daemonset.yaml: (7.413394096s)
	I0912 21:30:20.151764   13904 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (9.418302916s)
	I0912 21:30:20.151788   13904 addons.go:475] Verifying addon metrics-server=true in "addons-207808"
	I0912 21:30:20.153302   13904 out.go:177] * To access YAKD - Kubernetes Dashboard, wait for Pod to be ready and run the following command:
	
		minikube -p addons-207808 service yakd-dashboard -n yakd-dashboard
	
	I0912 21:30:20.153383   13904 out.go:177] * Verifying registry addon...
	I0912 21:30:20.156421   13904 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=registry" in ns "kube-system" ...
	I0912 21:30:20.235688   13904 kapi.go:86] Found 2 Pods for label selector kubernetes.io/minikube-addons=registry
	I0912 21:30:20.235728   13904 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
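
	The kapi.go lines that dominate the remainder of this log are a label-selector poll: list the pods matching the selector in the namespace, report the current state, and try again on an interval until every pod is Running or the timeout expires. A rough client-go equivalent (a hedged sketch; the real kapi.go differs in details such as interval and state reporting):

	package addons

	import (
		"context"
		"time"

		corev1 "k8s.io/api/core/v1"
		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/apimachinery/pkg/util/wait"
		"k8s.io/client-go/kubernetes"
	)

	// waitForPods polls until every pod matching the selector is Running.
	func waitForPods(cs kubernetes.Interface, ns, selector string, timeout time.Duration) error {
		return wait.PollUntilContextTimeout(context.TODO(), 500*time.Millisecond, timeout, true,
			func(ctx context.Context) (bool, error) {
				pods, err := cs.CoreV1().Pods(ns).List(ctx, metav1.ListOptions{LabelSelector: selector})
				if err != nil || len(pods.Items) == 0 {
					return false, nil // tolerate transient errors and empty lists; keep polling
				}
				for _, p := range pods.Items {
					if p.Status.Phase != corev1.PodRunning {
						return false, nil // e.g. the Pending states logged below
					}
				}
				return true, nil
			})
	}
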
	W0912 21:30:20.240686   13904 out.go:270] ! Enabling 'default-storageclass' returned an error: running callbacks: [Error making standard the default storage class: Error while marking storage class local-path as non-default: Operation cannot be fulfilled on storageclasses.storage.k8s.io "local-path": the object has been modified; please apply your changes to the latest version and try again]
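
	The 'default-storageclass' warning is a separate, benign failure: two writers raced to update the `local-path` StorageClass, and the losing write was rejected with Kubernetes' optimistic-concurrency error ("the object has been modified"). The conventional fix is to re-read the object and retry the mutation, which client-go packages as a helper (a minimal sketch, assuming an existing clientset; the annotation key is the standard default-class marker):

	package addons

	import (
		"context"

		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/client-go/kubernetes"
		"k8s.io/client-go/util/retry"
	)

	// markNonDefault clears the default-class annotation, re-fetching and
	// retrying whenever the Update loses a conflict to a concurrent writer.
	func markNonDefault(cs kubernetes.Interface, name string) error {
		return retry.RetryOnConflict(retry.DefaultRetry, func() error {
			sc, err := cs.StorageV1().StorageClasses().Get(context.TODO(), name, metav1.GetOptions{})
			if err != nil {
				return err
			}
			if sc.Annotations == nil {
				sc.Annotations = map[string]string{}
			}
			sc.Annotations["storageclass.kubernetes.io/is-default-class"] = "false"
			_, err = cs.StorageV1().StorageClasses().Update(context.TODO(), sc, metav1.UpdateOptions{})
			return err // a Conflict here triggers another Get+Update round
		})
	}
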
	I0912 21:30:20.308108   13904 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I0912 21:30:20.448654   13904 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0912 21:30:20.659877   13904 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0912 21:30:20.947920   13904 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0912 21:30:21.160682   13904 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0912 21:30:21.452180   13904 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0912 21:30:21.654933   13904 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml: (8.203187167s)
	I0912 21:30:21.655059   13904 addons.go:475] Verifying addon csi-hostpath-driver=true in "addons-207808"
	I0912 21:30:21.655104   13904 ssh_runner.go:235] Completed: cat /var/lib/minikube/google_application_credentials.json: (5.887972523s)
	I0912 21:30:21.656354   13904 out.go:177]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.4.3
	I0912 21:30:21.656358   13904 out.go:177] * Verifying csi-hostpath-driver addon...
	I0912 21:30:21.660598   13904 out.go:177]   - Using image gcr.io/k8s-minikube/gcp-auth-webhook:v0.1.2
	I0912 21:30:21.661248   13904 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=csi-hostpath-driver" in ns "kube-system" ...
	I0912 21:30:21.661967   13904 addons.go:431] installing /etc/kubernetes/addons/gcp-auth-ns.yaml
	I0912 21:30:21.661985   13904 ssh_runner.go:362] scp gcp-auth/gcp-auth-ns.yaml --> /etc/kubernetes/addons/gcp-auth-ns.yaml (700 bytes)
	I0912 21:30:21.663689   13904 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0912 21:30:21.741800   13904 kapi.go:86] Found 3 Pods for label selector kubernetes.io/minikube-addons=csi-hostpath-driver
	I0912 21:30:21.741891   13904 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0912 21:30:21.753017   13904 addons.go:431] installing /etc/kubernetes/addons/gcp-auth-service.yaml
	I0912 21:30:21.753047   13904 ssh_runner.go:362] scp gcp-auth/gcp-auth-service.yaml --> /etc/kubernetes/addons/gcp-auth-service.yaml (788 bytes)
	I0912 21:30:21.840422   13904 addons.go:431] installing /etc/kubernetes/addons/gcp-auth-webhook.yaml
	I0912 21:30:21.840447   13904 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/gcp-auth-webhook.yaml (5421 bytes)
	I0912 21:30:21.862292   13904 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/gcp-auth-ns.yaml -f /etc/kubernetes/addons/gcp-auth-service.yaml -f /etc/kubernetes/addons/gcp-auth-webhook.yaml
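
	The gcp-auth sequence above shows the general addon install mechanism: each manifest is copied onto the node (the ssh_runner.go:362 "scp ... -->" lines), then the whole set is applied in one kubectl invocation against the in-cluster kubeconfig. A local stand-in for that two-step flow (illustrative only — `installManifests` is hypothetical, and os.WriteFile substitutes for the SSH copy):

	package addons

	import (
		"os"
		"os/exec"
		"path/filepath"
	)

	// installManifests writes each manifest under dir, then applies them all
	// in a single `kubectl apply` call, as the log's scp+apply pair does.
	func installManifests(dir string, manifests map[string][]byte) error {
		args := []string{"apply"}
		for name, data := range manifests {
			dst := filepath.Join(dir, name)
			if err := os.WriteFile(dst, data, 0o644); err != nil {
				return err
			}
			args = append(args, "-f", dst)
		}
		return exec.Command("kubectl", args...).Run()
	}
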
	I0912 21:30:21.947774   13904 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0912 21:30:22.054161   13904 pod_ready.go:103] pod "nvidia-device-plugin-daemonset-mc6cs" in "kube-system" namespace has status "Ready":"False"
	I0912 21:30:22.160775   13904 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0912 21:30:22.233496   13904 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0912 21:30:22.449255   13904 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0912 21:30:22.733609   13904 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0912 21:30:22.735827   13904 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0912 21:30:22.741754   13904 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (2.433590338s)
	I0912 21:30:22.947946   13904 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0912 21:30:23.160545   13904 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0912 21:30:23.161961   13904 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/gcp-auth-ns.yaml -f /etc/kubernetes/addons/gcp-auth-service.yaml -f /etc/kubernetes/addons/gcp-auth-webhook.yaml: (1.299631456s)
	I0912 21:30:23.163570   13904 addons.go:475] Verifying addon gcp-auth=true in "addons-207808"
	I0912 21:30:23.165096   13904 out.go:177] * Verifying gcp-auth addon...
	I0912 21:30:23.167335   13904 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=gcp-auth" in ns "gcp-auth" ...
	I0912 21:30:23.260081   13904 kapi.go:86] Found 0 Pods for label selector kubernetes.io/minikube-addons=gcp-auth
	I0912 21:30:23.261458   13904 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0912 21:30:23.447131   13904 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0912 21:30:23.660613   13904 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0912 21:30:23.665744   13904 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0912 21:30:23.946815   13904 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0912 21:30:24.160059   13904 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0912 21:30:24.164984   13904 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0912 21:30:24.447754   13904 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0912 21:30:24.551924   13904 pod_ready.go:103] pod "nvidia-device-plugin-daemonset-mc6cs" in "kube-system" namespace has status "Ready":"False"
	I0912 21:30:24.660085   13904 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0912 21:30:24.664952   13904 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0912 21:30:24.948454   13904 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0912 21:30:25.160188   13904 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0912 21:30:25.165199   13904 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0912 21:30:25.447595   13904 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0912 21:30:25.660038   13904 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0912 21:30:25.665049   13904 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0912 21:30:25.947840   13904 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0912 21:30:26.159978   13904 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0912 21:30:26.261664   13904 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0912 21:30:26.448235   13904 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0912 21:30:26.660356   13904 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0912 21:30:26.665050   13904 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0912 21:30:26.947800   13904 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0912 21:30:27.051442   13904 pod_ready.go:103] pod "nvidia-device-plugin-daemonset-mc6cs" in "kube-system" namespace has status "Ready":"False"
	I0912 21:30:27.160318   13904 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0912 21:30:27.165177   13904 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0912 21:30:27.448086   13904 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0912 21:30:27.659735   13904 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0912 21:30:27.664619   13904 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0912 21:30:27.947381   13904 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0912 21:30:28.159346   13904 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0912 21:30:28.164714   13904 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0912 21:30:28.447018   13904 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0912 21:30:28.659741   13904 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0912 21:30:28.664185   13904 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0912 21:30:28.947758   13904 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0912 21:30:29.051644   13904 pod_ready.go:103] pod "nvidia-device-plugin-daemonset-mc6cs" in "kube-system" namespace has status "Ready":"False"
	I0912 21:30:29.160155   13904 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0912 21:30:29.164774   13904 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0912 21:30:29.447454   13904 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0912 21:30:29.659647   13904 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0912 21:30:29.665292   13904 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0912 21:30:29.947716   13904 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0912 21:30:30.160127   13904 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0912 21:30:30.166468   13904 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0912 21:30:30.447841   13904 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0912 21:30:30.660915   13904 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0912 21:30:30.665244   13904 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0912 21:30:30.948260   13904 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0912 21:30:31.160507   13904 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0912 21:30:31.165986   13904 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0912 21:30:31.448229   13904 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0912 21:30:31.551785   13904 pod_ready.go:103] pod "nvidia-device-plugin-daemonset-mc6cs" in "kube-system" namespace has status "Ready":"False"
	I0912 21:30:31.659644   13904 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0912 21:30:31.666011   13904 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0912 21:30:31.948914   13904 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0912 21:30:32.160511   13904 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0912 21:30:32.165258   13904 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0912 21:30:32.447119   13904 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0912 21:30:32.660275   13904 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0912 21:30:32.664890   13904 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0912 21:30:32.947478   13904 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0912 21:30:33.160218   13904 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0912 21:30:33.166234   13904 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0912 21:30:33.447375   13904 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0912 21:30:33.659899   13904 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0912 21:30:33.664821   13904 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0912 21:30:33.947232   13904 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0912 21:30:34.050941   13904 pod_ready.go:103] pod "nvidia-device-plugin-daemonset-mc6cs" in "kube-system" namespace has status "Ready":"False"
	I0912 21:30:34.159975   13904 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0912 21:30:34.164605   13904 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0912 21:30:34.447165   13904 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0912 21:30:34.659870   13904 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0912 21:30:34.664944   13904 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0912 21:30:34.948013   13904 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0912 21:30:35.160047   13904 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0912 21:30:35.164972   13904 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0912 21:30:35.448373   13904 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0912 21:30:35.659879   13904 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0912 21:30:35.665766   13904 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0912 21:30:35.948430   13904 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0912 21:30:36.052145   13904 pod_ready.go:103] pod "nvidia-device-plugin-daemonset-mc6cs" in "kube-system" namespace has status "Ready":"False"
	I0912 21:30:36.160052   13904 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0912 21:30:36.165199   13904 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0912 21:30:36.449655   13904 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0912 21:30:36.660268   13904 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0912 21:30:36.664951   13904 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0912 21:30:36.948535   13904 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0912 21:30:37.160535   13904 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0912 21:30:37.165956   13904 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0912 21:30:37.447626   13904 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0912 21:30:37.659739   13904 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0912 21:30:37.665888   13904 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0912 21:30:37.948186   13904 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0912 21:30:38.160115   13904 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0912 21:30:38.165217   13904 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0912 21:30:38.448609   13904 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0912 21:30:38.552072   13904 pod_ready.go:103] pod "nvidia-device-plugin-daemonset-mc6cs" in "kube-system" namespace has status "Ready":"False"
	I0912 21:30:38.660386   13904 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0912 21:30:38.665659   13904 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0912 21:30:38.947114   13904 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0912 21:30:39.159376   13904 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0912 21:30:39.164933   13904 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0912 21:30:39.447487   13904 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0912 21:30:39.660199   13904 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0912 21:30:39.665022   13904 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0912 21:30:39.947935   13904 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0912 21:30:40.159915   13904 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0912 21:30:40.164757   13904 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0912 21:30:40.446645   13904 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0912 21:30:40.551089   13904 pod_ready.go:93] pod "nvidia-device-plugin-daemonset-mc6cs" in "kube-system" namespace has status "Ready":"True"
	I0912 21:30:40.551116   13904 pod_ready.go:82] duration metric: took 23.005278837s for pod "nvidia-device-plugin-daemonset-mc6cs" in "kube-system" namespace to be "Ready" ...
	I0912 21:30:40.551127   13904 pod_ready.go:39] duration metric: took 29.312842s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0912 21:30:40.551150   13904 api_server.go:52] waiting for apiserver process to appear ...
	I0912 21:30:40.551215   13904 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0912 21:30:40.564545   13904 api_server.go:72] duration metric: took 32.757307666s to wait for apiserver process to appear ...
	I0912 21:30:40.564569   13904 api_server.go:88] waiting for apiserver healthz status ...
	I0912 21:30:40.564585   13904 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I0912 21:30:40.568868   13904 api_server.go:279] https://192.168.49.2:8443/healthz returned 200:
	ok
	I0912 21:30:40.569599   13904 api_server.go:141] control plane version: v1.31.1
	I0912 21:30:40.569620   13904 api_server.go:131] duration metric: took 5.045542ms to wait for apiserver health ...
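
	The healthz step at 21:30:40 is an HTTPS GET against the apiserver that expects a 200 with body "ok", followed by a control-plane version probe. An equivalent check (illustrative; minikube trusts the cluster CA, whereas this sketch skips verification for brevity):

	package addons

	import (
		"crypto/tls"
		"fmt"
		"io"
		"net/http"
		"time"
	)

	// checkHealthz fetches <endpoint>/healthz and requires a 200 response,
	// matching the "returned 200: ok" lines above.
	func checkHealthz(endpoint string) error {
		client := &http.Client{
			Timeout: 5 * time.Second,
			// Assumption for brevity: skip TLS verification. Real code
			// should trust the cluster's CA certificate instead.
			Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
		}
		resp, err := client.Get(endpoint + "/healthz")
		if err != nil {
			return err
		}
		defer resp.Body.Close()
		body, _ := io.ReadAll(resp.Body)
		if resp.StatusCode != http.StatusOK {
			return fmt.Errorf("healthz returned %d: %s", resp.StatusCode, body)
		}
		return nil
	}
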
	I0912 21:30:40.569627   13904 system_pods.go:43] waiting for kube-system pods to appear ...
	I0912 21:30:40.576038   13904 system_pods.go:59] 18 kube-system pods found
	I0912 21:30:40.576067   13904 system_pods.go:61] "coredns-7c65d6cfc9-nqb66" [44c5fa36-5441-48c7-a7bd-8e2d821c77c0] Running
	I0912 21:30:40.576075   13904 system_pods.go:61] "csi-hostpath-attacher-0" [f6815d37-6a4d-44f0-a067-d649e3a441a7] Pending / Ready:ContainersNotReady (containers with unready status: [csi-attacher]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-attacher])
	I0912 21:30:40.576080   13904 system_pods.go:61] "csi-hostpath-resizer-0" [00aca749-1720-46ee-8e3d-37d6ff6aabfd] Pending / Ready:ContainersNotReady (containers with unready status: [csi-resizer]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-resizer])
	I0912 21:30:40.576088   13904 system_pods.go:61] "csi-hostpathplugin-5dpdr" [79114267-b5df-4335-a1bc-43b76311472c] Pending / Ready:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter])
	I0912 21:30:40.576093   13904 system_pods.go:61] "etcd-addons-207808" [55583cad-3793-4a65-b549-341872f500f2] Running
	I0912 21:30:40.576097   13904 system_pods.go:61] "kube-apiserver-addons-207808" [62f90147-f7b6-4a55-98e3-e6c6c657bb9f] Running
	I0912 21:30:40.576101   13904 system_pods.go:61] "kube-controller-manager-addons-207808" [770fb5d8-a95f-4c79-8890-b4b3967d8ba0] Running
	I0912 21:30:40.576104   13904 system_pods.go:61] "kube-ingress-dns-minikube" [13cca3f9-c8f2-4cc9-8605-5d8961e06c0c] Running
	I0912 21:30:40.576107   13904 system_pods.go:61] "kube-proxy-2xmvv" [82d22286-ca1b-4a37-88ea-a0dc0c1fa9fd] Running
	I0912 21:30:40.576111   13904 system_pods.go:61] "kube-scheduler-addons-207808" [7e3b3ace-ac55-4804-b7c9-819dc64a505f] Running
	I0912 21:30:40.576115   13904 system_pods.go:61] "metrics-server-84c5f94fbc-qp9pj" [467286ab-a1a8-4e01-aef7-f92c567162ba] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0912 21:30:40.576121   13904 system_pods.go:61] "nvidia-device-plugin-daemonset-mc6cs" [1c6b255b-a9a3-49d2-9fac-3dee50123ecc] Running
	I0912 21:30:40.576127   13904 system_pods.go:61] "registry-66c9cd494c-mdbsb" [6646693e-e468-4f8c-a209-9f028e31da67] Pending / Ready:ContainersNotReady (containers with unready status: [registry]) / ContainersReady:ContainersNotReady (containers with unready status: [registry])
	I0912 21:30:40.576135   13904 system_pods.go:61] "registry-proxy-fjxbz" [6340cd55-7e16-4315-8b01-5e879a2b0d76] Pending / Ready:ContainersNotReady (containers with unready status: [registry-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-proxy])
	I0912 21:30:40.576142   13904 system_pods.go:61] "snapshot-controller-56fcc65765-lc6mh" [75d1061f-fc5b-42bc-a091-c587ce534a9a] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I0912 21:30:40.576149   13904 system_pods.go:61] "snapshot-controller-56fcc65765-tczjb" [f4c0d99d-69a7-411d-bd82-833a4a9dc9a4] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I0912 21:30:40.576155   13904 system_pods.go:61] "storage-provisioner" [62c01ea5-9b66-45e4-9e9f-1ab26c0298a2] Running
	I0912 21:30:40.576161   13904 system_pods.go:61] "tiller-deploy-b48cc5f79-lnb7p" [7df8afba-a05c-403e-a96c-3556b198e183] Pending / Ready:ContainersNotReady (containers with unready status: [tiller]) / ContainersReady:ContainersNotReady (containers with unready status: [tiller])
	I0912 21:30:40.576168   13904 system_pods.go:74] duration metric: took 6.536762ms to wait for pod list to return data ...
	I0912 21:30:40.576178   13904 default_sa.go:34] waiting for default service account to be created ...
	I0912 21:30:40.577942   13904 default_sa.go:45] found service account: "default"
	I0912 21:30:40.577959   13904 default_sa.go:55] duration metric: took 1.774448ms for default service account to be created ...
	I0912 21:30:40.577967   13904 system_pods.go:116] waiting for k8s-apps to be running ...
	I0912 21:30:40.584402   13904 system_pods.go:86] 18 kube-system pods found
	I0912 21:30:40.584426   13904 system_pods.go:89] "coredns-7c65d6cfc9-nqb66" [44c5fa36-5441-48c7-a7bd-8e2d821c77c0] Running
	I0912 21:30:40.584434   13904 system_pods.go:89] "csi-hostpath-attacher-0" [f6815d37-6a4d-44f0-a067-d649e3a441a7] Pending / Ready:ContainersNotReady (containers with unready status: [csi-attacher]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-attacher])
	I0912 21:30:40.584442   13904 system_pods.go:89] "csi-hostpath-resizer-0" [00aca749-1720-46ee-8e3d-37d6ff6aabfd] Pending / Ready:ContainersNotReady (containers with unready status: [csi-resizer]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-resizer])
	I0912 21:30:40.584451   13904 system_pods.go:89] "csi-hostpathplugin-5dpdr" [79114267-b5df-4335-a1bc-43b76311472c] Pending / Ready:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter])
	I0912 21:30:40.584455   13904 system_pods.go:89] "etcd-addons-207808" [55583cad-3793-4a65-b549-341872f500f2] Running
	I0912 21:30:40.584461   13904 system_pods.go:89] "kube-apiserver-addons-207808" [62f90147-f7b6-4a55-98e3-e6c6c657bb9f] Running
	I0912 21:30:40.584466   13904 system_pods.go:89] "kube-controller-manager-addons-207808" [770fb5d8-a95f-4c79-8890-b4b3967d8ba0] Running
	I0912 21:30:40.584471   13904 system_pods.go:89] "kube-ingress-dns-minikube" [13cca3f9-c8f2-4cc9-8605-5d8961e06c0c] Running
	I0912 21:30:40.584474   13904 system_pods.go:89] "kube-proxy-2xmvv" [82d22286-ca1b-4a37-88ea-a0dc0c1fa9fd] Running
	I0912 21:30:40.584478   13904 system_pods.go:89] "kube-scheduler-addons-207808" [7e3b3ace-ac55-4804-b7c9-819dc64a505f] Running
	I0912 21:30:40.584485   13904 system_pods.go:89] "metrics-server-84c5f94fbc-qp9pj" [467286ab-a1a8-4e01-aef7-f92c567162ba] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0912 21:30:40.584489   13904 system_pods.go:89] "nvidia-device-plugin-daemonset-mc6cs" [1c6b255b-a9a3-49d2-9fac-3dee50123ecc] Running
	I0912 21:30:40.584497   13904 system_pods.go:89] "registry-66c9cd494c-mdbsb" [6646693e-e468-4f8c-a209-9f028e31da67] Pending / Ready:ContainersNotReady (containers with unready status: [registry]) / ContainersReady:ContainersNotReady (containers with unready status: [registry])
	I0912 21:30:40.584504   13904 system_pods.go:89] "registry-proxy-fjxbz" [6340cd55-7e16-4315-8b01-5e879a2b0d76] Pending / Ready:ContainersNotReady (containers with unready status: [registry-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-proxy])
	I0912 21:30:40.584512   13904 system_pods.go:89] "snapshot-controller-56fcc65765-lc6mh" [75d1061f-fc5b-42bc-a091-c587ce534a9a] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I0912 21:30:40.584518   13904 system_pods.go:89] "snapshot-controller-56fcc65765-tczjb" [f4c0d99d-69a7-411d-bd82-833a4a9dc9a4] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I0912 21:30:40.584524   13904 system_pods.go:89] "storage-provisioner" [62c01ea5-9b66-45e4-9e9f-1ab26c0298a2] Running
	I0912 21:30:40.584531   13904 system_pods.go:89] "tiller-deploy-b48cc5f79-lnb7p" [7df8afba-a05c-403e-a96c-3556b198e183] Pending / Ready:ContainersNotReady (containers with unready status: [tiller]) / ContainersReady:ContainersNotReady (containers with unready status: [tiller])
	I0912 21:30:40.584540   13904 system_pods.go:126] duration metric: took 6.568081ms to wait for k8s-apps to be running ...
	I0912 21:30:40.584548   13904 system_svc.go:44] waiting for kubelet service to be running ....
	I0912 21:30:40.584587   13904 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0912 21:30:40.595535   13904 system_svc.go:56] duration metric: took 10.97749ms WaitForService to wait for kubelet
	I0912 21:30:40.595564   13904 kubeadm.go:582] duration metric: took 32.788327796s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0912 21:30:40.595586   13904 node_conditions.go:102] verifying NodePressure condition ...
	I0912 21:30:40.598377   13904 node_conditions.go:122] node storage ephemeral capacity is 304681132Ki
	I0912 21:30:40.598400   13904 node_conditions.go:123] node cpu capacity is 8
	I0912 21:30:40.598412   13904 node_conditions.go:105] duration metric: took 2.821948ms to run NodePressure ...
	I0912 21:30:40.598423   13904 start.go:241] waiting for startup goroutines ...
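
	The NodePressure verification reads the figures it logs (304681132Ki ephemeral storage, 8 CPUs) from the Node object's status. Roughly, with client-go (a hedged sketch; `printNodeCapacity` is illustrative, not minikube's code):

	package addons

	import (
		"context"
		"fmt"

		corev1 "k8s.io/api/core/v1"
		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/client-go/kubernetes"
	)

	// printNodeCapacity lists nodes and reports the two capacity figures
	// checked above: ephemeral storage and CPU count.
	func printNodeCapacity(cs kubernetes.Interface) error {
		nodes, err := cs.CoreV1().Nodes().List(context.TODO(), metav1.ListOptions{})
		if err != nil {
			return err
		}
		for _, n := range nodes.Items {
			storage := n.Status.Capacity[corev1.ResourceEphemeralStorage]
			cpu := n.Status.Capacity[corev1.ResourceCPU]
			fmt.Printf("%s: ephemeral-storage=%s cpu=%s\n", n.Name, storage.String(), cpu.String())
		}
		return nil
	}
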
	I0912 21:30:40.659881   13904 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0912 21:30:40.666903   13904 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0912 21:30:40.947605   13904 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0912 21:30:41.160323   13904 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0912 21:30:41.165222   13904 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0912 21:30:41.447099   13904 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0912 21:30:41.659469   13904 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0912 21:30:41.665630   13904 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0912 21:30:41.947973   13904 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0912 21:30:42.160688   13904 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0912 21:30:42.165715   13904 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0912 21:30:42.447275   13904 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0912 21:30:42.659736   13904 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0912 21:30:42.665903   13904 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0912 21:30:42.947755   13904 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0912 21:30:43.161041   13904 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0912 21:30:43.164986   13904 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0912 21:30:43.447649   13904 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0912 21:30:43.659984   13904 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0912 21:30:43.664792   13904 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0912 21:30:43.947048   13904 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0912 21:30:44.160589   13904 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0912 21:30:44.165201   13904 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0912 21:30:44.447733   13904 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0912 21:30:44.659815   13904 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0912 21:30:44.664978   13904 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0912 21:30:44.947222   13904 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0912 21:30:45.160020   13904 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0912 21:30:45.164822   13904 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0912 21:30:45.447503   13904 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0912 21:30:45.659636   13904 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0912 21:30:45.665385   13904 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0912 21:30:45.947634   13904 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0912 21:30:46.159848   13904 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0912 21:30:46.165311   13904 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0912 21:30:46.447981   13904 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0912 21:30:46.659658   13904 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0912 21:30:46.665225   13904 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0912 21:30:46.948001   13904 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0912 21:30:47.160123   13904 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0912 21:30:47.164745   13904 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0912 21:30:47.448312   13904 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0912 21:30:47.659830   13904 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0912 21:30:47.664969   13904 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0912 21:30:47.948241   13904 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0912 21:30:48.159374   13904 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0912 21:30:48.165844   13904 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0912 21:30:48.447122   13904 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0912 21:30:48.659854   13904 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0912 21:30:48.665473   13904 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0912 21:30:48.947634   13904 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0912 21:30:49.159931   13904 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0912 21:30:49.164920   13904 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0912 21:30:49.448060   13904 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0912 21:30:49.660720   13904 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0912 21:30:49.666057   13904 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0912 21:30:49.947771   13904 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0912 21:30:50.228658   13904 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0912 21:30:50.229173   13904 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0912 21:30:50.448399   13904 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0912 21:30:50.660099   13904 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0912 21:30:50.664684   13904 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0912 21:30:50.947600   13904 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0912 21:30:51.160337   13904 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0912 21:30:51.165719   13904 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0912 21:30:51.447592   13904 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0912 21:30:51.659863   13904 kapi.go:107] duration metric: took 31.503437258s to wait for kubernetes.io/minikube-addons=registry ...
	I0912 21:30:51.664880   13904 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0912 21:30:51.947586   13904 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0912 21:30:52.165571   13904 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0912 21:30:52.448265   13904 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0912 21:30:52.665878   13904 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0912 21:30:52.947298   13904 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0912 21:30:53.165119   13904 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0912 21:30:53.447785   13904 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0912 21:30:53.666609   13904 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0912 21:30:53.947629   13904 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0912 21:30:54.166123   13904 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0912 21:30:54.447977   13904 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0912 21:30:54.666094   13904 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0912 21:30:54.947244   13904 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0912 21:30:55.165021   13904 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0912 21:30:55.448257   13904 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0912 21:30:55.664711   13904 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0912 21:30:55.947476   13904 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0912 21:30:56.166526   13904 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0912 21:30:56.448650   13904 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0912 21:30:56.666252   13904 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0912 21:30:56.948523   13904 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0912 21:30:57.166291   13904 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0912 21:30:57.447780   13904 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0912 21:30:57.666084   13904 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0912 21:30:57.948401   13904 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0912 21:30:58.165936   13904 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0912 21:30:58.447558   13904 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0912 21:30:58.665968   13904 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0912 21:30:58.953178   13904 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0912 21:30:59.165310   13904 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0912 21:30:59.448175   13904 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0912 21:30:59.665585   13904 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0912 21:30:59.947281   13904 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0912 21:31:00.165965   13904 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0912 21:31:00.450768   13904 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0912 21:31:00.666303   13904 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0912 21:31:00.947818   13904 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0912 21:31:01.166482   13904 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0912 21:31:01.447446   13904 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0912 21:31:01.665844   13904 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0912 21:31:01.947302   13904 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0912 21:31:02.165094   13904 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0912 21:31:02.447921   13904 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0912 21:31:02.666262   13904 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0912 21:31:02.948207   13904 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0912 21:31:03.166246   13904 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0912 21:31:03.448027   13904 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0912 21:31:03.666604   13904 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0912 21:31:03.947791   13904 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0912 21:31:04.166495   13904 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0912 21:31:04.447472   13904 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0912 21:31:04.665618   13904 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0912 21:31:04.947708   13904 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0912 21:31:05.166284   13904 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0912 21:31:05.447016   13904 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0912 21:31:05.664924   13904 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0912 21:31:05.947625   13904 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0912 21:31:06.165914   13904 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0912 21:31:06.447756   13904 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0912 21:31:06.666707   13904 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0912 21:31:06.948213   13904 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0912 21:31:07.166481   13904 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0912 21:31:07.447660   13904 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0912 21:31:07.666042   13904 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0912 21:31:07.948893   13904 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0912 21:31:08.166206   13904 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0912 21:31:08.448068   13904 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0912 21:31:08.666661   13904 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0912 21:31:08.947701   13904 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0912 21:31:09.165825   13904 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0912 21:31:09.447918   13904 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0912 21:31:09.666559   13904 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0912 21:31:09.947213   13904 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0912 21:31:10.165677   13904 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0912 21:31:10.447390   13904 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0912 21:31:10.666303   13904 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0912 21:31:10.947467   13904 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0912 21:31:11.166447   13904 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0912 21:31:11.447651   13904 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0912 21:31:11.666650   13904 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0912 21:31:11.948675   13904 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0912 21:31:12.166424   13904 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0912 21:31:12.447357   13904 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0912 21:31:12.665383   13904 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0912 21:31:12.948079   13904 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0912 21:31:13.166897   13904 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0912 21:31:13.448474   13904 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0912 21:31:13.665744   13904 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0912 21:31:13.947985   13904 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0912 21:31:14.165753   13904 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0912 21:31:14.448718   13904 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0912 21:31:14.665611   13904 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0912 21:31:14.948568   13904 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0912 21:31:15.165693   13904 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0912 21:31:15.447695   13904 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0912 21:31:15.665813   13904 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0912 21:31:15.947838   13904 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0912 21:31:16.166934   13904 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0912 21:31:16.447387   13904 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0912 21:31:16.665915   13904 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0912 21:31:16.947940   13904 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0912 21:31:17.166407   13904 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0912 21:31:17.447963   13904 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0912 21:31:17.665187   13904 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0912 21:31:17.947195   13904 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0912 21:31:18.164674   13904 kapi.go:107] duration metric: took 56.503424612s to wait for kubernetes.io/minikube-addons=csi-hostpath-driver ...
	I0912 21:31:18.447140   13904 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0912 21:31:18.947219   13904 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0912 21:31:19.447617   13904 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0912 21:31:19.946901   13904 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0912 21:31:20.448110   13904 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0912 21:31:20.947280   13904 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0912 21:31:21.448340   13904 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0912 21:31:21.948960   13904 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0912 21:31:22.447861   13904 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0912 21:31:22.948709   13904 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0912 21:31:23.448294   13904 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0912 21:31:23.947640   13904 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0912 21:31:24.447243   13904 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0912 21:31:24.947949   13904 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0912 21:31:25.583873   13904 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0912 21:31:25.948214   13904 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0912 21:31:26.447710   13904 kapi.go:107] duration metric: took 1m9.504245832s to wait for app.kubernetes.io/name=ingress-nginx ...
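
The kapi.go:96 lines above are minikube's addon wait loop: it lists pods matching a label selector on a roughly 500ms cadence (visible in the timestamps) and logs the phase until every match is Running, at which point kapi.go:107 records the total wait. A minimal client-go sketch of that pattern (an illustration only, not minikube's actual kapi.go; the waitForPods name and signature are invented here):

```go
package waitutil

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
)

// waitForPods polls pods matching selector in ns until all of them report
// phase Running or the timeout elapses, logging the phase on every miss,
// the shape of the "waiting for pod ... current state: Pending" lines above.
func waitForPods(ctx context.Context, cs kubernetes.Interface, ns, selector string, timeout time.Duration) error {
	start := time.Now()
	for time.Since(start) < timeout {
		pods, err := cs.CoreV1().Pods(ns).List(ctx, metav1.ListOptions{LabelSelector: selector})
		if err == nil && len(pods.Items) > 0 {
			allRunning := true
			for _, p := range pods.Items {
				if p.Status.Phase != corev1.PodRunning {
					fmt.Printf("waiting for pod %q, current state: %s\n", selector, p.Status.Phase)
					allRunning = false
					break
				}
			}
			if allRunning {
				fmt.Printf("duration metric: took %s to wait for %s\n", time.Since(start), selector)
				return nil
			}
		}
		time.Sleep(500 * time.Millisecond) // approx. 500ms cadence, matching the log timestamps
	}
	return fmt.Errorf("timed out waiting for %s", selector)
}
```
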
	I0912 21:31:46.173201   13904 kapi.go:86] Found 1 Pods for label selector kubernetes.io/minikube-addons=gcp-auth
	I0912 21:31:46.173232   13904 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0912 21:31:46.670456   13904 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0912 21:31:47.170522   13904 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0912 21:31:47.670101   13904 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0912 21:31:48.170271   13904 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0912 21:31:48.670604   13904 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0912 21:31:49.170459   13904 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0912 21:31:49.670439   13904 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0912 21:31:50.170226   13904 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0912 21:31:50.670944   13904 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0912 21:31:51.170943   13904 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0912 21:31:51.670810   13904 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0912 21:31:52.171348   13904 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0912 21:31:52.669875   13904 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0912 21:31:53.171954   13904 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0912 21:31:53.670753   13904 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0912 21:31:54.170226   13904 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0912 21:31:54.671055   13904 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0912 21:31:55.170472   13904 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0912 21:31:55.670659   13904 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0912 21:31:56.170435   13904 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0912 21:31:56.670749   13904 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0912 21:31:57.170329   13904 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0912 21:31:57.669848   13904 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0912 21:31:58.171511   13904 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0912 21:31:58.670025   13904 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0912 21:31:59.170491   13904 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0912 21:31:59.671119   13904 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0912 21:32:00.170627   13904 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0912 21:32:00.670721   13904 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0912 21:32:01.170511   13904 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0912 21:32:01.670667   13904 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0912 21:32:02.170913   13904 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0912 21:32:02.671054   13904 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0912 21:32:03.173204   13904 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0912 21:32:03.670852   13904 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0912 21:32:04.170848   13904 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0912 21:32:04.670920   13904 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0912 21:32:05.170756   13904 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0912 21:32:05.670898   13904 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0912 21:32:06.171041   13904 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0912 21:32:06.670185   13904 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0912 21:32:07.170559   13904 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0912 21:32:07.670465   13904 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0912 21:32:08.170073   13904 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0912 21:32:08.670545   13904 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0912 21:32:09.170700   13904 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0912 21:32:09.670647   13904 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0912 21:32:10.170715   13904 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0912 21:32:10.670581   13904 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0912 21:32:11.170533   13904 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0912 21:32:11.670708   13904 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0912 21:32:12.170717   13904 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0912 21:32:12.670471   13904 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0912 21:32:13.169967   13904 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0912 21:32:13.671087   13904 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0912 21:32:14.171012   13904 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0912 21:32:14.670632   13904 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0912 21:32:15.170655   13904 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0912 21:32:15.670651   13904 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0912 21:32:16.170159   13904 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0912 21:32:16.670410   13904 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0912 21:32:17.170242   13904 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0912 21:32:17.670909   13904 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0912 21:32:18.171075   13904 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0912 21:32:18.670699   13904 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0912 21:32:19.170445   13904 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0912 21:32:19.670364   13904 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0912 21:32:20.169838   13904 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0912 21:32:20.670889   13904 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0912 21:32:21.171015   13904 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0912 21:32:21.671209   13904 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0912 21:32:22.170780   13904 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0912 21:32:22.670909   13904 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0912 21:32:23.170400   13904 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0912 21:32:23.670358   13904 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0912 21:32:24.169929   13904 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0912 21:32:24.670662   13904 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0912 21:32:25.170685   13904 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0912 21:32:25.670298   13904 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0912 21:32:26.169796   13904 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0912 21:32:26.671171   13904 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0912 21:32:27.170859   13904 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0912 21:32:27.670719   13904 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0912 21:32:28.170438   13904 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0912 21:32:28.670156   13904 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0912 21:32:29.170262   13904 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0912 21:32:29.671181   13904 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0912 21:32:30.170705   13904 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0912 21:32:30.670792   13904 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0912 21:32:31.170924   13904 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0912 21:32:31.671207   13904 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0912 21:32:32.170842   13904 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0912 21:32:32.670890   13904 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0912 21:32:33.170733   13904 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0912 21:32:33.671013   13904 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0912 21:32:34.170473   13904 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0912 21:32:34.670252   13904 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0912 21:32:35.171137   13904 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0912 21:32:35.670943   13904 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0912 21:32:36.170487   13904 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0912 21:32:36.670605   13904 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0912 21:32:37.170412   13904 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0912 21:32:37.670929   13904 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0912 21:32:38.170486   13904 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0912 21:32:38.670138   13904 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0912 21:32:39.170514   13904 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0912 21:32:39.670690   13904 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0912 21:32:40.170114   13904 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0912 21:32:40.670954   13904 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0912 21:32:41.171321   13904 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0912 21:32:41.670253   13904 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0912 21:32:42.170903   13904 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0912 21:32:42.670885   13904 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0912 21:32:43.170881   13904 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0912 21:32:43.670654   13904 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0912 21:32:44.170210   13904 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0912 21:32:44.670572   13904 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0912 21:32:45.170045   13904 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0912 21:32:45.671048   13904 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0912 21:32:46.170763   13904 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0912 21:32:46.670759   13904 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0912 21:32:47.170711   13904 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0912 21:32:47.671063   13904 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0912 21:32:48.170722   13904 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0912 21:32:48.670606   13904 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0912 21:32:49.170502   13904 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0912 21:32:49.670499   13904 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0912 21:32:50.170308   13904 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0912 21:32:50.670315   13904 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0912 21:32:51.170160   13904 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0912 21:32:51.670372   13904 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0912 21:32:52.170447   13904 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0912 21:32:52.670234   13904 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0912 21:32:53.170924   13904 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0912 21:32:53.671151   13904 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0912 21:32:54.170599   13904 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0912 21:32:54.670269   13904 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0912 21:32:55.170452   13904 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0912 21:32:55.670831   13904 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0912 21:32:56.170438   13904 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0912 21:32:56.670610   13904 kapi.go:107] duration metric: took 2m33.503269793s to wait for kubernetes.io/minikube-addons=gcp-auth ...
	I0912 21:32:56.672262   13904 out.go:177] * Your GCP credentials will now be mounted into every pod created in the addons-207808 cluster.
	I0912 21:32:56.673512   13904 out.go:177] * If you don't want your credentials mounted into a specific pod, add a label with the `gcp-auth-skip-secret` key to your pod configuration.
	I0912 21:32:56.674932   13904 out.go:177] * If you want existing pods to be mounted with credentials, either recreate them or rerun addons enable with --refresh.
	I0912 21:32:56.676163   13904 out.go:177] * Enabled addons: ingress-dns, cloud-spanner, volcano, nvidia-device-plugin, storage-provisioner, helm-tiller, inspektor-gadget, metrics-server, yakd, storage-provisioner-rancher, volumesnapshots, registry, csi-hostpath-driver, ingress, gcp-auth
	I0912 21:32:56.677537   13904 addons.go:510] duration metric: took 2m48.870266282s for enable addons: enabled=[ingress-dns cloud-spanner volcano nvidia-device-plugin storage-provisioner helm-tiller inspektor-gadget metrics-server yakd storage-provisioner-rancher volumesnapshots registry csi-hostpath-driver ingress gcp-auth]
	I0912 21:32:56.677586   13904 start.go:246] waiting for cluster config update ...
	I0912 21:32:56.677613   13904 start.go:255] writing updated cluster config ...
	I0912 21:32:56.677871   13904 ssh_runner.go:195] Run: rm -f paused
	I0912 21:32:56.725486   13904 start.go:600] kubectl: 1.31.0, cluster: 1.31.1 (minor skew: 0)
	I0912 21:32:56.727190   13904 out.go:177] * Done! kubectl is now configured to use "addons-207808" cluster and "default" namespace by default
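
Per the gcp-auth notes in the output above, opting a pod out of credential mounting is a single label on the pod. A hedged sketch of what that looks like when building a pod spec with client-go; the "true" value is an assumption, since the message only names the gcp-auth-skip-secret key:

```go
package podspec

import (
	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

// skipGCPAuthPod builds a pod that the gcp-auth webhook should leave alone:
// the gcp-auth-skip-secret label is the opt-out named in the addon output
// above. The "true" value is an assumption; the log only specifies the key.
func skipGCPAuthPod() *corev1.Pod {
	return &corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{
			Name:   "no-gcp-creds",
			Labels: map[string]string{"gcp-auth-skip-secret": "true"},
		},
		Spec: corev1.PodSpec{
			Containers: []corev1.Container{
				{Name: "app", Image: "nginx:alpine"},
			},
		},
	}
}
```
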
	
	
	==> Docker <==
	Sep 12 21:42:23 addons-207808 dockerd[1336]: time="2024-09-12T21:42:23.151871292Z" level=info msg="ignoring event" container=a3638bd0a4280dfe885223d87d8e50f9f34928cba99225ae06528c31797152aa module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Sep 12 21:42:23 addons-207808 dockerd[1336]: time="2024-09-12T21:42:23.343464854Z" level=info msg="ignoring event" container=60c1923675e82074f18d0f835a978ad7f4b377abb04e8f796bffc47becf90dba module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Sep 12 21:42:23 addons-207808 dockerd[1336]: time="2024-09-12T21:42:23.364492132Z" level=info msg="ignoring event" container=2054bb549cfe8fd31558100a32c3f65bb88ea53eecb94f25fc2b9a14747d9348 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Sep 12 21:42:25 addons-207808 dockerd[1336]: time="2024-09-12T21:42:25.477309282Z" level=info msg="ignoring event" container=90b12721628d414d615c930f90b560e234d24af0ce198bf3e857768ded6f68a1 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Sep 12 21:42:29 addons-207808 cri-dockerd[1600]: time="2024-09-12T21:42:29Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/b6e7db378c3cd3f2e8eed4ae53adafc8d2e09d5b28e3548c239883be504d15c8/resolv.conf as [nameserver 10.96.0.10 search default.svc.cluster.local svc.cluster.local cluster.local europe-west1-b.c.k8s-minikube.internal c.k8s-minikube.internal google.internal options ndots:5]"
	Sep 12 21:42:29 addons-207808 dockerd[1336]: time="2024-09-12T21:42:29.769894325Z" level=info msg="ignoring event" container=f516a09ad09165c94a45d23bdb0a5e9fde77a4dbec47ae5fde8af52ae1642f50 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Sep 12 21:42:29 addons-207808 dockerd[1336]: time="2024-09-12T21:42:29.892554753Z" level=info msg="ignoring event" container=abdbea8991fa2d3b8d4dbd4ba546e7f268a8b880a0c5296d869191ab5a46d5c0 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Sep 12 21:42:31 addons-207808 dockerd[1336]: time="2024-09-12T21:42:31.881886899Z" level=info msg="Container failed to exit within 30s of signal 15 - using the force" container=5f3b0a50b0d1d081bd93e083f67a509be98935e222df3105b9ec18d61793c6eb
	Sep 12 21:42:31 addons-207808 dockerd[1336]: time="2024-09-12T21:42:31.902803627Z" level=info msg="ignoring event" container=5f3b0a50b0d1d081bd93e083f67a509be98935e222df3105b9ec18d61793c6eb module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Sep 12 21:42:32 addons-207808 dockerd[1336]: time="2024-09-12T21:42:32.024801355Z" level=info msg="ignoring event" container=faad0b94ce6fd08c1404da42fa4a1d9481c764f556b9757a64b282e31887f092 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Sep 12 21:42:32 addons-207808 cri-dockerd[1600]: time="2024-09-12T21:42:32Z" level=info msg="Stop pulling image docker.io/nginx:alpine: Status: Downloaded newer image for nginx:alpine"
	Sep 12 21:42:34 addons-207808 dockerd[1336]: time="2024-09-12T21:42:34.052977877Z" level=info msg="Attempting next endpoint for pull after error: Head \"https://gcr.io/v2/k8s-minikube/busybox/manifests/latest\": unauthorized: authentication failed"
	Sep 12 21:42:34 addons-207808 dockerd[1336]: time="2024-09-12T21:42:34.055137119Z" level=error msg="Handler for POST /v1.43/images/create returned error: Head \"https://gcr.io/v2/k8s-minikube/busybox/manifests/latest\": unauthorized: authentication failed"
	Sep 12 21:42:41 addons-207808 cri-dockerd[1600]: time="2024-09-12T21:42:41Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/021b0712666e2706de5ceba9945caa927e89c67b2e46a6342a45a3bb2ca68abc/resolv.conf as [nameserver 10.96.0.10 search default.svc.cluster.local svc.cluster.local cluster.local europe-west1-b.c.k8s-minikube.internal c.k8s-minikube.internal google.internal options ndots:5]"
	Sep 12 21:42:41 addons-207808 dockerd[1336]: time="2024-09-12T21:42:41.309649014Z" level=info msg="ignoring event" container=310f6cda8651a5aee0c264a1c549339bf23995fc7c5d6de645d9fa9104114f40 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Sep 12 21:42:41 addons-207808 dockerd[1336]: time="2024-09-12T21:42:41.356039684Z" level=info msg="ignoring event" container=85ba7338cac4953a48d8cc78e30f27f60ae1d576a7b59f695add8592b0a24286 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Sep 12 21:42:42 addons-207808 cri-dockerd[1600]: time="2024-09-12T21:42:42Z" level=info msg="Stop pulling image docker.io/kicbase/echo-server:1.0: Status: Downloaded newer image for kicbase/echo-server:1.0"
	Sep 12 21:42:45 addons-207808 dockerd[1336]: time="2024-09-12T21:42:45.959507118Z" level=info msg="Container failed to exit within 2s of signal 15 - using the force" container=e57e88a01fbdbd94e7819dce90ab02e75d11fa905be87d0653d02d2efd94258f
	Sep 12 21:42:46 addons-207808 dockerd[1336]: time="2024-09-12T21:42:46.020511789Z" level=info msg="ignoring event" container=e57e88a01fbdbd94e7819dce90ab02e75d11fa905be87d0653d02d2efd94258f module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Sep 12 21:42:46 addons-207808 dockerd[1336]: time="2024-09-12T21:42:46.181863222Z" level=info msg="ignoring event" container=72fec6cdaefabc07f057ae112d72525688238a33fbbbd4719aae57249b6ca97e module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Sep 12 21:42:50 addons-207808 dockerd[1336]: time="2024-09-12T21:42:50.087674267Z" level=info msg="ignoring event" container=3b12d368af1ba6e620fed65557d117417791262f1f1af8bd50ce5e877eb5b1f1 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Sep 12 21:42:50 addons-207808 dockerd[1336]: time="2024-09-12T21:42:50.549503774Z" level=info msg="ignoring event" container=67d8edc63cc4087ea595b825b3ea2af0672849eee4ba88641e5ce19407f5d55c module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Sep 12 21:42:50 addons-207808 dockerd[1336]: time="2024-09-12T21:42:50.607642926Z" level=info msg="ignoring event" container=7a19ac2f77504a3fb6429ea0692fe0ffc3ecdd7809aa8bfe68b5df69bb727402 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Sep 12 21:42:50 addons-207808 dockerd[1336]: time="2024-09-12T21:42:50.679677069Z" level=info msg="ignoring event" container=3be73be1866a200295385c874a5d80efdf5f89162f040328319efcc382140840 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Sep 12 21:42:50 addons-207808 dockerd[1336]: time="2024-09-12T21:42:50.768454694Z" level=info msg="ignoring event" container=78a5a06d4a04940896365e6b7e1e5f470f743c014d0d5a5aca3be14ebb623799 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
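
The two unauthorized entries at 21:42:34 show dockerd's manifest probe for gcr.io/k8s-minikube/busybox being rejected before any layer is fetched. That probe is a plain Registry v2 HEAD request and can be reproduced outside Docker; a minimal Go sketch, with the URL copied from the log and everything else illustrative:

```go
package main

import (
	"fmt"
	"net/http"
)

// Reissues the manifest HEAD that dockerd logged as failing: an anonymous
// request to a v2 registry manifest endpoint. A 401 normally carries a
// WWW-Authenticate challenge telling the client where to get a bearer token.
func main() {
	resp, err := http.Head("https://gcr.io/v2/k8s-minikube/busybox/manifests/latest")
	if err != nil {
		fmt.Println("request error:", err)
		return
	}
	defer resp.Body.Close()
	fmt.Println("status:", resp.Status)
	fmt.Println("auth challenge:", resp.Header.Get("Www-Authenticate"))
}
```
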
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                                        CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	aac269b22493d       kicbase/echo-server@sha256:127ac38a2bb9537b7f252addff209ea6801edcac8a92c8b1104dacd66a583ed6                                  9 seconds ago       Running             hello-world-app           0                   021b0712666e2       hello-world-app-55bf9c44b4-mjcpw
	81bd03845048c       nginx@sha256:a5127daff3d6f4606be3100a252419bfa84fd6ee5cd74d0feaca1a5068f97dcf                                                19 seconds ago      Running             nginx                     0                   b6e7db378c3cd       nginx
	07a6fb1324f97       a416a98b71e22                                                                                                                50 seconds ago      Exited              helper-pod                0                   de26f9f5bcc4b       helper-pod-delete-pvc-b1ba2409-c488-4cdf-b0b8-4d252d606c73
	56a330f7d94aa       gcr.io/k8s-minikube/gcp-auth-webhook@sha256:e6c5b3bc32072ea370d34c27836efd11b3519d25bd444c2a8efc339cff0e20fb                 9 minutes ago       Running             gcp-auth                  0                   81f5fd67c17a0       gcp-auth-89d5ffd79-mhh85
	dffc69edded62       ce263a8653f9c                                                                                                                11 minutes ago      Exited              patch                     1                   9d4dc462cdc00       ingress-nginx-admission-patch-ns57n
	29450d646ef4a       registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:a320a50cc91bd15fd2d6fa6de58bd98c1bd64b9a6f926ce23a600d87043455a3   11 minutes ago      Exited              create                    0                   96574fa220383       ingress-nginx-admission-create-t9v69
	7a19ac2f77504       gcr.io/k8s-minikube/kube-registry-proxy@sha256:b3fa0b2df8737fdb85ad5918a7e2652527463e357afff83a5e5bb966bcedc367              12 minutes ago      Exited              registry-proxy            0                   78a5a06d4a049       registry-proxy-fjxbz
	67d8edc63cc40       registry@sha256:ac0192b549007e22998eb74e8d8488dcfe70f1489520c3b144a6047ac5efbe90                                             12 minutes ago      Exited              registry                  0                   3be73be1866a2       registry-66c9cd494c-mdbsb
	fa068715e0b78       6e38f40d628db                                                                                                                12 minutes ago      Running             storage-provisioner       0                   aa2f447a14793       storage-provisioner
	3b169979e5097       c69fa2e9cbf5f                                                                                                                12 minutes ago      Running             coredns                   0                   dd0b54a1e4b9f       coredns-7c65d6cfc9-nqb66
	659c75feb9a77       60c005f310ff3                                                                                                                12 minutes ago      Running             kube-proxy                0                   8f2dd52e26f96       kube-proxy-2xmvv
	8979ec8fc868f       9aa1fad941575                                                                                                                12 minutes ago      Running             kube-scheduler            0                   702e999f2e086       kube-scheduler-addons-207808
	c068506a2ce86       2e96e5913fc06                                                                                                                12 minutes ago      Running             etcd                      0                   15f6418116d27       etcd-addons-207808
	7608d9143ec64       175ffd71cce3d                                                                                                                12 minutes ago      Running             kube-controller-manager   0                   6649c16b423ab       kube-controller-manager-addons-207808
	1bac8a599bc4b       6bab7719df100                                                                                                                12 minutes ago      Running             kube-apiserver            0                   08079b0f3308d       kube-apiserver-addons-207808
	
	
	==> coredns [3b169979e509] <==
	[INFO] 10.244.0.22:45721 - 29979 "AAAA IN hello-world-app.default.svc.cluster.local.c.k8s-minikube.internal. udp 83 false 512" NXDOMAIN qr,rd,ra 83 0.00311285s
	[INFO] 10.244.0.22:51640 - 25692 "A IN hello-world-app.default.svc.cluster.local.google.internal. udp 75 false 512" NXDOMAIN qr,rd,ra 75 0.005193991s
	[INFO] 10.244.0.22:46983 - 26825 "A IN hello-world-app.default.svc.cluster.local.google.internal. udp 75 false 512" NXDOMAIN qr,rd,ra 75 0.005229929s
	[INFO] 10.244.0.22:56173 - 2295 "A IN hello-world-app.default.svc.cluster.local.google.internal. udp 75 false 512" NXDOMAIN qr,rd,ra 75 0.0058611s
	[INFO] 10.244.0.22:46102 - 51476 "A IN hello-world-app.default.svc.cluster.local.google.internal. udp 75 false 512" NXDOMAIN qr,rd,ra 75 0.005440425s
	[INFO] 10.244.0.22:40390 - 58896 "A IN hello-world-app.default.svc.cluster.local.google.internal. udp 75 false 512" NXDOMAIN qr,rd,ra 75 0.005441496s
	[INFO] 10.244.0.22:45721 - 22303 "A IN hello-world-app.default.svc.cluster.local.google.internal. udp 75 false 512" NXDOMAIN qr,rd,ra 75 0.005506211s
	[INFO] 10.244.0.22:55982 - 25996 "A IN hello-world-app.default.svc.cluster.local.google.internal. udp 75 false 512" NXDOMAIN qr,rd,ra 75 0.005636843s
	[INFO] 10.244.0.22:44762 - 13853 "A IN hello-world-app.default.svc.cluster.local.google.internal. udp 75 false 512" NXDOMAIN qr,rd,ra 75 0.00568787s
	[INFO] 10.244.0.22:51640 - 41022 "AAAA IN hello-world-app.default.svc.cluster.local.google.internal. udp 75 false 512" NXDOMAIN qr,rd,ra 75 0.003879842s
	[INFO] 10.244.0.22:44762 - 4340 "AAAA IN hello-world-app.default.svc.cluster.local.google.internal. udp 75 false 512" NXDOMAIN qr,rd,ra 75 0.003509116s
	[INFO] 10.244.0.22:45721 - 49477 "AAAA IN hello-world-app.default.svc.cluster.local.google.internal. udp 75 false 512" NXDOMAIN qr,rd,ra 75 0.003730443s
	[INFO] 10.244.0.22:40390 - 22932 "AAAA IN hello-world-app.default.svc.cluster.local.google.internal. udp 75 false 512" NXDOMAIN qr,rd,ra 75 0.003755494s
	[INFO] 10.244.0.22:46102 - 35464 "AAAA IN hello-world-app.default.svc.cluster.local.google.internal. udp 75 false 512" NXDOMAIN qr,rd,ra 75 0.003861806s
	[INFO] 10.244.0.22:56173 - 4643 "AAAA IN hello-world-app.default.svc.cluster.local.google.internal. udp 75 false 512" NXDOMAIN qr,rd,ra 75 0.003804222s
	[INFO] 10.244.0.22:55982 - 1235 "AAAA IN hello-world-app.default.svc.cluster.local.google.internal. udp 75 false 512" NXDOMAIN qr,rd,ra 75 0.003749399s
	[INFO] 10.244.0.22:46983 - 9553 "AAAA IN hello-world-app.default.svc.cluster.local.google.internal. udp 75 false 512" NXDOMAIN qr,rd,ra 75 0.004133971s
	[INFO] 10.244.0.22:44762 - 51125 "A IN hello-world-app.default.svc.cluster.local. udp 59 false 512" NOERROR qr,aa,rd 116 0.000063336s
	[INFO] 10.244.0.22:45721 - 10639 "A IN hello-world-app.default.svc.cluster.local. udp 59 false 512" NOERROR qr,aa,rd 116 0.000052434s
	[INFO] 10.244.0.22:40390 - 27164 "A IN hello-world-app.default.svc.cluster.local. udp 59 false 512" NOERROR qr,aa,rd 116 0.000060738s
	[INFO] 10.244.0.22:56173 - 40027 "A IN hello-world-app.default.svc.cluster.local. udp 59 false 512" NOERROR qr,aa,rd 116 0.00012687s
	[INFO] 10.244.0.22:46102 - 20721 "A IN hello-world-app.default.svc.cluster.local. udp 59 false 512" NOERROR qr,aa,rd 116 0.000206803s
	[INFO] 10.244.0.22:51640 - 26586 "A IN hello-world-app.default.svc.cluster.local. udp 59 false 512" NOERROR qr,aa,rd 116 0.000236518s
	[INFO] 10.244.0.22:46983 - 61600 "A IN hello-world-app.default.svc.cluster.local. udp 59 false 512" NOERROR qr,aa,rd 116 0.00016543s
	[INFO] 10.244.0.22:55982 - 47647 "A IN hello-world-app.default.svc.cluster.local. udp 59 false 512" NOERROR qr,aa,rd 116 0.000191203s
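
The NXDOMAIN-then-NOERROR pattern above is resolv.conf search-list expansion at work: with options ndots:5 (see the cri-dockerd resolv.conf rewrite in the Docker log), any name with fewer than five dots is tried with each search suffix before being tried verbatim, and hello-world-app.default.svc.cluster.local has only four dots. A small Go model of that expansion (an illustration of standard resolver behavior, not code from the test):

```go
package main

import (
	"fmt"
	"strings"
)

// candidates mimics the resolver's search-list expansion: a name with fewer
// than ndots dots gets each search suffix appended first, and the bare name
// is tried last. The search list is the one cri-dockerd wrote into
// resolv.conf in the Docker log above.
func candidates(name string, search []string, ndots int) []string {
	var out []string
	if strings.Count(name, ".") < ndots {
		for _, s := range search {
			out = append(out, name+"."+s)
		}
	}
	return append(out, name)
}

func main() {
	search := []string{
		"default.svc.cluster.local",
		"svc.cluster.local",
		"cluster.local",
		"europe-west1-b.c.k8s-minikube.internal",
		"c.k8s-minikube.internal",
		"google.internal",
	}
	// The tail of this list is exactly the sequence of query names coredns
	// logged: the *.internal lookups fail, the verbatim name succeeds.
	for _, q := range candidates("hello-world-app.default.svc.cluster.local", search, 5) {
		fmt.Println(q)
	}
}
```
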
	
	
	==> describe nodes <==
	Name:               addons-207808
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=addons-207808
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=f6bc674a17941874d4e5b792b09c1791d30622b8
	                    minikube.k8s.io/name=addons-207808
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_09_12T21_30_02_0700
	                    minikube.k8s.io/version=v1.34.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	                    topology.hostpath.csi/node=addons-207808
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/cri-dockerd.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Thu, 12 Sep 2024 21:29:59 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  addons-207808
	  AcquireTime:     <unset>
	  RenewTime:       Thu, 12 Sep 2024 21:42:46 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Thu, 12 Sep 2024 21:42:36 +0000   Thu, 12 Sep 2024 21:29:58 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Thu, 12 Sep 2024 21:42:36 +0000   Thu, 12 Sep 2024 21:29:58 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Thu, 12 Sep 2024 21:42:36 +0000   Thu, 12 Sep 2024 21:29:58 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Thu, 12 Sep 2024 21:42:36 +0000   Thu, 12 Sep 2024 21:29:59 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.49.2
	  Hostname:    addons-207808
	Capacity:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32859316Ki
	  pods:               110
	Allocatable:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32859316Ki
	  pods:               110
	System Info:
	  Machine ID:                 1003a0e88b5347198be46e2083b504f7
	  System UUID:                69c6a5ac-901a-4554-9494-158d4279ef9e
	  Boot ID:                    178756ce-17ec-4b96-b240-8a8b9997ee1b
	  Kernel Version:             5.15.0-1067-gcp
	  OS Image:                   Ubuntu 22.04.4 LTS
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  docker://27.2.1
	  Kubelet Version:            v1.31.1
	  Kube-Proxy Version:         v1.31.1
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (11 in total)
	  Namespace                   Name                                     CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                     ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         9m15s
	  default                     hello-world-app-55bf9c44b4-mjcpw         0 (0%)        0 (0%)      0 (0%)           0 (0%)         11s
	  default                     nginx                                    0 (0%)        0 (0%)      0 (0%)           0 (0%)         22s
	  gcp-auth                    gcp-auth-89d5ffd79-mhh85                 0 (0%)        0 (0%)      0 (0%)           0 (0%)         11m
	  kube-system                 coredns-7c65d6cfc9-nqb66                 100m (1%)     0 (0%)      70Mi (0%)        170Mi (0%)     12m
	  kube-system                 etcd-addons-207808                       100m (1%)     0 (0%)      100Mi (0%)       0 (0%)         12m
	  kube-system                 kube-apiserver-addons-207808             250m (3%)     0 (0%)      0 (0%)           0 (0%)         12m
	  kube-system                 kube-controller-manager-addons-207808    200m (2%)     0 (0%)      0 (0%)           0 (0%)         12m
	  kube-system                 kube-proxy-2xmvv                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         12m
	  kube-system                 kube-scheduler-addons-207808             100m (1%)     0 (0%)      0 (0%)           0 (0%)         12m
	  kube-system                 storage-provisioner                      0 (0%)        0 (0%)      0 (0%)           0 (0%)         12m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (9%)   0 (0%)
	  memory             170Mi (0%)  170Mi (0%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type     Reason                   Age   From             Message
	  ----     ------                   ----  ----             -------
	  Normal   Starting                 12m   kube-proxy       
	  Normal   Starting                 12m   kubelet          Starting kubelet.
	  Warning  CgroupV1                 12m   kubelet          Cgroup v1 support is in maintenance mode, please migrate to Cgroup v2.
	  Normal   NodeAllocatableEnforced  12m   kubelet          Updated Node Allocatable limit across pods
	  Normal   NodeHasSufficientMemory  12m   kubelet          Node addons-207808 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    12m   kubelet          Node addons-207808 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     12m   kubelet          Node addons-207808 status is now: NodeHasSufficientPID
	  Normal   RegisteredNode           12m   node-controller  Node addons-207808 event: Registered Node addons-207808 in Controller
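
A quick check on the resource tables above: the 750m CPU request is the sum of the per-pod requests (100m coredns + 100m etcd + 250m kube-apiserver + 200m kube-controller-manager + 100m kube-scheduler), and 750m against the node's 8 CPUs (8000m) is 9.4%, which kubectl rounds down to the 9% shown.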
	
	
	==> dmesg <==
	[  +0.000008] ll header: 00000000: ff ff ff ff ff ff 76 42 d8 3f c2 55 08 06
	[  +6.106396] IPv4: martian source 10.244.0.1 from 10.244.0.21, on dev eth0
	[  +0.000008] ll header: 00000000: ff ff ff ff ff ff 32 b8 43 61 01 1a 08 06
	[  +0.088573] IPv4: martian source 10.244.0.1 from 10.244.0.20, on dev eth0
	[  +0.000008] ll header: 00000000: ff ff ff ff ff ff 5e cb 23 95 1a 12 08 06
	[  +0.102869] IPv4: martian source 10.244.0.1 from 10.244.0.19, on dev eth0
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff 92 02 6d 20 66 41 08 06
	[ +10.410130] IPv4: martian source 10.244.0.1 from 10.244.0.22, on dev eth0
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff aa 1c 16 68 53 f6 08 06
	[  +1.032910] IPv4: martian source 10.244.0.1 from 10.244.0.23, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff 22 67 34 69 3d 1b 08 06
	[Sep12 21:32] IPv4: martian source 10.244.0.1 from 10.244.0.24, on dev eth0
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff f2 7b 61 91 0f 62 08 06
	[  +0.042796] IPv4: martian source 10.244.0.1 from 10.244.0.25, on dev eth0
	[  +0.000005] ll header: 00000000: ff ff ff ff ff ff ae 79 8e c9 48 49 08 06
	[ +29.256272] IPv4: martian source 10.244.0.1 from 10.244.0.26, on dev eth0
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff 52 5a 94 2c 58 d2 08 06
	[  +0.000427] IPv4: martian source 10.244.0.26 from 10.244.0.2, on dev eth0
	[  +0.000004] ll header: 00000000: ff ff ff ff ff ff ca ab 1b 6a f4 9b 08 06
	[Sep12 21:41] IPv4: martian source 10.244.0.1 from 10.244.0.29, on dev eth0
	[  +0.000010] ll header: 00000000: ff ff ff ff ff ff f2 1c ad 3c 84 b4 08 06
	[Sep12 21:42] IPv4: martian source 10.244.0.37 from 10.244.0.22, on dev eth0
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff aa 1c 16 68 53 f6 08 06
	[  +1.627498] IPv4: martian source 10.244.0.22 from 10.244.0.2, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff ca ab 1b 6a f4 9b 08 06
	
	
	==> etcd [c068506a2ce8] <==
	{"level":"info","ts":"2024-09-12T21:29:57.948350Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aec36adc501070cc received MsgVoteResp from aec36adc501070cc at term 2"}
	{"level":"info","ts":"2024-09-12T21:29:57.948362Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aec36adc501070cc became leader at term 2"}
	{"level":"info","ts":"2024-09-12T21:29:57.948376Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: aec36adc501070cc elected leader aec36adc501070cc at term 2"}
	{"level":"info","ts":"2024-09-12T21:29:57.949208Z","caller":"etcdserver/server.go:2629","msg":"setting up initial cluster version using v2 API","cluster-version":"3.5"}
	{"level":"info","ts":"2024-09-12T21:29:57.949742Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-09-12T21:29:57.949740Z","caller":"etcdserver/server.go:2118","msg":"published local member to cluster through raft","local-member-id":"aec36adc501070cc","local-member-attributes":"{Name:addons-207808 ClientURLs:[https://192.168.49.2:2379]}","request-path":"/0/members/aec36adc501070cc/attributes","cluster-id":"fa54960ea34d58be","publish-timeout":"7s"}
	{"level":"info","ts":"2024-09-12T21:29:57.949767Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-09-12T21:29:57.950019Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"fa54960ea34d58be","local-member-id":"aec36adc501070cc","cluster-version":"3.5"}
	{"level":"info","ts":"2024-09-12T21:29:57.950026Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2024-09-12T21:29:57.950077Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2024-09-12T21:29:57.950107Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2024-09-12T21:29:57.950140Z","caller":"etcdserver/server.go:2653","msg":"cluster version is updated","cluster-version":"3.5"}
	{"level":"info","ts":"2024-09-12T21:29:57.951036Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2024-09-12T21:29:57.951047Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2024-09-12T21:29:57.952238Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2024-09-12T21:29:57.952240Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.49.2:2379"}
	{"level":"warn","ts":"2024-09-12T21:30:16.033998Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"102.596986ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/minions/addons-207808\" ","response":"range_response_count:1 size:4404"}
	{"level":"info","ts":"2024-09-12T21:30:16.034082Z","caller":"traceutil/trace.go:171","msg":"trace[1030330109] range","detail":"{range_begin:/registry/minions/addons-207808; range_end:; response_count:1; response_revision:619; }","duration":"102.698267ms","start":"2024-09-12T21:30:15.931366Z","end":"2024-09-12T21:30:16.034064Z","steps":["trace[1030330109] 'range keys from in-memory index tree'  (duration: 102.473609ms)"],"step_count":1}
	{"level":"warn","ts":"2024-09-12T21:31:25.581068Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"136.304322ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods\" limit:1 ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2024-09-12T21:31:25.581140Z","caller":"traceutil/trace.go:171","msg":"trace[466196732] range","detail":"{range_begin:/registry/pods; range_end:; response_count:0; response_revision:1238; }","duration":"136.410983ms","start":"2024-09-12T21:31:25.444715Z","end":"2024-09-12T21:31:25.581126Z","steps":["trace[466196732] 'range keys from in-memory index tree'  (duration: 136.247832ms)"],"step_count":1}
	{"level":"info","ts":"2024-09-12T21:39:58.361661Z","caller":"mvcc/index.go:214","msg":"compact tree index","revision":1870}
	{"level":"info","ts":"2024-09-12T21:39:58.385652Z","caller":"mvcc/kvstore_compaction.go:69","msg":"finished scheduled compaction","compact-revision":1870,"took":"23.470892ms","hash":984981064,"current-db-size-bytes":8781824,"current-db-size":"8.8 MB","current-db-size-in-use-bytes":4980736,"current-db-size-in-use":"5.0 MB"}
	{"level":"info","ts":"2024-09-12T21:39:58.385701Z","caller":"mvcc/hash.go:137","msg":"storing new hash","hash":984981064,"revision":1870,"compact-revision":-1}
	{"level":"info","ts":"2024-09-12T21:42:05.754232Z","caller":"traceutil/trace.go:171","msg":"trace[696321980] transaction","detail":"{read_only:false; response_revision:2680; number_of_response:1; }","duration":"113.566169ms","start":"2024-09-12T21:42:05.640642Z","end":"2024-09-12T21:42:05.754208Z","steps":["trace[696321980] 'process raft request'  (duration: 50.149388ms)","trace[696321980] 'compare'  (duration: 63.318473ms)"],"step_count":2}
	{"level":"info","ts":"2024-09-12T21:42:05.754357Z","caller":"traceutil/trace.go:171","msg":"trace[407531068] transaction","detail":"{read_only:false; response_revision:2681; number_of_response:1; }","duration":"113.621462ms","start":"2024-09-12T21:42:05.640720Z","end":"2024-09-12T21:42:05.754341Z","steps":["trace[407531068] 'process raft request'  (duration: 113.557768ms)"],"step_count":1}
	
	
	==> gcp-auth [56a330f7d94a] <==
	2024/09/12 21:33:36 Ready to write response ...
	2024/09/12 21:41:44 Ready to marshal response ...
	2024/09/12 21:41:44 Ready to write response ...
	2024/09/12 21:41:45 Ready to marshal response ...
	2024/09/12 21:41:45 Ready to write response ...
	2024/09/12 21:41:49 Ready to marshal response ...
	2024/09/12 21:41:49 Ready to write response ...
	2024/09/12 21:41:50 Ready to marshal response ...
	2024/09/12 21:41:50 Ready to write response ...
	2024/09/12 21:41:50 Ready to marshal response ...
	2024/09/12 21:41:50 Ready to write response ...
	2024/09/12 21:42:01 Ready to marshal response ...
	2024/09/12 21:42:01 Ready to write response ...
	2024/09/12 21:42:02 Ready to marshal response ...
	2024/09/12 21:42:02 Ready to write response ...
	2024/09/12 21:42:02 Ready to marshal response ...
	2024/09/12 21:42:02 Ready to write response ...
	2024/09/12 21:42:02 Ready to marshal response ...
	2024/09/12 21:42:02 Ready to write response ...
	2024/09/12 21:42:06 Ready to marshal response ...
	2024/09/12 21:42:06 Ready to write response ...
	2024/09/12 21:42:29 Ready to marshal response ...
	2024/09/12 21:42:29 Ready to write response ...
	2024/09/12 21:42:40 Ready to marshal response ...
	2024/09/12 21:42:40 Ready to write response ...
	
	
	==> kernel <==
	 21:42:51 up 25 min,  0 users,  load average: 0.44, 0.36, 0.28
	Linux addons-207808 5.15.0-1067-gcp #75~20.04.1-Ubuntu SMP Wed Aug 7 20:43:22 UTC 2024 x86_64 x86_64 x86_64 GNU/Linux
	PRETTY_NAME="Ubuntu 22.04.4 LTS"
	
	
	==> kube-apiserver [1bac8a599bc4] <==
	W0912 21:33:28.345638       1 cacher.go:171] Terminating all watchers from cacher queues.scheduling.volcano.sh
	W0912 21:33:28.464543       1 cacher.go:171] Terminating all watchers from cacher jobs.batch.volcano.sh
	W0912 21:33:28.843497       1 cacher.go:171] Terminating all watchers from cacher jobflows.flow.volcano.sh
	W0912 21:33:29.151139       1 cacher.go:171] Terminating all watchers from cacher jobtemplates.flow.volcano.sh
	I0912 21:41:55.209924       1 controller.go:615] quota admission added evaluator for: volumesnapshots.snapshot.storage.k8s.io
	I0912 21:42:02.355345       1 alloc.go:330] "allocated clusterIPs" service="headlamp/headlamp" clusterIPs={"IPv4":"10.110.33.1"}
	E0912 21:42:17.250742       1 authentication.go:73] "Unable to authenticate the request" err="[invalid bearer token, serviceaccounts \"local-path-provisioner-service-account\" not found]"
	I0912 21:42:22.994417       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0912 21:42:22.994472       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I0912 21:42:23.008396       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0912 21:42:23.008444       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I0912 21:42:23.009381       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0912 21:42:23.009427       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I0912 21:42:23.019779       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0912 21:42:23.019814       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I0912 21:42:23.045599       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0912 21:42:23.045642       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	W0912 21:42:24.009949       1 cacher.go:171] Terminating all watchers from cacher volumesnapshotclasses.snapshot.storage.k8s.io
	W0912 21:42:24.046072       1 cacher.go:171] Terminating all watchers from cacher volumesnapshotcontents.snapshot.storage.k8s.io
	W0912 21:42:24.049826       1 cacher.go:171] Terminating all watchers from cacher volumesnapshots.snapshot.storage.k8s.io
	I0912 21:42:25.391575       1 handler.go:286] Adding GroupVersion gadget.kinvolk.io v1alpha1 to ResourceManager
	W0912 21:42:26.448801       1 cacher.go:171] Terminating all watchers from cacher traces.gadget.kinvolk.io
	I0912 21:42:29.047845       1 controller.go:615] quota admission added evaluator for: ingresses.networking.k8s.io
	I0912 21:42:29.214076       1 alloc.go:330] "allocated clusterIPs" service="default/nginx" clusterIPs={"IPv4":"10.104.134.100"}
	I0912 21:42:40.689186       1 alloc.go:330] "allocated clusterIPs" service="default/hello-world-app" clusterIPs={"IPv4":"10.111.212.56"}
	
	
	==> kube-controller-manager [7608d9143ec6] <==
	I0912 21:42:36.848097       1 shared_informer.go:313] Waiting for caches to sync for resource quota
	I0912 21:42:36.848131       1 shared_informer.go:320] Caches are synced for resource quota
	I0912 21:42:37.169205       1 shared_informer.go:313] Waiting for caches to sync for garbage collector
	I0912 21:42:37.169244       1 shared_informer.go:320] Caches are synced for garbage collector
	I0912 21:42:40.572632       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/hello-world-app-55bf9c44b4" duration="10.610349ms"
	I0912 21:42:40.576910       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/hello-world-app-55bf9c44b4" duration="4.229949ms"
	I0912 21:42:40.576983       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/hello-world-app-55bf9c44b4" duration="41.409µs"
	I0912 21:42:40.581057       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/hello-world-app-55bf9c44b4" duration="55.573µs"
	W0912 21:42:41.908489       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0912 21:42:41.908529       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	I0912 21:42:42.937561       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="ingress-nginx/ingress-nginx-controller-bc57996ff" duration="9.305µs"
	I0912 21:42:42.937637       1 job_controller.go:568] "enqueueing job" logger="job-controller" key="ingress-nginx/ingress-nginx-admission-create" delay="0s"
	I0912 21:42:42.941314       1 job_controller.go:568] "enqueueing job" logger="job-controller" key="ingress-nginx/ingress-nginx-admission-patch" delay="0s"
	I0912 21:42:43.436926       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/hello-world-app-55bf9c44b4" duration="5.320895ms"
	I0912 21:42:43.437012       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/hello-world-app-55bf9c44b4" duration="46.484µs"
	W0912 21:42:43.703684       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0912 21:42:43.703727       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W0912 21:42:44.188019       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0912 21:42:44.188056       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W0912 21:42:45.688457       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0912 21:42:45.688497       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W0912 21:42:47.838843       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0912 21:42:47.838888       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	I0912 21:42:49.449098       1 namespace_controller.go:187] "Namespace has been deleted" logger="namespace-controller" namespace="local-path-storage"
	I0912 21:42:50.504164       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/registry-66c9cd494c" duration="6.122µs"
	
	
	==> kube-proxy [659c75feb9a7] <==
	I0912 21:30:07.674840       1 server_linux.go:66] "Using iptables proxy"
	I0912 21:30:07.798673       1 server.go:677] "Successfully retrieved node IP(s)" IPs=["192.168.49.2"]
	E0912 21:30:07.798752       1 server.go:234] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I0912 21:30:07.854328       1 server.go:243] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I0912 21:30:07.854408       1 server_linux.go:169] "Using iptables Proxier"
	I0912 21:30:07.857134       1 proxier.go:255] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I0912 21:30:07.857672       1 server.go:483] "Version info" version="v1.31.1"
	I0912 21:30:07.857702       1 server.go:485] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0912 21:30:07.859941       1 config.go:199] "Starting service config controller"
	I0912 21:30:07.859958       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0912 21:30:07.859981       1 config.go:105] "Starting endpoint slice config controller"
	I0912 21:30:07.859986       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0912 21:30:07.860026       1 config.go:328] "Starting node config controller"
	I0912 21:30:07.860037       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0912 21:30:07.960502       1 shared_informer.go:320] Caches are synced for endpoint slice config
	I0912 21:30:07.960585       1 shared_informer.go:320] Caches are synced for service config
	I0912 21:30:07.964054       1 shared_informer.go:320] Caches are synced for node config
	
	
	==> kube-scheduler [8979ec8fc868] <==
	W0912 21:29:59.738838       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E0912 21:29:59.739414       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0912 21:29:59.739056       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E0912 21:29:59.739476       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0912 21:29:59.738782       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	E0912 21:29:59.739538       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0912 21:30:00.550471       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E0912 21:30:00.550507       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0912 21:30:00.570778       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E0912 21:30:00.570814       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W0912 21:30:00.617183       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E0912 21:30:00.617232       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0912 21:30:00.664215       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E0912 21:30:00.664256       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0912 21:30:00.668424       1 reflector.go:561] runtime/asm_amd64.s:1695: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E0912 21:30:00.668455       1 reflector.go:158] "Unhandled Error" err="runtime/asm_amd64.s:1695: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError"
	W0912 21:30:00.727637       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	E0912 21:30:00.727686       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0912 21:30:00.748179       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E0912 21:30:00.748225       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W0912 21:30:00.810853       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E0912 21:30:00.810900       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W0912 21:30:00.820258       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E0912 21:30:00.820303       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError"
	I0912 21:30:03.835537       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kubelet <==
	Sep 12 21:42:46 addons-207808 kubelet[2440]: E0912 21:42:46.479356    2440 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = Unknown desc = Error response from daemon: No such container: e57e88a01fbdbd94e7819dce90ab02e75d11fa905be87d0653d02d2efd94258f" containerID="e57e88a01fbdbd94e7819dce90ab02e75d11fa905be87d0653d02d2efd94258f"
	Sep 12 21:42:46 addons-207808 kubelet[2440]: I0912 21:42:46.479417    2440 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"docker","ID":"e57e88a01fbdbd94e7819dce90ab02e75d11fa905be87d0653d02d2efd94258f"} err="failed to get container status \"e57e88a01fbdbd94e7819dce90ab02e75d11fa905be87d0653d02d2efd94258f\": rpc error: code = Unknown desc = Error response from daemon: No such container: e57e88a01fbdbd94e7819dce90ab02e75d11fa905be87d0653d02d2efd94258f"
	Sep 12 21:42:46 addons-207808 kubelet[2440]: E0912 21:42:46.935858    2440 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"registry-test\" with ImagePullBackOff: \"Back-off pulling image \\\"gcr.io/k8s-minikube/busybox\\\"\"" pod="default/registry-test" podUID="ee4c74e8-2158-4d39-848e-8b76ed7122a8"
	Sep 12 21:42:47 addons-207808 kubelet[2440]: I0912 21:42:47.942275    2440 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b6102737-46d7-4392-a7cf-24f0e5cff364" path="/var/lib/kubelet/pods/b6102737-46d7-4392-a7cf-24f0e5cff364/volumes"
	Sep 12 21:42:48 addons-207808 kubelet[2440]: I0912 21:42:48.934106    2440 kubelet_pods.go:1007] "Unable to retrieve pull secret, the image pull may not succeed." pod="kube-system/registry-proxy-fjxbz" secret="" err="secret \"gcp-auth\" not found"
	Sep 12 21:42:50 addons-207808 kubelet[2440]: I0912 21:42:50.245091    2440 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"gcp-creds\" (UniqueName: \"kubernetes.io/host-path/ee4c74e8-2158-4d39-848e-8b76ed7122a8-gcp-creds\") pod \"ee4c74e8-2158-4d39-848e-8b76ed7122a8\" (UID: \"ee4c74e8-2158-4d39-848e-8b76ed7122a8\") "
	Sep 12 21:42:50 addons-207808 kubelet[2440]: I0912 21:42:50.245116    2440 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/ee4c74e8-2158-4d39-848e-8b76ed7122a8-gcp-creds" (OuterVolumeSpecName: "gcp-creds") pod "ee4c74e8-2158-4d39-848e-8b76ed7122a8" (UID: "ee4c74e8-2158-4d39-848e-8b76ed7122a8"). InnerVolumeSpecName "gcp-creds". PluginName "kubernetes.io/host-path", VolumeGidValue ""
	Sep 12 21:42:50 addons-207808 kubelet[2440]: I0912 21:42:50.245158    2440 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zzxn7\" (UniqueName: \"kubernetes.io/projected/ee4c74e8-2158-4d39-848e-8b76ed7122a8-kube-api-access-zzxn7\") pod \"ee4c74e8-2158-4d39-848e-8b76ed7122a8\" (UID: \"ee4c74e8-2158-4d39-848e-8b76ed7122a8\") "
	Sep 12 21:42:50 addons-207808 kubelet[2440]: I0912 21:42:50.246866    2440 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ee4c74e8-2158-4d39-848e-8b76ed7122a8-kube-api-access-zzxn7" (OuterVolumeSpecName: "kube-api-access-zzxn7") pod "ee4c74e8-2158-4d39-848e-8b76ed7122a8" (UID: "ee4c74e8-2158-4d39-848e-8b76ed7122a8"). InnerVolumeSpecName "kube-api-access-zzxn7". PluginName "kubernetes.io/projected", VolumeGidValue ""
	Sep 12 21:42:50 addons-207808 kubelet[2440]: I0912 21:42:50.345964    2440 reconciler_common.go:288] "Volume detached for volume \"gcp-creds\" (UniqueName: \"kubernetes.io/host-path/ee4c74e8-2158-4d39-848e-8b76ed7122a8-gcp-creds\") on node \"addons-207808\" DevicePath \"\""
	Sep 12 21:42:50 addons-207808 kubelet[2440]: I0912 21:42:50.346009    2440 reconciler_common.go:288] "Volume detached for volume \"kube-api-access-zzxn7\" (UniqueName: \"kubernetes.io/projected/ee4c74e8-2158-4d39-848e-8b76ed7122a8-kube-api-access-zzxn7\") on node \"addons-207808\" DevicePath \"\""
	Sep 12 21:42:50 addons-207808 kubelet[2440]: I0912 21:42:50.849367    2440 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-k4jjx\" (UniqueName: \"kubernetes.io/projected/6646693e-e468-4f8c-a209-9f028e31da67-kube-api-access-k4jjx\") pod \"6646693e-e468-4f8c-a209-9f028e31da67\" (UID: \"6646693e-e468-4f8c-a209-9f028e31da67\") "
	Sep 12 21:42:50 addons-207808 kubelet[2440]: I0912 21:42:50.849461    2440 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-hcj9b\" (UniqueName: \"kubernetes.io/projected/6340cd55-7e16-4315-8b01-5e879a2b0d76-kube-api-access-hcj9b\") pod \"6340cd55-7e16-4315-8b01-5e879a2b0d76\" (UID: \"6340cd55-7e16-4315-8b01-5e879a2b0d76\") "
	Sep 12 21:42:50 addons-207808 kubelet[2440]: I0912 21:42:50.851179    2440 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6646693e-e468-4f8c-a209-9f028e31da67-kube-api-access-k4jjx" (OuterVolumeSpecName: "kube-api-access-k4jjx") pod "6646693e-e468-4f8c-a209-9f028e31da67" (UID: "6646693e-e468-4f8c-a209-9f028e31da67"). InnerVolumeSpecName "kube-api-access-k4jjx". PluginName "kubernetes.io/projected", VolumeGidValue ""
	Sep 12 21:42:50 addons-207808 kubelet[2440]: I0912 21:42:50.851569    2440 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6340cd55-7e16-4315-8b01-5e879a2b0d76-kube-api-access-hcj9b" (OuterVolumeSpecName: "kube-api-access-hcj9b") pod "6340cd55-7e16-4315-8b01-5e879a2b0d76" (UID: "6340cd55-7e16-4315-8b01-5e879a2b0d76"). InnerVolumeSpecName "kube-api-access-hcj9b". PluginName "kubernetes.io/projected", VolumeGidValue ""
	Sep 12 21:42:50 addons-207808 kubelet[2440]: I0912 21:42:50.950481    2440 reconciler_common.go:288] "Volume detached for volume \"kube-api-access-k4jjx\" (UniqueName: \"kubernetes.io/projected/6646693e-e468-4f8c-a209-9f028e31da67-kube-api-access-k4jjx\") on node \"addons-207808\" DevicePath \"\""
	Sep 12 21:42:50 addons-207808 kubelet[2440]: I0912 21:42:50.950514    2440 reconciler_common.go:288] "Volume detached for volume \"kube-api-access-hcj9b\" (UniqueName: \"kubernetes.io/projected/6340cd55-7e16-4315-8b01-5e879a2b0d76-kube-api-access-hcj9b\") on node \"addons-207808\" DevicePath \"\""
	Sep 12 21:42:51 addons-207808 kubelet[2440]: I0912 21:42:51.528251    2440 scope.go:117] "RemoveContainer" containerID="7a19ac2f77504a3fb6429ea0692fe0ffc3ecdd7809aa8bfe68b5df69bb727402"
	Sep 12 21:42:51 addons-207808 kubelet[2440]: I0912 21:42:51.545641    2440 scope.go:117] "RemoveContainer" containerID="7a19ac2f77504a3fb6429ea0692fe0ffc3ecdd7809aa8bfe68b5df69bb727402"
	Sep 12 21:42:51 addons-207808 kubelet[2440]: E0912 21:42:51.546426    2440 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = Unknown desc = Error response from daemon: No such container: 7a19ac2f77504a3fb6429ea0692fe0ffc3ecdd7809aa8bfe68b5df69bb727402" containerID="7a19ac2f77504a3fb6429ea0692fe0ffc3ecdd7809aa8bfe68b5df69bb727402"
	Sep 12 21:42:51 addons-207808 kubelet[2440]: I0912 21:42:51.546457    2440 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"docker","ID":"7a19ac2f77504a3fb6429ea0692fe0ffc3ecdd7809aa8bfe68b5df69bb727402"} err="failed to get container status \"7a19ac2f77504a3fb6429ea0692fe0ffc3ecdd7809aa8bfe68b5df69bb727402\": rpc error: code = Unknown desc = Error response from daemon: No such container: 7a19ac2f77504a3fb6429ea0692fe0ffc3ecdd7809aa8bfe68b5df69bb727402"
	Sep 12 21:42:51 addons-207808 kubelet[2440]: I0912 21:42:51.546489    2440 scope.go:117] "RemoveContainer" containerID="67d8edc63cc4087ea595b825b3ea2af0672849eee4ba88641e5ce19407f5d55c"
	Sep 12 21:42:51 addons-207808 kubelet[2440]: I0912 21:42:51.560722    2440 scope.go:117] "RemoveContainer" containerID="67d8edc63cc4087ea595b825b3ea2af0672849eee4ba88641e5ce19407f5d55c"
	Sep 12 21:42:51 addons-207808 kubelet[2440]: E0912 21:42:51.561504    2440 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = Unknown desc = Error response from daemon: No such container: 67d8edc63cc4087ea595b825b3ea2af0672849eee4ba88641e5ce19407f5d55c" containerID="67d8edc63cc4087ea595b825b3ea2af0672849eee4ba88641e5ce19407f5d55c"
	Sep 12 21:42:51 addons-207808 kubelet[2440]: I0912 21:42:51.561550    2440 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"docker","ID":"67d8edc63cc4087ea595b825b3ea2af0672849eee4ba88641e5ce19407f5d55c"} err="failed to get container status \"67d8edc63cc4087ea595b825b3ea2af0672849eee4ba88641e5ce19407f5d55c\": rpc error: code = Unknown desc = Error response from daemon: No such container: 67d8edc63cc4087ea595b825b3ea2af0672849eee4ba88641e5ce19407f5d55c"
	
	
	==> storage-provisioner [fa068715e0b7] <==
	I0912 21:30:16.338008       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I0912 21:30:16.437525       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I0912 21:30:16.437581       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I0912 21:30:16.456027       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I0912 21:30:16.456229       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_addons-207808_daf20f72-9563-41c4-adbd-36b6caaf2374!
	I0912 21:30:16.456289       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"680a853e-cde1-40d3-94d4-b86f3e7c4972", APIVersion:"v1", ResourceVersion:"646", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' addons-207808_daf20f72-9563-41c4-adbd-36b6caaf2374 became leader
	I0912 21:30:16.631423       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_addons-207808_daf20f72-9563-41c4-adbd-36b6caaf2374!
	

-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p addons-207808 -n addons-207808
helpers_test.go:261: (dbg) Run:  kubectl --context addons-207808 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:272: non-running pods: busybox
helpers_test.go:274: ======> post-mortem[TestAddons/parallel/Registry]: describe non-running pods <======
helpers_test.go:277: (dbg) Run:  kubectl --context addons-207808 describe pod busybox
helpers_test.go:282: (dbg) kubectl --context addons-207808 describe pod busybox:

-- stdout --
	Name:             busybox
	Namespace:        default
	Priority:         0
	Service Account:  default
	Node:             addons-207808/192.168.49.2
	Start Time:       Thu, 12 Sep 2024 21:33:36 +0000
	Labels:           integration-test=busybox
	Annotations:      <none>
	Status:           Pending
	IP:               10.244.0.28
	IPs:
	  IP:  10.244.0.28
	Containers:
	  busybox:
	    Container ID:  
	    Image:         gcr.io/k8s-minikube/busybox:1.28.4-glibc
	    Image ID:      
	    Port:          <none>
	    Host Port:     <none>
	    Command:
	      sleep
	      3600
	    State:          Waiting
	      Reason:       ImagePullBackOff
	    Ready:          False
	    Restart Count:  0
	    Environment:
	      GOOGLE_APPLICATION_CREDENTIALS:  /google-app-creds.json
	      PROJECT_ID:                      this_is_fake
	      GCP_PROJECT:                     this_is_fake
	      GCLOUD_PROJECT:                  this_is_fake
	      GOOGLE_CLOUD_PROJECT:            this_is_fake
	      CLOUDSDK_CORE_PROJECT:           this_is_fake
	    Mounts:
	      /google-app-creds.json from gcp-creds (ro)
	      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-kz9f9 (ro)
	Conditions:
	  Type                        Status
	  PodReadyToStartContainers   True 
	  Initialized                 True 
	  Ready                       False 
	  ContainersReady             False 
	  PodScheduled                True 
	Volumes:
	  kube-api-access-kz9f9:
	    Type:                    Projected (a volume that contains injected data from multiple sources)
	    TokenExpirationSeconds:  3607
	    ConfigMapName:           kube-root-ca.crt
	    ConfigMapOptional:       <nil>
	    DownwardAPI:             true
	  gcp-creds:
	    Type:          HostPath (bare host directory volume)
	    Path:          /var/lib/minikube/google_application_credentials.json
	    HostPathType:  File
	QoS Class:         BestEffort
	Node-Selectors:    <none>
	Tolerations:       node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
	                   node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
	Events:
	  Type     Reason     Age                    From               Message
	  ----     ------     ----                   ----               -------
	  Normal   Scheduled  9m16s                  default-scheduler  Successfully assigned default/busybox to addons-207808
	  Normal   Pulling    7m59s (x4 over 9m15s)  kubelet            Pulling image "gcr.io/k8s-minikube/busybox:1.28.4-glibc"
	  Warning  Failed     7m58s (x4 over 9m15s)  kubelet            Failed to pull image "gcr.io/k8s-minikube/busybox:1.28.4-glibc": Error response from daemon: Head "https://gcr.io/v2/k8s-minikube/busybox/manifests/1.28.4-glibc": unauthorized: authentication failed
	  Warning  Failed     7m58s (x4 over 9m15s)  kubelet            Error: ErrImagePull
	  Warning  Failed     7m34s (x6 over 9m15s)  kubelet            Error: ImagePullBackOff
	  Normal   BackOff    4m4s (x21 over 9m15s)  kubelet            Back-off pulling image "gcr.io/k8s-minikube/busybox:1.28.4-glibc"

-- /stdout --
helpers_test.go:285: <<< TestAddons/parallel/Registry FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
--- FAIL: TestAddons/parallel/Registry (73.39s)
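The likely root cause is visible in the logs above: the kubelet reports registry-test stuck in ImagePullBackOff for gcr.io/k8s-minikube/busybox, and the busybox pod events show every pull of gcr.io/k8s-minikube/busybox:1.28.4-glibc rejected by gcr.io with "unauthorized: authentication failed", so neither test pod could ever start its container. A minimal sketch for confirming the node-side pull failure (hypothetical follow-up commands, not captured in this report; they assume the addons-207808 profile from this run is still up):

	# Hypothetical repro against the same profile; not part of the recorded run.
	out/minikube-linux-amd64 -p addons-207808 ssh -- docker pull gcr.io/k8s-minikube/busybox:1.28.4-glibc
	# List pull-related failure events across all namespaces.
	kubectl --context addons-207808 get events -A --field-selector reason=Failed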


Test pass (322/343)

Order passed test Duration
3 TestDownloadOnly/v1.20.0/json-events 17.13
4 TestDownloadOnly/v1.20.0/preload-exists 0
8 TestDownloadOnly/v1.20.0/LogsDuration 0.06
9 TestDownloadOnly/v1.20.0/DeleteAll 0.2
10 TestDownloadOnly/v1.20.0/DeleteAlwaysSucceeds 0.12
12 TestDownloadOnly/v1.31.1/json-events 16
13 TestDownloadOnly/v1.31.1/preload-exists 0
17 TestDownloadOnly/v1.31.1/LogsDuration 0.06
18 TestDownloadOnly/v1.31.1/DeleteAll 0.2
19 TestDownloadOnly/v1.31.1/DeleteAlwaysSucceeds 0.12
20 TestDownloadOnlyKic 1.36
21 TestBinaryMirror 1.93
22 TestOffline 76.96
25 TestAddons/PreSetup/EnablingAddonOnNonExistingCluster 0.05
26 TestAddons/PreSetup/DisablingAddonOnNonExistingCluster 0.05
27 TestAddons/Setup 210.27
29 TestAddons/serial/Volcano 39.66
31 TestAddons/serial/GCPAuth/Namespaces 0.11
34 TestAddons/parallel/Ingress 21.17
35 TestAddons/parallel/InspektorGadget 10.57
36 TestAddons/parallel/MetricsServer 5.55
37 TestAddons/parallel/HelmTiller 10.81
39 TestAddons/parallel/CSI 44.4
40 TestAddons/parallel/Headlamp 18.54
41 TestAddons/parallel/CloudSpanner 5.43
42 TestAddons/parallel/LocalPath 54.01
43 TestAddons/parallel/NvidiaDevicePlugin 6.39
44 TestAddons/parallel/Yakd 11.63
45 TestAddons/StoppedEnableDisable 11.19
46 TestCertOptions 27.74
47 TestCertExpiration 240.38
48 TestDockerFlags 29.19
49 TestForceSystemdFlag 35.07
50 TestForceSystemdEnv 25.64
52 TestKVMDriverInstallOrUpdate 3.8
56 TestErrorSpam/setup 24.17
57 TestErrorSpam/start 0.54
58 TestErrorSpam/status 0.82
59 TestErrorSpam/pause 1.12
60 TestErrorSpam/unpause 1.28
61 TestErrorSpam/stop 1.84
64 TestFunctional/serial/CopySyncFile 0
65 TestFunctional/serial/StartWithProxy 63.75
66 TestFunctional/serial/AuditLog 0
67 TestFunctional/serial/SoftStart 37.06
68 TestFunctional/serial/KubeContext 0.05
69 TestFunctional/serial/KubectlGetPods 0.07
72 TestFunctional/serial/CacheCmd/cache/add_remote 2.44
73 TestFunctional/serial/CacheCmd/cache/add_local 1.4
74 TestFunctional/serial/CacheCmd/cache/CacheDelete 0.04
75 TestFunctional/serial/CacheCmd/cache/list 0.04
76 TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node 0.26
77 TestFunctional/serial/CacheCmd/cache/cache_reload 1.2
78 TestFunctional/serial/CacheCmd/cache/delete 0.09
79 TestFunctional/serial/MinikubeKubectlCmd 0.1
80 TestFunctional/serial/MinikubeKubectlCmdDirectly 0.1
81 TestFunctional/serial/ExtraConfig 36.38
82 TestFunctional/serial/ComponentHealth 0.06
83 TestFunctional/serial/LogsCmd 0.93
84 TestFunctional/serial/LogsFileCmd 0.94
85 TestFunctional/serial/InvalidService 4.66
87 TestFunctional/parallel/ConfigCmd 0.36
88 TestFunctional/parallel/DashboardCmd 19.31
89 TestFunctional/parallel/DryRun 0.39
90 TestFunctional/parallel/InternationalLanguage 0.14
91 TestFunctional/parallel/StatusCmd 1.01
95 TestFunctional/parallel/ServiceCmdConnect 14.66
96 TestFunctional/parallel/AddonsCmd 0.13
97 TestFunctional/parallel/PersistentVolumeClaim 38.41
99 TestFunctional/parallel/SSHCmd 0.47
100 TestFunctional/parallel/CpCmd 1.58
101 TestFunctional/parallel/MySQL 25.64
102 TestFunctional/parallel/FileSync 0.24
103 TestFunctional/parallel/CertSync 1.55
107 TestFunctional/parallel/NodeLabels 0.06
109 TestFunctional/parallel/NonActiveRuntimeDisabled 0.28
111 TestFunctional/parallel/License 0.61
112 TestFunctional/parallel/ProfileCmd/profile_not_create 0.44
113 TestFunctional/parallel/ImageCommands/ImageListShort 0.22
115 TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel 0.54
116 TestFunctional/parallel/ImageCommands/ImageListTable 0.2
117 TestFunctional/parallel/ImageCommands/ImageListJson 0.21
118 TestFunctional/parallel/ImageCommands/ImageListYaml 0.22
119 TestFunctional/parallel/ImageCommands/ImageBuild 4.23
120 TestFunctional/parallel/ImageCommands/Setup 1.85
121 TestFunctional/parallel/DockerEnv/bash 0.96
122 TestFunctional/parallel/ProfileCmd/profile_list 0.37
123 TestFunctional/parallel/TunnelCmd/serial/StartTunnel 0
125 TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup 10.21
126 TestFunctional/parallel/ProfileCmd/profile_json_output 0.36
127 TestFunctional/parallel/UpdateContextCmd/no_changes 0.13
128 TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster 0.13
129 TestFunctional/parallel/UpdateContextCmd/no_clusters 0.13
130 TestFunctional/parallel/ImageCommands/ImageLoadDaemon 0.93
131 TestFunctional/parallel/ImageCommands/ImageReloadDaemon 0.8
132 TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon 1.86
133 TestFunctional/parallel/MountCmd/any-port 16.93
134 TestFunctional/parallel/ImageCommands/ImageSaveToFile 0.28
135 TestFunctional/parallel/ImageCommands/ImageRemove 0.41
136 TestFunctional/parallel/ImageCommands/ImageLoadFromFile 0.8
137 TestFunctional/parallel/ImageCommands/ImageSaveDaemon 0.78
138 TestFunctional/parallel/Version/short 0.04
139 TestFunctional/parallel/Version/components 0.43
140 TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP 0.06
141 TestFunctional/parallel/TunnelCmd/serial/AccessDirect 0
145 TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel 0.11
146 TestFunctional/parallel/MountCmd/specific-port 2.08
147 TestFunctional/parallel/MountCmd/VerifyCleanup 1.74
148 TestFunctional/parallel/ServiceCmd/DeployApp 13.14
149 TestFunctional/parallel/ServiceCmd/List 1.67
150 TestFunctional/parallel/ServiceCmd/JSONOutput 1.67
151 TestFunctional/parallel/ServiceCmd/HTTPS 0.49
152 TestFunctional/parallel/ServiceCmd/Format 0.49
153 TestFunctional/parallel/ServiceCmd/URL 0.49
154 TestFunctional/delete_echo-server_images 0.04
155 TestFunctional/delete_my-image_image 0.02
156 TestFunctional/delete_minikube_cached_images 0.02
160 TestMultiControlPlane/serial/StartCluster 101.82
161 TestMultiControlPlane/serial/DeployApp 5.4
162 TestMultiControlPlane/serial/PingHostFromPods 1.01
163 TestMultiControlPlane/serial/AddWorkerNode 20.29
164 TestMultiControlPlane/serial/NodeLabels 0.06
165 TestMultiControlPlane/serial/HAppyAfterClusterStart 0.62
166 TestMultiControlPlane/serial/CopyFile 15.21
167 TestMultiControlPlane/serial/StopSecondaryNode 11.42
168 TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop 0.46
169 TestMultiControlPlane/serial/RestartSecondaryNode 36.36
170 TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart 0.63
171 TestMultiControlPlane/serial/RestartClusterKeepsNodes 138.05
172 TestMultiControlPlane/serial/DeleteSecondaryNode 9.31
173 TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete 0.44
174 TestMultiControlPlane/serial/StopCluster 32.42
175 TestMultiControlPlane/serial/RestartCluster 100.68
176 TestMultiControlPlane/serial/DegradedAfterClusterRestart 0.45
177 TestMultiControlPlane/serial/AddSecondaryNode 37.32
178 TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd 0.61
181 TestImageBuild/serial/Setup 24.41
182 TestImageBuild/serial/NormalBuild 2.59
183 TestImageBuild/serial/BuildWithBuildArg 0.94
184 TestImageBuild/serial/BuildWithDockerIgnore 0.77
185 TestImageBuild/serial/BuildWithSpecifiedDockerfile 0.75
189 TestJSONOutput/start/Command 33.74
190 TestJSONOutput/start/Audit 0
192 TestJSONOutput/start/parallel/DistinctCurrentSteps 0
193 TestJSONOutput/start/parallel/IncreasingCurrentSteps 0
195 TestJSONOutput/pause/Command 0.56
196 TestJSONOutput/pause/Audit 0
198 TestJSONOutput/pause/parallel/DistinctCurrentSteps 0
199 TestJSONOutput/pause/parallel/IncreasingCurrentSteps 0
201 TestJSONOutput/unpause/Command 0.43
202 TestJSONOutput/unpause/Audit 0
204 TestJSONOutput/unpause/parallel/DistinctCurrentSteps 0
205 TestJSONOutput/unpause/parallel/IncreasingCurrentSteps 0
207 TestJSONOutput/stop/Command 5.76
208 TestJSONOutput/stop/Audit 0
210 TestJSONOutput/stop/parallel/DistinctCurrentSteps 0
211 TestJSONOutput/stop/parallel/IncreasingCurrentSteps 0
212 TestErrorJSONOutput 0.19
214 TestKicCustomNetwork/create_custom_network 26.22
215 TestKicCustomNetwork/use_default_bridge_network 25.46
216 TestKicExistingNetwork 22.62
217 TestKicCustomSubnet 22.29
218 TestKicStaticIP 22.78
219 TestMainNoArgs 0.04
220 TestMinikubeProfile 49.92
223 TestMountStart/serial/StartWithMountFirst 10.35
224 TestMountStart/serial/VerifyMountFirst 0.23
225 TestMountStart/serial/StartWithMountSecond 7.61
226 TestMountStart/serial/VerifyMountSecond 0.24
227 TestMountStart/serial/DeleteFirst 1.47
228 TestMountStart/serial/VerifyMountPostDelete 0.23
229 TestMountStart/serial/Stop 1.17
230 TestMountStart/serial/RestartStopped 8.93
231 TestMountStart/serial/VerifyMountPostStop 0.23
234 TestMultiNode/serial/FreshStart2Nodes 70.01
235 TestMultiNode/serial/DeployApp2Nodes 36.27
236 TestMultiNode/serial/PingHostFrom2Pods 0.69
237 TestMultiNode/serial/AddNode 16.29
238 TestMultiNode/serial/MultiNodeLabels 0.06
239 TestMultiNode/serial/ProfileList 0.27
240 TestMultiNode/serial/CopyFile 8.6
241 TestMultiNode/serial/StopNode 2.04
242 TestMultiNode/serial/StartAfterStop 9.54
243 TestMultiNode/serial/RestartKeepsNodes 109.81
244 TestMultiNode/serial/DeleteNode 5.18
245 TestMultiNode/serial/StopMultiNode 21.49
246 TestMultiNode/serial/RestartMultiNode 55.28
247 TestMultiNode/serial/ValidateNameConflict 22.29
252 TestPreload 98.44
254 TestScheduledStopUnix 97.08
255 TestSkaffold 102.86
257 TestInsufficientStorage 9.52
258 TestRunningBinaryUpgrade 75.65
260 TestKubernetesUpgrade 335.3
261 TestMissingContainerUpgrade 185.57
263 TestNoKubernetes/serial/StartNoK8sWithVersion 0.09
264 TestNoKubernetes/serial/StartWithK8s 34.57
265 TestNoKubernetes/serial/StartWithStopK8s 17.54
266 TestNoKubernetes/serial/Start 6.55
267 TestNoKubernetes/serial/VerifyK8sNotRunning 0.24
268 TestNoKubernetes/serial/ProfileList 1.19
269 TestNoKubernetes/serial/Stop 1.17
270 TestNoKubernetes/serial/StartNoArgs 7.57
271 TestNoKubernetes/serial/VerifyK8sNotRunningSecond 0.23
272 TestStoppedBinaryUpgrade/Setup 3.41
273 TestStoppedBinaryUpgrade/Upgrade 98.97
274 TestStoppedBinaryUpgrade/MinikubeLogs 1.01
283 TestPause/serial/Start 70.59
296 TestStartStop/group/old-k8s-version/serial/FirstStart 130.52
297 TestPause/serial/SecondStartNoReconfiguration 35.11
299 TestStartStop/group/no-preload/serial/FirstStart 70.03
300 TestPause/serial/Pause 0.65
301 TestPause/serial/VerifyStatus 0.35
302 TestPause/serial/Unpause 0.49
303 TestPause/serial/PauseAgain 0.66
304 TestPause/serial/DeletePaused 2.19
305 TestPause/serial/VerifyDeletedResources 0.67
307 TestStartStop/group/embed-certs/serial/FirstStart 65.02
308 TestStartStop/group/no-preload/serial/DeployApp 10.26
309 TestStartStop/group/no-preload/serial/EnableAddonWhileActive 0.85
310 TestStartStop/group/no-preload/serial/Stop 10.73
311 TestStartStop/group/no-preload/serial/EnableAddonAfterStop 0.21
312 TestStartStop/group/no-preload/serial/SecondStart 263.31
313 TestStartStop/group/embed-certs/serial/DeployApp 9.3
314 TestStartStop/group/embed-certs/serial/EnableAddonWhileActive 0.93
316 TestStartStop/group/default-k8s-diff-port/serial/FirstStart 68.55
317 TestStartStop/group/embed-certs/serial/Stop 10.73
318 TestStartStop/group/embed-certs/serial/EnableAddonAfterStop 0.22
319 TestStartStop/group/embed-certs/serial/SecondStart 263.85
320 TestStartStop/group/old-k8s-version/serial/DeployApp 9.51
321 TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive 1.03
322 TestStartStop/group/old-k8s-version/serial/Stop 10.92
323 TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop 0.17
324 TestStartStop/group/old-k8s-version/serial/SecondStart 23.68
325 TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop 26.01
326 TestStartStop/group/default-k8s-diff-port/serial/DeployApp 8.26
327 TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive 0.97
328 TestStartStop/group/default-k8s-diff-port/serial/Stop 10.86
329 TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop 5.07
330 TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop 0.19
331 TestStartStop/group/default-k8s-diff-port/serial/SecondStart 264.28
332 TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages 0.19
333 TestStartStop/group/old-k8s-version/serial/Pause 2.31
335 TestStartStop/group/newest-cni/serial/FirstStart 31.89
336 TestStartStop/group/newest-cni/serial/DeployApp 0
337 TestStartStop/group/newest-cni/serial/EnableAddonWhileActive 1
338 TestStartStop/group/newest-cni/serial/Stop 10.74
339 TestStartStop/group/newest-cni/serial/EnableAddonAfterStop 0.23
340 TestStartStop/group/newest-cni/serial/SecondStart 14.81
341 TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop 0
342 TestStartStop/group/newest-cni/serial/AddonExistsAfterStop 0
343 TestStartStop/group/newest-cni/serial/VerifyKubernetesImages 0.2
344 TestStartStop/group/newest-cni/serial/Pause 2.53
345 TestNetworkPlugins/group/auto/Start 68.62
346 TestNetworkPlugins/group/auto/KubeletFlags 0.25
347 TestNetworkPlugins/group/auto/NetCatPod 9.19
348 TestNetworkPlugins/group/auto/DNS 0.12
349 TestNetworkPlugins/group/auto/Localhost 0.13
350 TestNetworkPlugins/group/auto/HairPin 0.11
351 TestStartStop/group/no-preload/serial/UserAppExistsAfterStop 6.01
352 TestStartStop/group/no-preload/serial/AddonExistsAfterStop 5.08
353 TestNetworkPlugins/group/kindnet/Start 60.66
354 TestStartStop/group/no-preload/serial/VerifyKubernetesImages 0.21
355 TestStartStop/group/no-preload/serial/Pause 2.46
356 TestNetworkPlugins/group/calico/Start 58.24
357 TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop 6.01
358 TestStartStop/group/embed-certs/serial/AddonExistsAfterStop 5.08
359 TestStartStop/group/embed-certs/serial/VerifyKubernetesImages 0.23
360 TestStartStop/group/embed-certs/serial/Pause 2.73
361 TestNetworkPlugins/group/custom-flannel/Start 47.31
362 TestNetworkPlugins/group/kindnet/ControllerPod 6.01
363 TestNetworkPlugins/group/calico/ControllerPod 6.01
364 TestNetworkPlugins/group/kindnet/KubeletFlags 0.28
365 TestNetworkPlugins/group/kindnet/NetCatPod 9.19
366 TestNetworkPlugins/group/calico/KubeletFlags 0.3
367 TestNetworkPlugins/group/calico/NetCatPod 10.19
368 TestNetworkPlugins/group/kindnet/DNS 0.15
369 TestNetworkPlugins/group/kindnet/Localhost 0.14
370 TestNetworkPlugins/group/kindnet/HairPin 0.13
371 TestNetworkPlugins/group/calico/DNS 0.14
372 TestNetworkPlugins/group/calico/Localhost 0.13
373 TestNetworkPlugins/group/calico/HairPin 0.12
374 TestNetworkPlugins/group/custom-flannel/KubeletFlags 0.26
375 TestNetworkPlugins/group/custom-flannel/NetCatPod 9.21
376 TestNetworkPlugins/group/custom-flannel/DNS 0.16
377 TestNetworkPlugins/group/custom-flannel/Localhost 0.14
378 TestNetworkPlugins/group/custom-flannel/HairPin 0.13
379 TestNetworkPlugins/group/false/Start 66.52
380 TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop 6.01
381 TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop 6.09
382 TestNetworkPlugins/group/enable-default-cni/Start 64.77
383 TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages 0.23
384 TestStartStop/group/default-k8s-diff-port/serial/Pause 2.58
385 TestNetworkPlugins/group/flannel/Start 46.97
386 TestNetworkPlugins/group/bridge/Start 41.56
387 TestNetworkPlugins/group/bridge/KubeletFlags 0.3
388 TestNetworkPlugins/group/bridge/NetCatPod 9.2
389 TestNetworkPlugins/group/flannel/ControllerPod 6.01
390 TestNetworkPlugins/group/false/KubeletFlags 0.25
391 TestNetworkPlugins/group/false/NetCatPod 9.17
392 TestNetworkPlugins/group/bridge/DNS 0.13
393 TestNetworkPlugins/group/bridge/Localhost 0.11
394 TestNetworkPlugins/group/bridge/HairPin 0.11
395 TestNetworkPlugins/group/flannel/KubeletFlags 0.25
396 TestNetworkPlugins/group/flannel/NetCatPod 9.19
397 TestNetworkPlugins/group/enable-default-cni/KubeletFlags 0.27
398 TestNetworkPlugins/group/enable-default-cni/NetCatPod 9.22
399 TestNetworkPlugins/group/false/DNS 0.15
400 TestNetworkPlugins/group/false/Localhost 0.11
401 TestNetworkPlugins/group/false/HairPin 0.13
402 TestNetworkPlugins/group/flannel/DNS 0.13
403 TestNetworkPlugins/group/flannel/Localhost 0.14
404 TestNetworkPlugins/group/flannel/HairPin 0.11
405 TestNetworkPlugins/group/enable-default-cni/DNS 0.16
406 TestNetworkPlugins/group/enable-default-cni/Localhost 0.13
407 TestNetworkPlugins/group/enable-default-cni/HairPin 0.14
408 TestNetworkPlugins/group/kubenet/Start 40.71
409 TestNetworkPlugins/group/kubenet/KubeletFlags 0.25
410 TestNetworkPlugins/group/kubenet/NetCatPod 10.17
411 TestNetworkPlugins/group/kubenet/DNS 0.12
412 TestNetworkPlugins/group/kubenet/Localhost 0.1
413 TestNetworkPlugins/group/kubenet/HairPin 0.11
TestDownloadOnly/v1.20.0/json-events (17.13s)

=== RUN   TestDownloadOnly/v1.20.0/json-events
aaa_download_only_test.go:81: (dbg) Run:  out/minikube-linux-amd64 start -o=json --download-only -p download-only-246170 --force --alsologtostderr --kubernetes-version=v1.20.0 --container-runtime=docker --driver=docker  --container-runtime=docker
aaa_download_only_test.go:81: (dbg) Done: out/minikube-linux-amd64 start -o=json --download-only -p download-only-246170 --force --alsologtostderr --kubernetes-version=v1.20.0 --container-runtime=docker --driver=docker  --container-runtime=docker: (17.125156268s)
--- PASS: TestDownloadOnly/v1.20.0/json-events (17.13s)

TestDownloadOnly/v1.20.0/preload-exists (0s)

=== RUN   TestDownloadOnly/v1.20.0/preload-exists
--- PASS: TestDownloadOnly/v1.20.0/preload-exists (0.00s)

TestDownloadOnly/v1.20.0/LogsDuration (0.06s)

=== RUN   TestDownloadOnly/v1.20.0/LogsDuration
aaa_download_only_test.go:184: (dbg) Run:  out/minikube-linux-amd64 logs -p download-only-246170
aaa_download_only_test.go:184: (dbg) Non-zero exit: out/minikube-linux-amd64 logs -p download-only-246170: exit status 85 (56.247379ms)

-- stdout --
	
	==> Audit <==
	|---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	| Command |              Args              |       Profile        |  User   | Version |     Start Time      | End Time |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	| start   | -o=json --download-only        | download-only-246170 | jenkins | v1.34.0 | 12 Sep 24 21:28 UTC |          |
	|         | -p download-only-246170        |                      |         |         |                     |          |
	|         | --force --alsologtostderr      |                      |         |         |                     |          |
	|         | --kubernetes-version=v1.20.0   |                      |         |         |                     |          |
	|         | --container-runtime=docker     |                      |         |         |                     |          |
	|         | --driver=docker                |                      |         |         |                     |          |
	|         | --container-runtime=docker     |                      |         |         |                     |          |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	
	
	==> Last Start <==
	Log file created at: 2024/09/12 21:28:49
	Running on machine: ubuntu-20-agent
	Binary: Built with gc go1.22.5 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0912 21:28:49.031210   12530 out.go:345] Setting OutFile to fd 1 ...
	I0912 21:28:49.031298   12530 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0912 21:28:49.031305   12530 out.go:358] Setting ErrFile to fd 2...
	I0912 21:28:49.031310   12530 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0912 21:28:49.031492   12530 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19616-5723/.minikube/bin
	W0912 21:28:49.031602   12530 root.go:314] Error reading config file at /home/jenkins/minikube-integration/19616-5723/.minikube/config/config.json: open /home/jenkins/minikube-integration/19616-5723/.minikube/config/config.json: no such file or directory
	I0912 21:28:49.032135   12530 out.go:352] Setting JSON to true
	I0912 21:28:49.032966   12530 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent","uptime":672,"bootTime":1726175857,"procs":174,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1067-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0912 21:28:49.033025   12530 start.go:139] virtualization: kvm guest
	I0912 21:28:49.035227   12530 out.go:97] [download-only-246170] minikube v1.34.0 on Ubuntu 20.04 (kvm/amd64)
	W0912 21:28:49.035336   12530 preload.go:293] Failed to list preload files: open /home/jenkins/minikube-integration/19616-5723/.minikube/cache/preloaded-tarball: no such file or directory
	I0912 21:28:49.035406   12530 notify.go:220] Checking for updates...
	I0912 21:28:49.036647   12530 out.go:169] MINIKUBE_LOCATION=19616
	I0912 21:28:49.037998   12530 out.go:169] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0912 21:28:49.039202   12530 out.go:169] KUBECONFIG=/home/jenkins/minikube-integration/19616-5723/kubeconfig
	I0912 21:28:49.040357   12530 out.go:169] MINIKUBE_HOME=/home/jenkins/minikube-integration/19616-5723/.minikube
	I0912 21:28:49.041469   12530 out.go:169] MINIKUBE_BIN=out/minikube-linux-amd64
	W0912 21:28:49.043684   12530 out.go:321] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I0912 21:28:49.043908   12530 driver.go:394] Setting default libvirt URI to qemu:///system
	I0912 21:28:49.064764   12530 docker.go:123] docker version: linux-27.2.1:Docker Engine - Community
	I0912 21:28:49.064850   12530 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0912 21:28:49.424090   12530 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:30 OomKillDisable:true NGoroutines:53 SystemTime:2024-09-12 21:28:49.415655955 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1067-gcp OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33647939584 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent Labels:[] ExperimentalBuild:false ServerVersion:27.2.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:7f7fdf5fed64eb6a7caf99b3e12efcf9d60e311c Expected:7f7fdf5fed64eb6a7caf99b3e12efcf9d60e311c} RuncCommit:{ID:v1.1.14-0-g2c9f560 Expected:v1.1.14-0-g2c9f560} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.16.2] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.29.2] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I0912 21:28:49.424204   12530 docker.go:318] overlay module found
	I0912 21:28:49.425913   12530 out.go:97] Using the docker driver based on user configuration
	I0912 21:28:49.425940   12530 start.go:297] selected driver: docker
	I0912 21:28:49.425951   12530 start.go:901] validating driver "docker" against <nil>
	I0912 21:28:49.426022   12530 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0912 21:28:49.472067   12530 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:30 OomKillDisable:true NGoroutines:53 SystemTime:2024-09-12 21:28:49.463731259 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1067-gcp OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:x86
_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33647939584 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent Labels:[] ExperimentalBuild:false ServerVersion:27.2.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:7f7fdf5fed64eb6a7caf99b3e12efcf9d60e311c Expected:7f7fdf5fed64eb6a7caf99b3e12efcf9d60e311c} RuncCommit:{ID:v1.1.14-0-g2c9f560 Expected:v1.1.14-0-g2c9f560} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerError
s:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.16.2] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.29.2] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I0912 21:28:49.472224   12530 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0912 21:28:49.472687   12530 start_flags.go:393] Using suggested 8000MB memory alloc based on sys=32089MB, container=32089MB
	I0912 21:28:49.472863   12530 start_flags.go:929] Wait components to verify : map[apiserver:true system_pods:true]
	I0912 21:28:49.474670   12530 out.go:169] Using Docker driver with root privileges
	I0912 21:28:49.475940   12530 cni.go:84] Creating CNI manager for ""
	I0912 21:28:49.475957   12530 cni.go:162] CNI unnecessary in this configuration, recommending no CNI
	I0912 21:28:49.476020   12530 start.go:340] cluster config:
	{Name:download-only-246170 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726156396-19616@sha256:66b06a42534e914a5c8ad765d7508a93a34031939ec9a6b3a818ef0a444ff889 Memory:8000 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:download-only-246170 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local Co
ntainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0912 21:28:49.477353   12530 out.go:97] Starting "download-only-246170" primary control-plane node in "download-only-246170" cluster
	I0912 21:28:49.477367   12530 cache.go:121] Beginning downloading kic base image for docker with docker
	I0912 21:28:49.478617   12530 out.go:97] Pulling base image v0.0.45-1726156396-19616 ...
	I0912 21:28:49.478634   12530 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime docker
	I0912 21:28:49.478738   12530 image.go:79] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726156396-19616@sha256:66b06a42534e914a5c8ad765d7508a93a34031939ec9a6b3a818ef0a444ff889 in local docker daemon
	I0912 21:28:49.493832   12530 cache.go:149] Downloading gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726156396-19616@sha256:66b06a42534e914a5c8ad765d7508a93a34031939ec9a6b3a818ef0a444ff889 to local cache
	I0912 21:28:49.494005   12530 image.go:63] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726156396-19616@sha256:66b06a42534e914a5c8ad765d7508a93a34031939ec9a6b3a818ef0a444ff889 in local cache directory
	I0912 21:28:49.494112   12530 image.go:148] Writing gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726156396-19616@sha256:66b06a42534e914a5c8ad765d7508a93a34031939ec9a6b3a818ef0a444ff889 to local cache
	I0912 21:28:49.653145   12530 preload.go:118] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.20.0/preloaded-images-k8s-v18-v1.20.0-docker-overlay2-amd64.tar.lz4
	I0912 21:28:49.653187   12530 cache.go:56] Caching tarball of preloaded images
	I0912 21:28:49.653343   12530 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime docker
	I0912 21:28:49.655244   12530 out.go:97] Downloading Kubernetes v1.20.0 preload ...
	I0912 21:28:49.655259   12530 preload.go:236] getting checksum for preloaded-images-k8s-v18-v1.20.0-docker-overlay2-amd64.tar.lz4 ...
	I0912 21:28:49.758078   12530 download.go:107] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.20.0/preloaded-images-k8s-v18-v1.20.0-docker-overlay2-amd64.tar.lz4?checksum=md5:9a82241e9b8b4ad2b5cca73108f2c7a3 -> /home/jenkins/minikube-integration/19616-5723/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-docker-overlay2-amd64.tar.lz4
	I0912 21:29:00.791613   12530 preload.go:247] saving checksum for preloaded-images-k8s-v18-v1.20.0-docker-overlay2-amd64.tar.lz4 ...
	I0912 21:29:00.791702   12530 preload.go:254] verifying checksum of /home/jenkins/minikube-integration/19616-5723/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-docker-overlay2-amd64.tar.lz4 ...
	I0912 21:29:01.573834   12530 cache.go:59] Finished verifying existence of preloaded tar for v1.20.0 on docker
	I0912 21:29:01.574147   12530 profile.go:143] Saving config to /home/jenkins/minikube-integration/19616-5723/.minikube/profiles/download-only-246170/config.json ...
	I0912 21:29:01.574176   12530 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19616-5723/.minikube/profiles/download-only-246170/config.json: {Name:mk1aaab60404d07f95e8997097141e5329487121 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0912 21:29:01.574318   12530 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime docker
	I0912 21:29:01.574507   12530 download.go:107] Downloading: https://dl.k8s.io/release/v1.20.0/bin/linux/amd64/kubectl?checksum=file:https://dl.k8s.io/release/v1.20.0/bin/linux/amd64/kubectl.sha256 -> /home/jenkins/minikube-integration/19616-5723/.minikube/cache/linux/amd64/v1.20.0/kubectl
	
	
	* The control-plane node download-only-246170 host does not exist
	  To start a cluster, run: "minikube start -p download-only-246170"

-- /stdout --
aaa_download_only_test.go:185: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.20.0/LogsDuration (0.06s)
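
Note: exit status 85 is the expected outcome here, since a --download-only profile never creates a host for "minikube logs" to read. The preload tarball itself is fetched with an md5 checksum pinned in the URL query string (download.go:107 above); a minimal shell sketch of the same verification, using the URL and checksum from this log:

	curl -fLo preload.tar.lz4 "https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.20.0/preloaded-images-k8s-v18-v1.20.0-docker-overlay2-amd64.tar.lz4"
	# compare against the checksum the downloader pinned
	echo "9a82241e9b8b4ad2b5cca73108f2c7a3  preload.tar.lz4" | md5sum -c -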

TestDownloadOnly/v1.20.0/DeleteAll (0.2s)

=== RUN   TestDownloadOnly/v1.20.0/DeleteAll
aaa_download_only_test.go:197: (dbg) Run:  out/minikube-linux-amd64 delete --all
--- PASS: TestDownloadOnly/v1.20.0/DeleteAll (0.20s)

TestDownloadOnly/v1.20.0/DeleteAlwaysSucceeds (0.12s)

=== RUN   TestDownloadOnly/v1.20.0/DeleteAlwaysSucceeds
aaa_download_only_test.go:208: (dbg) Run:  out/minikube-linux-amd64 delete -p download-only-246170
--- PASS: TestDownloadOnly/v1.20.0/DeleteAlwaysSucceeds (0.12s)

TestDownloadOnly/v1.31.1/json-events (16s)

=== RUN   TestDownloadOnly/v1.31.1/json-events
aaa_download_only_test.go:81: (dbg) Run:  out/minikube-linux-amd64 start -o=json --download-only -p download-only-998393 --force --alsologtostderr --kubernetes-version=v1.31.1 --container-runtime=docker --driver=docker  --container-runtime=docker
aaa_download_only_test.go:81: (dbg) Done: out/minikube-linux-amd64 start -o=json --download-only -p download-only-998393 --force --alsologtostderr --kubernetes-version=v1.31.1 --container-runtime=docker --driver=docker  --container-runtime=docker: (16.000778759s)
--- PASS: TestDownloadOnly/v1.31.1/json-events (16.00s)

TestDownloadOnly/v1.31.1/preload-exists (0s)

=== RUN   TestDownloadOnly/v1.31.1/preload-exists
--- PASS: TestDownloadOnly/v1.31.1/preload-exists (0.00s)

TestDownloadOnly/v1.31.1/LogsDuration (0.06s)

=== RUN   TestDownloadOnly/v1.31.1/LogsDuration
aaa_download_only_test.go:184: (dbg) Run:  out/minikube-linux-amd64 logs -p download-only-998393
aaa_download_only_test.go:184: (dbg) Non-zero exit: out/minikube-linux-amd64 logs -p download-only-998393: exit status 85 (55.854146ms)

-- stdout --
	
	==> Audit <==
	|---------|--------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| Command |              Args              |       Profile        |  User   | Version |     Start Time      |      End Time       |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| start   | -o=json --download-only        | download-only-246170 | jenkins | v1.34.0 | 12 Sep 24 21:28 UTC |                     |
	|         | -p download-only-246170        |                      |         |         |                     |                     |
	|         | --force --alsologtostderr      |                      |         |         |                     |                     |
	|         | --kubernetes-version=v1.20.0   |                      |         |         |                     |                     |
	|         | --container-runtime=docker     |                      |         |         |                     |                     |
	|         | --driver=docker                |                      |         |         |                     |                     |
	|         | --container-runtime=docker     |                      |         |         |                     |                     |
	| delete  | --all                          | minikube             | jenkins | v1.34.0 | 12 Sep 24 21:29 UTC | 12 Sep 24 21:29 UTC |
	| delete  | -p download-only-246170        | download-only-246170 | jenkins | v1.34.0 | 12 Sep 24 21:29 UTC | 12 Sep 24 21:29 UTC |
	| start   | -o=json --download-only        | download-only-998393 | jenkins | v1.34.0 | 12 Sep 24 21:29 UTC |                     |
	|         | -p download-only-998393        |                      |         |         |                     |                     |
	|         | --force --alsologtostderr      |                      |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.1   |                      |         |         |                     |                     |
	|         | --container-runtime=docker     |                      |         |         |                     |                     |
	|         | --driver=docker                |                      |         |         |                     |                     |
	|         | --container-runtime=docker     |                      |         |         |                     |                     |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/09/12 21:29:06
	Running on machine: ubuntu-20-agent
	Binary: Built with gc go1.22.5 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0912 21:29:06.542306   12930 out.go:345] Setting OutFile to fd 1 ...
	I0912 21:29:06.542551   12930 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0912 21:29:06.542562   12930 out.go:358] Setting ErrFile to fd 2...
	I0912 21:29:06.542566   12930 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0912 21:29:06.542759   12930 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19616-5723/.minikube/bin
	I0912 21:29:06.543334   12930 out.go:352] Setting JSON to true
	I0912 21:29:06.544173   12930 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent","uptime":690,"bootTime":1726175857,"procs":172,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1067-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0912 21:29:06.544232   12930 start.go:139] virtualization: kvm guest
	I0912 21:29:06.546241   12930 out.go:97] [download-only-998393] minikube v1.34.0 on Ubuntu 20.04 (kvm/amd64)
	I0912 21:29:06.546379   12930 notify.go:220] Checking for updates...
	I0912 21:29:06.547905   12930 out.go:169] MINIKUBE_LOCATION=19616
	I0912 21:29:06.549317   12930 out.go:169] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0912 21:29:06.551096   12930 out.go:169] KUBECONFIG=/home/jenkins/minikube-integration/19616-5723/kubeconfig
	I0912 21:29:06.552363   12930 out.go:169] MINIKUBE_HOME=/home/jenkins/minikube-integration/19616-5723/.minikube
	I0912 21:29:06.553707   12930 out.go:169] MINIKUBE_BIN=out/minikube-linux-amd64
	W0912 21:29:06.556597   12930 out.go:321] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I0912 21:29:06.556796   12930 driver.go:394] Setting default libvirt URI to qemu:///system
	I0912 21:29:06.577298   12930 docker.go:123] docker version: linux-27.2.1:Docker Engine - Community
	I0912 21:29:06.577381   12930 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0912 21:29:06.621678   12930 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:26 OomKillDisable:true NGoroutines:45 SystemTime:2024-09-12 21:29:06.61308114 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1067-gcp OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:x86_
64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33647939584 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent Labels:[] ExperimentalBuild:false ServerVersion:27.2.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:7f7fdf5fed64eb6a7caf99b3e12efcf9d60e311c Expected:7f7fdf5fed64eb6a7caf99b3e12efcf9d60e311c} RuncCommit:{ID:v1.1.14-0-g2c9f560 Expected:v1.1.14-0-g2c9f560} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors
:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.16.2] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.29.2] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I0912 21:29:06.621771   12930 docker.go:318] overlay module found
	I0912 21:29:06.623705   12930 out.go:97] Using the docker driver based on user configuration
	I0912 21:29:06.623730   12930 start.go:297] selected driver: docker
	I0912 21:29:06.623735   12930 start.go:901] validating driver "docker" against <nil>
	I0912 21:29:06.623812   12930 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0912 21:29:06.669570   12930 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:26 OomKillDisable:true NGoroutines:45 SystemTime:2024-09-12 21:29:06.660959753 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1067-gcp OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:x86
_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33647939584 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent Labels:[] ExperimentalBuild:false ServerVersion:27.2.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:7f7fdf5fed64eb6a7caf99b3e12efcf9d60e311c Expected:7f7fdf5fed64eb6a7caf99b3e12efcf9d60e311c} RuncCommit:{ID:v1.1.14-0-g2c9f560 Expected:v1.1.14-0-g2c9f560} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerError
s:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.16.2] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.29.2] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I0912 21:29:06.669745   12930 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0912 21:29:06.670205   12930 start_flags.go:393] Using suggested 8000MB memory alloc based on sys=32089MB, container=32089MB
	I0912 21:29:06.670374   12930 start_flags.go:929] Wait components to verify : map[apiserver:true system_pods:true]
	I0912 21:29:06.672490   12930 out.go:169] Using Docker driver with root privileges
	I0912 21:29:06.674081   12930 cni.go:84] Creating CNI manager for ""
	I0912 21:29:06.674114   12930 cni.go:158] "docker" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0912 21:29:06.674127   12930 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0912 21:29:06.674216   12930 start.go:340] cluster config:
	{Name:download-only-998393 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726156396-19616@sha256:66b06a42534e914a5c8ad765d7508a93a34031939ec9a6b3a818ef0a444ff889 Memory:8000 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:download-only-998393 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local Co
ntainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0912 21:29:06.675699   12930 out.go:97] Starting "download-only-998393" primary control-plane node in "download-only-998393" cluster
	I0912 21:29:06.675724   12930 cache.go:121] Beginning downloading kic base image for docker with docker
	I0912 21:29:06.677043   12930 out.go:97] Pulling base image v0.0.45-1726156396-19616 ...
	I0912 21:29:06.677076   12930 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime docker
	I0912 21:29:06.677200   12930 image.go:79] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726156396-19616@sha256:66b06a42534e914a5c8ad765d7508a93a34031939ec9a6b3a818ef0a444ff889 in local docker daemon
	I0912 21:29:06.692977   12930 cache.go:149] Downloading gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726156396-19616@sha256:66b06a42534e914a5c8ad765d7508a93a34031939ec9a6b3a818ef0a444ff889 to local cache
	I0912 21:29:06.693112   12930 image.go:63] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726156396-19616@sha256:66b06a42534e914a5c8ad765d7508a93a34031939ec9a6b3a818ef0a444ff889 in local cache directory
	I0912 21:29:06.693128   12930 image.go:66] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726156396-19616@sha256:66b06a42534e914a5c8ad765d7508a93a34031939ec9a6b3a818ef0a444ff889 in local cache directory, skipping pull
	I0912 21:29:06.693132   12930 image.go:135] gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726156396-19616@sha256:66b06a42534e914a5c8ad765d7508a93a34031939ec9a6b3a818ef0a444ff889 exists in cache, skipping pull
	I0912 21:29:06.693140   12930 cache.go:152] successfully saved gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726156396-19616@sha256:66b06a42534e914a5c8ad765d7508a93a34031939ec9a6b3a818ef0a444ff889 as a tarball
	I0912 21:29:07.158257   12930 preload.go:118] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.31.1/preloaded-images-k8s-v18-v1.31.1-docker-overlay2-amd64.tar.lz4
	I0912 21:29:07.158292   12930 cache.go:56] Caching tarball of preloaded images
	I0912 21:29:07.158465   12930 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime docker
	I0912 21:29:07.160139   12930 out.go:97] Downloading Kubernetes v1.31.1 preload ...
	I0912 21:29:07.160151   12930 preload.go:236] getting checksum for preloaded-images-k8s-v18-v1.31.1-docker-overlay2-amd64.tar.lz4 ...
	I0912 21:29:07.759352   12930 download.go:107] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.31.1/preloaded-images-k8s-v18-v1.31.1-docker-overlay2-amd64.tar.lz4?checksum=md5:42e9a173dd5f0c45ed1a890dd06aec5a -> /home/jenkins/minikube-integration/19616-5723/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-docker-overlay2-amd64.tar.lz4
	I0912 21:29:20.956505   12930 preload.go:247] saving checksum for preloaded-images-k8s-v18-v1.31.1-docker-overlay2-amd64.tar.lz4 ...
	I0912 21:29:20.956632   12930 preload.go:254] verifying checksum of /home/jenkins/minikube-integration/19616-5723/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-docker-overlay2-amd64.tar.lz4 ...
	
	
	* The control-plane node download-only-998393 host does not exist
	  To start a cluster, run: "minikube start -p download-only-998393"

-- /stdout --
aaa_download_only_test.go:185: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.31.1/LogsDuration (0.06s)
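
Note: unlike the v1.20.0 run, where cni.go:162 decided a CNI was unnecessary, this v1.31.1 run takes the kubernetes v1.24+ path (cni.go:158) and selects the bridge CNI, so the generated cluster config carries NetworkPlugin:cni. A sketch for watching that decision on a fresh profile (profile name is illustrative):

	out/minikube-linux-amd64 start -p cni-check --download-only \
	  --kubernetes-version=v1.31.1 --driver=docker --container-runtime=docker \
	  --alsologtostderr 2>&1 | grep 'cni.go'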

TestDownloadOnly/v1.31.1/DeleteAll (0.2s)

=== RUN   TestDownloadOnly/v1.31.1/DeleteAll
aaa_download_only_test.go:197: (dbg) Run:  out/minikube-linux-amd64 delete --all
--- PASS: TestDownloadOnly/v1.31.1/DeleteAll (0.20s)

TestDownloadOnly/v1.31.1/DeleteAlwaysSucceeds (0.12s)

=== RUN   TestDownloadOnly/v1.31.1/DeleteAlwaysSucceeds
aaa_download_only_test.go:208: (dbg) Run:  out/minikube-linux-amd64 delete -p download-only-998393
--- PASS: TestDownloadOnly/v1.31.1/DeleteAlwaysSucceeds (0.12s)

TestDownloadOnlyKic (1.36s)

=== RUN   TestDownloadOnlyKic
aaa_download_only_test.go:232: (dbg) Run:  out/minikube-linux-amd64 start --download-only -p download-docker-093887 --alsologtostderr --driver=docker  --container-runtime=docker
helpers_test.go:175: Cleaning up "download-docker-093887" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p download-docker-093887
--- PASS: TestDownloadOnlyKic (1.36s)
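
Note: the 1.36s runtime is possible because the kic base image was already cached by the earlier download-only runs (image.go:66 and image.go:135 above report "exists in cache, skipping pull"). As a sketch, the cached artifacts can be inspected directly; the kic subdirectory layout is an assumption, while the rest of the path comes from this run's MINIKUBE_HOME:

	ls /home/jenkins/minikube-integration/19616-5723/.minikube/cache/kic/amd64/
	ls /home/jenkins/minikube-integration/19616-5723/.minikube/cache/preloaded-tarball/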

TestBinaryMirror (1.93s)

=== RUN   TestBinaryMirror
aaa_download_only_test.go:314: (dbg) Run:  out/minikube-linux-amd64 start --download-only -p binary-mirror-374984 --alsologtostderr --binary-mirror http://127.0.0.1:41283 --driver=docker  --container-runtime=docker
aaa_download_only_test.go:314: (dbg) Done: out/minikube-linux-amd64 start --download-only -p binary-mirror-374984 --alsologtostderr --binary-mirror http://127.0.0.1:41283 --driver=docker  --container-runtime=docker: (1.598705441s)
helpers_test.go:175: Cleaning up "binary-mirror-374984" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p binary-mirror-374984
--- PASS: TestBinaryMirror (1.93s)
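
Note: --binary-mirror redirects the kubectl/kubeadm/kubelet downloads away from dl.k8s.io to a caller-supplied HTTP server. A rough sketch of standing up such a mirror; the directory layout mirroring the upstream release URLs is an assumption, and the binaries plus their .sha256 files must be staged beforehand:

	mkdir -p mirror/v1.31.1/bin/linux/amd64   # stage kubectl, kubectl.sha256, ... here
	(cd mirror && python3 -m http.server 41283) &
	out/minikube-linux-amd64 start --download-only -p binary-mirror-374984 \
	  --binary-mirror http://127.0.0.1:41283 --driver=docker --container-runtime=docker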

TestOffline (76.96s)

=== RUN   TestOffline
=== PAUSE TestOffline

=== CONT  TestOffline
aab_offline_test.go:55: (dbg) Run:  out/minikube-linux-amd64 start -p offline-docker-455099 --alsologtostderr -v=1 --memory=2048 --wait=true --driver=docker  --container-runtime=docker
aab_offline_test.go:55: (dbg) Done: out/minikube-linux-amd64 start -p offline-docker-455099 --alsologtostderr -v=1 --memory=2048 --wait=true --driver=docker  --container-runtime=docker: (1m14.343888417s)
helpers_test.go:175: Cleaning up "offline-docker-455099" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p offline-docker-455099
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p offline-docker-455099: (2.617751766s)
--- PASS: TestOffline (76.96s)

TestAddons/PreSetup/EnablingAddonOnNonExistingCluster (0.05s)

=== RUN   TestAddons/PreSetup/EnablingAddonOnNonExistingCluster
=== PAUSE TestAddons/PreSetup/EnablingAddonOnNonExistingCluster

=== CONT  TestAddons/PreSetup/EnablingAddonOnNonExistingCluster
addons_test.go:1037: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p addons-207808
addons_test.go:1037: (dbg) Non-zero exit: out/minikube-linux-amd64 addons enable dashboard -p addons-207808: exit status 85 (46.914577ms)

-- stdout --
	* Profile "addons-207808" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p addons-207808"

-- /stdout --
--- PASS: TestAddons/PreSetup/EnablingAddonOnNonExistingCluster (0.05s)
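
Note: exit status 85 doubles here as minikube's "profile not found" code for addon commands, which is what this pre-setup test asserts. The same check from a shell, as a sketch:

	out/minikube-linux-amd64 addons enable dashboard -p addons-207808
	echo $?    # expect 85 while the addons-207808 profile does not exist yet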

TestAddons/PreSetup/DisablingAddonOnNonExistingCluster (0.05s)

=== RUN   TestAddons/PreSetup/DisablingAddonOnNonExistingCluster
=== PAUSE TestAddons/PreSetup/DisablingAddonOnNonExistingCluster

=== CONT  TestAddons/PreSetup/DisablingAddonOnNonExistingCluster
addons_test.go:1048: (dbg) Run:  out/minikube-linux-amd64 addons disable dashboard -p addons-207808
addons_test.go:1048: (dbg) Non-zero exit: out/minikube-linux-amd64 addons disable dashboard -p addons-207808: exit status 85 (49.278734ms)

-- stdout --
	* Profile "addons-207808" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p addons-207808"

-- /stdout --
--- PASS: TestAddons/PreSetup/DisablingAddonOnNonExistingCluster (0.05s)

TestAddons/Setup (210.27s)

=== RUN   TestAddons/Setup
addons_test.go:110: (dbg) Run:  out/minikube-linux-amd64 start -p addons-207808 --wait=true --memory=4000 --alsologtostderr --addons=registry --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=storage-provisioner-rancher --addons=nvidia-device-plugin --addons=yakd --addons=volcano --driver=docker  --container-runtime=docker --addons=ingress --addons=ingress-dns --addons=helm-tiller
addons_test.go:110: (dbg) Done: out/minikube-linux-amd64 start -p addons-207808 --wait=true --memory=4000 --alsologtostderr --addons=registry --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=storage-provisioner-rancher --addons=nvidia-device-plugin --addons=yakd --addons=volcano --driver=docker  --container-runtime=docker --addons=ingress --addons=ingress-dns --addons=helm-tiller: (3m30.265927247s)
--- PASS: TestAddons/Setup (210.27s)
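
Note: this single start enables fourteen addons at once, and everything later in TestAddons runs against this one profile. To confirm which addons actually landed enabled, a quick check:

	out/minikube-linux-amd64 -p addons-207808 addons list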

TestAddons/serial/Volcano (39.66s)

=== RUN   TestAddons/serial/Volcano
addons_test.go:913: volcano-controller stabilized in 10.826782ms
addons_test.go:905: volcano-admission stabilized in 10.887376ms
addons_test.go:897: volcano-scheduler stabilized in 10.921228ms
addons_test.go:919: (dbg) TestAddons/serial/Volcano: waiting 6m0s for pods matching "app=volcano-scheduler" in namespace "volcano-system" ...
helpers_test.go:344: "volcano-scheduler-576bc46687-5dcqv" [e26099c1-0a75-44c0-ba4e-223383aeba1a] Running
addons_test.go:919: (dbg) TestAddons/serial/Volcano: app=volcano-scheduler healthy within 6.003423933s
addons_test.go:923: (dbg) TestAddons/serial/Volcano: waiting 6m0s for pods matching "app=volcano-admission" in namespace "volcano-system" ...
helpers_test.go:344: "volcano-admission-77d7d48b68-h7mdf" [834a9cd1-0080-4c12-8aba-952abcb70853] Running
addons_test.go:923: (dbg) TestAddons/serial/Volcano: app=volcano-admission healthy within 5.003280123s
addons_test.go:927: (dbg) TestAddons/serial/Volcano: waiting 6m0s for pods matching "app=volcano-controller" in namespace "volcano-system" ...
helpers_test.go:344: "volcano-controllers-56675bb4d5-f6jqq" [c5d546e3-1b75-41ec-82c2-db65e34a9967] Running
addons_test.go:927: (dbg) TestAddons/serial/Volcano: app=volcano-controller healthy within 5.003475688s
addons_test.go:932: (dbg) Run:  kubectl --context addons-207808 delete -n volcano-system job volcano-admission-init
addons_test.go:938: (dbg) Run:  kubectl --context addons-207808 create -f testdata/vcjob.yaml
addons_test.go:946: (dbg) Run:  kubectl --context addons-207808 get vcjob -n my-volcano
addons_test.go:964: (dbg) TestAddons/serial/Volcano: waiting 3m0s for pods matching "volcano.sh/job-name=test-job" in namespace "my-volcano" ...
helpers_test.go:344: "test-job-nginx-0" [c3bf845c-3165-415d-8f25-ea32d5671ecb] Pending
helpers_test.go:344: "test-job-nginx-0" [c3bf845c-3165-415d-8f25-ea32d5671ecb] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
helpers_test.go:344: "test-job-nginx-0" [c3bf845c-3165-415d-8f25-ea32d5671ecb] Running
addons_test.go:964: (dbg) TestAddons/serial/Volcano: volcano.sh/job-name=test-job healthy within 13.003805097s
addons_test.go:968: (dbg) Run:  out/minikube-linux-amd64 -p addons-207808 addons disable volcano --alsologtostderr -v=1
addons_test.go:968: (dbg) Done: out/minikube-linux-amd64 -p addons-207808 addons disable volcano --alsologtostderr -v=1: (10.325867334s)
--- PASS: TestAddons/serial/Volcano (39.66s)
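
Note: the test deletes the volcano-admission-init job before submitting testdata/vcjob.yaml, then waits on the volcano.sh/job-name label. The same wait can be reproduced directly; a sketch:

	kubectl --context addons-207808 get vcjob -n my-volcano
	kubectl --context addons-207808 wait --for=condition=Ready pod \
	  -l volcano.sh/job-name=test-job -n my-volcano --timeout=3m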

TestAddons/serial/GCPAuth/Namespaces (0.11s)

=== RUN   TestAddons/serial/GCPAuth/Namespaces
addons_test.go:656: (dbg) Run:  kubectl --context addons-207808 create ns new-namespace
addons_test.go:670: (dbg) Run:  kubectl --context addons-207808 get secret gcp-auth -n new-namespace
--- PASS: TestAddons/serial/GCPAuth/Namespaces (0.11s)

TestAddons/parallel/Ingress (21.17s)

=== RUN   TestAddons/parallel/Ingress
=== PAUSE TestAddons/parallel/Ingress

=== CONT  TestAddons/parallel/Ingress
addons_test.go:209: (dbg) Run:  kubectl --context addons-207808 wait --for=condition=ready --namespace=ingress-nginx pod --selector=app.kubernetes.io/component=controller --timeout=90s
addons_test.go:234: (dbg) Run:  kubectl --context addons-207808 replace --force -f testdata/nginx-ingress-v1.yaml
addons_test.go:247: (dbg) Run:  kubectl --context addons-207808 replace --force -f testdata/nginx-pod-svc.yaml
addons_test.go:252: (dbg) TestAddons/parallel/Ingress: waiting 8m0s for pods matching "run=nginx" in namespace "default" ...
helpers_test.go:344: "nginx" [1607e91b-1ca2-4f5a-a868-4244413cf29c] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
helpers_test.go:344: "nginx" [1607e91b-1ca2-4f5a-a868-4244413cf29c] Running
addons_test.go:252: (dbg) TestAddons/parallel/Ingress: run=nginx healthy within 11.003538545s
addons_test.go:264: (dbg) Run:  out/minikube-linux-amd64 -p addons-207808 ssh "curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'"
addons_test.go:288: (dbg) Run:  kubectl --context addons-207808 replace --force -f testdata/ingress-dns-example-v1.yaml
addons_test.go:293: (dbg) Run:  out/minikube-linux-amd64 -p addons-207808 ip
addons_test.go:299: (dbg) Run:  nslookup hello-john.test 192.168.49.2
addons_test.go:308: (dbg) Run:  out/minikube-linux-amd64 -p addons-207808 addons disable ingress-dns --alsologtostderr -v=1
addons_test.go:308: (dbg) Done: out/minikube-linux-amd64 -p addons-207808 addons disable ingress-dns --alsologtostderr -v=1: (1.530816543s)
addons_test.go:313: (dbg) Run:  out/minikube-linux-amd64 -p addons-207808 addons disable ingress --alsologtostderr -v=1
addons_test.go:313: (dbg) Done: out/minikube-linux-amd64 -p addons-207808 addons disable ingress --alsologtostderr -v=1: (7.533243519s)
--- PASS: TestAddons/parallel/Ingress (21.17s)
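
Note: the ingress check curls through the node from inside (Host: nginx.example.com), while ingress-dns is verified by resolving a test hostname against the minikube node IP. Both probes as a standalone sketch, built from the commands in this log:

	MINIKUBE_IP=$(out/minikube-linux-amd64 -p addons-207808 ip)
	nslookup hello-john.test "$MINIKUBE_IP"
	out/minikube-linux-amd64 -p addons-207808 ssh "curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'"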

TestAddons/parallel/InspektorGadget (10.57s)

=== RUN   TestAddons/parallel/InspektorGadget
=== PAUSE TestAddons/parallel/InspektorGadget

=== CONT  TestAddons/parallel/InspektorGadget
addons_test.go:848: (dbg) TestAddons/parallel/InspektorGadget: waiting 8m0s for pods matching "k8s-app=gadget" in namespace "gadget" ...
helpers_test.go:344: "gadget-mmfqp" [ece0b509-e24c-4226-80a6-3cad52cccdf0] Running / Ready:ContainersNotReady (containers with unready status: [gadget]) / ContainersReady:ContainersNotReady (containers with unready status: [gadget])
addons_test.go:848: (dbg) TestAddons/parallel/InspektorGadget: k8s-app=gadget healthy within 5.003913985s
addons_test.go:851: (dbg) Run:  out/minikube-linux-amd64 addons disable inspektor-gadget -p addons-207808
addons_test.go:851: (dbg) Done: out/minikube-linux-amd64 addons disable inspektor-gadget -p addons-207808: (5.56394665s)
--- PASS: TestAddons/parallel/InspektorGadget (10.57s)

TestAddons/parallel/MetricsServer (5.55s)

=== RUN   TestAddons/parallel/MetricsServer
=== PAUSE TestAddons/parallel/MetricsServer

=== CONT  TestAddons/parallel/MetricsServer
addons_test.go:409: metrics-server stabilized in 2.013592ms
addons_test.go:411: (dbg) TestAddons/parallel/MetricsServer: waiting 6m0s for pods matching "k8s-app=metrics-server" in namespace "kube-system" ...
helpers_test.go:344: "metrics-server-84c5f94fbc-qp9pj" [467286ab-a1a8-4e01-aef7-f92c567162ba] Running
addons_test.go:411: (dbg) TestAddons/parallel/MetricsServer: k8s-app=metrics-server healthy within 5.003912115s
addons_test.go:417: (dbg) Run:  kubectl --context addons-207808 top pods -n kube-system
addons_test.go:434: (dbg) Run:  out/minikube-linux-amd64 -p addons-207808 addons disable metrics-server --alsologtostderr -v=1
--- PASS: TestAddons/parallel/MetricsServer (5.55s)
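
Note: "kubectl top" only succeeds once metrics-server has completed a scrape cycle, which is why the test first waits for the k8s-app=metrics-server pod to be healthy. The equivalent manual probes:

	kubectl --context addons-207808 top pods -n kube-system
	kubectl --context addons-207808 top nodes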

TestAddons/parallel/HelmTiller (10.81s)

=== RUN   TestAddons/parallel/HelmTiller
=== PAUSE TestAddons/parallel/HelmTiller

=== CONT  TestAddons/parallel/HelmTiller
addons_test.go:458: tiller-deploy stabilized in 5.772462ms
addons_test.go:460: (dbg) TestAddons/parallel/HelmTiller: waiting 6m0s for pods matching "app=helm" in namespace "kube-system" ...
helpers_test.go:344: "tiller-deploy-b48cc5f79-lnb7p" [7df8afba-a05c-403e-a96c-3556b198e183] Running
addons_test.go:460: (dbg) TestAddons/parallel/HelmTiller: app=helm healthy within 6.002504353s
addons_test.go:475: (dbg) Run:  kubectl --context addons-207808 run --rm helm-test --restart=Never --image=docker.io/alpine/helm:2.16.3 -it --namespace=kube-system -- version
addons_test.go:475: (dbg) Done: kubectl --context addons-207808 run --rm helm-test --restart=Never --image=docker.io/alpine/helm:2.16.3 -it --namespace=kube-system -- version: (4.343703553s)
addons_test.go:492: (dbg) Run:  out/minikube-linux-amd64 -p addons-207808 addons disable helm-tiller --alsologtostderr -v=1
--- PASS: TestAddons/parallel/HelmTiller (10.81s)

TestAddons/parallel/CSI (44.4s)

=== RUN   TestAddons/parallel/CSI
=== PAUSE TestAddons/parallel/CSI

=== CONT  TestAddons/parallel/CSI
addons_test.go:567: csi-hostpath-driver pods stabilized in 9.210807ms
addons_test.go:570: (dbg) Run:  kubectl --context addons-207808 create -f testdata/csi-hostpath-driver/pvc.yaml
addons_test.go:575: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pvc "hpvc" in namespace "default" ...
helpers_test.go:394: (dbg) Run:  kubectl --context addons-207808 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-207808 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-207808 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-207808 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-207808 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-207808 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-207808 get pvc hpvc -o jsonpath={.status.phase} -n default
addons_test.go:580: (dbg) Run:  kubectl --context addons-207808 create -f testdata/csi-hostpath-driver/pv-pod.yaml
addons_test.go:585: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pods matching "app=task-pv-pod" in namespace "default" ...
helpers_test.go:344: "task-pv-pod" [3f7c493b-2991-4712-a4dd-0e2575b61e3e] Pending
helpers_test.go:344: "task-pv-pod" [3f7c493b-2991-4712-a4dd-0e2575b61e3e] Pending / Ready:ContainersNotReady (containers with unready status: [task-pv-container]) / ContainersReady:ContainersNotReady (containers with unready status: [task-pv-container])
helpers_test.go:344: "task-pv-pod" [3f7c493b-2991-4712-a4dd-0e2575b61e3e] Running
addons_test.go:585: (dbg) TestAddons/parallel/CSI: app=task-pv-pod healthy within 10.003415164s
addons_test.go:590: (dbg) Run:  kubectl --context addons-207808 create -f testdata/csi-hostpath-driver/snapshot.yaml
addons_test.go:595: (dbg) TestAddons/parallel/CSI: waiting 6m0s for volume snapshot "new-snapshot-demo" in namespace "default" ...
helpers_test.go:419: (dbg) Run:  kubectl --context addons-207808 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
helpers_test.go:419: (dbg) Run:  kubectl --context addons-207808 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
addons_test.go:600: (dbg) Run:  kubectl --context addons-207808 delete pod task-pv-pod
addons_test.go:606: (dbg) Run:  kubectl --context addons-207808 delete pvc hpvc
addons_test.go:612: (dbg) Run:  kubectl --context addons-207808 create -f testdata/csi-hostpath-driver/pvc-restore.yaml
addons_test.go:617: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pvc "hpvc-restore" in namespace "default" ...
helpers_test.go:394: (dbg) Run:  kubectl --context addons-207808 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-207808 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-207808 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-207808 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-207808 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-207808 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-207808 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-207808 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-207808 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-207808 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
addons_test.go:622: (dbg) Run:  kubectl --context addons-207808 create -f testdata/csi-hostpath-driver/pv-pod-restore.yaml
addons_test.go:627: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pods matching "app=task-pv-pod-restore" in namespace "default" ...
helpers_test.go:344: "task-pv-pod-restore" [69c6e602-5a1a-4ede-93cb-ec7ab7cec6ac] Pending
helpers_test.go:344: "task-pv-pod-restore" [69c6e602-5a1a-4ede-93cb-ec7ab7cec6ac] Pending / Ready:ContainersNotReady (containers with unready status: [task-pv-container]) / ContainersReady:ContainersNotReady (containers with unready status: [task-pv-container])
helpers_test.go:344: "task-pv-pod-restore" [69c6e602-5a1a-4ede-93cb-ec7ab7cec6ac] Running
addons_test.go:627: (dbg) TestAddons/parallel/CSI: app=task-pv-pod-restore healthy within 8.003393723s
addons_test.go:632: (dbg) Run:  kubectl --context addons-207808 delete pod task-pv-pod-restore
addons_test.go:632: (dbg) Done: kubectl --context addons-207808 delete pod task-pv-pod-restore: (1.025202453s)
addons_test.go:636: (dbg) Run:  kubectl --context addons-207808 delete pvc hpvc-restore
addons_test.go:640: (dbg) Run:  kubectl --context addons-207808 delete volumesnapshot new-snapshot-demo
addons_test.go:644: (dbg) Run:  out/minikube-linux-amd64 -p addons-207808 addons disable csi-hostpath-driver --alsologtostderr -v=1
addons_test.go:644: (dbg) Done: out/minikube-linux-amd64 -p addons-207808 addons disable csi-hostpath-driver --alsologtostderr -v=1: (6.536066677s)
addons_test.go:648: (dbg) Run:  out/minikube-linux-amd64 -p addons-207808 addons disable volumesnapshots --alsologtostderr -v=1
--- PASS: TestAddons/parallel/CSI (44.40s)
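
Note: the repeated helpers_test.go:394 lines above are a poll loop on the PVC phase until it reports Bound. The same loop in shell, as a sketch:

	until [ "$(kubectl --context addons-207808 get pvc hpvc \
	    -o jsonpath='{.status.phase}')" = "Bound" ]; do
	  sleep 2
	done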

TestAddons/parallel/Headlamp (18.54s)

=== RUN   TestAddons/parallel/Headlamp
=== PAUSE TestAddons/parallel/Headlamp

=== CONT  TestAddons/parallel/Headlamp
addons_test.go:830: (dbg) Run:  out/minikube-linux-amd64 addons enable headlamp -p addons-207808 --alsologtostderr -v=1
addons_test.go:830: (dbg) Done: out/minikube-linux-amd64 addons enable headlamp -p addons-207808 --alsologtostderr -v=1: (1.002299369s)
addons_test.go:835: (dbg) TestAddons/parallel/Headlamp: waiting 8m0s for pods matching "app.kubernetes.io/name=headlamp" in namespace "headlamp" ...
helpers_test.go:344: "headlamp-57fb76fcdb-bfj27" [b3c18a7b-cc0a-4278-834f-101b428dfedd] Pending / Ready:ContainersNotReady (containers with unready status: [headlamp]) / ContainersReady:ContainersNotReady (containers with unready status: [headlamp])
helpers_test.go:344: "headlamp-57fb76fcdb-bfj27" [b3c18a7b-cc0a-4278-834f-101b428dfedd] Running
addons_test.go:835: (dbg) TestAddons/parallel/Headlamp: app.kubernetes.io/name=headlamp healthy within 12.003126454s
addons_test.go:839: (dbg) Run:  out/minikube-linux-amd64 -p addons-207808 addons disable headlamp --alsologtostderr -v=1
addons_test.go:839: (dbg) Done: out/minikube-linux-amd64 -p addons-207808 addons disable headlamp --alsologtostderr -v=1: (5.531616626s)
--- PASS: TestAddons/parallel/Headlamp (18.54s)

TestAddons/parallel/CloudSpanner (5.43s)

=== RUN   TestAddons/parallel/CloudSpanner
=== PAUSE TestAddons/parallel/CloudSpanner

=== CONT  TestAddons/parallel/CloudSpanner
addons_test.go:867: (dbg) TestAddons/parallel/CloudSpanner: waiting 6m0s for pods matching "app=cloud-spanner-emulator" in namespace "default" ...
helpers_test.go:344: "cloud-spanner-emulator-769b77f747-hkjdz" [2d4547c9-b8d9-4e10-b7eb-4454584bc207] Running
addons_test.go:867: (dbg) TestAddons/parallel/CloudSpanner: app=cloud-spanner-emulator healthy within 5.00351903s
addons_test.go:870: (dbg) Run:  out/minikube-linux-amd64 addons disable cloud-spanner -p addons-207808
--- PASS: TestAddons/parallel/CloudSpanner (5.43s)

TestAddons/parallel/LocalPath (54.01s)

=== RUN   TestAddons/parallel/LocalPath
=== PAUSE TestAddons/parallel/LocalPath

=== CONT  TestAddons/parallel/LocalPath
addons_test.go:982: (dbg) Run:  kubectl --context addons-207808 apply -f testdata/storage-provisioner-rancher/pvc.yaml
addons_test.go:988: (dbg) Run:  kubectl --context addons-207808 apply -f testdata/storage-provisioner-rancher/pod.yaml
addons_test.go:992: (dbg) TestAddons/parallel/LocalPath: waiting 5m0s for pvc "test-pvc" in namespace "default" ...
helpers_test.go:394: (dbg) Run:  kubectl --context addons-207808 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-207808 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-207808 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-207808 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-207808 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-207808 get pvc test-pvc -o jsonpath={.status.phase} -n default
addons_test.go:995: (dbg) TestAddons/parallel/LocalPath: waiting 3m0s for pods matching "run=test-local-path" in namespace "default" ...
helpers_test.go:344: "test-local-path" [509d75d8-649b-446d-a6dd-716bd4f63a73] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:344: "test-local-path" [509d75d8-649b-446d-a6dd-716bd4f63a73] Pending / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
helpers_test.go:344: "test-local-path" [509d75d8-649b-446d-a6dd-716bd4f63a73] Succeeded / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
addons_test.go:995: (dbg) TestAddons/parallel/LocalPath: run=test-local-path healthy within 5.003603524s
addons_test.go:1000: (dbg) Run:  kubectl --context addons-207808 get pvc test-pvc -o=json
addons_test.go:1009: (dbg) Run:  out/minikube-linux-amd64 -p addons-207808 ssh "cat /opt/local-path-provisioner/pvc-b1ba2409-c488-4cdf-b0b8-4d252d606c73_default_test-pvc/file1"
addons_test.go:1021: (dbg) Run:  kubectl --context addons-207808 delete pod test-local-path
addons_test.go:1025: (dbg) Run:  kubectl --context addons-207808 delete pvc test-pvc
addons_test.go:1029: (dbg) Run:  out/minikube-linux-amd64 -p addons-207808 addons disable storage-provisioner-rancher --alsologtostderr -v=1
addons_test.go:1029: (dbg) Done: out/minikube-linux-amd64 -p addons-207808 addons disable storage-provisioner-rancher --alsologtostderr -v=1: (43.156246681s)
--- PASS: TestAddons/parallel/LocalPath (54.01s)
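
The repeated helpers_test.go:394 lines above are a poll on the PVC phase. A minimal shell sketch of the same check; the loop and sleep interval are assumptions, the jsonpath query is verbatim from the log:
    # Block until the local-path provisioner binds the claim.
    until [ "$(kubectl --context addons-207808 get pvc test-pvc -n default \
        -o jsonpath='{.status.phase}')" = "Bound" ]; do
      sleep 5
    done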

TestAddons/parallel/NvidiaDevicePlugin (6.39s)

=== RUN   TestAddons/parallel/NvidiaDevicePlugin
=== PAUSE TestAddons/parallel/NvidiaDevicePlugin
=== CONT  TestAddons/parallel/NvidiaDevicePlugin
addons_test.go:1061: (dbg) TestAddons/parallel/NvidiaDevicePlugin: waiting 6m0s for pods matching "name=nvidia-device-plugin-ds" in namespace "kube-system" ...
helpers_test.go:344: "nvidia-device-plugin-daemonset-mc6cs" [1c6b255b-a9a3-49d2-9fac-3dee50123ecc] Running
addons_test.go:1061: (dbg) TestAddons/parallel/NvidiaDevicePlugin: name=nvidia-device-plugin-ds healthy within 6.003826384s
addons_test.go:1064: (dbg) Run:  out/minikube-linux-amd64 addons disable nvidia-device-plugin -p addons-207808
--- PASS: TestAddons/parallel/NvidiaDevicePlugin (6.39s)

TestAddons/parallel/Yakd (11.63s)

=== RUN   TestAddons/parallel/Yakd
=== PAUSE TestAddons/parallel/Yakd
=== CONT  TestAddons/parallel/Yakd
addons_test.go:1072: (dbg) TestAddons/parallel/Yakd: waiting 2m0s for pods matching "app.kubernetes.io/name=yakd-dashboard" in namespace "yakd-dashboard" ...
helpers_test.go:344: "yakd-dashboard-67d98fc6b-k54nb" [72426c94-b3a9-4537-8bbb-5c858600e536] Running
addons_test.go:1072: (dbg) TestAddons/parallel/Yakd: app.kubernetes.io/name=yakd-dashboard healthy within 6.003508738s
addons_test.go:1076: (dbg) Run:  out/minikube-linux-amd64 -p addons-207808 addons disable yakd --alsologtostderr -v=1
addons_test.go:1076: (dbg) Done: out/minikube-linux-amd64 -p addons-207808 addons disable yakd --alsologtostderr -v=1: (5.62118049s)
--- PASS: TestAddons/parallel/Yakd (11.63s)

TestAddons/StoppedEnableDisable (11.19s)

=== RUN   TestAddons/StoppedEnableDisable
addons_test.go:174: (dbg) Run:  out/minikube-linux-amd64 stop -p addons-207808
addons_test.go:174: (dbg) Done: out/minikube-linux-amd64 stop -p addons-207808: (10.956013258s)
addons_test.go:178: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p addons-207808
addons_test.go:182: (dbg) Run:  out/minikube-linux-amd64 addons disable dashboard -p addons-207808
addons_test.go:187: (dbg) Run:  out/minikube-linux-amd64 addons disable gvisor -p addons-207808
--- PASS: TestAddons/StoppedEnableDisable (11.19s)
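
The point of this test is that addon toggling works against a stopped cluster without restarting it; a sketch of the sequence, commands verbatim from the log:
    out/minikube-linux-amd64 stop -p addons-207808
    out/minikube-linux-amd64 addons enable dashboard -p addons-207808    # cluster stays stopped
    out/minikube-linux-amd64 addons disable dashboard -p addons-207808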

TestCertOptions (27.74s)

=== RUN   TestCertOptions
=== PAUSE TestCertOptions
=== CONT  TestCertOptions
cert_options_test.go:49: (dbg) Run:  out/minikube-linux-amd64 start -p cert-options-891543 --memory=2048 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=docker  --container-runtime=docker
E0912 22:15:59.637766   12518 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19616-5723/.minikube/profiles/skaffold-361990/client.crt: no such file or directory" logger="UnhandledError"
E0912 22:15:59.644151   12518 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19616-5723/.minikube/profiles/skaffold-361990/client.crt: no such file or directory" logger="UnhandledError"
E0912 22:15:59.655487   12518 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19616-5723/.minikube/profiles/skaffold-361990/client.crt: no such file or directory" logger="UnhandledError"
E0912 22:15:59.676984   12518 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19616-5723/.minikube/profiles/skaffold-361990/client.crt: no such file or directory" logger="UnhandledError"
E0912 22:15:59.718365   12518 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19616-5723/.minikube/profiles/skaffold-361990/client.crt: no such file or directory" logger="UnhandledError"
E0912 22:15:59.800466   12518 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19616-5723/.minikube/profiles/skaffold-361990/client.crt: no such file or directory" logger="UnhandledError"
E0912 22:15:59.962695   12518 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19616-5723/.minikube/profiles/skaffold-361990/client.crt: no such file or directory" logger="UnhandledError"
E0912 22:16:00.284169   12518 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19616-5723/.minikube/profiles/skaffold-361990/client.crt: no such file or directory" logger="UnhandledError"
E0912 22:16:00.926334   12518 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19616-5723/.minikube/profiles/skaffold-361990/client.crt: no such file or directory" logger="UnhandledError"
E0912 22:16:02.208180   12518 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19616-5723/.minikube/profiles/skaffold-361990/client.crt: no such file or directory" logger="UnhandledError"
E0912 22:16:04.769521   12518 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19616-5723/.minikube/profiles/skaffold-361990/client.crt: no such file or directory" logger="UnhandledError"
E0912 22:16:07.577711   12518 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19616-5723/.minikube/profiles/functional-896535/client.crt: no such file or directory" logger="UnhandledError"
E0912 22:16:09.891059   12518 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19616-5723/.minikube/profiles/skaffold-361990/client.crt: no such file or directory" logger="UnhandledError"
cert_options_test.go:49: (dbg) Done: out/minikube-linux-amd64 start -p cert-options-891543 --memory=2048 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=docker  --container-runtime=docker: (24.831310796s)
cert_options_test.go:60: (dbg) Run:  out/minikube-linux-amd64 -p cert-options-891543 ssh "openssl x509 -text -noout -in /var/lib/minikube/certs/apiserver.crt"
cert_options_test.go:88: (dbg) Run:  kubectl --context cert-options-891543 config view
cert_options_test.go:100: (dbg) Run:  out/minikube-linux-amd64 ssh -p cert-options-891543 -- "sudo cat /etc/kubernetes/admin.conf"
helpers_test.go:175: Cleaning up "cert-options-891543" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p cert-options-891543
E0912 22:16:20.133314   12518 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19616-5723/.minikube/profiles/skaffold-361990/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p cert-options-891543: (2.252024301s)
--- PASS: TestCertOptions (27.74s)
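
The ssh step above dumps the apiserver certificate so the test can assert the requested SANs and port. A sketch of checking the SANs by hand; the grep filter is an assumption, the rest is verbatim from the log:
    out/minikube-linux-amd64 -p cert-options-891543 ssh \
      "openssl x509 -text -noout -in /var/lib/minikube/certs/apiserver.crt" \
      | grep -A1 "Subject Alternative Name"
    # expect 192.168.15.15 and www.google.com among the SANs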

TestCertExpiration (240.38s)

=== RUN   TestCertExpiration
=== PAUSE TestCertExpiration
=== CONT  TestCertExpiration
cert_options_test.go:123: (dbg) Run:  out/minikube-linux-amd64 start -p cert-expiration-509341 --memory=2048 --cert-expiration=3m --driver=docker  --container-runtime=docker
cert_options_test.go:123: (dbg) Done: out/minikube-linux-amd64 start -p cert-expiration-509341 --memory=2048 --cert-expiration=3m --driver=docker  --container-runtime=docker: (35.460644655s)
cert_options_test.go:131: (dbg) Run:  out/minikube-linux-amd64 start -p cert-expiration-509341 --memory=2048 --cert-expiration=8760h --driver=docker  --container-runtime=docker
cert_options_test.go:131: (dbg) Done: out/minikube-linux-amd64 start -p cert-expiration-509341 --memory=2048 --cert-expiration=8760h --driver=docker  --container-runtime=docker: (21.003119728s)
helpers_test.go:175: Cleaning up "cert-expiration-509341" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p cert-expiration-509341
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p cert-expiration-509341: (3.914557364s)
--- PASS: TestCertExpiration (240.38s)
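
The two start invocations above are the whole mechanism: boot with a 3-minute certificate TTL, let it lapse, then restart with a long TTL so minikube regenerates the certificates. A sketch, commands verbatim from the log; the wait between them is implicit in the ~240s test duration:
    out/minikube-linux-amd64 start -p cert-expiration-509341 --memory=2048 --cert-expiration=3m --driver=docker --container-runtime=docker
    # ... wait for the 3m certificates to expire ...
    out/minikube-linux-amd64 start -p cert-expiration-509341 --memory=2048 --cert-expiration=8760h --driver=docker --container-runtime=docker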

TestDockerFlags (29.19s)

=== RUN   TestDockerFlags
=== PAUSE TestDockerFlags
=== CONT  TestDockerFlags
docker_test.go:51: (dbg) Run:  out/minikube-linux-amd64 start -p docker-flags-430911 --cache-images=false --memory=2048 --install-addons=false --wait=false --docker-env=FOO=BAR --docker-env=BAZ=BAT --docker-opt=debug --docker-opt=icc=true --alsologtostderr -v=5 --driver=docker  --container-runtime=docker
docker_test.go:51: (dbg) Done: out/minikube-linux-amd64 start -p docker-flags-430911 --cache-images=false --memory=2048 --install-addons=false --wait=false --docker-env=FOO=BAR --docker-env=BAZ=BAT --docker-opt=debug --docker-opt=icc=true --alsologtostderr -v=5 --driver=docker  --container-runtime=docker: (25.447583233s)
docker_test.go:56: (dbg) Run:  out/minikube-linux-amd64 -p docker-flags-430911 ssh "sudo systemctl show docker --property=Environment --no-pager"
docker_test.go:67: (dbg) Run:  out/minikube-linux-amd64 -p docker-flags-430911 ssh "sudo systemctl show docker --property=ExecStart --no-pager"
helpers_test.go:175: Cleaning up "docker-flags-430911" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p docker-flags-430911
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p docker-flags-430911: (3.169262157s)
--- PASS: TestDockerFlags (29.19s)
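
The two ssh probes above confirm that --docker-env values land in the docker unit's Environment and --docker-opt values in its ExecStart; reproduced by hand, verbatim from the log:
    out/minikube-linux-amd64 -p docker-flags-430911 ssh "sudo systemctl show docker --property=Environment --no-pager"
    out/minikube-linux-amd64 -p docker-flags-430911 ssh "sudo systemctl show docker --property=ExecStart --no-pager"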

TestForceSystemdFlag (35.07s)

=== RUN   TestForceSystemdFlag
=== PAUSE TestForceSystemdFlag
=== CONT  TestForceSystemdFlag
docker_test.go:91: (dbg) Run:  out/minikube-linux-amd64 start -p force-systemd-flag-495840 --memory=2048 --force-systemd --alsologtostderr -v=5 --driver=docker  --container-runtime=docker
docker_test.go:91: (dbg) Done: out/minikube-linux-amd64 start -p force-systemd-flag-495840 --memory=2048 --force-systemd --alsologtostderr -v=5 --driver=docker  --container-runtime=docker: (32.456580612s)
docker_test.go:110: (dbg) Run:  out/minikube-linux-amd64 -p force-systemd-flag-495840 ssh "docker info --format {{.CgroupDriver}}"
helpers_test.go:175: Cleaning up "force-systemd-flag-495840" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p force-systemd-flag-495840
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p force-systemd-flag-495840: (2.286505648s)
--- PASS: TestForceSystemdFlag (35.07s)
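
The assertion behind docker_test.go:110 is that --force-systemd switches docker's cgroup driver; a sketch, command verbatim from the log, with the expected output being the test's pass condition:
    out/minikube-linux-amd64 -p force-systemd-flag-495840 ssh "docker info --format {{.CgroupDriver}}"
    # expected: systemd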

TestForceSystemdEnv (25.64s)

=== RUN   TestForceSystemdEnv
=== PAUSE TestForceSystemdEnv
=== CONT  TestForceSystemdEnv
docker_test.go:155: (dbg) Run:  out/minikube-linux-amd64 start -p force-systemd-env-124175 --memory=2048 --alsologtostderr -v=5 --driver=docker  --container-runtime=docker
docker_test.go:155: (dbg) Done: out/minikube-linux-amd64 start -p force-systemd-env-124175 --memory=2048 --alsologtostderr -v=5 --driver=docker  --container-runtime=docker: (23.184926104s)
docker_test.go:110: (dbg) Run:  out/minikube-linux-amd64 -p force-systemd-env-124175 ssh "docker info --format {{.CgroupDriver}}"
helpers_test.go:175: Cleaning up "force-systemd-env-124175" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p force-systemd-env-124175
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p force-systemd-env-124175: (2.116739453s)
--- PASS: TestForceSystemdEnv (25.64s)
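
Same cgroup-driver assertion as TestForceSystemdFlag, but driven by the environment rather than a flag. A sketch under that assumption; the variable name matches the MINIKUBE_FORCE_SYSTEMD entry visible in the DryRun output later in this report:
    MINIKUBE_FORCE_SYSTEMD=true out/minikube-linux-amd64 start -p force-systemd-env-124175 \
      --memory=2048 --driver=docker --container-runtime=docker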

TestKVMDriverInstallOrUpdate (3.8s)

=== RUN   TestKVMDriverInstallOrUpdate
=== PAUSE TestKVMDriverInstallOrUpdate
=== CONT  TestKVMDriverInstallOrUpdate
--- PASS: TestKVMDriverInstallOrUpdate (3.80s)

TestErrorSpam/setup (24.17s)

=== RUN   TestErrorSpam/setup
error_spam_test.go:81: (dbg) Run:  out/minikube-linux-amd64 start -p nospam-390458 -n=1 --memory=2250 --wait=false --log_dir=/tmp/nospam-390458 --driver=docker  --container-runtime=docker
error_spam_test.go:81: (dbg) Done: out/minikube-linux-amd64 start -p nospam-390458 -n=1 --memory=2250 --wait=false --log_dir=/tmp/nospam-390458 --driver=docker  --container-runtime=docker: (24.173384215s)
--- PASS: TestErrorSpam/setup (24.17s)

TestErrorSpam/start (0.54s)

=== RUN   TestErrorSpam/start
error_spam_test.go:216: Cleaning up 1 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-390458 --log_dir /tmp/nospam-390458 start --dry-run
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-390458 --log_dir /tmp/nospam-390458 start --dry-run
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-amd64 -p nospam-390458 --log_dir /tmp/nospam-390458 start --dry-run
--- PASS: TestErrorSpam/start (0.54s)

TestErrorSpam/status (0.82s)

=== RUN   TestErrorSpam/status
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-390458 --log_dir /tmp/nospam-390458 status
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-390458 --log_dir /tmp/nospam-390458 status
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-amd64 -p nospam-390458 --log_dir /tmp/nospam-390458 status
--- PASS: TestErrorSpam/status (0.82s)

TestErrorSpam/pause (1.12s)

=== RUN   TestErrorSpam/pause
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-390458 --log_dir /tmp/nospam-390458 pause
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-390458 --log_dir /tmp/nospam-390458 pause
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-amd64 -p nospam-390458 --log_dir /tmp/nospam-390458 pause
--- PASS: TestErrorSpam/pause (1.12s)

TestErrorSpam/unpause (1.28s)

=== RUN   TestErrorSpam/unpause
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-390458 --log_dir /tmp/nospam-390458 unpause
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-390458 --log_dir /tmp/nospam-390458 unpause
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-amd64 -p nospam-390458 --log_dir /tmp/nospam-390458 unpause
--- PASS: TestErrorSpam/unpause (1.28s)

TestErrorSpam/stop (1.84s)

=== RUN   TestErrorSpam/stop
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-390458 --log_dir /tmp/nospam-390458 stop
error_spam_test.go:159: (dbg) Done: out/minikube-linux-amd64 -p nospam-390458 --log_dir /tmp/nospam-390458 stop: (1.670262003s)
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-390458 --log_dir /tmp/nospam-390458 stop
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-amd64 -p nospam-390458 --log_dir /tmp/nospam-390458 stop
--- PASS: TestErrorSpam/stop (1.84s)

TestFunctional/serial/CopySyncFile (0s)

=== RUN   TestFunctional/serial/CopySyncFile
functional_test.go:1855: local sync path: /home/jenkins/minikube-integration/19616-5723/.minikube/files/etc/test/nested/copy/12518/hosts
--- PASS: TestFunctional/serial/CopySyncFile (0.00s)

TestFunctional/serial/StartWithProxy (63.75s)

=== RUN   TestFunctional/serial/StartWithProxy
functional_test.go:2234: (dbg) Run:  out/minikube-linux-amd64 start -p functional-896535 --memory=4000 --apiserver-port=8441 --wait=all --driver=docker  --container-runtime=docker
functional_test.go:2234: (dbg) Done: out/minikube-linux-amd64 start -p functional-896535 --memory=4000 --apiserver-port=8441 --wait=all --driver=docker  --container-runtime=docker: (1m3.754103528s)
--- PASS: TestFunctional/serial/StartWithProxy (63.75s)

TestFunctional/serial/AuditLog (0s)

=== RUN   TestFunctional/serial/AuditLog
--- PASS: TestFunctional/serial/AuditLog (0.00s)

TestFunctional/serial/SoftStart (37.06s)

=== RUN   TestFunctional/serial/SoftStart
functional_test.go:659: (dbg) Run:  out/minikube-linux-amd64 start -p functional-896535 --alsologtostderr -v=8
functional_test.go:659: (dbg) Done: out/minikube-linux-amd64 start -p functional-896535 --alsologtostderr -v=8: (37.056321848s)
functional_test.go:663: soft start took 37.057180777s for "functional-896535" cluster.
--- PASS: TestFunctional/serial/SoftStart (37.06s)

TestFunctional/serial/KubeContext (0.05s)

=== RUN   TestFunctional/serial/KubeContext
functional_test.go:681: (dbg) Run:  kubectl config current-context
--- PASS: TestFunctional/serial/KubeContext (0.05s)

TestFunctional/serial/KubectlGetPods (0.07s)

=== RUN   TestFunctional/serial/KubectlGetPods
functional_test.go:696: (dbg) Run:  kubectl --context functional-896535 get po -A
--- PASS: TestFunctional/serial/KubectlGetPods (0.07s)

TestFunctional/serial/CacheCmd/cache/add_remote (2.44s)

=== RUN   TestFunctional/serial/CacheCmd/cache/add_remote
functional_test.go:1049: (dbg) Run:  out/minikube-linux-amd64 -p functional-896535 cache add registry.k8s.io/pause:3.1
functional_test.go:1049: (dbg) Run:  out/minikube-linux-amd64 -p functional-896535 cache add registry.k8s.io/pause:3.3
functional_test.go:1049: (dbg) Run:  out/minikube-linux-amd64 -p functional-896535 cache add registry.k8s.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/add_remote (2.44s)

TestFunctional/serial/CacheCmd/cache/add_local (1.4s)

=== RUN   TestFunctional/serial/CacheCmd/cache/add_local
functional_test.go:1077: (dbg) Run:  docker build -t minikube-local-cache-test:functional-896535 /tmp/TestFunctionalserialCacheCmdcacheadd_local562188223/001
functional_test.go:1089: (dbg) Run:  out/minikube-linux-amd64 -p functional-896535 cache add minikube-local-cache-test:functional-896535
functional_test.go:1089: (dbg) Done: out/minikube-linux-amd64 -p functional-896535 cache add minikube-local-cache-test:functional-896535: (1.08322184s)
functional_test.go:1094: (dbg) Run:  out/minikube-linux-amd64 -p functional-896535 cache delete minikube-local-cache-test:functional-896535
functional_test.go:1083: (dbg) Run:  docker rmi minikube-local-cache-test:functional-896535
--- PASS: TestFunctional/serial/CacheCmd/cache/add_local (1.40s)

TestFunctional/serial/CacheCmd/cache/CacheDelete (0.04s)

=== RUN   TestFunctional/serial/CacheCmd/cache/CacheDelete
functional_test.go:1102: (dbg) Run:  out/minikube-linux-amd64 cache delete registry.k8s.io/pause:3.3
--- PASS: TestFunctional/serial/CacheCmd/cache/CacheDelete (0.04s)

TestFunctional/serial/CacheCmd/cache/list (0.04s)

=== RUN   TestFunctional/serial/CacheCmd/cache/list
functional_test.go:1110: (dbg) Run:  out/minikube-linux-amd64 cache list
--- PASS: TestFunctional/serial/CacheCmd/cache/list (0.04s)

TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node (0.26s)

=== RUN   TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node
functional_test.go:1124: (dbg) Run:  out/minikube-linux-amd64 -p functional-896535 ssh sudo crictl images
--- PASS: TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node (0.26s)

TestFunctional/serial/CacheCmd/cache/cache_reload (1.2s)

=== RUN   TestFunctional/serial/CacheCmd/cache/cache_reload
functional_test.go:1147: (dbg) Run:  out/minikube-linux-amd64 -p functional-896535 ssh sudo docker rmi registry.k8s.io/pause:latest
functional_test.go:1153: (dbg) Run:  out/minikube-linux-amd64 -p functional-896535 ssh sudo crictl inspecti registry.k8s.io/pause:latest
functional_test.go:1153: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-896535 ssh sudo crictl inspecti registry.k8s.io/pause:latest: exit status 1 (260.552835ms)
-- stdout --
	FATA[0000] no such image "registry.k8s.io/pause:latest" present 
-- /stdout --
** stderr ** 
	ssh: Process exited with status 1
** /stderr **
functional_test.go:1158: (dbg) Run:  out/minikube-linux-amd64 -p functional-896535 cache reload
functional_test.go:1163: (dbg) Run:  out/minikube-linux-amd64 -p functional-896535 ssh sudo crictl inspecti registry.k8s.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/cache_reload (1.20s)
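
The cycle above is: delete the image inside the node, confirm crictl no longer finds it (the expected exit 1), then let cache reload restore it from the host-side cache. By hand, commands verbatim from the log:
    out/minikube-linux-amd64 -p functional-896535 ssh sudo docker rmi registry.k8s.io/pause:latest
    out/minikube-linux-amd64 -p functional-896535 ssh sudo crictl inspecti registry.k8s.io/pause:latest   # exit 1
    out/minikube-linux-amd64 -p functional-896535 cache reload
    out/minikube-linux-amd64 -p functional-896535 ssh sudo crictl inspecti registry.k8s.io/pause:latest   # exit 0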

TestFunctional/serial/CacheCmd/cache/delete (0.09s)

=== RUN   TestFunctional/serial/CacheCmd/cache/delete
functional_test.go:1172: (dbg) Run:  out/minikube-linux-amd64 cache delete registry.k8s.io/pause:3.1
functional_test.go:1172: (dbg) Run:  out/minikube-linux-amd64 cache delete registry.k8s.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/delete (0.09s)

TestFunctional/serial/MinikubeKubectlCmd (0.1s)

=== RUN   TestFunctional/serial/MinikubeKubectlCmd
functional_test.go:716: (dbg) Run:  out/minikube-linux-amd64 -p functional-896535 kubectl -- --context functional-896535 get pods
--- PASS: TestFunctional/serial/MinikubeKubectlCmd (0.10s)

TestFunctional/serial/MinikubeKubectlCmdDirectly (0.1s)

=== RUN   TestFunctional/serial/MinikubeKubectlCmdDirectly
functional_test.go:741: (dbg) Run:  out/kubectl --context functional-896535 get pods
--- PASS: TestFunctional/serial/MinikubeKubectlCmdDirectly (0.10s)

TestFunctional/serial/ExtraConfig (36.38s)

=== RUN   TestFunctional/serial/ExtraConfig
functional_test.go:757: (dbg) Run:  out/minikube-linux-amd64 start -p functional-896535 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all
functional_test.go:757: (dbg) Done: out/minikube-linux-amd64 start -p functional-896535 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all: (36.376983259s)
functional_test.go:761: restart took 36.379353683s for "functional-896535" cluster.
--- PASS: TestFunctional/serial/ExtraConfig (36.38s)
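
--extra-config takes component.key=value pairs and restarts the running profile with them applied, which is why the step above is timed as a restart; verbatim from the log:
    out/minikube-linux-amd64 start -p functional-896535 \
      --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all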

TestFunctional/serial/ComponentHealth (0.06s)

=== RUN   TestFunctional/serial/ComponentHealth
functional_test.go:810: (dbg) Run:  kubectl --context functional-896535 get po -l tier=control-plane -n kube-system -o=json
functional_test.go:825: etcd phase: Running
functional_test.go:835: etcd status: Ready
functional_test.go:825: kube-apiserver phase: Running
functional_test.go:835: kube-apiserver status: Ready
functional_test.go:825: kube-controller-manager phase: Running
functional_test.go:835: kube-controller-manager status: Ready
functional_test.go:825: kube-scheduler phase: Running
functional_test.go:835: kube-scheduler status: Ready
--- PASS: TestFunctional/serial/ComponentHealth (0.06s)
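
The phase/status pairs above come from one kubectl query over the control-plane pods. A sketch of the same probe; the selector is verbatim from the log, the jsonpath rendering is an assumption:
    kubectl --context functional-896535 get po -l tier=control-plane -n kube-system \
      -o jsonpath='{range .items[*]}{.metadata.name}{" "}{.status.phase}{"\n"}{end}'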

TestFunctional/serial/LogsCmd (0.93s)

=== RUN   TestFunctional/serial/LogsCmd
functional_test.go:1236: (dbg) Run:  out/minikube-linux-amd64 -p functional-896535 logs
--- PASS: TestFunctional/serial/LogsCmd (0.93s)

TestFunctional/serial/LogsFileCmd (0.94s)

=== RUN   TestFunctional/serial/LogsFileCmd
functional_test.go:1250: (dbg) Run:  out/minikube-linux-amd64 -p functional-896535 logs --file /tmp/TestFunctionalserialLogsFileCmd3257461536/001/logs.txt
--- PASS: TestFunctional/serial/LogsFileCmd (0.94s)

TestFunctional/serial/InvalidService (4.66s)

=== RUN   TestFunctional/serial/InvalidService
functional_test.go:2321: (dbg) Run:  kubectl --context functional-896535 apply -f testdata/invalidsvc.yaml
functional_test.go:2335: (dbg) Run:  out/minikube-linux-amd64 service invalid-svc -p functional-896535
functional_test.go:2335: (dbg) Non-zero exit: out/minikube-linux-amd64 service invalid-svc -p functional-896535: exit status 115 (306.58247ms)
-- stdout --
	|-----------|-------------|-------------|---------------------------|
	| NAMESPACE |    NAME     | TARGET PORT |            URL            |
	|-----------|-------------|-------------|---------------------------|
	| default   | invalid-svc |          80 | http://192.168.49.2:30767 |
	|-----------|-------------|-------------|---------------------------|
	
	
-- /stdout --
** stderr ** 
	X Exiting due to SVC_UNREACHABLE: service not available: no running pod for service invalid-svc found
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_service_96b204199e3191fa1740d4430b018a3c8028d52d_0.log                 │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
** /stderr **
functional_test.go:2327: (dbg) Run:  kubectl --context functional-896535 delete -f testdata/invalidsvc.yaml
functional_test.go:2327: (dbg) Done: kubectl --context functional-896535 delete -f testdata/invalidsvc.yaml: (1.180522692s)
--- PASS: TestFunctional/serial/InvalidService (4.66s)

TestFunctional/parallel/ConfigCmd (0.36s)

=== RUN   TestFunctional/parallel/ConfigCmd
=== PAUSE TestFunctional/parallel/ConfigCmd
=== CONT  TestFunctional/parallel/ConfigCmd
functional_test.go:1199: (dbg) Run:  out/minikube-linux-amd64 -p functional-896535 config unset cpus
functional_test.go:1199: (dbg) Run:  out/minikube-linux-amd64 -p functional-896535 config get cpus
functional_test.go:1199: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-896535 config get cpus: exit status 14 (61.564575ms)
** stderr ** 
	Error: specified key could not be found in config
** /stderr **
functional_test.go:1199: (dbg) Run:  out/minikube-linux-amd64 -p functional-896535 config set cpus 2
functional_test.go:1199: (dbg) Run:  out/minikube-linux-amd64 -p functional-896535 config get cpus
functional_test.go:1199: (dbg) Run:  out/minikube-linux-amd64 -p functional-896535 config unset cpus
functional_test.go:1199: (dbg) Run:  out/minikube-linux-amd64 -p functional-896535 config get cpus
functional_test.go:1199: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-896535 config get cpus: exit status 14 (51.30865ms)
** stderr ** 
	Error: specified key could not be found in config
** /stderr **
--- PASS: TestFunctional/parallel/ConfigCmd (0.36s)
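
The Non-zero exit entries above are expected: config get on an unset key exits with status 14, which is what the test asserts. The round trip by hand, commands verbatim from the log:
    out/minikube-linux-amd64 -p functional-896535 config set cpus 2
    out/minikube-linux-amd64 -p functional-896535 config get cpus     # prints 2
    out/minikube-linux-amd64 -p functional-896535 config unset cpus
    out/minikube-linux-amd64 -p functional-896535 config get cpus     # exit status 14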

TestFunctional/parallel/DashboardCmd (19.31s)

=== RUN   TestFunctional/parallel/DashboardCmd
=== PAUSE TestFunctional/parallel/DashboardCmd
=== CONT  TestFunctional/parallel/DashboardCmd
functional_test.go:905: (dbg) daemon: [out/minikube-linux-amd64 dashboard --url --port 36195 -p functional-896535 --alsologtostderr -v=1]
functional_test.go:910: (dbg) stopping [out/minikube-linux-amd64 dashboard --url --port 36195 -p functional-896535 --alsologtostderr -v=1] ...
helpers_test.go:508: unable to kill pid 66512: os: process already finished
--- PASS: TestFunctional/parallel/DashboardCmd (19.31s)

TestFunctional/parallel/DryRun (0.39s)

=== RUN   TestFunctional/parallel/DryRun
=== PAUSE TestFunctional/parallel/DryRun
=== CONT  TestFunctional/parallel/DryRun
functional_test.go:974: (dbg) Run:  out/minikube-linux-amd64 start -p functional-896535 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=docker
functional_test.go:974: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p functional-896535 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=docker: exit status 23 (164.825632ms)
-- stdout --
	* [functional-896535] minikube v1.34.0 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=19616
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/19616-5723/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/19616-5723/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the docker driver based on existing profile
	
	
-- /stdout --
** stderr ** 
	I0912 21:46:16.399018   65973 out.go:345] Setting OutFile to fd 1 ...
	I0912 21:46:16.400081   65973 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0912 21:46:16.400095   65973 out.go:358] Setting ErrFile to fd 2...
	I0912 21:46:16.400100   65973 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0912 21:46:16.400511   65973 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19616-5723/.minikube/bin
	I0912 21:46:16.401513   65973 out.go:352] Setting JSON to false
	I0912 21:46:16.403153   65973 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent","uptime":1719,"bootTime":1726175857,"procs":419,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1067-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0912 21:46:16.403239   65973 start.go:139] virtualization: kvm guest
	I0912 21:46:16.405884   65973 out.go:177] * [functional-896535] minikube v1.34.0 on Ubuntu 20.04 (kvm/amd64)
	I0912 21:46:16.407285   65973 out.go:177]   - MINIKUBE_LOCATION=19616
	I0912 21:46:16.407318   65973 notify.go:220] Checking for updates...
	I0912 21:46:16.409852   65973 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0912 21:46:16.411241   65973 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19616-5723/kubeconfig
	I0912 21:46:16.412562   65973 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19616-5723/.minikube
	I0912 21:46:16.413783   65973 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0912 21:46:16.415097   65973 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0912 21:46:16.416791   65973 config.go:182] Loaded profile config "functional-896535": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I0912 21:46:16.417405   65973 driver.go:394] Setting default libvirt URI to qemu:///system
	I0912 21:46:16.443773   65973 docker.go:123] docker version: linux-27.2.1:Docker Engine - Community
	I0912 21:46:16.443960   65973 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0912 21:46:16.501023   65973 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:33 OomKillDisable:true NGoroutines:53 SystemTime:2024-09-12 21:46:16.49041876 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1067-gcp OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33647939584 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent Labels:[] ExperimentalBuild:false ServerVersion:27.2.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:7f7fdf5fed64eb6a7caf99b3e12efcf9d60e311c Expected:7f7fdf5fed64eb6a7caf99b3e12efcf9d60e311c} RuncCommit:{ID:v1.1.14-0-g2c9f560 Expected:v1.1.14-0-g2c9f560} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.16.2] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.29.2] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I0912 21:46:16.501132   65973 docker.go:318] overlay module found
	I0912 21:46:16.504452   65973 out.go:177] * Using the docker driver based on existing profile
	I0912 21:46:16.505943   65973 start.go:297] selected driver: docker
	I0912 21:46:16.505974   65973 start.go:901] validating driver "docker" against &{Name:functional-896535 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726156396-19616@sha256:66b06a42534e914a5c8ad765d7508a93a34031939ec9a6b3a818ef0a444ff889 Memory:4000 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:functional-896535 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0912 21:46:16.506104   65973 start.go:912] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0912 21:46:16.508315   65973 out.go:201] 
	W0912 21:46:16.509683   65973 out.go:270] X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	I0912 21:46:16.511126   65973 out.go:201] 
** /stderr **
functional_test.go:991: (dbg) Run:  out/minikube-linux-amd64 start -p functional-896535 --dry-run --alsologtostderr -v=1 --driver=docker  --container-runtime=docker
--- PASS: TestFunctional/parallel/DryRun (0.39s)
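
--dry-run validates flags against the existing profile without touching the cluster; an undersized --memory fails with exit status 23 (RSRC_INSUFFICIENT_REQ_MEMORY), as logged above. A sketch:
    out/minikube-linux-amd64 start -p functional-896535 --dry-run --memory 250MB \
      --driver=docker --container-runtime=docker
    echo $?   # 23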

TestFunctional/parallel/InternationalLanguage (0.14s)

=== RUN   TestFunctional/parallel/InternationalLanguage
=== PAUSE TestFunctional/parallel/InternationalLanguage
=== CONT  TestFunctional/parallel/InternationalLanguage
functional_test.go:1020: (dbg) Run:  out/minikube-linux-amd64 start -p functional-896535 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=docker
functional_test.go:1020: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p functional-896535 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=docker: exit status 23 (143.986435ms)
-- stdout --
	* [functional-896535] minikube v1.34.0 sur Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=19616
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/19616-5723/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/19616-5723/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Utilisation du pilote docker basé sur le profil existant
	
	
-- /stdout --
** stderr ** 
	I0912 21:46:11.855210   64519 out.go:345] Setting OutFile to fd 1 ...
	I0912 21:46:11.855320   64519 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0912 21:46:11.855329   64519 out.go:358] Setting ErrFile to fd 2...
	I0912 21:46:11.855333   64519 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0912 21:46:11.855601   64519 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19616-5723/.minikube/bin
	I0912 21:46:11.856143   64519 out.go:352] Setting JSON to false
	I0912 21:46:11.857275   64519 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent","uptime":1715,"bootTime":1726175857,"procs":414,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1067-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0912 21:46:11.857339   64519 start.go:139] virtualization: kvm guest
	I0912 21:46:11.859595   64519 out.go:177] * [functional-896535] minikube v1.34.0 sur Ubuntu 20.04 (kvm/amd64)
	I0912 21:46:11.861010   64519 out.go:177]   - MINIKUBE_LOCATION=19616
	I0912 21:46:11.861081   64519 notify.go:220] Checking for updates...
	I0912 21:46:11.863542   64519 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0912 21:46:11.864733   64519 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19616-5723/kubeconfig
	I0912 21:46:11.865976   64519 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19616-5723/.minikube
	I0912 21:46:11.867182   64519 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0912 21:46:11.868371   64519 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0912 21:46:11.869822   64519 config.go:182] Loaded profile config "functional-896535": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I0912 21:46:11.870242   64519 driver.go:394] Setting default libvirt URI to qemu:///system
	I0912 21:46:11.894468   64519 docker.go:123] docker version: linux-27.2.1:Docker Engine - Community
	I0912 21:46:11.894551   64519 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0912 21:46:11.946037   64519 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:34 OomKillDisable:true NGoroutines:54 SystemTime:2024-09-12 21:46:11.935836525 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1067-gcp OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33647939584 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent Labels:[] ExperimentalBuild:false ServerVersion:27.2.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:7f7fdf5fed64eb6a7caf99b3e12efcf9d60e311c Expected:7f7fdf5fed64eb6a7caf99b3e12efcf9d60e311c} RuncCommit:{ID:v1.1.14-0-g2c9f560 Expected:v1.1.14-0-g2c9f560} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.16.2] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.29.2] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I0912 21:46:11.946154   64519 docker.go:318] overlay module found
	I0912 21:46:11.947828   64519 out.go:177] * Utilisation du pilote docker basé sur le profil existant
	I0912 21:46:11.949264   64519 start.go:297] selected driver: docker
	I0912 21:46:11.949284   64519 start.go:901] validating driver "docker" against &{Name:functional-896535 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726156396-19616@sha256:66b06a42534e914a5c8ad765d7508a93a34031939ec9a6b3a818ef0a444ff889 Memory:4000 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:functional-896535 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0912 21:46:11.949385   64519 start.go:912] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0912 21:46:11.951404   64519 out.go:201] 
	W0912 21:46:11.952681   64519 out.go:270] X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	(English: "Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: The requested memory allocation of 250 MiB is below the usable minimum of 1800 MB")
	I0912 21:46:11.953732   64519 out.go:201] 

                                                
                                                
** /stderr **
--- PASS: TestFunctional/parallel/InternationalLanguage (0.14s)
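
For context, this test drives minikube under a French locale and asserts that the failure above is reported in French. A minimal reproduction sketch, assuming the test's usual invocation (LC_ALL=fr and the --dry-run/--memory flags are inferred from the output above, not logged verbatim):

	# Assumed invocation; the 250 MiB request deliberately undershoots the 1800 MB minimum.
	LC_ALL=fr out/minikube-linux-amd64 start -p functional-896535 --dry-run --memory=250mb --alsologtostderr
	# Expected: exit via RSRC_INSUFFICIENT_REQ_MEMORY with the localized message shown above.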

                                                
                                    
TestFunctional/parallel/StatusCmd (1.01s)

                                                
                                                
=== RUN   TestFunctional/parallel/StatusCmd
=== PAUSE TestFunctional/parallel/StatusCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/StatusCmd
functional_test.go:854: (dbg) Run:  out/minikube-linux-amd64 -p functional-896535 status
functional_test.go:860: (dbg) Run:  out/minikube-linux-amd64 -p functional-896535 status -f host:{{.Host}},kublet:{{.Kubelet}},apiserver:{{.APIServer}},kubeconfig:{{.Kubeconfig}}
functional_test.go:872: (dbg) Run:  out/minikube-linux-amd64 -p functional-896535 status -o json
--- PASS: TestFunctional/parallel/StatusCmd (1.01s)

                                                
                                    
TestFunctional/parallel/ServiceCmdConnect (14.66s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmdConnect
=== PAUSE TestFunctional/parallel/ServiceCmdConnect

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ServiceCmdConnect
functional_test.go:1629: (dbg) Run:  kubectl --context functional-896535 create deployment hello-node-connect --image=registry.k8s.io/echoserver:1.8
functional_test.go:1635: (dbg) Run:  kubectl --context functional-896535 expose deployment hello-node-connect --type=NodePort --port=8080
functional_test.go:1640: (dbg) TestFunctional/parallel/ServiceCmdConnect: waiting 10m0s for pods matching "app=hello-node-connect" in namespace "default" ...
helpers_test.go:344: "hello-node-connect-67bdd5bbb4-j7k9s" [da2f9aa5-6963-44f2-a1d7-e7aa20a57c0b] Pending / Ready:ContainersNotReady (containers with unready status: [echoserver]) / ContainersReady:ContainersNotReady (containers with unready status: [echoserver])
helpers_test.go:344: "hello-node-connect-67bdd5bbb4-j7k9s" [da2f9aa5-6963-44f2-a1d7-e7aa20a57c0b] Running
functional_test.go:1640: (dbg) TestFunctional/parallel/ServiceCmdConnect: app=hello-node-connect healthy within 14.003750047s
functional_test.go:1649: (dbg) Run:  out/minikube-linux-amd64 -p functional-896535 service hello-node-connect --url
functional_test.go:1655: found endpoint for hello-node-connect: http://192.168.49.2:31721
functional_test.go:1675: http://192.168.49.2:31721: success! body:

                                                
                                                

                                                
                                                
Hostname: hello-node-connect-67bdd5bbb4-j7k9s

                                                
                                                
Pod Information:
	-no pod information available-

                                                
                                                
Server values:
	server_version=nginx: 1.13.3 - lua: 10008

                                                
                                                
Request Information:
	client_address=10.244.0.1
	method=GET
	real path=/
	query=
	request_version=1.1
	request_uri=http://192.168.49.2:8080/

                                                
                                                
Request Headers:
	accept-encoding=gzip
	host=192.168.49.2:31721
	user-agent=Go-http-client/1.1

                                                
                                                
Request Body:
	-no body in request-

                                                
                                                
--- PASS: TestFunctional/parallel/ServiceCmdConnect (14.66s)
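
The sequence above is a standard NodePort round trip: create a deployment, expose it, ask minikube for the node URL, and fetch it. Condensed from the log (only the final curl check is an assumed addition, standing in for the test's HTTP probe):

	kubectl --context functional-896535 create deployment hello-node-connect --image=registry.k8s.io/echoserver:1.8
	kubectl --context functional-896535 expose deployment hello-node-connect --type=NodePort --port=8080
	out/minikube-linux-amd64 -p functional-896535 service hello-node-connect --url
	# Prints e.g. http://192.168.49.2:31721; fetching it returns the echoserver diagnostic body shown above.
	curl -s http://192.168.49.2:31721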

                                                
                                    
TestFunctional/parallel/AddonsCmd (0.13s)

                                                
                                                
=== RUN   TestFunctional/parallel/AddonsCmd
=== PAUSE TestFunctional/parallel/AddonsCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/AddonsCmd
functional_test.go:1690: (dbg) Run:  out/minikube-linux-amd64 -p functional-896535 addons list
functional_test.go:1702: (dbg) Run:  out/minikube-linux-amd64 -p functional-896535 addons list -o json
--- PASS: TestFunctional/parallel/AddonsCmd (0.13s)

                                                
                                    
TestFunctional/parallel/PersistentVolumeClaim (38.41s)

                                                
                                                
=== RUN   TestFunctional/parallel/PersistentVolumeClaim
=== PAUSE TestFunctional/parallel/PersistentVolumeClaim

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/PersistentVolumeClaim
functional_test_pvc_test.go:44: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 4m0s for pods matching "integration-test=storage-provisioner" in namespace "kube-system" ...
helpers_test.go:344: "storage-provisioner" [f1b7d927-886a-4547-a296-93b4e8beac5e] Running
functional_test_pvc_test.go:44: (dbg) TestFunctional/parallel/PersistentVolumeClaim: integration-test=storage-provisioner healthy within 6.004176347s
functional_test_pvc_test.go:49: (dbg) Run:  kubectl --context functional-896535 get storageclass -o=json
functional_test_pvc_test.go:69: (dbg) Run:  kubectl --context functional-896535 apply -f testdata/storage-provisioner/pvc.yaml
functional_test_pvc_test.go:76: (dbg) Run:  kubectl --context functional-896535 get pvc myclaim -o=json
functional_test_pvc_test.go:125: (dbg) Run:  kubectl --context functional-896535 apply -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 3m0s for pods matching "test=storage-provisioner" in namespace "default" ...
helpers_test.go:344: "sp-pod" [e5e847d7-7dad-4adc-82cc-86f63db5f84b] Pending
helpers_test.go:344: "sp-pod" [e5e847d7-7dad-4adc-82cc-86f63db5f84b] Pending / Ready:ContainersNotReady (containers with unready status: [myfrontend]) / ContainersReady:ContainersNotReady (containers with unready status: [myfrontend])
helpers_test.go:344: "sp-pod" [e5e847d7-7dad-4adc-82cc-86f63db5f84b] Running
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: test=storage-provisioner healthy within 23.003219676s
functional_test_pvc_test.go:100: (dbg) Run:  kubectl --context functional-896535 exec sp-pod -- touch /tmp/mount/foo
functional_test_pvc_test.go:106: (dbg) Run:  kubectl --context functional-896535 delete -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:125: (dbg) Run:  kubectl --context functional-896535 apply -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 3m0s for pods matching "test=storage-provisioner" in namespace "default" ...
helpers_test.go:344: "sp-pod" [552b5807-05a6-4a3c-9301-607241462999] Pending
helpers_test.go:344: "sp-pod" [552b5807-05a6-4a3c-9301-607241462999] Pending / Ready:ContainersNotReady (containers with unready status: [myfrontend]) / ContainersReady:ContainersNotReady (containers with unready status: [myfrontend])
helpers_test.go:344: "sp-pod" [552b5807-05a6-4a3c-9301-607241462999] Running
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: test=storage-provisioner healthy within 8.003608306s
functional_test_pvc_test.go:114: (dbg) Run:  kubectl --context functional-896535 exec sp-pod -- ls /tmp/mount
--- PASS: TestFunctional/parallel/PersistentVolumeClaim (38.41s)
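
The persistence check above works by writing a file through one pod, deleting that pod, and confirming that a fresh pod mounting the same claim still sees the file. Condensed from the log (the testdata/ paths are the test suite's own fixtures):

	kubectl --context functional-896535 apply -f testdata/storage-provisioner/pvc.yaml
	kubectl --context functional-896535 apply -f testdata/storage-provisioner/pod.yaml
	kubectl --context functional-896535 exec sp-pod -- touch /tmp/mount/foo
	kubectl --context functional-896535 delete -f testdata/storage-provisioner/pod.yaml
	kubectl --context functional-896535 apply -f testdata/storage-provisioner/pod.yaml
	# The file written by the first pod must survive into the second:
	kubectl --context functional-896535 exec sp-pod -- ls /tmp/mount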

                                                
                                    
TestFunctional/parallel/SSHCmd (0.47s)

                                                
                                                
=== RUN   TestFunctional/parallel/SSHCmd
=== PAUSE TestFunctional/parallel/SSHCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/SSHCmd
functional_test.go:1725: (dbg) Run:  out/minikube-linux-amd64 -p functional-896535 ssh "echo hello"
functional_test.go:1742: (dbg) Run:  out/minikube-linux-amd64 -p functional-896535 ssh "cat /etc/hostname"
--- PASS: TestFunctional/parallel/SSHCmd (0.47s)

                                                
                                    
TestFunctional/parallel/CpCmd (1.58s)

                                                
                                                
=== RUN   TestFunctional/parallel/CpCmd
=== PAUSE TestFunctional/parallel/CpCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/CpCmd
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p functional-896535 cp testdata/cp-test.txt /home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p functional-896535 ssh -n functional-896535 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p functional-896535 cp functional-896535:/home/docker/cp-test.txt /tmp/TestFunctionalparallelCpCmd961300958/001/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p functional-896535 ssh -n functional-896535 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p functional-896535 cp testdata/cp-test.txt /tmp/does/not/exist/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p functional-896535 ssh -n functional-896535 "sudo cat /tmp/does/not/exist/cp-test.txt"
--- PASS: TestFunctional/parallel/CpCmd (1.58s)
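
minikube cp is exercised in both directions, including a destination directory that does not yet exist on the node. The three cases, as run above:

	# host -> node
	out/minikube-linux-amd64 -p functional-896535 cp testdata/cp-test.txt /home/docker/cp-test.txt
	# node -> host
	out/minikube-linux-amd64 -p functional-896535 cp functional-896535:/home/docker/cp-test.txt /tmp/TestFunctionalparallelCpCmd961300958/001/cp-test.txt
	# host -> node, creating the target directory on the fly
	out/minikube-linux-amd64 -p functional-896535 cp testdata/cp-test.txt /tmp/does/not/exist/cp-test.txt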

                                                
                                    
TestFunctional/parallel/MySQL (25.64s)

                                                
                                                
=== RUN   TestFunctional/parallel/MySQL
=== PAUSE TestFunctional/parallel/MySQL

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/MySQL
functional_test.go:1793: (dbg) Run:  kubectl --context functional-896535 replace --force -f testdata/mysql.yaml
functional_test.go:1799: (dbg) TestFunctional/parallel/MySQL: waiting 10m0s for pods matching "app=mysql" in namespace "default" ...
helpers_test.go:344: "mysql-6cdb49bbb-vxg27" [ddebec7b-7c98-41fc-8ebe-1a965ea25ebe] Pending
helpers_test.go:344: "mysql-6cdb49bbb-vxg27" [ddebec7b-7c98-41fc-8ebe-1a965ea25ebe] Pending / Ready:ContainersNotReady (containers with unready status: [mysql]) / ContainersReady:ContainersNotReady (containers with unready status: [mysql])
helpers_test.go:344: "mysql-6cdb49bbb-vxg27" [ddebec7b-7c98-41fc-8ebe-1a965ea25ebe] Running
functional_test.go:1799: (dbg) TestFunctional/parallel/MySQL: app=mysql healthy within 21.00431201s
functional_test.go:1807: (dbg) Run:  kubectl --context functional-896535 exec mysql-6cdb49bbb-vxg27 -- mysql -ppassword -e "show databases;"
functional_test.go:1807: (dbg) Non-zero exit: kubectl --context functional-896535 exec mysql-6cdb49bbb-vxg27 -- mysql -ppassword -e "show databases;": exit status 1 (158.496339ms)

                                                
                                                
** stderr ** 
	mysql: [Warning] Using a password on the command line interface can be insecure.
	ERROR 1045 (28000): Access denied for user 'root'@'localhost' (using password: YES)
	command terminated with exit code 1

                                                
                                                
** /stderr **
functional_test.go:1807: (dbg) Run:  kubectl --context functional-896535 exec mysql-6cdb49bbb-vxg27 -- mysql -ppassword -e "show databases;"
functional_test.go:1807: (dbg) Non-zero exit: kubectl --context functional-896535 exec mysql-6cdb49bbb-vxg27 -- mysql -ppassword -e "show databases;": exit status 1 (194.855442ms)

                                                
                                                
** stderr ** 
	mysql: [Warning] Using a password on the command line interface can be insecure.
	ERROR 1045 (28000): Access denied for user 'root'@'localhost' (using password: YES)
	command terminated with exit code 1

                                                
                                                
** /stderr **
functional_test.go:1807: (dbg) Run:  kubectl --context functional-896535 exec mysql-6cdb49bbb-vxg27 -- mysql -ppassword -e "show databases;"
functional_test.go:1807: (dbg) Non-zero exit: kubectl --context functional-896535 exec mysql-6cdb49bbb-vxg27 -- mysql -ppassword -e "show databases;": exit status 1 (167.135768ms)

                                                
                                                
** stderr ** 
	mysql: [Warning] Using a password on the command line interface can be insecure.
	ERROR 2002 (HY000): Can't connect to local MySQL server through socket '/var/run/mysqld/mysqld.sock' (2)
	command terminated with exit code 1

                                                
                                                
** /stderr **
functional_test.go:1807: (dbg) Run:  kubectl --context functional-896535 exec mysql-6cdb49bbb-vxg27 -- mysql -ppassword -e "show databases;"
--- PASS: TestFunctional/parallel/MySQL (25.64s)
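
The three non-zero exits above look like a startup race rather than a defect: the mysql:5.7 container keeps initializing after the pod reports Running, so early execs fail with ERROR 1045 (credentials not yet in place) or ERROR 2002 (server socket not up during a restart), and the test retries until a query succeeds. A hand-rolled equivalent of that retry, as a sketch (the loop is an assumption; the inner command is verbatim from the log):

	until kubectl --context functional-896535 exec mysql-6cdb49bbb-vxg27 -- \
	    mysql -ppassword -e "show databases;"; do
	  sleep 2   # back off while mysqld finishes initializing
	done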

                                                
                                    
TestFunctional/parallel/FileSync (0.24s)

                                                
                                                
=== RUN   TestFunctional/parallel/FileSync
=== PAUSE TestFunctional/parallel/FileSync

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/FileSync
functional_test.go:1929: Checking for existence of /etc/test/nested/copy/12518/hosts within VM
functional_test.go:1931: (dbg) Run:  out/minikube-linux-amd64 -p functional-896535 ssh "sudo cat /etc/test/nested/copy/12518/hosts"
functional_test.go:1936: file sync test content: Test file for checking file sync process
--- PASS: TestFunctional/parallel/FileSync (0.24s)

                                                
                                    
TestFunctional/parallel/CertSync (1.55s)

                                                
                                                
=== RUN   TestFunctional/parallel/CertSync
=== PAUSE TestFunctional/parallel/CertSync

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/CertSync
functional_test.go:1972: Checking for existence of /etc/ssl/certs/12518.pem within VM
functional_test.go:1973: (dbg) Run:  out/minikube-linux-amd64 -p functional-896535 ssh "sudo cat /etc/ssl/certs/12518.pem"
functional_test.go:1972: Checking for existence of /usr/share/ca-certificates/12518.pem within VM
functional_test.go:1973: (dbg) Run:  out/minikube-linux-amd64 -p functional-896535 ssh "sudo cat /usr/share/ca-certificates/12518.pem"
functional_test.go:1972: Checking for existence of /etc/ssl/certs/51391683.0 within VM
functional_test.go:1973: (dbg) Run:  out/minikube-linux-amd64 -p functional-896535 ssh "sudo cat /etc/ssl/certs/51391683.0"
functional_test.go:1999: Checking for existence of /etc/ssl/certs/125182.pem within VM
functional_test.go:2000: (dbg) Run:  out/minikube-linux-amd64 -p functional-896535 ssh "sudo cat /etc/ssl/certs/125182.pem"
functional_test.go:1999: Checking for existence of /usr/share/ca-certificates/125182.pem within VM
functional_test.go:2000: (dbg) Run:  out/minikube-linux-amd64 -p functional-896535 ssh "sudo cat /usr/share/ca-certificates/125182.pem"
functional_test.go:1999: Checking for existence of /etc/ssl/certs/3ec20f2e.0 within VM
functional_test.go:2000: (dbg) Run:  out/minikube-linux-amd64 -p functional-896535 ssh "sudo cat /etc/ssl/certs/3ec20f2e.0"
--- PASS: TestFunctional/parallel/CertSync (1.55s)
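
The oddly named entries are OpenSSL subject-hash aliases: /etc/ssl/certs/51391683.0 and /etc/ssl/certs/3ec20f2e.0 appear to be the hashed names under which TLS libraries look up the synced 12518.pem and 125182.pem certificates. The hash for a given certificate can be computed directly; a sketch, assuming the PEM file is at hand:

	# Prints the subject hash, e.g. 51391683; the trust-store entry is <hash>.0
	openssl x509 -hash -noout -in 12518.pem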

                                                
                                    
TestFunctional/parallel/NodeLabels (0.06s)

                                                
                                                
=== RUN   TestFunctional/parallel/NodeLabels
=== PAUSE TestFunctional/parallel/NodeLabels

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/NodeLabels
functional_test.go:219: (dbg) Run:  kubectl --context functional-896535 get nodes --output=go-template "--template='{{range $k, $v := (index .items 0).metadata.labels}}{{$k}} {{end}}'"
--- PASS: TestFunctional/parallel/NodeLabels (0.06s)

                                                
                                    
TestFunctional/parallel/NonActiveRuntimeDisabled (0.28s)

                                                
                                                
=== RUN   TestFunctional/parallel/NonActiveRuntimeDisabled
=== PAUSE TestFunctional/parallel/NonActiveRuntimeDisabled

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/NonActiveRuntimeDisabled
functional_test.go:2027: (dbg) Run:  out/minikube-linux-amd64 -p functional-896535 ssh "sudo systemctl is-active crio"
functional_test.go:2027: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-896535 ssh "sudo systemctl is-active crio": exit status 1 (281.618248ms)

                                                
                                                
-- stdout --
	inactive

                                                
                                                
-- /stdout --
** stderr ** 
	ssh: Process exited with status 3

                                                
                                                
** /stderr **
--- PASS: TestFunctional/parallel/NonActiveRuntimeDisabled (0.28s)
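
The non-zero exit here is the expected result: with docker as the active runtime, crio must be inactive, and systemctl is-active prints "inactive" and exits 3 for such a unit (surfaced above as "ssh: Process exited with status 3"). The same check against the active runtime should succeed, as a sketch:

	# Exit code 0 means active; 3 means inactive or dead.
	out/minikube-linux-amd64 -p functional-896535 ssh "sudo systemctl is-active docker"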

                                                
                                    
TestFunctional/parallel/License (0.61s)

                                                
                                                
=== RUN   TestFunctional/parallel/License
=== PAUSE TestFunctional/parallel/License

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/License
functional_test.go:2288: (dbg) Run:  out/minikube-linux-amd64 license
--- PASS: TestFunctional/parallel/License (0.61s)

                                                
                                    
TestFunctional/parallel/ProfileCmd/profile_not_create (0.44s)

                                                
                                                
=== RUN   TestFunctional/parallel/ProfileCmd/profile_not_create
functional_test.go:1270: (dbg) Run:  out/minikube-linux-amd64 profile lis
functional_test.go:1275: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestFunctional/parallel/ProfileCmd/profile_not_create (0.44s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageListShort (0.22s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageListShort
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListShort

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ImageCommands/ImageListShort
functional_test.go:261: (dbg) Run:  out/minikube-linux-amd64 -p functional-896535 image ls --format short --alsologtostderr
functional_test.go:266: (dbg) Stdout: out/minikube-linux-amd64 -p functional-896535 image ls --format short --alsologtostderr:
registry.k8s.io/pause:latest
registry.k8s.io/pause:3.3
registry.k8s.io/pause:3.10
registry.k8s.io/pause:3.1
registry.k8s.io/kube-scheduler:v1.31.1
registry.k8s.io/kube-proxy:v1.31.1
registry.k8s.io/kube-controller-manager:v1.31.1
registry.k8s.io/kube-apiserver:v1.31.1
registry.k8s.io/etcd:3.5.15-0
registry.k8s.io/coredns/coredns:v1.11.3
gcr.io/k8s-minikube/storage-provisioner:v5
gcr.io/k8s-minikube/busybox:1.28.4-glibc
docker.io/library/nginx:alpine
docker.io/library/mysql:5.7
docker.io/library/minikube-local-cache-test:functional-896535
docker.io/kubernetesui/metrics-scraper:<none>
docker.io/kubernetesui/dashboard:<none>
docker.io/kicbase/echo-server:functional-896535
functional_test.go:269: (dbg) Stderr: out/minikube-linux-amd64 -p functional-896535 image ls --format short --alsologtostderr:
I0912 21:46:37.997060   69475 out.go:345] Setting OutFile to fd 1 ...
I0912 21:46:37.997184   69475 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0912 21:46:37.997215   69475 out.go:358] Setting ErrFile to fd 2...
I0912 21:46:37.997227   69475 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0912 21:46:37.997481   69475 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19616-5723/.minikube/bin
I0912 21:46:37.998101   69475 config.go:182] Loaded profile config "functional-896535": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.31.1
I0912 21:46:37.998256   69475 config.go:182] Loaded profile config "functional-896535": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.31.1
I0912 21:46:37.998785   69475 cli_runner.go:164] Run: docker container inspect functional-896535 --format={{.State.Status}}
I0912 21:46:38.018073   69475 ssh_runner.go:195] Run: systemctl --version
I0912 21:46:38.018167   69475 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-896535
I0912 21:46:38.036130   69475 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32778 SSHKeyPath:/home/jenkins/minikube-integration/19616-5723/.minikube/machines/functional-896535/id_rsa Username:docker}
I0912 21:46:38.131473   69475 ssh_runner.go:195] Run: docker images --no-trunc --format "{{json .}}"
--- PASS: TestFunctional/parallel/ImageCommands/ImageListShort (0.22s)

                                                
                                    
TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel (0.54s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel
functional_test_tunnel_test.go:154: (dbg) daemon: [out/minikube-linux-amd64 -p functional-896535 tunnel --alsologtostderr]
functional_test_tunnel_test.go:154: (dbg) daemon: [out/minikube-linux-amd64 -p functional-896535 tunnel --alsologtostderr]
functional_test_tunnel_test.go:194: (dbg) stopping [out/minikube-linux-amd64 -p functional-896535 tunnel --alsologtostderr] ...
helpers_test.go:508: unable to kill pid 61782: os: process already finished
helpers_test.go:508: unable to kill pid 61431: os: process already finished
functional_test_tunnel_test.go:194: (dbg) stopping [out/minikube-linux-amd64 -p functional-896535 tunnel --alsologtostderr] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
--- PASS: TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel (0.54s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageListTable (0.2s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageListTable
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListTable

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ImageCommands/ImageListTable
functional_test.go:261: (dbg) Run:  out/minikube-linux-amd64 -p functional-896535 image ls --format table --alsologtostderr
functional_test.go:266: (dbg) Stdout: out/minikube-linux-amd64 -p functional-896535 image ls --format table --alsologtostderr:
|---------------------------------------------|-------------------|---------------|--------|
|                    Image                    |        Tag        |   Image ID    |  Size  |
|---------------------------------------------|-------------------|---------------|--------|
| registry.k8s.io/kube-controller-manager     | v1.31.1           | 175ffd71cce3d | 88.4MB |
| docker.io/library/nginx                     | alpine            | c7b4f26a7d93f | 43.2MB |
| docker.io/library/nginx                     | latest            | 39286ab8a5e14 | 188MB  |
| registry.k8s.io/pause                       | 3.10              | 873ed75102791 | 736kB  |
| registry.k8s.io/pause                       | 3.3               | 0184c1613d929 | 683kB  |
| gcr.io/k8s-minikube/busybox                 | 1.28.4-glibc      | 56cc512116c8f | 4.4MB  |
| registry.k8s.io/kube-scheduler              | v1.31.1           | 9aa1fad941575 | 67.4MB |
| registry.k8s.io/kube-proxy                  | v1.31.1           | 60c005f310ff3 | 91.5MB |
| registry.k8s.io/etcd                        | 3.5.15-0          | 2e96e5913fc06 | 148MB  |
| docker.io/library/mysql                     | 5.7               | 5107333e08a87 | 501MB  |
| registry.k8s.io/pause                       | 3.1               | da86e6ba6ca19 | 742kB  |
| registry.k8s.io/pause                       | latest            | 350b164e7ae1d | 240kB  |
| registry.k8s.io/coredns/coredns             | v1.11.3           | c69fa2e9cbf5f | 61.8MB |
| docker.io/kubernetesui/dashboard            | <none>            | 07655ddf2eebe | 246MB  |
| docker.io/kubernetesui/metrics-scraper      | <none>            | 115053965e86b | 43.8MB |
| gcr.io/k8s-minikube/storage-provisioner     | v5                | 6e38f40d628db | 31.5MB |
| docker.io/library/minikube-local-cache-test | functional-896535 | e44e11a4dd2e3 | 30B    |
| registry.k8s.io/kube-apiserver              | v1.31.1           | 6bab7719df100 | 94.2MB |
| docker.io/kicbase/echo-server               | functional-896535 | 9056ab77afb8e | 4.94MB |
|---------------------------------------------|-------------------|---------------|--------|
functional_test.go:269: (dbg) Stderr: out/minikube-linux-amd64 -p functional-896535 image ls --format table --alsologtostderr:
I0912 21:46:38.422014   69568 out.go:345] Setting OutFile to fd 1 ...
I0912 21:46:38.422110   69568 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0912 21:46:38.422118   69568 out.go:358] Setting ErrFile to fd 2...
I0912 21:46:38.422121   69568 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0912 21:46:38.422307   69568 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19616-5723/.minikube/bin
I0912 21:46:38.422814   69568 config.go:182] Loaded profile config "functional-896535": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.31.1
I0912 21:46:38.422914   69568 config.go:182] Loaded profile config "functional-896535": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.31.1
I0912 21:46:38.423319   69568 cli_runner.go:164] Run: docker container inspect functional-896535 --format={{.State.Status}}
I0912 21:46:38.443283   69568 ssh_runner.go:195] Run: systemctl --version
I0912 21:46:38.443344   69568 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-896535
I0912 21:46:38.462234   69568 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32778 SSHKeyPath:/home/jenkins/minikube-integration/19616-5723/.minikube/machines/functional-896535/id_rsa Username:docker}
I0912 21:46:38.547095   69568 ssh_runner.go:195] Run: docker images --no-trunc --format "{{json .}}"
--- PASS: TestFunctional/parallel/ImageCommands/ImageListTable (0.20s)
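
This and the neighboring sections cover the four output formats of image ls (short, table, json, yaml); as the Stderr traces show, each is a different rendering of the same `docker images --no-trunc` data gathered over SSH. For reference:

	out/minikube-linux-amd64 -p functional-896535 image ls --format short
	out/minikube-linux-amd64 -p functional-896535 image ls --format table
	out/minikube-linux-amd64 -p functional-896535 image ls --format json
	out/minikube-linux-amd64 -p functional-896535 image ls --format yaml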

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageListJson (0.21s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageListJson
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListJson

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ImageCommands/ImageListJson
functional_test.go:261: (dbg) Run:  out/minikube-linux-amd64 -p functional-896535 image ls --format json --alsologtostderr
functional_test.go:266: (dbg) Stdout: out/minikube-linux-amd64 -p functional-896535 image ls --format json --alsologtostderr:
[{"id":"0184c1613d92931126feb4c548e5da11015513b9e4c104e7305ee8b53b50a9da","repoDigests":[],"repoTags":["registry.k8s.io/pause:3.3"],"size":"683000"},{"id":"9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b","repoDigests":[],"repoTags":["registry.k8s.io/kube-scheduler:v1.31.1"],"size":"67400000"},{"id":"2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4","repoDigests":[],"repoTags":["registry.k8s.io/etcd:3.5.15-0"],"size":"148000000"},{"id":"5107333e08a87b836d48ff7528b1e84b9c86781cc9f1748bbc1b8c42a870d933","repoDigests":[],"repoTags":["docker.io/library/mysql:5.7"],"size":"501000000"},{"id":"07655ddf2eebe5d250f7a72c25f638b27126805d61779741b4e62e69ba080558","repoDigests":[],"repoTags":["docker.io/kubernetesui/dashboard:\u003cnone\u003e"],"size":"246000000"},{"id":"9056ab77afb8e18e04303f11000a9d31b3f16b74c59475b899ae1b342d328d30","repoDigests":[],"repoTags":["docker.io/kicbase/echo-server:functional-896535"],"size":"4940000"},{"id":"6e38f40d628db3002f5617342c8872c935de530d867
d0f709a2fbda1a302a562","repoDigests":[],"repoTags":["gcr.io/k8s-minikube/storage-provisioner:v5"],"size":"31500000"},{"id":"e44e11a4dd2e3c20177eb64b6a6281d687b7cc77a23b064fd6b800868fb17700","repoDigests":[],"repoTags":["docker.io/library/minikube-local-cache-test:functional-896535"],"size":"30"},{"id":"175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1","repoDigests":[],"repoTags":["registry.k8s.io/kube-controller-manager:v1.31.1"],"size":"88400000"},{"id":"60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561","repoDigests":[],"repoTags":["registry.k8s.io/kube-proxy:v1.31.1"],"size":"91500000"},{"id":"c7b4f26a7d93f4f1f276c51adb03ef0df54a82de89f254a9aec5c18bf0e45ee9","repoDigests":[],"repoTags":["docker.io/library/nginx:alpine"],"size":"43200000"},{"id":"6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee","repoDigests":[],"repoTags":["registry.k8s.io/kube-apiserver:v1.31.1"],"size":"94200000"},{"id":"56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c","repo
Digests":[],"repoTags":["gcr.io/k8s-minikube/busybox:1.28.4-glibc"],"size":"4400000"},{"id":"da86e6ba6ca197bf6bc5e9d900febd906b133eaa4750e6bed647b0fbe50ed43e","repoDigests":[],"repoTags":["registry.k8s.io/pause:3.1"],"size":"742000"},{"id":"c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6","repoDigests":[],"repoTags":["registry.k8s.io/coredns/coredns:v1.11.3"],"size":"61800000"},{"id":"873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136","repoDigests":[],"repoTags":["registry.k8s.io/pause:3.10"],"size":"736000"},{"id":"115053965e86b2df4d78af78d7951b8644839d20a03820c6df59a261103315f7","repoDigests":[],"repoTags":["docker.io/kubernetesui/metrics-scraper:\u003cnone\u003e"],"size":"43800000"},{"id":"350b164e7ae1dcddeffadd65c76226c9b6dc5553f5179153fb0e36b78f2a5e06","repoDigests":[],"repoTags":["registry.k8s.io/pause:latest"],"size":"240000"}]
functional_test.go:269: (dbg) Stderr: out/minikube-linux-amd64 -p functional-896535 image ls --format json --alsologtostderr:
I0912 21:46:38.210610   69519 out.go:345] Setting OutFile to fd 1 ...
I0912 21:46:38.210728   69519 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0912 21:46:38.210737   69519 out.go:358] Setting ErrFile to fd 2...
I0912 21:46:38.210743   69519 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0912 21:46:38.210940   69519 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19616-5723/.minikube/bin
I0912 21:46:38.211537   69519 config.go:182] Loaded profile config "functional-896535": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.31.1
I0912 21:46:38.211653   69519 config.go:182] Loaded profile config "functional-896535": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.31.1
I0912 21:46:38.212071   69519 cli_runner.go:164] Run: docker container inspect functional-896535 --format={{.State.Status}}
I0912 21:46:38.228686   69519 ssh_runner.go:195] Run: systemctl --version
I0912 21:46:38.228740   69519 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-896535
I0912 21:46:38.249684   69519 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32778 SSHKeyPath:/home/jenkins/minikube-integration/19616-5723/.minikube/machines/functional-896535/id_rsa Username:docker}
I0912 21:46:38.340105   69519 ssh_runner.go:195] Run: docker images --no-trunc --format "{{json .}}"
--- PASS: TestFunctional/parallel/ImageCommands/ImageListJson (0.21s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageListYaml (0.22s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageListYaml
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListYaml

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ImageCommands/ImageListYaml
functional_test.go:261: (dbg) Run:  out/minikube-linux-amd64 -p functional-896535 image ls --format yaml --alsologtostderr
functional_test.go:266: (dbg) Stdout: out/minikube-linux-amd64 -p functional-896535 image ls --format yaml --alsologtostderr:
- id: 60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561
repoDigests: []
repoTags:
- registry.k8s.io/kube-proxy:v1.31.1
size: "91500000"
- id: 39286ab8a5e14aeaf5fdd6e2fac76e0c8d31a0c07224f0ee5e6be502f12e93f3
repoDigests: []
repoTags:
- docker.io/library/nginx:latest
size: "188000000"
- id: da86e6ba6ca197bf6bc5e9d900febd906b133eaa4750e6bed647b0fbe50ed43e
repoDigests: []
repoTags:
- registry.k8s.io/pause:3.1
size: "742000"
- id: 350b164e7ae1dcddeffadd65c76226c9b6dc5553f5179153fb0e36b78f2a5e06
repoDigests: []
repoTags:
- registry.k8s.io/pause:latest
size: "240000"
- id: e44e11a4dd2e3c20177eb64b6a6281d687b7cc77a23b064fd6b800868fb17700
repoDigests: []
repoTags:
- docker.io/library/minikube-local-cache-test:functional-896535
size: "30"
- id: 175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1
repoDigests: []
repoTags:
- registry.k8s.io/kube-controller-manager:v1.31.1
size: "88400000"
- id: c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6
repoDigests: []
repoTags:
- registry.k8s.io/coredns/coredns:v1.11.3
size: "61800000"
- id: 2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4
repoDigests: []
repoTags:
- registry.k8s.io/etcd:3.5.15-0
size: "148000000"
- id: 6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee
repoDigests: []
repoTags:
- registry.k8s.io/kube-apiserver:v1.31.1
size: "94200000"
- id: 9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b
repoDigests: []
repoTags:
- registry.k8s.io/kube-scheduler:v1.31.1
size: "67400000"
- id: c7b4f26a7d93f4f1f276c51adb03ef0df54a82de89f254a9aec5c18bf0e45ee9
repoDigests: []
repoTags:
- docker.io/library/nginx:alpine
size: "43200000"
- id: 07655ddf2eebe5d250f7a72c25f638b27126805d61779741b4e62e69ba080558
repoDigests: []
repoTags:
- docker.io/kubernetesui/dashboard:<none>
size: "246000000"
- id: 9056ab77afb8e18e04303f11000a9d31b3f16b74c59475b899ae1b342d328d30
repoDigests: []
repoTags:
- docker.io/kicbase/echo-server:functional-896535
size: "4940000"
- id: 115053965e86b2df4d78af78d7951b8644839d20a03820c6df59a261103315f7
repoDigests: []
repoTags:
- docker.io/kubernetesui/metrics-scraper:<none>
size: "43800000"
- id: 0184c1613d92931126feb4c548e5da11015513b9e4c104e7305ee8b53b50a9da
repoDigests: []
repoTags:
- registry.k8s.io/pause:3.3
size: "683000"
- id: 873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136
repoDigests: []
repoTags:
- registry.k8s.io/pause:3.10
size: "736000"
- id: 5107333e08a87b836d48ff7528b1e84b9c86781cc9f1748bbc1b8c42a870d933
repoDigests: []
repoTags:
- docker.io/library/mysql:5.7
size: "501000000"
- id: 6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562
repoDigests: []
repoTags:
- gcr.io/k8s-minikube/storage-provisioner:v5
size: "31500000"
- id: 56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c
repoDigests: []
repoTags:
- gcr.io/k8s-minikube/busybox:1.28.4-glibc
size: "4400000"

                                                
                                                
functional_test.go:269: (dbg) Stderr: out/minikube-linux-amd64 -p functional-896535 image ls --format yaml --alsologtostderr:
I0912 21:46:38.622831   69646 out.go:345] Setting OutFile to fd 1 ...
I0912 21:46:38.623119   69646 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0912 21:46:38.623130   69646 out.go:358] Setting ErrFile to fd 2...
I0912 21:46:38.623134   69646 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0912 21:46:38.623350   69646 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19616-5723/.minikube/bin
I0912 21:46:38.623921   69646 config.go:182] Loaded profile config "functional-896535": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.31.1
I0912 21:46:38.624024   69646 config.go:182] Loaded profile config "functional-896535": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.31.1
I0912 21:46:38.624399   69646 cli_runner.go:164] Run: docker container inspect functional-896535 --format={{.State.Status}}
I0912 21:46:38.641662   69646 ssh_runner.go:195] Run: systemctl --version
I0912 21:46:38.641710   69646 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-896535
I0912 21:46:38.660349   69646 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32778 SSHKeyPath:/home/jenkins/minikube-integration/19616-5723/.minikube/machines/functional-896535/id_rsa Username:docker}
I0912 21:46:38.743469   69646 ssh_runner.go:195] Run: docker images --no-trunc --format "{{json .}}"
--- PASS: TestFunctional/parallel/ImageCommands/ImageListYaml (0.22s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageBuild (4.23s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageBuild
=== PAUSE TestFunctional/parallel/ImageCommands/ImageBuild

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ImageCommands/ImageBuild
functional_test.go:308: (dbg) Run:  out/minikube-linux-amd64 -p functional-896535 ssh pgrep buildkitd
functional_test.go:308: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-896535 ssh pgrep buildkitd: exit status 1 (254.822586ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
functional_test.go:315: (dbg) Run:  out/minikube-linux-amd64 -p functional-896535 image build -t localhost/my-image:functional-896535 testdata/build --alsologtostderr
functional_test.go:315: (dbg) Done: out/minikube-linux-amd64 -p functional-896535 image build -t localhost/my-image:functional-896535 testdata/build --alsologtostderr: (3.786648944s)
functional_test.go:323: (dbg) Stderr: out/minikube-linux-amd64 -p functional-896535 image build -t localhost/my-image:functional-896535 testdata/build --alsologtostderr:
I0912 21:46:39.103950   69792 out.go:345] Setting OutFile to fd 1 ...
I0912 21:46:39.104100   69792 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0912 21:46:39.104112   69792 out.go:358] Setting ErrFile to fd 2...
I0912 21:46:39.104118   69792 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0912 21:46:39.104524   69792 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19616-5723/.minikube/bin
I0912 21:46:39.105632   69792 config.go:182] Loaded profile config "functional-896535": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.31.1
I0912 21:46:39.106341   69792 config.go:182] Loaded profile config "functional-896535": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.31.1
I0912 21:46:39.107024   69792 cli_runner.go:164] Run: docker container inspect functional-896535 --format={{.State.Status}}
I0912 21:46:39.126368   69792 ssh_runner.go:195] Run: systemctl --version
I0912 21:46:39.126426   69792 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-896535
I0912 21:46:39.151108   69792 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32778 SSHKeyPath:/home/jenkins/minikube-integration/19616-5723/.minikube/machines/functional-896535/id_rsa Username:docker}
I0912 21:46:39.243955   69792 build_images.go:161] Building image from path: /tmp/build.2757428609.tar
I0912 21:46:39.244022   69792 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/build
I0912 21:46:39.255419   69792 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/build/build.2757428609.tar
I0912 21:46:39.259340   69792 ssh_runner.go:352] existence check for /var/lib/minikube/build/build.2757428609.tar: stat -c "%s %y" /var/lib/minikube/build/build.2757428609.tar: Process exited with status 1
stdout:

                                                
                                                
stderr:
stat: cannot statx '/var/lib/minikube/build/build.2757428609.tar': No such file or directory
I0912 21:46:39.259378   69792 ssh_runner.go:362] scp /tmp/build.2757428609.tar --> /var/lib/minikube/build/build.2757428609.tar (3072 bytes)
I0912 21:46:39.341746   69792 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/build/build.2757428609
I0912 21:46:39.352850   69792 ssh_runner.go:195] Run: sudo tar -C /var/lib/minikube/build/build.2757428609 -xf /var/lib/minikube/build/build.2757428609.tar
I0912 21:46:39.363248   69792 docker.go:360] Building image: /var/lib/minikube/build/build.2757428609
I0912 21:46:39.363326   69792 ssh_runner.go:195] Run: docker build -t localhost/my-image:functional-896535 /var/lib/minikube/build/build.2757428609
#0 building with "default" instance using docker driver

                                                
                                                
#1 [internal] load build definition from Dockerfile
#1 transferring dockerfile: 97B done
#1 DONE 0.0s

                                                
                                                
#2 [internal] load metadata for gcr.io/k8s-minikube/busybox:latest
#2 DONE 1.8s

                                                
                                                
#3 [internal] load .dockerignore
#3 transferring context: 2B done
#3 DONE 0.0s

                                                
                                                
#4 [internal] load build context
#4 transferring context: 62B done
#4 DONE 0.0s

                                                
                                                
#5 [1/3] FROM gcr.io/k8s-minikube/busybox:latest@sha256:ca5ae90100d50772da31f3b5016209e25ad61972404e2ccd83d44f10dee7e79b
#5 resolve gcr.io/k8s-minikube/busybox:latest@sha256:ca5ae90100d50772da31f3b5016209e25ad61972404e2ccd83d44f10dee7e79b 0.0s done
#5 sha256:ca5ae90100d50772da31f3b5016209e25ad61972404e2ccd83d44f10dee7e79b 770B / 770B done
#5 sha256:62ffc2ed7554e4c6d360bce40bbcf196573dd27c4ce080641a2c59867e732dee 527B / 527B done
#5 sha256:beae173ccac6ad749f76713cf4440fe3d21d1043fe616dfbe30775815d1d0f6a 1.46kB / 1.46kB done
#5 sha256:5cc84ad355aaa64f46ea9c7bbcc319a9d808ab15088a27209c9e70ef86e5a2aa 0B / 772.79kB 0.1s
#5 sha256:5cc84ad355aaa64f46ea9c7bbcc319a9d808ab15088a27209c9e70ef86e5a2aa 772.79kB / 772.79kB 0.9s done
#5 extracting sha256:5cc84ad355aaa64f46ea9c7bbcc319a9d808ab15088a27209c9e70ef86e5a2aa 0.0s done
#5 DONE 1.0s

                                                
                                                
#6 [2/3] RUN true
#6 DONE 0.2s

                                                
                                                
#7 [3/3] ADD content.txt /
#7 DONE 0.0s

                                                
                                                
#8 exporting to image
#8 exporting layers 0.0s done
#8 writing image sha256:e9a28e69c1618e060c53d045ca4b6507c54bbb8a5b8708a1e9e41716f0b75d0b done
#8 naming to localhost/my-image:functional-896535 done
#8 DONE 0.0s
I0912 21:46:42.817576   69792 ssh_runner.go:235] Completed: docker build -t localhost/my-image:functional-896535 /var/lib/minikube/build/build.2757428609: (3.454222654s)
I0912 21:46:42.817645   69792 ssh_runner.go:195] Run: sudo rm -rf /var/lib/minikube/build/build.2757428609
I0912 21:46:42.825915   69792 ssh_runner.go:195] Run: sudo rm -f /var/lib/minikube/build/build.2757428609.tar
I0912 21:46:42.833343   69792 build_images.go:217] Built localhost/my-image:functional-896535 from /tmp/build.2757428609.tar
I0912 21:46:42.833384   69792 build_images.go:133] succeeded building to: functional-896535
I0912 21:46:42.833391   69792 build_images.go:134] failed building to: 
functional_test.go:451: (dbg) Run:  out/minikube-linux-amd64 -p functional-896535 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageBuild (4.23s)
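
From the BuildKit trace above (a 97-byte Dockerfile and three steps), the testdata/build fixture is approximately the following; this is a reconstruction from the logged steps, not the file itself:

	# Reconstructed fixture, written out for a local rebuild (approximate):
	cat > Dockerfile <<-'EOF'
	FROM gcr.io/k8s-minikube/busybox:latest
	RUN true
	ADD content.txt /
	EOF
	printf 'minikube build test\n' > content.txt   # placeholder context file (assumed contents)
	out/minikube-linux-amd64 -p functional-896535 image build -t localhost/my-image:functional-896535 .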

                                                
                                    
TestFunctional/parallel/ImageCommands/Setup (1.85s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/Setup
functional_test.go:342: (dbg) Run:  docker pull kicbase/echo-server:1.0
functional_test.go:342: (dbg) Done: docker pull kicbase/echo-server:1.0: (1.832654736s)
functional_test.go:347: (dbg) Run:  docker tag kicbase/echo-server:1.0 kicbase/echo-server:functional-896535
--- PASS: TestFunctional/parallel/ImageCommands/Setup (1.85s)

                                                
                                    
TestFunctional/parallel/DockerEnv/bash (0.96s)

                                                
                                                
=== RUN   TestFunctional/parallel/DockerEnv/bash
functional_test.go:499: (dbg) Run:  /bin/bash -c "eval $(out/minikube-linux-amd64 -p functional-896535 docker-env) && out/minikube-linux-amd64 status -p functional-896535"
functional_test.go:522: (dbg) Run:  /bin/bash -c "eval $(out/minikube-linux-amd64 -p functional-896535 docker-env) && docker images"
--- PASS: TestFunctional/parallel/DockerEnv/bash (0.96s)
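
docker-env prints the environment variables (DOCKER_HOST and friends) that point a local docker client at the daemon inside the minikube node; eval-ing them makes plain docker commands operate on the cluster's image store. As run by the test:

	eval $(out/minikube-linux-amd64 -p functional-896535 docker-env) && docker images
	# docker images now lists the node's images, i.e. the same set that image ls reports.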

                                                
                                    
TestFunctional/parallel/ProfileCmd/profile_list (0.37s)

                                                
                                                
=== RUN   TestFunctional/parallel/ProfileCmd/profile_list
functional_test.go:1310: (dbg) Run:  out/minikube-linux-amd64 profile list
functional_test.go:1315: Took "325.234988ms" to run "out/minikube-linux-amd64 profile list"
functional_test.go:1324: (dbg) Run:  out/minikube-linux-amd64 profile list -l
functional_test.go:1329: Took "46.907087ms" to run "out/minikube-linux-amd64 profile list -l"
--- PASS: TestFunctional/parallel/ProfileCmd/profile_list (0.37s)

                                                
                                    
TestFunctional/parallel/TunnelCmd/serial/StartTunnel (0s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/StartTunnel
functional_test_tunnel_test.go:129: (dbg) daemon: [out/minikube-linux-amd64 -p functional-896535 tunnel --alsologtostderr]
--- PASS: TestFunctional/parallel/TunnelCmd/serial/StartTunnel (0.00s)

                                                
                                    
TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup (10.21s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup
functional_test_tunnel_test.go:212: (dbg) Run:  kubectl --context functional-896535 apply -f testdata/testsvc.yaml
functional_test_tunnel_test.go:216: (dbg) TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup: waiting 4m0s for pods matching "run=nginx-svc" in namespace "default" ...
helpers_test.go:344: "nginx-svc" [8a5d559e-bf66-4a00-b255-29cf4020f840] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
helpers_test.go:344: "nginx-svc" [8a5d559e-bf66-4a00-b255-29cf4020f840] Running
functional_test_tunnel_test.go:216: (dbg) TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup: run=nginx-svc healthy within 10.004493565s
--- PASS: TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup (10.21s)
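
These two steps stage the tunnel scenario: the tunnel daemon routes traffic from the host into the cluster's service network, and testsvc.yaml deploys an nginx pod that a LoadBalancer service will front. A hand-run sketch (the service name nginx-svc is assumed from the pod's run=nginx-svc label, and the get --watch line is an assumed way to wait for the external IP; neither is taken from the log):

	out/minikube-linux-amd64 -p functional-896535 tunnel &
	kubectl --context functional-896535 apply -f testdata/testsvc.yaml
	# With the tunnel up, the LoadBalancer service should be assigned an external IP:
	kubectl --context functional-896535 get svc nginx-svc --watch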

                                                
                                    
TestFunctional/parallel/ProfileCmd/profile_json_output (0.36s)

                                                
                                                
=== RUN   TestFunctional/parallel/ProfileCmd/profile_json_output
functional_test.go:1361: (dbg) Run:  out/minikube-linux-amd64 profile list -o json
functional_test.go:1366: Took "305.753406ms" to run "out/minikube-linux-amd64 profile list -o json"
functional_test.go:1374: (dbg) Run:  out/minikube-linux-amd64 profile list -o json --light
functional_test.go:1379: Took "57.226588ms" to run "out/minikube-linux-amd64 profile list -o json --light"
--- PASS: TestFunctional/parallel/ProfileCmd/profile_json_output (0.36s)

                                                
                                    
TestFunctional/parallel/UpdateContextCmd/no_changes (0.13s)

                                                
                                                
=== RUN   TestFunctional/parallel/UpdateContextCmd/no_changes
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_changes

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/UpdateContextCmd/no_changes
functional_test.go:2119: (dbg) Run:  out/minikube-linux-amd64 -p functional-896535 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_changes (0.13s)

                                                
                                    
TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster (0.13s)

                                                
                                                
=== RUN   TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
functional_test.go:2119: (dbg) Run:  out/minikube-linux-amd64 -p functional-896535 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster (0.13s)

                                                
                                    
TestFunctional/parallel/UpdateContextCmd/no_clusters (0.13s)

                                                
                                                
=== RUN   TestFunctional/parallel/UpdateContextCmd/no_clusters
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_clusters

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/UpdateContextCmd/no_clusters
functional_test.go:2119: (dbg) Run:  out/minikube-linux-amd64 -p functional-896535 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_clusters (0.13s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageLoadDaemon (0.93s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageLoadDaemon
functional_test.go:355: (dbg) Run:  out/minikube-linux-amd64 -p functional-896535 image load --daemon kicbase/echo-server:functional-896535 --alsologtostderr
functional_test.go:451: (dbg) Run:  out/minikube-linux-amd64 -p functional-896535 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageLoadDaemon (0.93s)
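
image load --daemon copies a tag from the host's docker daemon into the cluster node's runtime, which is why the Setup section above first pulled and retagged kicbase/echo-server. The full round trip, assembled from the Setup and load sections:

	docker pull kicbase/echo-server:1.0
	docker tag kicbase/echo-server:1.0 kicbase/echo-server:functional-896535
	out/minikube-linux-amd64 -p functional-896535 image load --daemon kicbase/echo-server:functional-896535
	# Verify the image landed in the node:
	out/minikube-linux-amd64 -p functional-896535 image ls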

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageReloadDaemon (0.8s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageReloadDaemon
functional_test.go:365: (dbg) Run:  out/minikube-linux-amd64 -p functional-896535 image load --daemon kicbase/echo-server:functional-896535 --alsologtostderr
functional_test.go:451: (dbg) Run:  out/minikube-linux-amd64 -p functional-896535 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageReloadDaemon (0.80s)

TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon (1.86s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon
functional_test.go:235: (dbg) Run:  docker pull kicbase/echo-server:latest
functional_test.go:235: (dbg) Done: docker pull kicbase/echo-server:latest: (1.060165994s)
functional_test.go:240: (dbg) Run:  docker tag kicbase/echo-server:latest kicbase/echo-server:functional-896535
functional_test.go:245: (dbg) Run:  out/minikube-linux-amd64 -p functional-896535 image load --daemon kicbase/echo-server:functional-896535 --alsologtostderr
functional_test.go:451: (dbg) Run:  out/minikube-linux-amd64 -p functional-896535 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon (1.86s)
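Note: the three daemon-flavored ImageCommands subtests above form one workflow: pull or tag an image under the profile name, push it from the host docker daemon into the cluster runtime with image load --daemon, and confirm with image ls. A sketch assembled from the commands in this log:

    docker pull kicbase/echo-server:latest
    docker tag kicbase/echo-server:latest kicbase/echo-server:functional-896535
    out/minikube-linux-amd64 -p functional-896535 image load --daemon kicbase/echo-server:functional-896535 --alsologtostderr
    out/minikube-linux-amd64 -p functional-896535 image ls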

TestFunctional/parallel/MountCmd/any-port (16.93s)

=== RUN   TestFunctional/parallel/MountCmd/any-port
functional_test_mount_test.go:73: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-896535 /tmp/TestFunctionalparallelMountCmdany-port461185101/001:/mount-9p --alsologtostderr -v=1]
functional_test_mount_test.go:107: wrote "test-1726177571958593032" to /tmp/TestFunctionalparallelMountCmdany-port461185101/001/created-by-test
functional_test_mount_test.go:107: wrote "test-1726177571958593032" to /tmp/TestFunctionalparallelMountCmdany-port461185101/001/created-by-test-removed-by-pod
functional_test_mount_test.go:107: wrote "test-1726177571958593032" to /tmp/TestFunctionalparallelMountCmdany-port461185101/001/test-1726177571958593032
functional_test_mount_test.go:115: (dbg) Run:  out/minikube-linux-amd64 -p functional-896535 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:115: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-896535 ssh "findmnt -T /mount-9p | grep 9p": exit status 1 (264.66855ms)

** stderr ** 
	ssh: Process exited with status 1

** /stderr **
functional_test_mount_test.go:115: (dbg) Run:  out/minikube-linux-amd64 -p functional-896535 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:129: (dbg) Run:  out/minikube-linux-amd64 -p functional-896535 ssh -- ls -la /mount-9p
functional_test_mount_test.go:133: guest mount directory contents
total 2
-rw-r--r-- 1 docker docker 24 Sep 12 21:46 created-by-test
-rw-r--r-- 1 docker docker 24 Sep 12 21:46 created-by-test-removed-by-pod
-rw-r--r-- 1 docker docker 24 Sep 12 21:46 test-1726177571958593032
functional_test_mount_test.go:137: (dbg) Run:  out/minikube-linux-amd64 -p functional-896535 ssh cat /mount-9p/test-1726177571958593032
functional_test_mount_test.go:148: (dbg) Run:  kubectl --context functional-896535 replace --force -f testdata/busybox-mount-test.yaml
functional_test_mount_test.go:153: (dbg) TestFunctional/parallel/MountCmd/any-port: waiting 4m0s for pods matching "integration-test=busybox-mount" in namespace "default" ...
helpers_test.go:344: "busybox-mount" [fb2b718c-15fd-47fb-a02d-cac309bdb7a7] Pending
helpers_test.go:344: "busybox-mount" [fb2b718c-15fd-47fb-a02d-cac309bdb7a7] Pending / Ready:ContainersNotReady (containers with unready status: [mount-munger]) / ContainersReady:ContainersNotReady (containers with unready status: [mount-munger])
helpers_test.go:344: "busybox-mount" [fb2b718c-15fd-47fb-a02d-cac309bdb7a7] Pending / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
helpers_test.go:344: "busybox-mount" [fb2b718c-15fd-47fb-a02d-cac309bdb7a7] Succeeded / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
functional_test_mount_test.go:153: (dbg) TestFunctional/parallel/MountCmd/any-port: integration-test=busybox-mount healthy within 14.003769941s
functional_test_mount_test.go:169: (dbg) Run:  kubectl --context functional-896535 logs busybox-mount
functional_test_mount_test.go:181: (dbg) Run:  out/minikube-linux-amd64 -p functional-896535 ssh stat /mount-9p/created-by-test
functional_test_mount_test.go:181: (dbg) Run:  out/minikube-linux-amd64 -p functional-896535 ssh stat /mount-9p/created-by-pod
functional_test_mount_test.go:90: (dbg) Run:  out/minikube-linux-amd64 -p functional-896535 ssh "sudo umount -f /mount-9p"
functional_test_mount_test.go:94: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-896535 /tmp/TestFunctionalparallelMountCmdany-port461185101/001:/mount-9p --alsologtostderr -v=1] ...
--- PASS: TestFunctional/parallel/MountCmd/any-port (16.93s)
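Note: any-port mounts a host temp directory into the guest over 9p, inspects it from inside the node, then runs a pod against it. The first findmnt probe above fails with exit status 1 because the mount daemon is still starting; the test simply retries. A minimal reproduction, assuming a running functional-896535 profile (the temp path is illustrative):

    SRC=$(mktemp -d)
    out/minikube-linux-amd64 mount -p functional-896535 "$SRC:/mount-9p" --alsologtostderr -v=1 &
    # may need a retry while the 9p server comes up
    out/minikube-linux-amd64 -p functional-896535 ssh "findmnt -T /mount-9p | grep 9p"
    out/minikube-linux-amd64 -p functional-896535 ssh -- ls -la /mount-9p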

TestFunctional/parallel/ImageCommands/ImageSaveToFile (0.28s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageSaveToFile
functional_test.go:380: (dbg) Run:  out/minikube-linux-amd64 -p functional-896535 image save kicbase/echo-server:functional-896535 /home/jenkins/workspace/Docker_Linux_integration/echo-server-save.tar --alsologtostderr
--- PASS: TestFunctional/parallel/ImageCommands/ImageSaveToFile (0.28s)

TestFunctional/parallel/ImageCommands/ImageRemove (0.41s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageRemove
functional_test.go:392: (dbg) Run:  out/minikube-linux-amd64 -p functional-896535 image rm kicbase/echo-server:functional-896535 --alsologtostderr
functional_test.go:451: (dbg) Run:  out/minikube-linux-amd64 -p functional-896535 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageRemove (0.41s)

TestFunctional/parallel/ImageCommands/ImageLoadFromFile (0.8s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageLoadFromFile
functional_test.go:409: (dbg) Run:  out/minikube-linux-amd64 -p functional-896535 image load /home/jenkins/workspace/Docker_Linux_integration/echo-server-save.tar --alsologtostderr
functional_test.go:451: (dbg) Run:  out/minikube-linux-amd64 -p functional-896535 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageLoadFromFile (0.80s)

TestFunctional/parallel/ImageCommands/ImageSaveDaemon (0.78s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageSaveDaemon
functional_test.go:419: (dbg) Run:  docker rmi kicbase/echo-server:functional-896535
functional_test.go:424: (dbg) Run:  out/minikube-linux-amd64 -p functional-896535 image save --daemon kicbase/echo-server:functional-896535 --alsologtostderr
functional_test.go:432: (dbg) Run:  docker image inspect kicbase/echo-server:functional-896535
--- PASS: TestFunctional/parallel/ImageCommands/ImageSaveDaemon (0.78s)
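Note: ImageSaveToFile, ImageRemove, ImageLoadFromFile and ImageSaveDaemon above round-trip the same image through a tarball and back into the host daemon. A sketch of that cycle (the /tmp path is illustrative; the log uses the Jenkins workspace):

    out/minikube-linux-amd64 -p functional-896535 image save kicbase/echo-server:functional-896535 /tmp/echo-server-save.tar --alsologtostderr
    out/minikube-linux-amd64 -p functional-896535 image rm kicbase/echo-server:functional-896535 --alsologtostderr
    out/minikube-linux-amd64 -p functional-896535 image load /tmp/echo-server-save.tar --alsologtostderr
    # export from the cluster runtime back into the host docker daemon
    out/minikube-linux-amd64 -p functional-896535 image save --daemon kicbase/echo-server:functional-896535 --alsologtostderr
    docker image inspect kicbase/echo-server:functional-896535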

TestFunctional/parallel/Version/short (0.04s)

=== RUN   TestFunctional/parallel/Version/short
=== PAUSE TestFunctional/parallel/Version/short
=== CONT  TestFunctional/parallel/Version/short
functional_test.go:2256: (dbg) Run:  out/minikube-linux-amd64 -p functional-896535 version --short
--- PASS: TestFunctional/parallel/Version/short (0.04s)

TestFunctional/parallel/Version/components (0.43s)

=== RUN   TestFunctional/parallel/Version/components
=== PAUSE TestFunctional/parallel/Version/components
=== CONT  TestFunctional/parallel/Version/components
functional_test.go:2270: (dbg) Run:  out/minikube-linux-amd64 -p functional-896535 version -o=json --components
--- PASS: TestFunctional/parallel/Version/components (0.43s)

TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP (0.06s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP
functional_test_tunnel_test.go:234: (dbg) Run:  kubectl --context functional-896535 get svc nginx-svc -o jsonpath={.status.loadBalancer.ingress[0].ip}
--- PASS: TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP (0.06s)

TestFunctional/parallel/TunnelCmd/serial/AccessDirect (0s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/AccessDirect
functional_test_tunnel_test.go:299: tunnel at http://10.110.156.200 is working!
--- PASS: TestFunctional/parallel/TunnelCmd/serial/AccessDirect (0.00s)

TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel (0.11s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel
functional_test_tunnel_test.go:434: (dbg) stopping [out/minikube-linux-amd64 -p functional-896535 tunnel --alsologtostderr] ...
--- PASS: TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel (0.11s)
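Note: the TunnelCmd serial steps depend on a minikube tunnel process started earlier in the run; it gives LoadBalancer services a routable ingress IP (10.110.156.200 here), which AccessDirect then hits directly. A sketch of the flow, assuming an nginx-svc LoadBalancer service already exists (the curl step is our addition):

    out/minikube-linux-amd64 -p functional-896535 tunnel --alsologtostderr &
    # wait for an ingress IP to be assigned, then hit it directly
    IP=$(kubectl --context functional-896535 get svc nginx-svc -o jsonpath='{.status.loadBalancer.ingress[0].ip}')
    curl -s "http://$IP"
    kill %1   # DeleteTunnel: stopping the daemon tears the route down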

TestFunctional/parallel/MountCmd/specific-port (2.08s)

=== RUN   TestFunctional/parallel/MountCmd/specific-port
functional_test_mount_test.go:213: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-896535 /tmp/TestFunctionalparallelMountCmdspecific-port2272233954/001:/mount-9p --alsologtostderr -v=1 --port 46464]
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-linux-amd64 -p functional-896535 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:243: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-896535 ssh "findmnt -T /mount-9p | grep 9p": exit status 1 (260.48373ms)

** stderr ** 
	ssh: Process exited with status 1

** /stderr **
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-linux-amd64 -p functional-896535 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:257: (dbg) Run:  out/minikube-linux-amd64 -p functional-896535 ssh -- ls -la /mount-9p
functional_test_mount_test.go:261: guest mount directory contents
total 0
functional_test_mount_test.go:263: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-896535 /tmp/TestFunctionalparallelMountCmdspecific-port2272233954/001:/mount-9p --alsologtostderr -v=1 --port 46464] ...
functional_test_mount_test.go:264: reading mount text
functional_test_mount_test.go:278: done reading mount text
functional_test_mount_test.go:230: (dbg) Run:  out/minikube-linux-amd64 -p functional-896535 ssh "sudo umount -f /mount-9p"
functional_test_mount_test.go:230: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-896535 ssh "sudo umount -f /mount-9p": exit status 1 (290.006182ms)

-- stdout --
	umount: /mount-9p: not mounted.

-- /stdout --
** stderr ** 
	ssh: Process exited with status 32

** /stderr **
functional_test_mount_test.go:232: "out/minikube-linux-amd64 -p functional-896535 ssh \"sudo umount -f /mount-9p\"": exit status 1
functional_test_mount_test.go:234: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-896535 /tmp/TestFunctionalparallelMountCmdspecific-port2272233954/001:/mount-9p --alsologtostderr -v=1 --port 46464] ...
--- PASS: TestFunctional/parallel/MountCmd/specific-port (2.08s)

TestFunctional/parallel/MountCmd/VerifyCleanup (1.74s)

=== RUN   TestFunctional/parallel/MountCmd/VerifyCleanup
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-896535 /tmp/TestFunctionalparallelMountCmdVerifyCleanup1352265904/001:/mount1 --alsologtostderr -v=1]
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-896535 /tmp/TestFunctionalparallelMountCmdVerifyCleanup1352265904/001:/mount2 --alsologtostderr -v=1]
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-896535 /tmp/TestFunctionalparallelMountCmdVerifyCleanup1352265904/001:/mount3 --alsologtostderr -v=1]
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-amd64 -p functional-896535 ssh "findmnt -T" /mount1
functional_test_mount_test.go:325: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-896535 ssh "findmnt -T" /mount1: exit status 1 (324.108417ms)

** stderr ** 
	ssh: Process exited with status 1

** /stderr **
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-amd64 -p functional-896535 ssh "findmnt -T" /mount1
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-amd64 -p functional-896535 ssh "findmnt -T" /mount2
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-amd64 -p functional-896535 ssh "findmnt -T" /mount3
functional_test_mount_test.go:370: (dbg) Run:  out/minikube-linux-amd64 mount -p functional-896535 --kill=true
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-896535 /tmp/TestFunctionalparallelMountCmdVerifyCleanup1352265904/001:/mount1 --alsologtostderr -v=1] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-896535 /tmp/TestFunctionalparallelMountCmdVerifyCleanup1352265904/001:/mount2 --alsologtostderr -v=1] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-896535 /tmp/TestFunctionalparallelMountCmdVerifyCleanup1352265904/001:/mount3 --alsologtostderr -v=1] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
--- PASS: TestFunctional/parallel/MountCmd/VerifyCleanup (1.74s)
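Note: specific-port pins the 9p server to --port 46464, and VerifyCleanup shows that mount --kill=true reaps every mount daemon for the profile at once (hence the "unable to find parent, assuming dead" lines when the individual stoppers run afterwards, and the status-32 "not mounted" from the later forced umount). A sketch, reusing an illustrative $SRC directory from the earlier mount example:

    out/minikube-linux-amd64 mount -p functional-896535 "$SRC:/mount1" --alsologtostderr -v=1 &
    out/minikube-linux-amd64 mount -p functional-896535 "$SRC:/mount2" --alsologtostderr -v=1 &
    out/minikube-linux-amd64 mount -p functional-896535 "$SRC:/mount3" --alsologtostderr -v=1 &
    out/minikube-linux-amd64 -p functional-896535 ssh "findmnt -T" /mount1
    # kill all mount processes for this profile in one shot
    out/minikube-linux-amd64 mount -p functional-896535 --kill=true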

TestFunctional/parallel/ServiceCmd/DeployApp (13.14s)

=== RUN   TestFunctional/parallel/ServiceCmd/DeployApp
functional_test.go:1439: (dbg) Run:  kubectl --context functional-896535 create deployment hello-node --image=registry.k8s.io/echoserver:1.8
functional_test.go:1445: (dbg) Run:  kubectl --context functional-896535 expose deployment hello-node --type=NodePort --port=8080
functional_test.go:1450: (dbg) TestFunctional/parallel/ServiceCmd/DeployApp: waiting 10m0s for pods matching "app=hello-node" in namespace "default" ...
helpers_test.go:344: "hello-node-6b9f76b5c7-4tpdc" [1cd4436a-cd9b-4687-9e97-a269d422e49d] Pending / Ready:ContainersNotReady (containers with unready status: [echoserver]) / ContainersReady:ContainersNotReady (containers with unready status: [echoserver])
2024/09/12 21:46:37 [DEBUG] GET http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/
helpers_test.go:344: "hello-node-6b9f76b5c7-4tpdc" [1cd4436a-cd9b-4687-9e97-a269d422e49d] Running
functional_test.go:1450: (dbg) TestFunctional/parallel/ServiceCmd/DeployApp: app=hello-node healthy within 13.003774336s
--- PASS: TestFunctional/parallel/ServiceCmd/DeployApp (13.14s)
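Note: DeployApp is plain kubectl against the functional cluster; the later ServiceCmd steps query the NodePort service it creates. The commands, as run in the log (the final get pods line is our addition for checking readiness):

    kubectl --context functional-896535 create deployment hello-node --image=registry.k8s.io/echoserver:1.8
    kubectl --context functional-896535 expose deployment hello-node --type=NodePort --port=8080
    kubectl --context functional-896535 get pods -l app=hello-node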

TestFunctional/parallel/ServiceCmd/List (1.67s)

=== RUN   TestFunctional/parallel/ServiceCmd/List
functional_test.go:1459: (dbg) Run:  out/minikube-linux-amd64 -p functional-896535 service list
functional_test.go:1459: (dbg) Done: out/minikube-linux-amd64 -p functional-896535 service list: (1.672608197s)
--- PASS: TestFunctional/parallel/ServiceCmd/List (1.67s)

TestFunctional/parallel/ServiceCmd/JSONOutput (1.67s)

=== RUN   TestFunctional/parallel/ServiceCmd/JSONOutput
functional_test.go:1489: (dbg) Run:  out/minikube-linux-amd64 -p functional-896535 service list -o json
functional_test.go:1489: (dbg) Done: out/minikube-linux-amd64 -p functional-896535 service list -o json: (1.671596791s)
functional_test.go:1494: Took "1.671693444s" to run "out/minikube-linux-amd64 -p functional-896535 service list -o json"
--- PASS: TestFunctional/parallel/ServiceCmd/JSONOutput (1.67s)

TestFunctional/parallel/ServiceCmd/HTTPS (0.49s)

=== RUN   TestFunctional/parallel/ServiceCmd/HTTPS
functional_test.go:1509: (dbg) Run:  out/minikube-linux-amd64 -p functional-896535 service --namespace=default --https --url hello-node
functional_test.go:1522: found endpoint: https://192.168.49.2:32166
--- PASS: TestFunctional/parallel/ServiceCmd/HTTPS (0.49s)

TestFunctional/parallel/ServiceCmd/Format (0.49s)

=== RUN   TestFunctional/parallel/ServiceCmd/Format
functional_test.go:1540: (dbg) Run:  out/minikube-linux-amd64 -p functional-896535 service hello-node --url --format={{.IP}}
--- PASS: TestFunctional/parallel/ServiceCmd/Format (0.49s)

TestFunctional/parallel/ServiceCmd/URL (0.49s)

=== RUN   TestFunctional/parallel/ServiceCmd/URL
functional_test.go:1559: (dbg) Run:  out/minikube-linux-amd64 -p functional-896535 service hello-node --url
functional_test.go:1565: found endpoint for hello-node: http://192.168.49.2:32166
--- PASS: TestFunctional/parallel/ServiceCmd/URL (0.49s)
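Note: the HTTPS, Format and URL subtests above are variations on minikube service; in this run they all resolve to the same NodePort endpoint, 192.168.49.2:32166. The variants, copied from the log:

    out/minikube-linux-amd64 -p functional-896535 service list -o json
    out/minikube-linux-amd64 -p functional-896535 service --namespace=default --https --url hello-node
    out/minikube-linux-amd64 -p functional-896535 service hello-node --url --format={{.IP}}
    out/minikube-linux-amd64 -p functional-896535 service hello-node --url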

TestFunctional/delete_echo-server_images (0.04s)

=== RUN   TestFunctional/delete_echo-server_images
functional_test.go:190: (dbg) Run:  docker rmi -f kicbase/echo-server:1.0
functional_test.go:190: (dbg) Run:  docker rmi -f kicbase/echo-server:functional-896535
--- PASS: TestFunctional/delete_echo-server_images (0.04s)

TestFunctional/delete_my-image_image (0.02s)

=== RUN   TestFunctional/delete_my-image_image
functional_test.go:198: (dbg) Run:  docker rmi -f localhost/my-image:functional-896535
--- PASS: TestFunctional/delete_my-image_image (0.02s)

TestFunctional/delete_minikube_cached_images (0.02s)

=== RUN   TestFunctional/delete_minikube_cached_images
functional_test.go:206: (dbg) Run:  docker rmi -f minikube-local-cache-test:functional-896535
--- PASS: TestFunctional/delete_minikube_cached_images (0.02s)

TestMultiControlPlane/serial/StartCluster (101.82s)

=== RUN   TestMultiControlPlane/serial/StartCluster
ha_test.go:101: (dbg) Run:  out/minikube-linux-amd64 start -p ha-792575 --wait=true --memory=2200 --ha -v=7 --alsologtostderr --driver=docker  --container-runtime=docker
E0912 21:47:56.743775   12518 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19616-5723/.minikube/profiles/addons-207808/client.crt: no such file or directory" logger="UnhandledError"
E0912 21:47:56.750706   12518 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19616-5723/.minikube/profiles/addons-207808/client.crt: no such file or directory" logger="UnhandledError"
E0912 21:47:56.762052   12518 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19616-5723/.minikube/profiles/addons-207808/client.crt: no such file or directory" logger="UnhandledError"
E0912 21:47:56.783442   12518 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19616-5723/.minikube/profiles/addons-207808/client.crt: no such file or directory" logger="UnhandledError"
E0912 21:47:56.824871   12518 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19616-5723/.minikube/profiles/addons-207808/client.crt: no such file or directory" logger="UnhandledError"
E0912 21:47:56.906287   12518 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19616-5723/.minikube/profiles/addons-207808/client.crt: no such file or directory" logger="UnhandledError"
E0912 21:47:57.068005   12518 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19616-5723/.minikube/profiles/addons-207808/client.crt: no such file or directory" logger="UnhandledError"
E0912 21:47:57.389677   12518 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19616-5723/.minikube/profiles/addons-207808/client.crt: no such file or directory" logger="UnhandledError"
E0912 21:47:58.031706   12518 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19616-5723/.minikube/profiles/addons-207808/client.crt: no such file or directory" logger="UnhandledError"
E0912 21:47:59.313683   12518 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19616-5723/.minikube/profiles/addons-207808/client.crt: no such file or directory" logger="UnhandledError"
E0912 21:48:01.875774   12518 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19616-5723/.minikube/profiles/addons-207808/client.crt: no such file or directory" logger="UnhandledError"
E0912 21:48:06.997045   12518 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19616-5723/.minikube/profiles/addons-207808/client.crt: no such file or directory" logger="UnhandledError"
E0912 21:48:17.239156   12518 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19616-5723/.minikube/profiles/addons-207808/client.crt: no such file or directory" logger="UnhandledError"
E0912 21:48:37.721474   12518 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19616-5723/.minikube/profiles/addons-207808/client.crt: no such file or directory" logger="UnhandledError"
ha_test.go:101: (dbg) Done: out/minikube-linux-amd64 start -p ha-792575 --wait=true --memory=2200 --ha -v=7 --alsologtostderr --driver=docker  --container-runtime=docker: (1m41.17456767s)
ha_test.go:107: (dbg) Run:  out/minikube-linux-amd64 -p ha-792575 status -v=7 --alsologtostderr
--- PASS: TestMultiControlPlane/serial/StartCluster (101.82s)
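Note: StartCluster brings up a multi-control-plane cluster with the --ha flag (three control-plane nodes by default) and then verifies it with status. The repeated cert_rotation errors above refer to the client certificate of the already-deleted addons-207808 profile and do not affect the result; the test passes. The invocation from the log:

    out/minikube-linux-amd64 start -p ha-792575 --wait=true --memory=2200 --ha -v=7 --alsologtostderr --driver=docker --container-runtime=docker
    out/minikube-linux-amd64 -p ha-792575 status -v=7 --alsologtostderr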

TestMultiControlPlane/serial/DeployApp (5.4s)

=== RUN   TestMultiControlPlane/serial/DeployApp
ha_test.go:128: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-792575 -- apply -f ./testdata/ha/ha-pod-dns-test.yaml
ha_test.go:133: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-792575 -- rollout status deployment/busybox
ha_test.go:133: (dbg) Done: out/minikube-linux-amd64 kubectl -p ha-792575 -- rollout status deployment/busybox: (3.605455331s)
ha_test.go:140: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-792575 -- get pods -o jsonpath='{.items[*].status.podIP}'
ha_test.go:163: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-792575 -- get pods -o jsonpath='{.items[*].metadata.name}'
ha_test.go:171: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-792575 -- exec busybox-7dff88458-nmklz -- nslookup kubernetes.io
ha_test.go:171: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-792575 -- exec busybox-7dff88458-shwlc -- nslookup kubernetes.io
ha_test.go:171: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-792575 -- exec busybox-7dff88458-wjscb -- nslookup kubernetes.io
ha_test.go:181: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-792575 -- exec busybox-7dff88458-nmklz -- nslookup kubernetes.default
ha_test.go:181: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-792575 -- exec busybox-7dff88458-shwlc -- nslookup kubernetes.default
ha_test.go:181: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-792575 -- exec busybox-7dff88458-wjscb -- nslookup kubernetes.default
ha_test.go:189: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-792575 -- exec busybox-7dff88458-nmklz -- nslookup kubernetes.default.svc.cluster.local
ha_test.go:189: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-792575 -- exec busybox-7dff88458-shwlc -- nslookup kubernetes.default.svc.cluster.local
ha_test.go:189: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-792575 -- exec busybox-7dff88458-wjscb -- nslookup kubernetes.default.svc.cluster.local
--- PASS: TestMultiControlPlane/serial/DeployApp (5.40s)

TestMultiControlPlane/serial/PingHostFromPods (1.01s)

=== RUN   TestMultiControlPlane/serial/PingHostFromPods
ha_test.go:199: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-792575 -- get pods -o jsonpath='{.items[*].metadata.name}'
ha_test.go:207: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-792575 -- exec busybox-7dff88458-nmklz -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
ha_test.go:218: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-792575 -- exec busybox-7dff88458-nmklz -- sh -c "ping -c 1 192.168.49.1"
ha_test.go:207: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-792575 -- exec busybox-7dff88458-shwlc -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
ha_test.go:218: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-792575 -- exec busybox-7dff88458-shwlc -- sh -c "ping -c 1 192.168.49.1"
ha_test.go:207: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-792575 -- exec busybox-7dff88458-wjscb -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
ha_test.go:218: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-792575 -- exec busybox-7dff88458-wjscb -- sh -c "ping -c 1 192.168.49.1"
--- PASS: TestMultiControlPlane/serial/PingHostFromPods (1.01s)
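Note: PingHostFromPods checks host reachability from inside each busybox pod: it resolves host.minikube.internal and pings the gateway address 192.168.49.1. A sketch (the pod name is run-specific; substitute one from kubectl get pods):

    POD=busybox-7dff88458-nmklz   # illustrative; taken from this run
    out/minikube-linux-amd64 kubectl -p ha-792575 -- exec "$POD" -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
    out/minikube-linux-amd64 kubectl -p ha-792575 -- exec "$POD" -- sh -c "ping -c 1 192.168.49.1"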

TestMultiControlPlane/serial/AddWorkerNode (20.29s)

=== RUN   TestMultiControlPlane/serial/AddWorkerNode
ha_test.go:228: (dbg) Run:  out/minikube-linux-amd64 node add -p ha-792575 -v=7 --alsologtostderr
ha_test.go:228: (dbg) Done: out/minikube-linux-amd64 node add -p ha-792575 -v=7 --alsologtostderr: (19.492930964s)
ha_test.go:234: (dbg) Run:  out/minikube-linux-amd64 -p ha-792575 status -v=7 --alsologtostderr
--- PASS: TestMultiControlPlane/serial/AddWorkerNode (20.29s)

TestMultiControlPlane/serial/NodeLabels (0.06s)

=== RUN   TestMultiControlPlane/serial/NodeLabels
ha_test.go:255: (dbg) Run:  kubectl --context ha-792575 get nodes -o "jsonpath=[{range .items[*]}{.metadata.labels},{end}]"
--- PASS: TestMultiControlPlane/serial/NodeLabels (0.06s)

TestMultiControlPlane/serial/HAppyAfterClusterStart (0.62s)

=== RUN   TestMultiControlPlane/serial/HAppyAfterClusterStart
ha_test.go:281: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiControlPlane/serial/HAppyAfterClusterStart (0.62s)

TestMultiControlPlane/serial/CopyFile (15.21s)

=== RUN   TestMultiControlPlane/serial/CopyFile
ha_test.go:326: (dbg) Run:  out/minikube-linux-amd64 -p ha-792575 status --output json -v=7 --alsologtostderr
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-792575 cp testdata/cp-test.txt ha-792575:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-792575 ssh -n ha-792575 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-792575 cp ha-792575:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile3470312858/001/cp-test_ha-792575.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-792575 ssh -n ha-792575 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-792575 cp ha-792575:/home/docker/cp-test.txt ha-792575-m02:/home/docker/cp-test_ha-792575_ha-792575-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-792575 ssh -n ha-792575 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-792575 ssh -n ha-792575-m02 "sudo cat /home/docker/cp-test_ha-792575_ha-792575-m02.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-792575 cp ha-792575:/home/docker/cp-test.txt ha-792575-m03:/home/docker/cp-test_ha-792575_ha-792575-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-792575 ssh -n ha-792575 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-792575 ssh -n ha-792575-m03 "sudo cat /home/docker/cp-test_ha-792575_ha-792575-m03.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-792575 cp ha-792575:/home/docker/cp-test.txt ha-792575-m04:/home/docker/cp-test_ha-792575_ha-792575-m04.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-792575 ssh -n ha-792575 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-792575 ssh -n ha-792575-m04 "sudo cat /home/docker/cp-test_ha-792575_ha-792575-m04.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-792575 cp testdata/cp-test.txt ha-792575-m02:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-792575 ssh -n ha-792575-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-792575 cp ha-792575-m02:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile3470312858/001/cp-test_ha-792575-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-792575 ssh -n ha-792575-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-792575 cp ha-792575-m02:/home/docker/cp-test.txt ha-792575:/home/docker/cp-test_ha-792575-m02_ha-792575.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-792575 ssh -n ha-792575-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-792575 ssh -n ha-792575 "sudo cat /home/docker/cp-test_ha-792575-m02_ha-792575.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-792575 cp ha-792575-m02:/home/docker/cp-test.txt ha-792575-m03:/home/docker/cp-test_ha-792575-m02_ha-792575-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-792575 ssh -n ha-792575-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-792575 ssh -n ha-792575-m03 "sudo cat /home/docker/cp-test_ha-792575-m02_ha-792575-m03.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-792575 cp ha-792575-m02:/home/docker/cp-test.txt ha-792575-m04:/home/docker/cp-test_ha-792575-m02_ha-792575-m04.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-792575 ssh -n ha-792575-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-792575 ssh -n ha-792575-m04 "sudo cat /home/docker/cp-test_ha-792575-m02_ha-792575-m04.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-792575 cp testdata/cp-test.txt ha-792575-m03:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-792575 ssh -n ha-792575-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-792575 cp ha-792575-m03:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile3470312858/001/cp-test_ha-792575-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-792575 ssh -n ha-792575-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-792575 cp ha-792575-m03:/home/docker/cp-test.txt ha-792575:/home/docker/cp-test_ha-792575-m03_ha-792575.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-792575 ssh -n ha-792575-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-792575 ssh -n ha-792575 "sudo cat /home/docker/cp-test_ha-792575-m03_ha-792575.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-792575 cp ha-792575-m03:/home/docker/cp-test.txt ha-792575-m02:/home/docker/cp-test_ha-792575-m03_ha-792575-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-792575 ssh -n ha-792575-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-792575 ssh -n ha-792575-m02 "sudo cat /home/docker/cp-test_ha-792575-m03_ha-792575-m02.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-792575 cp ha-792575-m03:/home/docker/cp-test.txt ha-792575-m04:/home/docker/cp-test_ha-792575-m03_ha-792575-m04.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-792575 ssh -n ha-792575-m03 "sudo cat /home/docker/cp-test.txt"
E0912 21:49:18.682891   12518 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19616-5723/.minikube/profiles/addons-207808/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-792575 ssh -n ha-792575-m04 "sudo cat /home/docker/cp-test_ha-792575-m03_ha-792575-m04.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-792575 cp testdata/cp-test.txt ha-792575-m04:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-792575 ssh -n ha-792575-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-792575 cp ha-792575-m04:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile3470312858/001/cp-test_ha-792575-m04.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-792575 ssh -n ha-792575-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-792575 cp ha-792575-m04:/home/docker/cp-test.txt ha-792575:/home/docker/cp-test_ha-792575-m04_ha-792575.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-792575 ssh -n ha-792575-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-792575 ssh -n ha-792575 "sudo cat /home/docker/cp-test_ha-792575-m04_ha-792575.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-792575 cp ha-792575-m04:/home/docker/cp-test.txt ha-792575-m02:/home/docker/cp-test_ha-792575-m04_ha-792575-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-792575 ssh -n ha-792575-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-792575 ssh -n ha-792575-m02 "sudo cat /home/docker/cp-test_ha-792575-m04_ha-792575-m02.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-792575 cp ha-792575-m04:/home/docker/cp-test.txt ha-792575-m03:/home/docker/cp-test_ha-792575-m04_ha-792575-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-792575 ssh -n ha-792575-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-792575 ssh -n ha-792575-m03 "sudo cat /home/docker/cp-test_ha-792575-m04_ha-792575-m03.txt"
--- PASS: TestMultiControlPlane/serial/CopyFile (15.21s)
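Note: CopyFile sweeps minikube cp over every node pair: local-to-node, node-to-local, and node-to-node, verifying each copy with ssh + cat. The per-pair pattern, taken from the log:

    out/minikube-linux-amd64 -p ha-792575 cp testdata/cp-test.txt ha-792575-m02:/home/docker/cp-test.txt
    out/minikube-linux-amd64 -p ha-792575 ssh -n ha-792575-m02 "sudo cat /home/docker/cp-test.txt"
    out/minikube-linux-amd64 -p ha-792575 cp ha-792575-m02:/home/docker/cp-test.txt ha-792575-m03:/home/docker/cp-test_ha-792575-m02_ha-792575-m03.txt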

TestMultiControlPlane/serial/StopSecondaryNode (11.42s)

=== RUN   TestMultiControlPlane/serial/StopSecondaryNode
ha_test.go:363: (dbg) Run:  out/minikube-linux-amd64 -p ha-792575 node stop m02 -v=7 --alsologtostderr
ha_test.go:363: (dbg) Done: out/minikube-linux-amd64 -p ha-792575 node stop m02 -v=7 --alsologtostderr: (10.788376619s)
ha_test.go:369: (dbg) Run:  out/minikube-linux-amd64 -p ha-792575 status -v=7 --alsologtostderr
ha_test.go:369: (dbg) Non-zero exit: out/minikube-linux-amd64 -p ha-792575 status -v=7 --alsologtostderr: exit status 7 (631.012633ms)

-- stdout --
	ha-792575
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-792575-m02
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	ha-792575-m03
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-792575-m04
	type: Worker
	host: Running
	kubelet: Running
	

-- /stdout --
** stderr ** 
	I0912 21:49:33.418932   98261 out.go:345] Setting OutFile to fd 1 ...
	I0912 21:49:33.419159   98261 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0912 21:49:33.419175   98261 out.go:358] Setting ErrFile to fd 2...
	I0912 21:49:33.419182   98261 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0912 21:49:33.419353   98261 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19616-5723/.minikube/bin
	I0912 21:49:33.419528   98261 out.go:352] Setting JSON to false
	I0912 21:49:33.419558   98261 mustload.go:65] Loading cluster: ha-792575
	I0912 21:49:33.419678   98261 notify.go:220] Checking for updates...
	I0912 21:49:33.420056   98261 config.go:182] Loaded profile config "ha-792575": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I0912 21:49:33.420077   98261 status.go:255] checking status of ha-792575 ...
	I0912 21:49:33.420583   98261 cli_runner.go:164] Run: docker container inspect ha-792575 --format={{.State.Status}}
	I0912 21:49:33.437706   98261 status.go:330] ha-792575 host status = "Running" (err=<nil>)
	I0912 21:49:33.437736   98261 host.go:66] Checking if "ha-792575" exists ...
	I0912 21:49:33.438010   98261 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-792575
	I0912 21:49:33.456473   98261 host.go:66] Checking if "ha-792575" exists ...
	I0912 21:49:33.456775   98261 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0912 21:49:33.456843   98261 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-792575
	I0912 21:49:33.477063   98261 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32783 SSHKeyPath:/home/jenkins/minikube-integration/19616-5723/.minikube/machines/ha-792575/id_rsa Username:docker}
	I0912 21:49:33.560119   98261 ssh_runner.go:195] Run: systemctl --version
	I0912 21:49:33.564037   98261 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0912 21:49:33.574744   98261 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0912 21:49:33.622796   98261 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:4 ContainersRunning:3 ContainersPaused:0 ContainersStopped:1 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:52 OomKillDisable:true NGoroutines:72 SystemTime:2024-09-12 21:49:33.613674598 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1067-gcp OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33647939584 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent Labels:[] ExperimentalBuild:false ServerVersion:27.2.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:7f7fdf5fed64eb6a7caf99b3e12efcf9d60e311c Expected:7f7fdf5fed64eb6a7caf99b3e12efcf9d60e311c} RuncCommit:{ID:v1.1.14-0-g2c9f560 Expected:v1.1.14-0-g2c9f560} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.16.2] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.29.2] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I0912 21:49:33.623446   98261 kubeconfig.go:125] found "ha-792575" server: "https://192.168.49.254:8443"
	I0912 21:49:33.623479   98261 api_server.go:166] Checking apiserver status ...
	I0912 21:49:33.623515   98261 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0912 21:49:33.634795   98261 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/2417/cgroup
	I0912 21:49:33.645308   98261 api_server.go:182] apiserver freezer: "8:freezer:/docker/69ca6b76a579272ff249e7ecd3e24fb9f200c875642094c166568c35dab77272/kubepods/burstable/pod4d0b504bb8b258925cfcbc0e12cea274/d0c097842eba95f288f601ef3174f10a10cee197ccd6face6d3eaec7a912ede9"
	I0912 21:49:33.645369   98261 ssh_runner.go:195] Run: sudo cat /sys/fs/cgroup/freezer/docker/69ca6b76a579272ff249e7ecd3e24fb9f200c875642094c166568c35dab77272/kubepods/burstable/pod4d0b504bb8b258925cfcbc0e12cea274/d0c097842eba95f288f601ef3174f10a10cee197ccd6face6d3eaec7a912ede9/freezer.state
	I0912 21:49:33.653985   98261 api_server.go:204] freezer state: "THAWED"
	I0912 21:49:33.654012   98261 api_server.go:253] Checking apiserver healthz at https://192.168.49.254:8443/healthz ...
	I0912 21:49:33.657718   98261 api_server.go:279] https://192.168.49.254:8443/healthz returned 200:
	ok
	I0912 21:49:33.657744   98261 status.go:422] ha-792575 apiserver status = Running (err=<nil>)
	I0912 21:49:33.657753   98261 status.go:257] ha-792575 status: &{Name:ha-792575 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0912 21:49:33.657774   98261 status.go:255] checking status of ha-792575-m02 ...
	I0912 21:49:33.658012   98261 cli_runner.go:164] Run: docker container inspect ha-792575-m02 --format={{.State.Status}}
	I0912 21:49:33.676075   98261 status.go:330] ha-792575-m02 host status = "Stopped" (err=<nil>)
	I0912 21:49:33.676102   98261 status.go:343] host is not running, skipping remaining checks
	I0912 21:49:33.676111   98261 status.go:257] ha-792575-m02 status: &{Name:ha-792575-m02 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0912 21:49:33.676129   98261 status.go:255] checking status of ha-792575-m03 ...
	I0912 21:49:33.676492   98261 cli_runner.go:164] Run: docker container inspect ha-792575-m03 --format={{.State.Status}}
	I0912 21:49:33.695059   98261 status.go:330] ha-792575-m03 host status = "Running" (err=<nil>)
	I0912 21:49:33.695087   98261 host.go:66] Checking if "ha-792575-m03" exists ...
	I0912 21:49:33.695399   98261 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-792575-m03
	I0912 21:49:33.713627   98261 host.go:66] Checking if "ha-792575-m03" exists ...
	I0912 21:49:33.713945   98261 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0912 21:49:33.713997   98261 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-792575-m03
	I0912 21:49:33.731923   98261 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32793 SSHKeyPath:/home/jenkins/minikube-integration/19616-5723/.minikube/machines/ha-792575-m03/id_rsa Username:docker}
	I0912 21:49:33.815921   98261 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0912 21:49:33.827548   98261 kubeconfig.go:125] found "ha-792575" server: "https://192.168.49.254:8443"
	I0912 21:49:33.827574   98261 api_server.go:166] Checking apiserver status ...
	I0912 21:49:33.827608   98261 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0912 21:49:33.838893   98261 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/2258/cgroup
	I0912 21:49:33.847664   98261 api_server.go:182] apiserver freezer: "8:freezer:/docker/7fcc8dacb2bde0a1d1862607900b2a831db3cd7a5723e5ac54e5a2f05f7ddd16/kubepods/burstable/pod603f3e093236988955c2102491b4e827/7facfb7a688905b5c0c183d7f79ba75aa55c02ce5c65b327e7f31c51d5263e2d"
	I0912 21:49:33.847741   98261 ssh_runner.go:195] Run: sudo cat /sys/fs/cgroup/freezer/docker/7fcc8dacb2bde0a1d1862607900b2a831db3cd7a5723e5ac54e5a2f05f7ddd16/kubepods/burstable/pod603f3e093236988955c2102491b4e827/7facfb7a688905b5c0c183d7f79ba75aa55c02ce5c65b327e7f31c51d5263e2d/freezer.state
	I0912 21:49:33.855589   98261 api_server.go:204] freezer state: "THAWED"
	I0912 21:49:33.855632   98261 api_server.go:253] Checking apiserver healthz at https://192.168.49.254:8443/healthz ...
	I0912 21:49:33.859178   98261 api_server.go:279] https://192.168.49.254:8443/healthz returned 200:
	ok
	I0912 21:49:33.859203   98261 status.go:422] ha-792575-m03 apiserver status = Running (err=<nil>)
	I0912 21:49:33.859211   98261 status.go:257] ha-792575-m03 status: &{Name:ha-792575-m03 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0912 21:49:33.859225   98261 status.go:255] checking status of ha-792575-m04 ...
	I0912 21:49:33.859494   98261 cli_runner.go:164] Run: docker container inspect ha-792575-m04 --format={{.State.Status}}
	I0912 21:49:33.877100   98261 status.go:330] ha-792575-m04 host status = "Running" (err=<nil>)
	I0912 21:49:33.877121   98261 host.go:66] Checking if "ha-792575-m04" exists ...
	I0912 21:49:33.877336   98261 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-792575-m04
	I0912 21:49:33.895892   98261 host.go:66] Checking if "ha-792575-m04" exists ...
	I0912 21:49:33.896166   98261 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0912 21:49:33.896209   98261 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-792575-m04
	I0912 21:49:33.913829   98261 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32798 SSHKeyPath:/home/jenkins/minikube-integration/19616-5723/.minikube/machines/ha-792575-m04/id_rsa Username:docker}
	I0912 21:49:33.995741   98261 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0912 21:49:34.006101   98261 status.go:257] ha-792575-m04 status: &{Name:ha-792575-m04 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
--- PASS: TestMultiControlPlane/serial/StopSecondaryNode (11.42s)
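Note: with one control-plane node stopped, minikube status reports the remaining nodes as Running but exits with status 7 (visible in the Non-zero exit above), which the test treats as the expected degraded-but-alive signal:

    out/minikube-linux-amd64 -p ha-792575 node stop m02 -v=7 --alsologtostderr
    out/minikube-linux-amd64 -p ha-792575 status -v=7 --alsologtostderr
    echo $?   # 7 while any node is stopped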

TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop (0.46s)

=== RUN   TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop
ha_test.go:390: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop (0.46s)

TestMultiControlPlane/serial/RestartSecondaryNode (36.36s)

=== RUN   TestMultiControlPlane/serial/RestartSecondaryNode
ha_test.go:420: (dbg) Run:  out/minikube-linux-amd64 -p ha-792575 node start m02 -v=7 --alsologtostderr
ha_test.go:420: (dbg) Done: out/minikube-linux-amd64 -p ha-792575 node start m02 -v=7 --alsologtostderr: (35.459207313s)
ha_test.go:428: (dbg) Run:  out/minikube-linux-amd64 -p ha-792575 status -v=7 --alsologtostderr
ha_test.go:448: (dbg) Run:  kubectl get nodes
--- PASS: TestMultiControlPlane/serial/RestartSecondaryNode (36.36s)

TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart (0.63s)

=== RUN   TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart
ha_test.go:281: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart (0.63s)

TestMultiControlPlane/serial/RestartClusterKeepsNodes (138.05s)

=== RUN   TestMultiControlPlane/serial/RestartClusterKeepsNodes
ha_test.go:456: (dbg) Run:  out/minikube-linux-amd64 node list -p ha-792575 -v=7 --alsologtostderr
ha_test.go:462: (dbg) Run:  out/minikube-linux-amd64 stop -p ha-792575 -v=7 --alsologtostderr
E0912 21:50:40.605220   12518 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19616-5723/.minikube/profiles/addons-207808/client.crt: no such file or directory" logger="UnhandledError"
ha_test.go:462: (dbg) Done: out/minikube-linux-amd64 stop -p ha-792575 -v=7 --alsologtostderr: (33.767861281s)
ha_test.go:467: (dbg) Run:  out/minikube-linux-amd64 start -p ha-792575 --wait=true -v=7 --alsologtostderr
E0912 21:51:07.577139   12518 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19616-5723/.minikube/profiles/functional-896535/client.crt: no such file or directory" logger="UnhandledError"
E0912 21:51:07.583526   12518 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19616-5723/.minikube/profiles/functional-896535/client.crt: no such file or directory" logger="UnhandledError"
E0912 21:51:07.594889   12518 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19616-5723/.minikube/profiles/functional-896535/client.crt: no such file or directory" logger="UnhandledError"
E0912 21:51:07.616284   12518 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19616-5723/.minikube/profiles/functional-896535/client.crt: no such file or directory" logger="UnhandledError"
E0912 21:51:07.657729   12518 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19616-5723/.minikube/profiles/functional-896535/client.crt: no such file or directory" logger="UnhandledError"
E0912 21:51:07.739189   12518 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19616-5723/.minikube/profiles/functional-896535/client.crt: no such file or directory" logger="UnhandledError"
E0912 21:51:07.900716   12518 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19616-5723/.minikube/profiles/functional-896535/client.crt: no such file or directory" logger="UnhandledError"
E0912 21:51:08.222650   12518 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19616-5723/.minikube/profiles/functional-896535/client.crt: no such file or directory" logger="UnhandledError"
E0912 21:51:08.864173   12518 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19616-5723/.minikube/profiles/functional-896535/client.crt: no such file or directory" logger="UnhandledError"
E0912 21:51:10.146074   12518 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19616-5723/.minikube/profiles/functional-896535/client.crt: no such file or directory" logger="UnhandledError"
E0912 21:51:12.708052   12518 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19616-5723/.minikube/profiles/functional-896535/client.crt: no such file or directory" logger="UnhandledError"
E0912 21:51:17.829929   12518 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19616-5723/.minikube/profiles/functional-896535/client.crt: no such file or directory" logger="UnhandledError"
E0912 21:51:28.071884   12518 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19616-5723/.minikube/profiles/functional-896535/client.crt: no such file or directory" logger="UnhandledError"
E0912 21:51:48.553710   12518 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19616-5723/.minikube/profiles/functional-896535/client.crt: no such file or directory" logger="UnhandledError"
ha_test.go:467: (dbg) Done: out/minikube-linux-amd64 start -p ha-792575 --wait=true -v=7 --alsologtostderr: (1m44.197930685s)
ha_test.go:472: (dbg) Run:  out/minikube-linux-amd64 node list -p ha-792575
--- PASS: TestMultiControlPlane/serial/RestartClusterKeepsNodes (138.05s)

                                                
                                    
TestMultiControlPlane/serial/DeleteSecondaryNode (9.31s)

=== RUN   TestMultiControlPlane/serial/DeleteSecondaryNode
E0912 21:52:29.516361   12518 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19616-5723/.minikube/profiles/functional-896535/client.crt: no such file or directory" logger="UnhandledError"
ha_test.go:487: (dbg) Run:  out/minikube-linux-amd64 -p ha-792575 node delete m03 -v=7 --alsologtostderr
ha_test.go:487: (dbg) Done: out/minikube-linux-amd64 -p ha-792575 node delete m03 -v=7 --alsologtostderr: (8.58418049s)
ha_test.go:493: (dbg) Run:  out/minikube-linux-amd64 -p ha-792575 status -v=7 --alsologtostderr
ha_test.go:511: (dbg) Run:  kubectl get nodes
ha_test.go:519: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiControlPlane/serial/DeleteSecondaryNode (9.31s)
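
The go-template passed to kubectl above is dense. As a minimal sketch of how such a template walks a node list, the following self-contained Go program evaluates the same template shape; the inlined JSON is a trimmed-down stand-in, not real kubectl output.

	package main

	import (
		"encoding/json"
		"os"
		"text/template"
	)

	// Trimmed-down stand-in for a kubectl node list (assumption, not real output).
	const nodesJSON = `{"items":[{"status":{"conditions":[{"type":"Ready","status":"True"}]}}]}`

	func main() {
		var doc map[string]any
		if err := json.Unmarshal([]byte(nodesJSON), &doc); err != nil {
			panic(err)
		}
		// Same template shape as the kubectl -o go-template check above:
		// range items, range conditions, print .status where .type is "Ready".
		tmpl := template.Must(template.New("ready").Parse(
			`{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}`))
		if err := tmpl.Execute(os.Stdout, doc); err != nil {
			panic(err)
		}
	}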

                                                
                                    
TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete (0.44s)

=== RUN   TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete
ha_test.go:390: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete (0.44s)

                                                
                                    
TestMultiControlPlane/serial/StopCluster (32.42s)

=== RUN   TestMultiControlPlane/serial/StopCluster
ha_test.go:531: (dbg) Run:  out/minikube-linux-amd64 -p ha-792575 stop -v=7 --alsologtostderr
E0912 21:52:56.743737   12518 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19616-5723/.minikube/profiles/addons-207808/client.crt: no such file or directory" logger="UnhandledError"
ha_test.go:531: (dbg) Done: out/minikube-linux-amd64 -p ha-792575 stop -v=7 --alsologtostderr: (32.320356407s)
ha_test.go:537: (dbg) Run:  out/minikube-linux-amd64 -p ha-792575 status -v=7 --alsologtostderr
ha_test.go:537: (dbg) Non-zero exit: out/minikube-linux-amd64 -p ha-792575 status -v=7 --alsologtostderr: exit status 7 (96.222979ms)

                                                
                                                
-- stdout --
	ha-792575
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	ha-792575-m02
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	ha-792575-m04
	type: Worker
	host: Stopped
	kubelet: Stopped
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0912 21:53:11.618288  126862 out.go:345] Setting OutFile to fd 1 ...
	I0912 21:53:11.618409  126862 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0912 21:53:11.618419  126862 out.go:358] Setting ErrFile to fd 2...
	I0912 21:53:11.618426  126862 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0912 21:53:11.618650  126862 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19616-5723/.minikube/bin
	I0912 21:53:11.618827  126862 out.go:352] Setting JSON to false
	I0912 21:53:11.618854  126862 mustload.go:65] Loading cluster: ha-792575
	I0912 21:53:11.618947  126862 notify.go:220] Checking for updates...
	I0912 21:53:11.619264  126862 config.go:182] Loaded profile config "ha-792575": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I0912 21:53:11.619282  126862 status.go:255] checking status of ha-792575 ...
	I0912 21:53:11.619669  126862 cli_runner.go:164] Run: docker container inspect ha-792575 --format={{.State.Status}}
	I0912 21:53:11.636893  126862 status.go:330] ha-792575 host status = "Stopped" (err=<nil>)
	I0912 21:53:11.636915  126862 status.go:343] host is not running, skipping remaining checks
	I0912 21:53:11.636922  126862 status.go:257] ha-792575 status: &{Name:ha-792575 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0912 21:53:11.636962  126862 status.go:255] checking status of ha-792575-m02 ...
	I0912 21:53:11.637201  126862 cli_runner.go:164] Run: docker container inspect ha-792575-m02 --format={{.State.Status}}
	I0912 21:53:11.654732  126862 status.go:330] ha-792575-m02 host status = "Stopped" (err=<nil>)
	I0912 21:53:11.655111  126862 status.go:343] host is not running, skipping remaining checks
	I0912 21:53:11.655119  126862 status.go:257] ha-792575-m02 status: &{Name:ha-792575-m02 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0912 21:53:11.655155  126862 status.go:255] checking status of ha-792575-m04 ...
	I0912 21:53:11.655415  126862 cli_runner.go:164] Run: docker container inspect ha-792575-m04 --format={{.State.Status}}
	I0912 21:53:11.672854  126862 status.go:330] ha-792575-m04 host status = "Stopped" (err=<nil>)
	I0912 21:53:11.672876  126862 status.go:343] host is not running, skipping remaining checks
	I0912 21:53:11.672882  126862 status.go:257] ha-792575-m04 status: &{Name:ha-792575-m04 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
--- PASS: TestMultiControlPlane/serial/StopCluster (32.42s)
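
A stopped cluster makes `minikube status` return exit status 7, as captured above, so a non-zero exit here is expected rather than a failure. A minimal Go sketch of that handling, assuming the binary path and profile name shown in this log:

	package main

	import (
		"errors"
		"fmt"
		"os/exec"
	)

	func main() {
		// Binary path and profile name as seen in this log.
		cmd := exec.Command("out/minikube-linux-amd64", "-p", "ha-792575", "status")
		out, err := cmd.CombinedOutput()
		var exitErr *exec.ExitError
		if errors.As(err, &exitErr) && exitErr.ExitCode() == 7 {
			// Exit 7 is what a fully stopped cluster reports above.
			fmt.Printf("cluster stopped (exit 7):\n%s", out)
			return
		}
		if err != nil {
			panic(err)
		}
		fmt.Printf("cluster running:\n%s", out)
	}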

                                                
                                    
TestMultiControlPlane/serial/RestartCluster (100.68s)

=== RUN   TestMultiControlPlane/serial/RestartCluster
ha_test.go:560: (dbg) Run:  out/minikube-linux-amd64 start -p ha-792575 --wait=true -v=7 --alsologtostderr --driver=docker  --container-runtime=docker
E0912 21:53:24.446570   12518 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19616-5723/.minikube/profiles/addons-207808/client.crt: no such file or directory" logger="UnhandledError"
E0912 21:53:51.438154   12518 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19616-5723/.minikube/profiles/functional-896535/client.crt: no such file or directory" logger="UnhandledError"
ha_test.go:560: (dbg) Done: out/minikube-linux-amd64 start -p ha-792575 --wait=true -v=7 --alsologtostderr --driver=docker  --container-runtime=docker: (1m39.959156625s)
ha_test.go:566: (dbg) Run:  out/minikube-linux-amd64 -p ha-792575 status -v=7 --alsologtostderr
ha_test.go:584: (dbg) Run:  kubectl get nodes
ha_test.go:592: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiControlPlane/serial/RestartCluster (100.68s)

                                                
                                    
TestMultiControlPlane/serial/DegradedAfterClusterRestart (0.45s)

=== RUN   TestMultiControlPlane/serial/DegradedAfterClusterRestart
ha_test.go:390: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiControlPlane/serial/DegradedAfterClusterRestart (0.45s)

                                                
                                    
TestMultiControlPlane/serial/AddSecondaryNode (37.32s)

=== RUN   TestMultiControlPlane/serial/AddSecondaryNode
ha_test.go:605: (dbg) Run:  out/minikube-linux-amd64 node add -p ha-792575 --control-plane -v=7 --alsologtostderr
ha_test.go:605: (dbg) Done: out/minikube-linux-amd64 node add -p ha-792575 --control-plane -v=7 --alsologtostderr: (36.521503449s)
ha_test.go:611: (dbg) Run:  out/minikube-linux-amd64 -p ha-792575 status -v=7 --alsologtostderr
--- PASS: TestMultiControlPlane/serial/AddSecondaryNode (37.32s)

                                                
                                    
TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd (0.61s)

=== RUN   TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd
ha_test.go:281: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd (0.61s)

                                                
                                    
TestImageBuild/serial/Setup (24.41s)

=== RUN   TestImageBuild/serial/Setup
image_test.go:69: (dbg) Run:  out/minikube-linux-amd64 start -p image-619054 --driver=docker  --container-runtime=docker
image_test.go:69: (dbg) Done: out/minikube-linux-amd64 start -p image-619054 --driver=docker  --container-runtime=docker: (24.405783219s)
--- PASS: TestImageBuild/serial/Setup (24.41s)

                                                
                                    
TestImageBuild/serial/NormalBuild (2.59s)

=== RUN   TestImageBuild/serial/NormalBuild
image_test.go:78: (dbg) Run:  out/minikube-linux-amd64 image build -t aaa:latest ./testdata/image-build/test-normal -p image-619054
image_test.go:78: (dbg) Done: out/minikube-linux-amd64 image build -t aaa:latest ./testdata/image-build/test-normal -p image-619054: (2.586802226s)
--- PASS: TestImageBuild/serial/NormalBuild (2.59s)

                                                
                                    
TestImageBuild/serial/BuildWithBuildArg (0.94s)

=== RUN   TestImageBuild/serial/BuildWithBuildArg
image_test.go:99: (dbg) Run:  out/minikube-linux-amd64 image build -t aaa:latest --build-opt=build-arg=ENV_A=test_env_str --build-opt=no-cache ./testdata/image-build/test-arg -p image-619054
--- PASS: TestImageBuild/serial/BuildWithBuildArg (0.94s)

                                                
                                    
TestImageBuild/serial/BuildWithDockerIgnore (0.77s)

=== RUN   TestImageBuild/serial/BuildWithDockerIgnore
image_test.go:133: (dbg) Run:  out/minikube-linux-amd64 image build -t aaa:latest ./testdata/image-build/test-normal --build-opt=no-cache -p image-619054
--- PASS: TestImageBuild/serial/BuildWithDockerIgnore (0.77s)

                                                
                                    
TestImageBuild/serial/BuildWithSpecifiedDockerfile (0.75s)

=== RUN   TestImageBuild/serial/BuildWithSpecifiedDockerfile
image_test.go:88: (dbg) Run:  out/minikube-linux-amd64 image build -t aaa:latest -f inner/Dockerfile ./testdata/image-build/test-f -p image-619054
E0912 21:56:07.577015   12518 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19616-5723/.minikube/profiles/functional-896535/client.crt: no such file or directory" logger="UnhandledError"
--- PASS: TestImageBuild/serial/BuildWithSpecifiedDockerfile (0.75s)

                                                
                                    
TestJSONOutput/start/Command (33.74s)

=== RUN   TestJSONOutput/start/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-amd64 start -p json-output-802388 --output=json --user=testUser --memory=2200 --wait=true --driver=docker  --container-runtime=docker
E0912 21:56:35.280145   12518 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19616-5723/.minikube/profiles/functional-896535/client.crt: no such file or directory" logger="UnhandledError"
json_output_test.go:63: (dbg) Done: out/minikube-linux-amd64 start -p json-output-802388 --output=json --user=testUser --memory=2200 --wait=true --driver=docker  --container-runtime=docker: (33.739142764s)
--- PASS: TestJSONOutput/start/Command (33.74s)

                                                
                                    
TestJSONOutput/start/Audit (0s)

=== RUN   TestJSONOutput/start/Audit
--- PASS: TestJSONOutput/start/Audit (0.00s)

                                                
                                    
TestJSONOutput/start/parallel/DistinctCurrentSteps (0s)

=== RUN   TestJSONOutput/start/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/start/parallel/DistinctCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/start/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/start/parallel/DistinctCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/start/parallel/IncreasingCurrentSteps (0s)

=== RUN   TestJSONOutput/start/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/start/parallel/IncreasingCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/start/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/start/parallel/IncreasingCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/pause/Command (0.56s)

=== RUN   TestJSONOutput/pause/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-amd64 pause -p json-output-802388 --output=json --user=testUser
--- PASS: TestJSONOutput/pause/Command (0.56s)

                                                
                                    
TestJSONOutput/pause/Audit (0s)

=== RUN   TestJSONOutput/pause/Audit
--- PASS: TestJSONOutput/pause/Audit (0.00s)

                                                
                                    
TestJSONOutput/pause/parallel/DistinctCurrentSteps (0s)

=== RUN   TestJSONOutput/pause/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/pause/parallel/DistinctCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/pause/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/pause/parallel/DistinctCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/pause/parallel/IncreasingCurrentSteps (0s)

=== RUN   TestJSONOutput/pause/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/pause/parallel/IncreasingCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/pause/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/pause/parallel/IncreasingCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/unpause/Command (0.43s)

=== RUN   TestJSONOutput/unpause/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-amd64 unpause -p json-output-802388 --output=json --user=testUser
--- PASS: TestJSONOutput/unpause/Command (0.43s)

                                                
                                    
TestJSONOutput/unpause/Audit (0s)

=== RUN   TestJSONOutput/unpause/Audit
--- PASS: TestJSONOutput/unpause/Audit (0.00s)

                                                
                                    
TestJSONOutput/unpause/parallel/DistinctCurrentSteps (0s)

=== RUN   TestJSONOutput/unpause/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/unpause/parallel/DistinctCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/unpause/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/unpause/parallel/DistinctCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/unpause/parallel/IncreasingCurrentSteps (0s)

=== RUN   TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/unpause/parallel/IncreasingCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/unpause/parallel/IncreasingCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/stop/Command (5.76s)

=== RUN   TestJSONOutput/stop/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-amd64 stop -p json-output-802388 --output=json --user=testUser
json_output_test.go:63: (dbg) Done: out/minikube-linux-amd64 stop -p json-output-802388 --output=json --user=testUser: (5.75630288s)
--- PASS: TestJSONOutput/stop/Command (5.76s)

                                                
                                    
TestJSONOutput/stop/Audit (0s)

=== RUN   TestJSONOutput/stop/Audit
--- PASS: TestJSONOutput/stop/Audit (0.00s)

                                                
                                    
TestJSONOutput/stop/parallel/DistinctCurrentSteps (0s)

=== RUN   TestJSONOutput/stop/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/stop/parallel/DistinctCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/stop/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/stop/parallel/DistinctCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/stop/parallel/IncreasingCurrentSteps (0s)

=== RUN   TestJSONOutput/stop/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/stop/parallel/IncreasingCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/stop/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/stop/parallel/IncreasingCurrentSteps (0.00s)

                                                
                                    
TestErrorJSONOutput (0.19s)

=== RUN   TestErrorJSONOutput
json_output_test.go:160: (dbg) Run:  out/minikube-linux-amd64 start -p json-output-error-149060 --memory=2200 --output=json --wait=true --driver=fail
json_output_test.go:160: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p json-output-error-149060 --memory=2200 --output=json --wait=true --driver=fail: exit status 56 (59.208603ms)

                                                
                                                
-- stdout --
	{"specversion":"1.0","id":"4978b09f-9652-4cd1-b0b5-6eb7ae0878d9","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"0","message":"[json-output-error-149060] minikube v1.34.0 on Ubuntu 20.04 (kvm/amd64)","name":"Initial Minikube Setup","totalsteps":"19"}}
	{"specversion":"1.0","id":"39fd6904-6d8a-4be7-9b00-62928cd15bff","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_LOCATION=19616"}}
	{"specversion":"1.0","id":"3e80c051-de1c-4b6d-a023-ffc20fcc6274","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true"}}
	{"specversion":"1.0","id":"fb82486f-9794-4cbc-a6fd-c0471fd4fb6a","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"KUBECONFIG=/home/jenkins/minikube-integration/19616-5723/kubeconfig"}}
	{"specversion":"1.0","id":"5d918998-2ac8-4fae-9a5c-119169606a0c","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_HOME=/home/jenkins/minikube-integration/19616-5723/.minikube"}}
	{"specversion":"1.0","id":"67b82861-0421-4a26-b867-b3d908041ec6","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_BIN=out/minikube-linux-amd64"}}
	{"specversion":"1.0","id":"b3618423-82b4-4495-bc35-5d1ae42d2747","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_FORCE_SYSTEMD="}}
	{"specversion":"1.0","id":"c241b434-74ed-43e8-8e05-12d1f053c825","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"advice":"","exitcode":"56","issues":"","message":"The driver 'fail' is not supported on linux/amd64","name":"DRV_UNSUPPORTED_OS","url":""}}

                                                
                                                
-- /stdout --
helpers_test.go:175: Cleaning up "json-output-error-149060" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p json-output-error-149060
--- PASS: TestErrorJSONOutput (0.19s)
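
The stdout above is a stream of CloudEvents-style JSON lines. A minimal Go sketch for picking the error event out of such a stream; the struct covers only fields visible in this log, and the real schema may carry more:

	package main

	import (
		"bufio"
		"encoding/json"
		"fmt"
		"strings"
	)

	// Only the fields visible in this log; the real schema may carry more.
	type event struct {
		Type string            `json:"type"`
		Data map[string]string `json:"data"`
	}

	func main() {
		stream := `{"specversion":"1.0","type":"io.k8s.sigs.minikube.error","data":{"exitcode":"56","message":"The driver 'fail' is not supported on linux/amd64","name":"DRV_UNSUPPORTED_OS"}}`
		sc := bufio.NewScanner(strings.NewReader(stream))
		for sc.Scan() {
			var ev event
			if err := json.Unmarshal(sc.Bytes(), &ev); err != nil {
				continue // ignore non-JSON lines
			}
			if ev.Type == "io.k8s.sigs.minikube.error" {
				fmt.Printf("%s (exit %s): %s\n", ev.Data["name"], ev.Data["exitcode"], ev.Data["message"])
			}
		}
	}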

                                                
                                    
TestKicCustomNetwork/create_custom_network (26.22s)

=== RUN   TestKicCustomNetwork/create_custom_network
kic_custom_network_test.go:57: (dbg) Run:  out/minikube-linux-amd64 start -p docker-network-342273 --network=
kic_custom_network_test.go:57: (dbg) Done: out/minikube-linux-amd64 start -p docker-network-342273 --network=: (24.180166943s)
kic_custom_network_test.go:150: (dbg) Run:  docker network ls --format {{.Name}}
helpers_test.go:175: Cleaning up "docker-network-342273" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p docker-network-342273
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p docker-network-342273: (2.023845351s)
--- PASS: TestKicCustomNetwork/create_custom_network (26.22s)
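
The `docker network ls --format {{.Name}}` check above can be reproduced in a few lines of Go; the network name is taken from this log:

	package main

	import (
		"fmt"
		"os/exec"
		"strings"
	)

	func main() {
		// Same listing command the test runs above.
		out, err := exec.Command("docker", "network", "ls", "--format", "{{.Name}}").Output()
		if err != nil {
			panic(err)
		}
		for _, name := range strings.Fields(string(out)) {
			if name == "docker-network-342273" { // network name from this log
				fmt.Println("custom network found")
				return
			}
		}
		fmt.Println("custom network not found")
	}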

                                                
                                    
TestKicCustomNetwork/use_default_bridge_network (25.46s)

=== RUN   TestKicCustomNetwork/use_default_bridge_network
kic_custom_network_test.go:57: (dbg) Run:  out/minikube-linux-amd64 start -p docker-network-604403 --network=bridge
kic_custom_network_test.go:57: (dbg) Done: out/minikube-linux-amd64 start -p docker-network-604403 --network=bridge: (23.550102279s)
kic_custom_network_test.go:150: (dbg) Run:  docker network ls --format {{.Name}}
helpers_test.go:175: Cleaning up "docker-network-604403" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p docker-network-604403
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p docker-network-604403: (1.888321468s)
--- PASS: TestKicCustomNetwork/use_default_bridge_network (25.46s)

                                                
                                    
TestKicExistingNetwork (22.62s)

=== RUN   TestKicExistingNetwork
kic_custom_network_test.go:150: (dbg) Run:  docker network ls --format {{.Name}}
kic_custom_network_test.go:93: (dbg) Run:  out/minikube-linux-amd64 start -p existing-network-336375 --network=existing-network
E0912 21:57:56.744636   12518 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19616-5723/.minikube/profiles/addons-207808/client.crt: no such file or directory" logger="UnhandledError"
kic_custom_network_test.go:93: (dbg) Done: out/minikube-linux-amd64 start -p existing-network-336375 --network=existing-network: (20.608583813s)
helpers_test.go:175: Cleaning up "existing-network-336375" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p existing-network-336375
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p existing-network-336375: (1.865577661s)
--- PASS: TestKicExistingNetwork (22.62s)

                                                
                                    
TestKicCustomSubnet (22.29s)

=== RUN   TestKicCustomSubnet
kic_custom_network_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p custom-subnet-014603 --subnet=192.168.60.0/24
kic_custom_network_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p custom-subnet-014603 --subnet=192.168.60.0/24: (20.263335882s)
kic_custom_network_test.go:161: (dbg) Run:  docker network inspect custom-subnet-014603 --format "{{(index .IPAM.Config 0).Subnet}}"
helpers_test.go:175: Cleaning up "custom-subnet-014603" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p custom-subnet-014603
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p custom-subnet-014603: (2.011875126s)
--- PASS: TestKicCustomSubnet (22.29s)
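
A --subnet value like the one above can be sanity-checked as a CIDR before use. A short Go sketch using only the standard library; this validation step is an illustration, not something the test itself performs:

	package main

	import (
		"fmt"
		"net"
	)

	func main() {
		// The subnet requested by the test above.
		ip, ipnet, err := net.ParseCIDR("192.168.60.0/24")
		if err != nil {
			panic(err)
		}
		fmt.Println("network:", ipnet, "base address:", ip)
	}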

                                                
                                    
TestKicStaticIP (22.78s)

=== RUN   TestKicStaticIP
kic_custom_network_test.go:132: (dbg) Run:  out/minikube-linux-amd64 start -p static-ip-706913 --static-ip=192.168.200.200
kic_custom_network_test.go:132: (dbg) Done: out/minikube-linux-amd64 start -p static-ip-706913 --static-ip=192.168.200.200: (20.661764889s)
kic_custom_network_test.go:138: (dbg) Run:  out/minikube-linux-amd64 -p static-ip-706913 ip
helpers_test.go:175: Cleaning up "static-ip-706913" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p static-ip-706913
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p static-ip-706913: (2.001659032s)
--- PASS: TestKicStaticIP (22.78s)
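
A sketch of the verification implied above: compare what `minikube ip` prints with the requested --static-ip. The binary path, profile, and address come from this log:

	package main

	import (
		"fmt"
		"os/exec"
		"strings"
	)

	func main() {
		out, err := exec.Command("out/minikube-linux-amd64", "-p", "static-ip-706913", "ip").Output()
		if err != nil {
			panic(err)
		}
		got := strings.TrimSpace(string(out))
		if got != "192.168.200.200" {
			panic("expected static IP 192.168.200.200, got " + got)
		}
		fmt.Println("static IP verified:", got)
	}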

                                                
                                    
TestMainNoArgs (0.04s)

=== RUN   TestMainNoArgs
main_test.go:68: (dbg) Run:  out/minikube-linux-amd64
--- PASS: TestMainNoArgs (0.04s)

                                                
                                    
TestMinikubeProfile (49.92s)

=== RUN   TestMinikubeProfile
minikube_profile_test.go:44: (dbg) Run:  out/minikube-linux-amd64 start -p first-925597 --driver=docker  --container-runtime=docker
minikube_profile_test.go:44: (dbg) Done: out/minikube-linux-amd64 start -p first-925597 --driver=docker  --container-runtime=docker: (20.651610168s)
minikube_profile_test.go:44: (dbg) Run:  out/minikube-linux-amd64 start -p second-928473 --driver=docker  --container-runtime=docker
minikube_profile_test.go:44: (dbg) Done: out/minikube-linux-amd64 start -p second-928473 --driver=docker  --container-runtime=docker: (24.210153183s)
minikube_profile_test.go:51: (dbg) Run:  out/minikube-linux-amd64 profile first-925597
minikube_profile_test.go:55: (dbg) Run:  out/minikube-linux-amd64 profile list -ojson
minikube_profile_test.go:51: (dbg) Run:  out/minikube-linux-amd64 profile second-928473
minikube_profile_test.go:55: (dbg) Run:  out/minikube-linux-amd64 profile list -ojson
helpers_test.go:175: Cleaning up "second-928473" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p second-928473
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p second-928473: (2.024340956s)
helpers_test.go:175: Cleaning up "first-925597" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p first-925597
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p first-925597: (1.997718196s)
--- PASS: TestMinikubeProfile (49.92s)
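
`profile list -ojson` output is asserted on throughout this report. A schema-agnostic Go sketch for inspecting it; the only assumption is that the top level is a JSON object, which may not hold across minikube versions:

	package main

	import (
		"encoding/json"
		"fmt"
		"os/exec"
	)

	func main() {
		out, err := exec.Command("out/minikube-linux-amd64", "profile", "list", "-ojson").Output()
		if err != nil {
			panic(err)
		}
		// Assumption: the top level is a JSON object; decode without a schema.
		var doc map[string]any
		if err := json.Unmarshal(out, &doc); err != nil {
			panic(err)
		}
		for key := range doc {
			fmt.Println("top-level key:", key)
		}
	}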

                                                
                                    
TestMountStart/serial/StartWithMountFirst (10.35s)

=== RUN   TestMountStart/serial/StartWithMountFirst
mount_start_test.go:98: (dbg) Run:  out/minikube-linux-amd64 start -p mount-start-1-279492 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=docker  --container-runtime=docker
mount_start_test.go:98: (dbg) Done: out/minikube-linux-amd64 start -p mount-start-1-279492 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=docker  --container-runtime=docker: (9.354246623s)
--- PASS: TestMountStart/serial/StartWithMountFirst (10.35s)

                                                
                                    
TestMountStart/serial/VerifyMountFirst (0.23s)

=== RUN   TestMountStart/serial/VerifyMountFirst
mount_start_test.go:114: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-1-279492 ssh -- ls /minikube-host
--- PASS: TestMountStart/serial/VerifyMountFirst (0.23s)

                                                
                                    
TestMountStart/serial/StartWithMountSecond (7.61s)

=== RUN   TestMountStart/serial/StartWithMountSecond
mount_start_test.go:98: (dbg) Run:  out/minikube-linux-amd64 start -p mount-start-2-290620 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46465 --mount-uid 0 --no-kubernetes --driver=docker  --container-runtime=docker
mount_start_test.go:98: (dbg) Done: out/minikube-linux-amd64 start -p mount-start-2-290620 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46465 --mount-uid 0 --no-kubernetes --driver=docker  --container-runtime=docker: (6.607251775s)
--- PASS: TestMountStart/serial/StartWithMountSecond (7.61s)

                                                
                                    
TestMountStart/serial/VerifyMountSecond (0.24s)

=== RUN   TestMountStart/serial/VerifyMountSecond
mount_start_test.go:114: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-290620 ssh -- ls /minikube-host
--- PASS: TestMountStart/serial/VerifyMountSecond (0.24s)

                                                
                                    
TestMountStart/serial/DeleteFirst (1.47s)

=== RUN   TestMountStart/serial/DeleteFirst
pause_test.go:132: (dbg) Run:  out/minikube-linux-amd64 delete -p mount-start-1-279492 --alsologtostderr -v=5
pause_test.go:132: (dbg) Done: out/minikube-linux-amd64 delete -p mount-start-1-279492 --alsologtostderr -v=5: (1.46563392s)
--- PASS: TestMountStart/serial/DeleteFirst (1.47s)

                                                
                                    
TestMountStart/serial/VerifyMountPostDelete (0.23s)

=== RUN   TestMountStart/serial/VerifyMountPostDelete
mount_start_test.go:114: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-290620 ssh -- ls /minikube-host
--- PASS: TestMountStart/serial/VerifyMountPostDelete (0.23s)

                                                
                                    
TestMountStart/serial/Stop (1.17s)

=== RUN   TestMountStart/serial/Stop
mount_start_test.go:155: (dbg) Run:  out/minikube-linux-amd64 stop -p mount-start-2-290620
mount_start_test.go:155: (dbg) Done: out/minikube-linux-amd64 stop -p mount-start-2-290620: (1.167257011s)
--- PASS: TestMountStart/serial/Stop (1.17s)

                                                
                                    
TestMountStart/serial/RestartStopped (8.93s)

=== RUN   TestMountStart/serial/RestartStopped
mount_start_test.go:166: (dbg) Run:  out/minikube-linux-amd64 start -p mount-start-2-290620
mount_start_test.go:166: (dbg) Done: out/minikube-linux-amd64 start -p mount-start-2-290620: (7.929902222s)
--- PASS: TestMountStart/serial/RestartStopped (8.93s)

                                                
                                    
TestMountStart/serial/VerifyMountPostStop (0.23s)

=== RUN   TestMountStart/serial/VerifyMountPostStop
mount_start_test.go:114: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-290620 ssh -- ls /minikube-host
--- PASS: TestMountStart/serial/VerifyMountPostStop (0.23s)
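
Each mount check above boils down to listing /minikube-host over ssh and treating a non-zero exit as a missing mount. A minimal sketch, with the profile name and path from this log:

	package main

	import (
		"fmt"
		"os/exec"
	)

	func main() {
		out, err := exec.Command("out/minikube-linux-amd64",
			"-p", "mount-start-2-290620", "ssh", "--", "ls", "/minikube-host").CombinedOutput()
		if err != nil {
			fmt.Printf("mount check failed: %v\n%s", err, out)
			return
		}
		fmt.Printf("mount contents:\n%s", out)
	}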

                                                
                                    
TestMultiNode/serial/FreshStart2Nodes (70.01s)

=== RUN   TestMultiNode/serial/FreshStart2Nodes
multinode_test.go:96: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-333397 --wait=true --memory=2200 --nodes=2 -v=8 --alsologtostderr --driver=docker  --container-runtime=docker
E0912 22:01:07.577083   12518 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19616-5723/.minikube/profiles/functional-896535/client.crt: no such file or directory" logger="UnhandledError"
multinode_test.go:96: (dbg) Done: out/minikube-linux-amd64 start -p multinode-333397 --wait=true --memory=2200 --nodes=2 -v=8 --alsologtostderr --driver=docker  --container-runtime=docker: (1m9.524032668s)
multinode_test.go:102: (dbg) Run:  out/minikube-linux-amd64 -p multinode-333397 status --alsologtostderr
--- PASS: TestMultiNode/serial/FreshStart2Nodes (70.01s)

                                                
                                    
TestMultiNode/serial/DeployApp2Nodes (36.27s)

=== RUN   TestMultiNode/serial/DeployApp2Nodes
multinode_test.go:493: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-333397 -- apply -f ./testdata/multinodes/multinode-pod-dns-test.yaml
multinode_test.go:498: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-333397 -- rollout status deployment/busybox
multinode_test.go:498: (dbg) Done: out/minikube-linux-amd64 kubectl -p multinode-333397 -- rollout status deployment/busybox: (2.922919824s)
multinode_test.go:505: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-333397 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:514: expected 2 Pod IPs but got 1 (may be temporary), output: "\n-- stdout --\n\t'10.244.0.3'\n\n-- /stdout --"
multinode_test.go:505: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-333397 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:514: expected 2 Pod IPs but got 1 (may be temporary), output: "\n-- stdout --\n\t'10.244.0.3'\n\n-- /stdout --"
multinode_test.go:505: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-333397 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:514: expected 2 Pod IPs but got 1 (may be temporary), output: "\n-- stdout --\n\t'10.244.0.3'\n\n-- /stdout --"
multinode_test.go:505: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-333397 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:514: expected 2 Pod IPs but got 1 (may be temporary), output: "\n-- stdout --\n\t'10.244.0.3'\n\n-- /stdout --"
multinode_test.go:505: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-333397 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:514: expected 2 Pod IPs but got 1 (may be temporary), output: "\n-- stdout --\n\t'10.244.0.3'\n\n-- /stdout --"
multinode_test.go:505: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-333397 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:514: expected 2 Pod IPs but got 1 (may be temporary), output: "\n-- stdout --\n\t'10.244.0.3'\n\n-- /stdout --"
multinode_test.go:505: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-333397 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:514: expected 2 Pod IPs but got 1 (may be temporary), output: "\n-- stdout --\n\t'10.244.0.3'\n\n-- /stdout --"
multinode_test.go:505: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-333397 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:528: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-333397 -- get pods -o jsonpath='{.items[*].metadata.name}'
multinode_test.go:536: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-333397 -- exec busybox-7dff88458-6c5jn -- nslookup kubernetes.io
multinode_test.go:536: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-333397 -- exec busybox-7dff88458-8js9b -- nslookup kubernetes.io
multinode_test.go:546: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-333397 -- exec busybox-7dff88458-6c5jn -- nslookup kubernetes.default
multinode_test.go:546: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-333397 -- exec busybox-7dff88458-8js9b -- nslookup kubernetes.default
multinode_test.go:554: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-333397 -- exec busybox-7dff88458-6c5jn -- nslookup kubernetes.default.svc.cluster.local
multinode_test.go:554: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-333397 -- exec busybox-7dff88458-8js9b -- nslookup kubernetes.default.svc.cluster.local
--- PASS: TestMultiNode/serial/DeployApp2Nodes (36.27s)
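
The repeated "expected 2 Pod IPs but got 1 (may be temporary)" lines show the test polling until both busybox replicas report addresses. A sketch of that poll-with-deadline pattern; fetchPodIPs is a hypothetical stand-in for the kubectl jsonpath query, and the interval and deadline are assumptions:

	package main

	import (
		"fmt"
		"strings"
		"time"
	)

	// Hypothetical stand-in for the kubectl jsonpath query used in the test.
	func fetchPodIPs() string { return "10.244.0.3 10.244.1.2" }

	func main() {
		deadline := time.Now().Add(2 * time.Minute) // deadline is an assumption
		for {
			ips := strings.Fields(fetchPodIPs())
			if len(ips) >= 2 {
				fmt.Println("got pod IPs:", ips)
				return
			}
			if time.Now().After(deadline) {
				panic("timed out waiting for 2 pod IPs")
			}
			time.Sleep(5 * time.Second) // interval is an assumption
		}
	}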

                                                
                                    
TestMultiNode/serial/PingHostFrom2Pods (0.69s)

=== RUN   TestMultiNode/serial/PingHostFrom2Pods
multinode_test.go:564: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-333397 -- get pods -o jsonpath='{.items[*].metadata.name}'
multinode_test.go:572: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-333397 -- exec busybox-7dff88458-6c5jn -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
multinode_test.go:583: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-333397 -- exec busybox-7dff88458-6c5jn -- sh -c "ping -c 1 192.168.67.1"
multinode_test.go:572: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-333397 -- exec busybox-7dff88458-8js9b -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
multinode_test.go:583: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-333397 -- exec busybox-7dff88458-8js9b -- sh -c "ping -c 1 192.168.67.1"
--- PASS: TestMultiNode/serial/PingHostFrom2Pods (0.69s)

                                                
                                    
TestMultiNode/serial/AddNode (16.29s)

=== RUN   TestMultiNode/serial/AddNode
multinode_test.go:121: (dbg) Run:  out/minikube-linux-amd64 node add -p multinode-333397 -v 3 --alsologtostderr
multinode_test.go:121: (dbg) Done: out/minikube-linux-amd64 node add -p multinode-333397 -v 3 --alsologtostderr: (15.72366425s)
multinode_test.go:127: (dbg) Run:  out/minikube-linux-amd64 -p multinode-333397 status --alsologtostderr
--- PASS: TestMultiNode/serial/AddNode (16.29s)

                                                
                                    
TestMultiNode/serial/MultiNodeLabels (0.06s)

=== RUN   TestMultiNode/serial/MultiNodeLabels
multinode_test.go:221: (dbg) Run:  kubectl --context multinode-333397 get nodes -o "jsonpath=[{range .items[*]}{.metadata.labels},{end}]"
--- PASS: TestMultiNode/serial/MultiNodeLabels (0.06s)

                                                
                                    
TestMultiNode/serial/ProfileList (0.27s)

=== RUN   TestMultiNode/serial/ProfileList
multinode_test.go:143: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiNode/serial/ProfileList (0.27s)

                                                
                                    
TestMultiNode/serial/CopyFile (8.6s)

=== RUN   TestMultiNode/serial/CopyFile
multinode_test.go:184: (dbg) Run:  out/minikube-linux-amd64 -p multinode-333397 status --output json --alsologtostderr
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-333397 cp testdata/cp-test.txt multinode-333397:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-333397 ssh -n multinode-333397 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-333397 cp multinode-333397:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile2418624923/001/cp-test_multinode-333397.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-333397 ssh -n multinode-333397 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-333397 cp multinode-333397:/home/docker/cp-test.txt multinode-333397-m02:/home/docker/cp-test_multinode-333397_multinode-333397-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-333397 ssh -n multinode-333397 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-333397 ssh -n multinode-333397-m02 "sudo cat /home/docker/cp-test_multinode-333397_multinode-333397-m02.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-333397 cp multinode-333397:/home/docker/cp-test.txt multinode-333397-m03:/home/docker/cp-test_multinode-333397_multinode-333397-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-333397 ssh -n multinode-333397 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-333397 ssh -n multinode-333397-m03 "sudo cat /home/docker/cp-test_multinode-333397_multinode-333397-m03.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-333397 cp testdata/cp-test.txt multinode-333397-m02:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-333397 ssh -n multinode-333397-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-333397 cp multinode-333397-m02:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile2418624923/001/cp-test_multinode-333397-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-333397 ssh -n multinode-333397-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-333397 cp multinode-333397-m02:/home/docker/cp-test.txt multinode-333397:/home/docker/cp-test_multinode-333397-m02_multinode-333397.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-333397 ssh -n multinode-333397-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-333397 ssh -n multinode-333397 "sudo cat /home/docker/cp-test_multinode-333397-m02_multinode-333397.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-333397 cp multinode-333397-m02:/home/docker/cp-test.txt multinode-333397-m03:/home/docker/cp-test_multinode-333397-m02_multinode-333397-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-333397 ssh -n multinode-333397-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-333397 ssh -n multinode-333397-m03 "sudo cat /home/docker/cp-test_multinode-333397-m02_multinode-333397-m03.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-333397 cp testdata/cp-test.txt multinode-333397-m03:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-333397 ssh -n multinode-333397-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-333397 cp multinode-333397-m03:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile2418624923/001/cp-test_multinode-333397-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-333397 ssh -n multinode-333397-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-333397 cp multinode-333397-m03:/home/docker/cp-test.txt multinode-333397:/home/docker/cp-test_multinode-333397-m03_multinode-333397.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-333397 ssh -n multinode-333397-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-333397 ssh -n multinode-333397 "sudo cat /home/docker/cp-test_multinode-333397-m03_multinode-333397.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-333397 cp multinode-333397-m03:/home/docker/cp-test.txt multinode-333397-m02:/home/docker/cp-test_multinode-333397-m03_multinode-333397-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-333397 ssh -n multinode-333397-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-333397 ssh -n multinode-333397-m02 "sudo cat /home/docker/cp-test_multinode-333397-m03_multinode-333397-m02.txt"
--- PASS: TestMultiNode/serial/CopyFile (8.60s)
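
The copy checks above all follow one shape: push a file with `minikube cp`, read it back with `ssh sudo cat`, and compare. A condensed Go sketch of a single round trip, using paths from this log and with error handling trimmed:

	package main

	import (
		"bytes"
		"fmt"
		"os"
		"os/exec"
	)

	func main() {
		want, err := os.ReadFile("testdata/cp-test.txt")
		if err != nil {
			panic(err)
		}
		run := func(args ...string) []byte {
			out, err := exec.Command("out/minikube-linux-amd64", args...).Output()
			if err != nil {
				panic(err)
			}
			return out
		}
		// Push the file, then read it back over ssh and compare.
		run("-p", "multinode-333397", "cp", "testdata/cp-test.txt", "multinode-333397:/home/docker/cp-test.txt")
		got := run("-p", "multinode-333397", "ssh", "-n", "multinode-333397", "sudo cat /home/docker/cp-test.txt")
		if !bytes.Equal(bytes.TrimSpace(got), bytes.TrimSpace(want)) {
			panic("copied file does not match the source")
		}
		fmt.Println("cp round trip verified")
	}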

                                                
                                    
TestMultiNode/serial/StopNode (2.04s)

=== RUN   TestMultiNode/serial/StopNode
multinode_test.go:248: (dbg) Run:  out/minikube-linux-amd64 -p multinode-333397 node stop m03
multinode_test.go:248: (dbg) Done: out/minikube-linux-amd64 -p multinode-333397 node stop m03: (1.164549353s)
multinode_test.go:254: (dbg) Run:  out/minikube-linux-amd64 -p multinode-333397 status
multinode_test.go:254: (dbg) Non-zero exit: out/minikube-linux-amd64 -p multinode-333397 status: exit status 7 (429.832764ms)

                                                
                                                
-- stdout --
	multinode-333397
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	multinode-333397-m02
	type: Worker
	host: Running
	kubelet: Running
	
	multinode-333397-m03
	type: Worker
	host: Stopped
	kubelet: Stopped
	

                                                
                                                
-- /stdout --
multinode_test.go:261: (dbg) Run:  out/minikube-linux-amd64 -p multinode-333397 status --alsologtostderr
multinode_test.go:261: (dbg) Non-zero exit: out/minikube-linux-amd64 -p multinode-333397 status --alsologtostderr: exit status 7 (445.976926ms)

                                                
                                                
-- stdout --
	multinode-333397
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	multinode-333397-m02
	type: Worker
	host: Running
	kubelet: Running
	
	multinode-333397-m03
	type: Worker
	host: Stopped
	kubelet: Stopped
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0912 22:02:27.194670  213510 out.go:345] Setting OutFile to fd 1 ...
	I0912 22:02:27.194796  213510 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0912 22:02:27.194808  213510 out.go:358] Setting ErrFile to fd 2...
	I0912 22:02:27.194815  213510 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0912 22:02:27.195081  213510 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19616-5723/.minikube/bin
	I0912 22:02:27.195303  213510 out.go:352] Setting JSON to false
	I0912 22:02:27.195333  213510 mustload.go:65] Loading cluster: multinode-333397
	I0912 22:02:27.195375  213510 notify.go:220] Checking for updates...
	I0912 22:02:27.195825  213510 config.go:182] Loaded profile config "multinode-333397": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I0912 22:02:27.195845  213510 status.go:255] checking status of multinode-333397 ...
	I0912 22:02:27.196371  213510 cli_runner.go:164] Run: docker container inspect multinode-333397 --format={{.State.Status}}
	I0912 22:02:27.216516  213510 status.go:330] multinode-333397 host status = "Running" (err=<nil>)
	I0912 22:02:27.216560  213510 host.go:66] Checking if "multinode-333397" exists ...
	I0912 22:02:27.216846  213510 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-333397
	I0912 22:02:27.233579  213510 host.go:66] Checking if "multinode-333397" exists ...
	I0912 22:02:27.233835  213510 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0912 22:02:27.233888  213510 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-333397
	I0912 22:02:27.249961  213510 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32908 SSHKeyPath:/home/jenkins/minikube-integration/19616-5723/.minikube/machines/multinode-333397/id_rsa Username:docker}
	I0912 22:02:27.331976  213510 ssh_runner.go:195] Run: systemctl --version
	I0912 22:02:27.335709  213510 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0912 22:02:27.345616  213510 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0912 22:02:27.395327  213510 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:3 ContainersRunning:2 ContainersPaused:0 ContainersStopped:1 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:42 OomKillDisable:true NGoroutines:62 SystemTime:2024-09-12 22:02:27.38589867 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1067-gcp OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33647939584 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent Labels:[] ExperimentalBuild:false ServerVersion:27.2.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:7f7fdf5fed64eb6a7caf99b3e12efcf9d60e311c Expected:7f7fdf5fed64eb6a7caf99b3e12efcf9d60e311c} RuncCommit:{ID:v1.1.14-0-g2c9f560 Expected:v1.1.14-0-g2c9f560} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.16.2] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.29.2] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I0912 22:02:27.395902  213510 kubeconfig.go:125] found "multinode-333397" server: "https://192.168.67.2:8443"
	I0912 22:02:27.395931  213510 api_server.go:166] Checking apiserver status ...
	I0912 22:02:27.395961  213510 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0912 22:02:27.407162  213510 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/2400/cgroup
	I0912 22:02:27.415984  213510 api_server.go:182] apiserver freezer: "8:freezer:/docker/d7e0a228254f7e3dc013f6bc81316f097396879d8e2d7883bd6ec3480234b59f/kubepods/burstable/podc03621484775de3ba2319fe562345c4d/f5a43b805d6f7b22b22bfb084d9ac305fef4f30ce070f31e19e2bab9fb7acb17"
	I0912 22:02:27.416044  213510 ssh_runner.go:195] Run: sudo cat /sys/fs/cgroup/freezer/docker/d7e0a228254f7e3dc013f6bc81316f097396879d8e2d7883bd6ec3480234b59f/kubepods/burstable/podc03621484775de3ba2319fe562345c4d/f5a43b805d6f7b22b22bfb084d9ac305fef4f30ce070f31e19e2bab9fb7acb17/freezer.state
	I0912 22:02:27.423800  213510 api_server.go:204] freezer state: "THAWED"
	I0912 22:02:27.423831  213510 api_server.go:253] Checking apiserver healthz at https://192.168.67.2:8443/healthz ...
	I0912 22:02:27.427391  213510 api_server.go:279] https://192.168.67.2:8443/healthz returned 200:
	ok
	I0912 22:02:27.427416  213510 status.go:422] multinode-333397 apiserver status = Running (err=<nil>)
	I0912 22:02:27.427430  213510 status.go:257] multinode-333397 status: &{Name:multinode-333397 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0912 22:02:27.427447  213510 status.go:255] checking status of multinode-333397-m02 ...
	I0912 22:02:27.427777  213510 cli_runner.go:164] Run: docker container inspect multinode-333397-m02 --format={{.State.Status}}
	I0912 22:02:27.444656  213510 status.go:330] multinode-333397-m02 host status = "Running" (err=<nil>)
	I0912 22:02:27.444685  213510 host.go:66] Checking if "multinode-333397-m02" exists ...
	I0912 22:02:27.444944  213510 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-333397-m02
	I0912 22:02:27.463259  213510 host.go:66] Checking if "multinode-333397-m02" exists ...
	I0912 22:02:27.463581  213510 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0912 22:02:27.463626  213510 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-333397-m02
	I0912 22:02:27.481238  213510 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32913 SSHKeyPath:/home/jenkins/minikube-integration/19616-5723/.minikube/machines/multinode-333397-m02/id_rsa Username:docker}
	I0912 22:02:27.568082  213510 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0912 22:02:27.578649  213510 status.go:257] multinode-333397-m02 status: &{Name:multinode-333397-m02 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}
	I0912 22:02:27.578693  213510 status.go:255] checking status of multinode-333397-m03 ...
	I0912 22:02:27.579026  213510 cli_runner.go:164] Run: docker container inspect multinode-333397-m03 --format={{.State.Status}}
	I0912 22:02:27.596635  213510 status.go:330] multinode-333397-m03 host status = "Stopped" (err=<nil>)
	I0912 22:02:27.596664  213510 status.go:343] host is not running, skipping remaining checks
	I0912 22:02:27.596681  213510 status.go:257] multinode-333397-m03 status: &{Name:multinode-333397-m03 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
--- PASS: TestMultiNode/serial/StopNode (2.04s)
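
The status output above documents minikube's apiserver probe: pgrep for the kube-apiserver process over SSH, read that process's cgroup freezer state (THAWED means not paused), then hit /healthz. A rough shell sketch of the same sequence run by hand, using the profile name and server address from this log (paths and IDs will differ per cluster):

	# 1. Newest kube-apiserver process on the node, same pattern the test greps for.
	PID=$(minikube ssh -p multinode-333397 -- sudo pgrep -xnf 'kube-apiserver.*minikube.*')

	# 2. Its freezer cgroup path; the test then reads freezer.state beneath it under /sys/fs/cgroup.
	minikube ssh -p multinode-333397 -- sudo grep freezer "/proc/${PID}/cgroup"

	# 3. Health endpoint; -k because the cert is signed by minikube's own CA.
	curl -k https://192.168.67.2:8443/healthz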

                                                
                                    
TestMultiNode/serial/StartAfterStop (9.54s)

                                                
                                                
=== RUN   TestMultiNode/serial/StartAfterStop
multinode_test.go:282: (dbg) Run:  out/minikube-linux-amd64 -p multinode-333397 node start m03 -v=7 --alsologtostderr
multinode_test.go:282: (dbg) Done: out/minikube-linux-amd64 -p multinode-333397 node start m03 -v=7 --alsologtostderr: (8.915620555s)
multinode_test.go:290: (dbg) Run:  out/minikube-linux-amd64 -p multinode-333397 status -v=7 --alsologtostderr
multinode_test.go:306: (dbg) Run:  kubectl get nodes
--- PASS: TestMultiNode/serial/StartAfterStop (9.54s)

                                                
                                    
TestMultiNode/serial/RestartKeepsNodes (109.81s)

                                                
                                                
=== RUN   TestMultiNode/serial/RestartKeepsNodes
multinode_test.go:314: (dbg) Run:  out/minikube-linux-amd64 node list -p multinode-333397
multinode_test.go:321: (dbg) Run:  out/minikube-linux-amd64 stop -p multinode-333397
E0912 22:02:56.743919   12518 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19616-5723/.minikube/profiles/addons-207808/client.crt: no such file or directory" logger="UnhandledError"
multinode_test.go:321: (dbg) Done: out/minikube-linux-amd64 stop -p multinode-333397: (22.313775121s)
multinode_test.go:326: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-333397 --wait=true -v=8 --alsologtostderr
E0912 22:04:19.808781   12518 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19616-5723/.minikube/profiles/addons-207808/client.crt: no such file or directory" logger="UnhandledError"
multinode_test.go:326: (dbg) Done: out/minikube-linux-amd64 start -p multinode-333397 --wait=true -v=8 --alsologtostderr: (1m27.410125605s)
multinode_test.go:331: (dbg) Run:  out/minikube-linux-amd64 node list -p multinode-333397
--- PASS: TestMultiNode/serial/RestartKeepsNodes (109.81s)

                                                
                                    
TestMultiNode/serial/DeleteNode (5.18s)

                                                
                                                
=== RUN   TestMultiNode/serial/DeleteNode
multinode_test.go:416: (dbg) Run:  out/minikube-linux-amd64 -p multinode-333397 node delete m03
multinode_test.go:416: (dbg) Done: out/minikube-linux-amd64 -p multinode-333397 node delete m03: (4.629135247s)
multinode_test.go:422: (dbg) Run:  out/minikube-linux-amd64 -p multinode-333397 status --alsologtostderr
multinode_test.go:436: (dbg) Run:  kubectl get nodes
multinode_test.go:444: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiNode/serial/DeleteNode (5.18s)
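
The last command above walks every node's conditions with a go-template, and the test expects each Ready status to read True after the deletion. An equivalent jsonpath one-liner (not what the test runs, just the same assertion spelled differently):

	# Prints "<node> <Ready status>" per node; every line should end in True.
	kubectl get nodes -o jsonpath='{range .items[*]}{.metadata.name}{" "}{.status.conditions[?(@.type=="Ready")].status}{"\n"}{end}'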

                                                
                                    
TestMultiNode/serial/StopMultiNode (21.49s)

                                                
                                                
=== RUN   TestMultiNode/serial/StopMultiNode
multinode_test.go:345: (dbg) Run:  out/minikube-linux-amd64 -p multinode-333397 stop
multinode_test.go:345: (dbg) Done: out/minikube-linux-amd64 -p multinode-333397 stop: (21.32997585s)
multinode_test.go:351: (dbg) Run:  out/minikube-linux-amd64 -p multinode-333397 status
multinode_test.go:351: (dbg) Non-zero exit: out/minikube-linux-amd64 -p multinode-333397 status: exit status 7 (77.531968ms)

                                                
                                                
-- stdout --
	multinode-333397
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	multinode-333397-m02
	type: Worker
	host: Stopped
	kubelet: Stopped
	

                                                
                                                
-- /stdout --
multinode_test.go:358: (dbg) Run:  out/minikube-linux-amd64 -p multinode-333397 status --alsologtostderr
multinode_test.go:358: (dbg) Non-zero exit: out/minikube-linux-amd64 -p multinode-333397 status --alsologtostderr: exit status 7 (79.686029ms)

                                                
                                                
-- stdout --
	multinode-333397
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	multinode-333397-m02
	type: Worker
	host: Stopped
	kubelet: Stopped
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0912 22:04:53.580698  229088 out.go:345] Setting OutFile to fd 1 ...
	I0912 22:04:53.580809  229088 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0912 22:04:53.580818  229088 out.go:358] Setting ErrFile to fd 2...
	I0912 22:04:53.580822  229088 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0912 22:04:53.581013  229088 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19616-5723/.minikube/bin
	I0912 22:04:53.581166  229088 out.go:352] Setting JSON to false
	I0912 22:04:53.581190  229088 mustload.go:65] Loading cluster: multinode-333397
	I0912 22:04:53.581240  229088 notify.go:220] Checking for updates...
	I0912 22:04:53.581702  229088 config.go:182] Loaded profile config "multinode-333397": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I0912 22:04:53.581727  229088 status.go:255] checking status of multinode-333397 ...
	I0912 22:04:53.582194  229088 cli_runner.go:164] Run: docker container inspect multinode-333397 --format={{.State.Status}}
	I0912 22:04:53.599929  229088 status.go:330] multinode-333397 host status = "Stopped" (err=<nil>)
	I0912 22:04:53.599953  229088 status.go:343] host is not running, skipping remaining checks
	I0912 22:04:53.599962  229088 status.go:257] multinode-333397 status: &{Name:multinode-333397 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0912 22:04:53.600013  229088 status.go:255] checking status of multinode-333397-m02 ...
	I0912 22:04:53.600252  229088 cli_runner.go:164] Run: docker container inspect multinode-333397-m02 --format={{.State.Status}}
	I0912 22:04:53.617029  229088 status.go:330] multinode-333397-m02 host status = "Stopped" (err=<nil>)
	I0912 22:04:53.617050  229088 status.go:343] host is not running, skipping remaining checks
	I0912 22:04:53.617055  229088 status.go:257] multinode-333397-m02 status: &{Name:multinode-333397-m02 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
--- PASS: TestMultiNode/serial/StopMultiNode (21.49s)
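
Worth noting: a fully stopped profile is reported through the exit code (7 in both runs above), not through stderr, so scripts have to branch on $? rather than parse output. A minimal sketch, assuming the profile from this log:

	out/minikube-linux-amd64 -p multinode-333397 status >/dev/null
	rc=$?
	# 0 = everything running; 7 = at least one component stopped, as in the runs above
	if [ "$rc" -ne 0 ]; then
	    echo "cluster not fully running (status exited ${rc})"
	fi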

                                                
                                    
TestMultiNode/serial/RestartMultiNode (55.28s)

                                                
                                                
=== RUN   TestMultiNode/serial/RestartMultiNode
multinode_test.go:376: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-333397 --wait=true -v=8 --alsologtostderr --driver=docker  --container-runtime=docker
multinode_test.go:376: (dbg) Done: out/minikube-linux-amd64 start -p multinode-333397 --wait=true -v=8 --alsologtostderr --driver=docker  --container-runtime=docker: (54.735497865s)
multinode_test.go:382: (dbg) Run:  out/minikube-linux-amd64 -p multinode-333397 status --alsologtostderr
multinode_test.go:396: (dbg) Run:  kubectl get nodes
multinode_test.go:404: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiNode/serial/RestartMultiNode (55.28s)

                                                
                                    
TestMultiNode/serial/ValidateNameConflict (22.29s)

                                                
                                                
=== RUN   TestMultiNode/serial/ValidateNameConflict
multinode_test.go:455: (dbg) Run:  out/minikube-linux-amd64 node list -p multinode-333397
multinode_test.go:464: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-333397-m02 --driver=docker  --container-runtime=docker
multinode_test.go:464: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p multinode-333397-m02 --driver=docker  --container-runtime=docker: exit status 14 (62.031924ms)

                                                
                                                
-- stdout --
	* [multinode-333397-m02] minikube v1.34.0 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=19616
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/19616-5723/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/19616-5723/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	! Profile name 'multinode-333397-m02' is duplicated with machine name 'multinode-333397-m02' in profile 'multinode-333397'
	X Exiting due to MK_USAGE: Profile name should be unique

                                                
                                                
** /stderr **
multinode_test.go:472: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-333397-m03 --driver=docker  --container-runtime=docker
E0912 22:06:07.577858   12518 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19616-5723/.minikube/profiles/functional-896535/client.crt: no such file or directory" logger="UnhandledError"
multinode_test.go:472: (dbg) Done: out/minikube-linux-amd64 start -p multinode-333397-m03 --driver=docker  --container-runtime=docker: (19.921435634s)
multinode_test.go:479: (dbg) Run:  out/minikube-linux-amd64 node add -p multinode-333397
multinode_test.go:479: (dbg) Non-zero exit: out/minikube-linux-amd64 node add -p multinode-333397: exit status 80 (250.47253ms)

                                                
                                                
-- stdout --
	* Adding node m03 to cluster multinode-333397 as [worker]
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to GUEST_NODE_ADD: failed to add node: Node multinode-333397-m03 already exists in multinode-333397-m03 profile
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_node_040ea7097fd6ed71e65be9a474587f81f0ccd21d_0.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
multinode_test.go:484: (dbg) Run:  out/minikube-linux-amd64 delete -p multinode-333397-m03
multinode_test.go:484: (dbg) Done: out/minikube-linux-amd64 delete -p multinode-333397-m03: (2.00797099s)
--- PASS: TestMultiNode/serial/ValidateNameConflict (22.29s)
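
Both failures above are name-collision guards: exit 14 because the new profile name duplicates a machine name inside another profile, and exit 80 because node add would reuse an existing node name. A quick way to list the names already taken before choosing one, assuming minikube's usual valid/invalid JSON layout (the jq filter is illustrative):

	# Names of all currently valid profiles.
	minikube profile list --output json | jq -r '.valid[].Name'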

                                                
                                    
TestPreload (98.44s)

                                                
                                                
=== RUN   TestPreload
preload_test.go:44: (dbg) Run:  out/minikube-linux-amd64 start -p test-preload-066975 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=docker --kubernetes-version=v1.24.4
preload_test.go:44: (dbg) Done: out/minikube-linux-amd64 start -p test-preload-066975 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=docker --kubernetes-version=v1.24.4: (52.233171291s)
preload_test.go:52: (dbg) Run:  out/minikube-linux-amd64 -p test-preload-066975 image pull gcr.io/k8s-minikube/busybox
preload_test.go:52: (dbg) Done: out/minikube-linux-amd64 -p test-preload-066975 image pull gcr.io/k8s-minikube/busybox: (1.800587049s)
preload_test.go:58: (dbg) Run:  out/minikube-linux-amd64 stop -p test-preload-066975
preload_test.go:58: (dbg) Done: out/minikube-linux-amd64 stop -p test-preload-066975: (10.751489664s)
preload_test.go:66: (dbg) Run:  out/minikube-linux-amd64 start -p test-preload-066975 --memory=2200 --alsologtostderr -v=1 --wait=true --driver=docker  --container-runtime=docker
E0912 22:07:30.643214   12518 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19616-5723/.minikube/profiles/functional-896535/client.crt: no such file or directory" logger="UnhandledError"
preload_test.go:66: (dbg) Done: out/minikube-linux-amd64 start -p test-preload-066975 --memory=2200 --alsologtostderr -v=1 --wait=true --driver=docker  --container-runtime=docker: (31.319430432s)
preload_test.go:71: (dbg) Run:  out/minikube-linux-amd64 -p test-preload-066975 image list
helpers_test.go:175: Cleaning up "test-preload-066975" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p test-preload-066975
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p test-preload-066975: (2.130724913s)
--- PASS: TestPreload (98.44s)
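
The point of TestPreload is that an image pulled into a cluster created with --preload=false survives a stop/start cycle once the restart picks up the preloaded tarball. The manual equivalent, using only flags that appear in the log (the profile name is illustrative):

	minikube start -p preload-demo --preload=false --kubernetes-version=v1.24.4 --driver=docker --container-runtime=docker
	minikube -p preload-demo image pull gcr.io/k8s-minikube/busybox
	minikube stop -p preload-demo
	minikube start -p preload-demo --driver=docker --container-runtime=docker
	# the pulled image should still be present after the restart
	minikube -p preload-demo image list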

                                                
                                    
TestScheduledStopUnix (97.08s)

                                                
                                                
=== RUN   TestScheduledStopUnix
scheduled_stop_test.go:128: (dbg) Run:  out/minikube-linux-amd64 start -p scheduled-stop-224132 --memory=2048 --driver=docker  --container-runtime=docker
E0912 22:07:56.743655   12518 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19616-5723/.minikube/profiles/addons-207808/client.crt: no such file or directory" logger="UnhandledError"
scheduled_stop_test.go:128: (dbg) Done: out/minikube-linux-amd64 start -p scheduled-stop-224132 --memory=2048 --driver=docker  --container-runtime=docker: (24.283111399s)
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-amd64 stop -p scheduled-stop-224132 --schedule 5m
scheduled_stop_test.go:191: (dbg) Run:  out/minikube-linux-amd64 status --format={{.TimeToStop}} -p scheduled-stop-224132 -n scheduled-stop-224132
scheduled_stop_test.go:169: signal error was:  <nil>
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-amd64 stop -p scheduled-stop-224132 --schedule 15s
scheduled_stop_test.go:169: signal error was:  os: process already finished
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-amd64 stop -p scheduled-stop-224132 --cancel-scheduled
scheduled_stop_test.go:176: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p scheduled-stop-224132 -n scheduled-stop-224132
scheduled_stop_test.go:205: (dbg) Run:  out/minikube-linux-amd64 status -p scheduled-stop-224132
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-amd64 stop -p scheduled-stop-224132 --schedule 15s
scheduled_stop_test.go:169: signal error was:  os: process already finished
scheduled_stop_test.go:205: (dbg) Run:  out/minikube-linux-amd64 status -p scheduled-stop-224132
scheduled_stop_test.go:205: (dbg) Non-zero exit: out/minikube-linux-amd64 status -p scheduled-stop-224132: exit status 7 (60.948284ms)

                                                
                                                
-- stdout --
	scheduled-stop-224132
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

                                                
                                                
-- /stdout --
scheduled_stop_test.go:176: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p scheduled-stop-224132 -n scheduled-stop-224132
scheduled_stop_test.go:176: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p scheduled-stop-224132 -n scheduled-stop-224132: exit status 7 (59.162102ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
scheduled_stop_test.go:176: status error: exit status 7 (may be ok)
helpers_test.go:175: Cleaning up "scheduled-stop-224132" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p scheduled-stop-224132
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p scheduled-stop-224132: (1.594059177s)
--- PASS: TestScheduledStopUnix (97.08s)
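
The scheduled-stop flag combinations exercised above are easy to misread in the raw log, so condensed (profile name illustrative):

	minikube stop -p demo --schedule 5m        # arm a stop five minutes out
	minikube stop -p demo --cancel-scheduled   # disarm the pending stop
	minikube stop -p demo --schedule 15s       # re-arm with a short fuse
	sleep 20
	minikube status -p demo                    # exits 7 once the host is down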

                                                
                                    
TestSkaffold (102.86s)

                                                
                                                
=== RUN   TestSkaffold
skaffold_test.go:59: (dbg) Run:  /tmp/skaffold.exe2674544436 version
skaffold_test.go:63: skaffold version: v2.13.2
skaffold_test.go:66: (dbg) Run:  out/minikube-linux-amd64 start -p skaffold-361990 --memory=2600 --driver=docker  --container-runtime=docker
skaffold_test.go:66: (dbg) Done: out/minikube-linux-amd64 start -p skaffold-361990 --memory=2600 --driver=docker  --container-runtime=docker: (21.330221027s)
skaffold_test.go:86: copying out/minikube-linux-amd64 to /home/jenkins/workspace/Docker_Linux_integration/out/minikube
skaffold_test.go:105: (dbg) Run:  /tmp/skaffold.exe2674544436 run --minikube-profile skaffold-361990 --kube-context skaffold-361990 --status-check=true --port-forward=false --interactive=false
skaffold_test.go:105: (dbg) Done: /tmp/skaffold.exe2674544436 run --minikube-profile skaffold-361990 --kube-context skaffold-361990 --status-check=true --port-forward=false --interactive=false: (1m4.954025641s)
skaffold_test.go:111: (dbg) TestSkaffold: waiting 1m0s for pods matching "app=leeroy-app" in namespace "default" ...
helpers_test.go:344: "leeroy-app-7b584b4675-7d4t6" [21872fed-f4c6-4d1b-923c-d45c6b91bd99] Running
skaffold_test.go:111: (dbg) TestSkaffold: app=leeroy-app healthy within 6.003732083s
skaffold_test.go:114: (dbg) TestSkaffold: waiting 1m0s for pods matching "app=leeroy-web" in namespace "default" ...
helpers_test.go:344: "leeroy-web-59c4578c6f-f87cs" [21817877-88b4-4719-b2ad-caeee2b3124e] Running
E0912 22:11:07.577627   12518 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19616-5723/.minikube/profiles/functional-896535/client.crt: no such file or directory" logger="UnhandledError"
skaffold_test.go:114: (dbg) TestSkaffold: app=leeroy-web healthy within 5.00297257s
helpers_test.go:175: Cleaning up "skaffold-361990" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p skaffold-361990
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p skaffold-361990: (2.714323998s)
--- PASS: TestSkaffold (102.86s)
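
For reference, the skaffold side of this test boils down to a single invocation; pointing skaffold at minikube requires both the profile and the matching kube-context (binary name shortened from the temp file in the log):

	skaffold run --minikube-profile skaffold-361990 --kube-context skaffold-361990 \
	  --status-check=true --port-forward=false --interactive=false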

                                                
                                    
TestInsufficientStorage (9.52s)

                                                
                                                
=== RUN   TestInsufficientStorage
status_test.go:50: (dbg) Run:  out/minikube-linux-amd64 start -p insufficient-storage-619064 --memory=2048 --output=json --wait=true --driver=docker  --container-runtime=docker
status_test.go:50: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p insufficient-storage-619064 --memory=2048 --output=json --wait=true --driver=docker  --container-runtime=docker: exit status 26 (7.419196264s)

                                                
                                                
-- stdout --
	{"specversion":"1.0","id":"111be7c6-3f77-4029-9dbf-3b7d182f03ea","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"0","message":"[insufficient-storage-619064] minikube v1.34.0 on Ubuntu 20.04 (kvm/amd64)","name":"Initial Minikube Setup","totalsteps":"19"}}
	{"specversion":"1.0","id":"5d462635-9a0e-4cc0-af4f-914621c33dcf","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_LOCATION=19616"}}
	{"specversion":"1.0","id":"fe571feb-876d-4038-b662-2428b4ef528f","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true"}}
	{"specversion":"1.0","id":"60d51154-6dea-49df-86a1-61ced3b92a14","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"KUBECONFIG=/home/jenkins/minikube-integration/19616-5723/kubeconfig"}}
	{"specversion":"1.0","id":"adf5d442-a5ea-4178-b782-5eb3f5e1ec05","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_HOME=/home/jenkins/minikube-integration/19616-5723/.minikube"}}
	{"specversion":"1.0","id":"110545e7-e721-40c2-9674-4d8c59185376","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_BIN=out/minikube-linux-amd64"}}
	{"specversion":"1.0","id":"b17e078d-211f-44d4-8662-008a7d296b1f","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_FORCE_SYSTEMD="}}
	{"specversion":"1.0","id":"07219b51-952c-44fd-9f2f-2a4c7b3b6f3d","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_TEST_STORAGE_CAPACITY=100"}}
	{"specversion":"1.0","id":"2dadb92b-91b8-4a80-87b7-bf3f4887cf85","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_TEST_AVAILABLE_STORAGE=19"}}
	{"specversion":"1.0","id":"ae86b24b-76f2-4cef-b8d8-05dbcf637a3a","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"1","message":"Using the docker driver based on user configuration","name":"Selecting Driver","totalsteps":"19"}}
	{"specversion":"1.0","id":"93e01b66-95e6-41cc-8e3f-5eb573eb2432","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"Using Docker driver with root privileges"}}
	{"specversion":"1.0","id":"2ad06f45-843e-4a97-88c2-210a85418c29","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"3","message":"Starting \"insufficient-storage-619064\" primary control-plane node in \"insufficient-storage-619064\" cluster","name":"Starting Node","totalsteps":"19"}}
	{"specversion":"1.0","id":"61a3e599-03f5-4ce8-a0da-cfc851172588","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"5","message":"Pulling base image v0.0.45-1726156396-19616 ...","name":"Pulling Base Image","totalsteps":"19"}}
	{"specversion":"1.0","id":"2da5f29d-0d28-4eda-a955-4c7d2d79ea17","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"8","message":"Creating docker container (CPUs=2, Memory=2048MB) ...","name":"Creating Container","totalsteps":"19"}}
	{"specversion":"1.0","id":"9397dc3b-c38d-4991-b51b-234a87ca49ab","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"advice":"Try one or more of the following to free up space on the device:\n\t\n\t\t\t1. Run \"docker system prune\" to remove unused Docker data (optionally with \"-a\")\n\t\t\t2. Increase the storage allocated to Docker for Desktop by clicking on:\n\t\t\t\tDocker icon \u003e Preferences \u003e Resources \u003e Disk Image Size\n\t\t\t3. Run \"minikube ssh -- docker system prune\" if using the Docker container runtime","exitcode":"26","issues":"https://github.com/kubernetes/minikube/issues/9024","message":"Docker is out of disk space! (/var is at 100% of capacity). You can pass '--force' to skip this check.","name":"RSRC_DOCKER_STORAGE","url":""}}

                                                
                                                
-- /stdout --
status_test.go:76: (dbg) Run:  out/minikube-linux-amd64 status -p insufficient-storage-619064 --output=json --layout=cluster
status_test.go:76: (dbg) Non-zero exit: out/minikube-linux-amd64 status -p insufficient-storage-619064 --output=json --layout=cluster: exit status 7 (241.282009ms)

                                                
                                                
-- stdout --
	{"Name":"insufficient-storage-619064","StatusCode":507,"StatusName":"InsufficientStorage","StatusDetail":"/var is almost out of disk space","Step":"Creating Container","StepDetail":"Creating docker container (CPUs=2, Memory=2048MB) ...","BinaryVersion":"v1.34.0","Components":{"kubeconfig":{"Name":"kubeconfig","StatusCode":500,"StatusName":"Error"}},"Nodes":[{"Name":"insufficient-storage-619064","StatusCode":507,"StatusName":"InsufficientStorage","Components":{"apiserver":{"Name":"apiserver","StatusCode":405,"StatusName":"Stopped"},"kubelet":{"Name":"kubelet","StatusCode":405,"StatusName":"Stopped"}}}]}

                                                
                                                
-- /stdout --
** stderr ** 
	E0912 22:11:21.018954  268882 status.go:417] kubeconfig endpoint: get endpoint: "insufficient-storage-619064" does not appear in /home/jenkins/minikube-integration/19616-5723/kubeconfig

                                                
                                                
** /stderr **
status_test.go:76: (dbg) Run:  out/minikube-linux-amd64 status -p insufficient-storage-619064 --output=json --layout=cluster
status_test.go:76: (dbg) Non-zero exit: out/minikube-linux-amd64 status -p insufficient-storage-619064 --output=json --layout=cluster: exit status 7 (245.703069ms)

                                                
                                                
-- stdout --
	{"Name":"insufficient-storage-619064","StatusCode":507,"StatusName":"InsufficientStorage","StatusDetail":"/var is almost out of disk space","BinaryVersion":"v1.34.0","Components":{"kubeconfig":{"Name":"kubeconfig","StatusCode":500,"StatusName":"Error"}},"Nodes":[{"Name":"insufficient-storage-619064","StatusCode":507,"StatusName":"InsufficientStorage","Components":{"apiserver":{"Name":"apiserver","StatusCode":405,"StatusName":"Stopped"},"kubelet":{"Name":"kubelet","StatusCode":405,"StatusName":"Stopped"}}}]}

                                                
                                                
-- /stdout --
** stderr ** 
	E0912 22:11:21.265775  268984 status.go:417] kubeconfig endpoint: get endpoint: "insufficient-storage-619064" does not appear in /home/jenkins/minikube-integration/19616-5723/kubeconfig
	E0912 22:11:21.275331  268984 status.go:560] unable to read event log: stat: stat /home/jenkins/minikube-integration/19616-5723/.minikube/profiles/insufficient-storage-619064/events.json: no such file or directory

                                                
                                                
** /stderr **
helpers_test.go:175: Cleaning up "insufficient-storage-619064" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p insufficient-storage-619064
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p insufficient-storage-619064: (1.616445011s)
--- PASS: TestInsufficientStorage (9.52s)
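
With --output=json, start emits one CloudEvents-style JSON object per line, which is what makes the RSRC_DOCKER_STORAGE failure above machine-readable. A sketch that extracts the error event from the stream (jq here is an assumption; any JSON-lines filter works):

	minikube start -p demo --output=json --driver=docker \
	  | jq -r 'select(.type == "io.k8s.sigs.minikube.error") | "\(.data.exitcode): \(.data.message)"'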

                                                
                                    
TestRunningBinaryUpgrade (75.65s)

                                                
                                                
=== RUN   TestRunningBinaryUpgrade
=== PAUSE TestRunningBinaryUpgrade

                                                
                                                

                                                
                                                
=== CONT  TestRunningBinaryUpgrade
version_upgrade_test.go:120: (dbg) Run:  /tmp/minikube-v1.26.0.695875307 start -p running-upgrade-777823 --memory=2200 --vm-driver=docker  --container-runtime=docker
version_upgrade_test.go:120: (dbg) Done: /tmp/minikube-v1.26.0.695875307 start -p running-upgrade-777823 --memory=2200 --vm-driver=docker  --container-runtime=docker: (41.49993244s)
version_upgrade_test.go:130: (dbg) Run:  out/minikube-linux-amd64 start -p running-upgrade-777823 --memory=2200 --alsologtostderr -v=1 --driver=docker  --container-runtime=docker
version_upgrade_test.go:130: (dbg) Done: out/minikube-linux-amd64 start -p running-upgrade-777823 --memory=2200 --alsologtostderr -v=1 --driver=docker  --container-runtime=docker: (29.13101703s)
helpers_test.go:175: Cleaning up "running-upgrade-777823" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p running-upgrade-777823
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p running-upgrade-777823: (2.095297008s)
--- PASS: TestRunningBinaryUpgrade (75.65s)
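
A running-binary upgrade is just two starts against the same live profile, one per binary (the versioned path stands in for the random temp file name in the log):

	# create the cluster with the old release; note the old --vm-driver spelling
	/tmp/minikube-v1.26.0 start -p upgrade-demo --memory=2200 --vm-driver=docker --container-runtime=docker
	# re-drive the same, still-running profile with the new binary
	out/minikube-linux-amd64 start -p upgrade-demo --memory=2200 --driver=docker --container-runtime=docker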

                                                
                                    
TestKubernetesUpgrade (335.3s)

                                                
                                                
=== RUN   TestKubernetesUpgrade
=== PAUSE TestKubernetesUpgrade

                                                
                                                

                                                
                                                
=== CONT  TestKubernetesUpgrade
version_upgrade_test.go:222: (dbg) Run:  out/minikube-linux-amd64 start -p kubernetes-upgrade-662986 --memory=2200 --kubernetes-version=v1.20.0 --alsologtostderr -v=1 --driver=docker  --container-runtime=docker
version_upgrade_test.go:222: (dbg) Done: out/minikube-linux-amd64 start -p kubernetes-upgrade-662986 --memory=2200 --kubernetes-version=v1.20.0 --alsologtostderr -v=1 --driver=docker  --container-runtime=docker: (38.804837238s)
version_upgrade_test.go:227: (dbg) Run:  out/minikube-linux-amd64 stop -p kubernetes-upgrade-662986
version_upgrade_test.go:227: (dbg) Done: out/minikube-linux-amd64 stop -p kubernetes-upgrade-662986: (1.619480091s)
version_upgrade_test.go:232: (dbg) Run:  out/minikube-linux-amd64 -p kubernetes-upgrade-662986 status --format={{.Host}}
version_upgrade_test.go:232: (dbg) Non-zero exit: out/minikube-linux-amd64 -p kubernetes-upgrade-662986 status --format={{.Host}}: exit status 7 (61.185815ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
version_upgrade_test.go:234: status error: exit status 7 (may be ok)
version_upgrade_test.go:243: (dbg) Run:  out/minikube-linux-amd64 start -p kubernetes-upgrade-662986 --memory=2200 --kubernetes-version=v1.31.1 --alsologtostderr -v=1 --driver=docker  --container-runtime=docker
version_upgrade_test.go:243: (dbg) Done: out/minikube-linux-amd64 start -p kubernetes-upgrade-662986 --memory=2200 --kubernetes-version=v1.31.1 --alsologtostderr -v=1 --driver=docker  --container-runtime=docker: (4m32.308649119s)
version_upgrade_test.go:248: (dbg) Run:  kubectl --context kubernetes-upgrade-662986 version --output=json
version_upgrade_test.go:267: Attempting to downgrade Kubernetes (should fail)
version_upgrade_test.go:269: (dbg) Run:  out/minikube-linux-amd64 start -p kubernetes-upgrade-662986 --memory=2200 --kubernetes-version=v1.20.0 --driver=docker  --container-runtime=docker
version_upgrade_test.go:269: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p kubernetes-upgrade-662986 --memory=2200 --kubernetes-version=v1.20.0 --driver=docker  --container-runtime=docker: exit status 106 (69.438038ms)

                                                
                                                
-- stdout --
	* [kubernetes-upgrade-662986] minikube v1.34.0 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=19616
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/19616-5723/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/19616-5723/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to K8S_DOWNGRADE_UNSUPPORTED: Unable to safely downgrade existing Kubernetes v1.31.1 cluster to v1.20.0
	* Suggestion: 
	
	    1) Recreate the cluster with Kubernetes 1.20.0, by running:
	    
	    minikube delete -p kubernetes-upgrade-662986
	    minikube start -p kubernetes-upgrade-662986 --kubernetes-version=v1.20.0
	    
	    2) Create a second cluster with Kubernetes 1.20.0, by running:
	    
	    minikube start -p kubernetes-upgrade-6629862 --kubernetes-version=v1.20.0
	    
	    3) Use the existing cluster at version Kubernetes 1.31.1, by running:
	    
	    minikube start -p kubernetes-upgrade-662986 --kubernetes-version=v1.31.1
	    

                                                
                                                
** /stderr **
version_upgrade_test.go:273: Attempting restart after unsuccessful downgrade
version_upgrade_test.go:275: (dbg) Run:  out/minikube-linux-amd64 start -p kubernetes-upgrade-662986 --memory=2200 --kubernetes-version=v1.31.1 --alsologtostderr -v=1 --driver=docker  --container-runtime=docker
version_upgrade_test.go:275: (dbg) Done: out/minikube-linux-amd64 start -p kubernetes-upgrade-662986 --memory=2200 --kubernetes-version=v1.31.1 --alsologtostderr -v=1 --driver=docker  --container-runtime=docker: (19.872806317s)
helpers_test.go:175: Cleaning up "kubernetes-upgrade-662986" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p kubernetes-upgrade-662986
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p kubernetes-upgrade-662986: (2.499080213s)
--- PASS: TestKubernetesUpgrade (335.30s)
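
The upgrade path above, condensed; the downgrade attempt is the interesting part, since it is refused up front with exit 106 rather than attempted (profile name shortened for readability):

	minikube start -p k8s-upgrade --memory=2200 --kubernetes-version=v1.20.0 --driver=docker --container-runtime=docker
	minikube stop -p k8s-upgrade
	minikube start -p k8s-upgrade --memory=2200 --kubernetes-version=v1.31.1 --driver=docker --container-runtime=docker
	# refused: K8S_DOWNGRADE_UNSUPPORTED (exit 106); delete and recreate instead
	minikube start -p k8s-upgrade --memory=2200 --kubernetes-version=v1.20.0 --driver=docker --container-runtime=docker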

                                                
                                    
TestMissingContainerUpgrade (185.57s)

                                                
                                                
=== RUN   TestMissingContainerUpgrade
=== PAUSE TestMissingContainerUpgrade

                                                
                                                

                                                
                                                
=== CONT  TestMissingContainerUpgrade
version_upgrade_test.go:309: (dbg) Run:  /tmp/minikube-v1.26.0.2476296120 start -p missing-upgrade-094795 --memory=2200 --driver=docker  --container-runtime=docker
version_upgrade_test.go:309: (dbg) Done: /tmp/minikube-v1.26.0.2476296120 start -p missing-upgrade-094795 --memory=2200 --driver=docker  --container-runtime=docker: (2m0.514970686s)
version_upgrade_test.go:318: (dbg) Run:  docker stop missing-upgrade-094795
version_upgrade_test.go:318: (dbg) Done: docker stop missing-upgrade-094795: (10.463276953s)
version_upgrade_test.go:323: (dbg) Run:  docker rm missing-upgrade-094795
version_upgrade_test.go:329: (dbg) Run:  out/minikube-linux-amd64 start -p missing-upgrade-094795 --memory=2200 --alsologtostderr -v=1 --driver=docker  --container-runtime=docker
version_upgrade_test.go:329: (dbg) Done: out/minikube-linux-amd64 start -p missing-upgrade-094795 --memory=2200 --alsologtostderr -v=1 --driver=docker  --container-runtime=docker: (49.166684945s)
helpers_test.go:175: Cleaning up "missing-upgrade-094795" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p missing-upgrade-094795
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p missing-upgrade-094795: (2.20826387s)
--- PASS: TestMissingContainerUpgrade (185.57s)

                                                
                                    
TestNoKubernetes/serial/StartNoK8sWithVersion (0.09s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/StartNoK8sWithVersion
no_kubernetes_test.go:83: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-477052 --no-kubernetes --kubernetes-version=1.20 --driver=docker  --container-runtime=docker
no_kubernetes_test.go:83: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p NoKubernetes-477052 --no-kubernetes --kubernetes-version=1.20 --driver=docker  --container-runtime=docker: exit status 14 (85.348672ms)

                                                
                                                
-- stdout --
	* [NoKubernetes-477052] minikube v1.34.0 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=19616
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/19616-5723/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/19616-5723/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to MK_USAGE: cannot specify --kubernetes-version with --no-kubernetes,
	to unset a global config run:
	
	$ minikube config unset kubernetes-version

                                                
                                                
** /stderr **
--- PASS: TestNoKubernetes/serial/StartNoK8sWithVersion (0.09s)

                                                
                                    
TestNoKubernetes/serial/StartWithK8s (34.57s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/StartWithK8s
no_kubernetes_test.go:95: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-477052 --driver=docker  --container-runtime=docker
no_kubernetes_test.go:95: (dbg) Done: out/minikube-linux-amd64 start -p NoKubernetes-477052 --driver=docker  --container-runtime=docker: (34.170401328s)
no_kubernetes_test.go:200: (dbg) Run:  out/minikube-linux-amd64 -p NoKubernetes-477052 status -o json
--- PASS: TestNoKubernetes/serial/StartWithK8s (34.57s)

                                                
                                    
TestNoKubernetes/serial/StartWithStopK8s (17.54s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/StartWithStopK8s
no_kubernetes_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-477052 --no-kubernetes --driver=docker  --container-runtime=docker
no_kubernetes_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p NoKubernetes-477052 --no-kubernetes --driver=docker  --container-runtime=docker: (15.568714923s)
no_kubernetes_test.go:200: (dbg) Run:  out/minikube-linux-amd64 -p NoKubernetes-477052 status -o json
no_kubernetes_test.go:200: (dbg) Non-zero exit: out/minikube-linux-amd64 -p NoKubernetes-477052 status -o json: exit status 2 (263.403147ms)

                                                
                                                
-- stdout --
	{"Name":"NoKubernetes-477052","Host":"Running","Kubelet":"Stopped","APIServer":"Stopped","Kubeconfig":"Configured","Worker":false}

                                                
                                                
-- /stdout --
no_kubernetes_test.go:124: (dbg) Run:  out/minikube-linux-amd64 delete -p NoKubernetes-477052
no_kubernetes_test.go:124: (dbg) Done: out/minikube-linux-amd64 delete -p NoKubernetes-477052: (1.712290331s)
--- PASS: TestNoKubernetes/serial/StartWithStopK8s (17.54s)

                                                
                                    
TestNoKubernetes/serial/Start (6.55s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/Start
no_kubernetes_test.go:136: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-477052 --no-kubernetes --driver=docker  --container-runtime=docker
no_kubernetes_test.go:136: (dbg) Done: out/minikube-linux-amd64 start -p NoKubernetes-477052 --no-kubernetes --driver=docker  --container-runtime=docker: (6.553803898s)
--- PASS: TestNoKubernetes/serial/Start (6.55s)

                                                
                                    
TestNoKubernetes/serial/VerifyK8sNotRunning (0.24s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/VerifyK8sNotRunning
no_kubernetes_test.go:147: (dbg) Run:  out/minikube-linux-amd64 ssh -p NoKubernetes-477052 "sudo systemctl is-active --quiet service kubelet"
no_kubernetes_test.go:147: (dbg) Non-zero exit: out/minikube-linux-amd64 ssh -p NoKubernetes-477052 "sudo systemctl is-active --quiet service kubelet": exit status 1 (238.668464ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 3

                                                
                                                
** /stderr **
--- PASS: TestNoKubernetes/serial/VerifyK8sNotRunning (0.24s)
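
The pass condition here is inverted: systemctl is-active exits 0 only for an active unit, so the ssh status 3 above is the expected "inactive" answer for a --no-kubernetes profile. By hand:

	minikube ssh -p NoKubernetes-477052 "sudo systemctl is-active --quiet service kubelet"
	echo $?   # 3 (inactive) is what --no-kubernetes should produce; 0 would mean kubelet is up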

                                                
                                    
TestNoKubernetes/serial/ProfileList (1.19s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/ProfileList
no_kubernetes_test.go:169: (dbg) Run:  out/minikube-linux-amd64 profile list
no_kubernetes_test.go:179: (dbg) Run:  out/minikube-linux-amd64 profile list --output=json
--- PASS: TestNoKubernetes/serial/ProfileList (1.19s)

                                                
                                    
TestNoKubernetes/serial/Stop (1.17s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/Stop
no_kubernetes_test.go:158: (dbg) Run:  out/minikube-linux-amd64 stop -p NoKubernetes-477052
no_kubernetes_test.go:158: (dbg) Done: out/minikube-linux-amd64 stop -p NoKubernetes-477052: (1.171480592s)
--- PASS: TestNoKubernetes/serial/Stop (1.17s)

                                                
                                    
TestNoKubernetes/serial/StartNoArgs (7.57s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/StartNoArgs
no_kubernetes_test.go:191: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-477052 --driver=docker  --container-runtime=docker
no_kubernetes_test.go:191: (dbg) Done: out/minikube-linux-amd64 start -p NoKubernetes-477052 --driver=docker  --container-runtime=docker: (7.564947796s)
--- PASS: TestNoKubernetes/serial/StartNoArgs (7.57s)

                                                
                                    
TestNoKubernetes/serial/VerifyK8sNotRunningSecond (0.23s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/VerifyK8sNotRunningSecond
no_kubernetes_test.go:147: (dbg) Run:  out/minikube-linux-amd64 ssh -p NoKubernetes-477052 "sudo systemctl is-active --quiet service kubelet"
no_kubernetes_test.go:147: (dbg) Non-zero exit: out/minikube-linux-amd64 ssh -p NoKubernetes-477052 "sudo systemctl is-active --quiet service kubelet": exit status 1 (233.252291ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 3

                                                
                                                
** /stderr **
--- PASS: TestNoKubernetes/serial/VerifyK8sNotRunningSecond (0.23s)

                                                
                                    
TestStoppedBinaryUpgrade/Setup (3.41s)

                                                
                                                
=== RUN   TestStoppedBinaryUpgrade/Setup
--- PASS: TestStoppedBinaryUpgrade/Setup (3.41s)

                                                
                                    
TestStoppedBinaryUpgrade/Upgrade (98.97s)

                                                
                                                
=== RUN   TestStoppedBinaryUpgrade/Upgrade
version_upgrade_test.go:183: (dbg) Run:  /tmp/minikube-v1.26.0.1069514827 start -p stopped-upgrade-417001 --memory=2200 --vm-driver=docker  --container-runtime=docker
E0912 22:12:56.743908   12518 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19616-5723/.minikube/profiles/addons-207808/client.crt: no such file or directory" logger="UnhandledError"
version_upgrade_test.go:183: (dbg) Done: /tmp/minikube-v1.26.0.1069514827 start -p stopped-upgrade-417001 --memory=2200 --vm-driver=docker  --container-runtime=docker: (1m4.951009374s)
version_upgrade_test.go:192: (dbg) Run:  /tmp/minikube-v1.26.0.1069514827 -p stopped-upgrade-417001 stop
version_upgrade_test.go:192: (dbg) Done: /tmp/minikube-v1.26.0.1069514827 -p stopped-upgrade-417001 stop: (10.698972315s)
version_upgrade_test.go:198: (dbg) Run:  out/minikube-linux-amd64 start -p stopped-upgrade-417001 --memory=2200 --alsologtostderr -v=1 --driver=docker  --container-runtime=docker
version_upgrade_test.go:198: (dbg) Done: out/minikube-linux-amd64 start -p stopped-upgrade-417001 --memory=2200 --alsologtostderr -v=1 --driver=docker  --container-runtime=docker: (23.316128644s)
--- PASS: TestStoppedBinaryUpgrade/Upgrade (98.97s)

                                                
                                    
TestStoppedBinaryUpgrade/MinikubeLogs (1.01s)

                                                
                                                
=== RUN   TestStoppedBinaryUpgrade/MinikubeLogs
version_upgrade_test.go:206: (dbg) Run:  out/minikube-linux-amd64 logs -p stopped-upgrade-417001
version_upgrade_test.go:206: (dbg) Done: out/minikube-linux-amd64 logs -p stopped-upgrade-417001: (1.00518429s)
--- PASS: TestStoppedBinaryUpgrade/MinikubeLogs (1.01s)

                                                
                                    
TestPause/serial/Start (70.59s)

                                                
                                                
=== RUN   TestPause/serial/Start
pause_test.go:80: (dbg) Run:  out/minikube-linux-amd64 start -p pause-707332 --memory=2048 --install-addons=false --wait=all --driver=docker  --container-runtime=docker
pause_test.go:80: (dbg) Done: out/minikube-linux-amd64 start -p pause-707332 --memory=2048 --install-addons=false --wait=all --driver=docker  --container-runtime=docker: (1m10.588147395s)
--- PASS: TestPause/serial/Start (70.59s)

                                                
                                    
TestStartStop/group/old-k8s-version/serial/FirstStart (130.52s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-linux-amd64 start -p old-k8s-version-903670 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=docker --kubernetes-version=v1.20.0
start_stop_delete_test.go:186: (dbg) Done: out/minikube-linux-amd64 start -p old-k8s-version-903670 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=docker --kubernetes-version=v1.20.0: (2m10.524681789s)
--- PASS: TestStartStop/group/old-k8s-version/serial/FirstStart (130.52s)

                                                
                                    
TestPause/serial/SecondStartNoReconfiguration (35.11s)

                                                
                                                
=== RUN   TestPause/serial/SecondStartNoReconfiguration
pause_test.go:92: (dbg) Run:  out/minikube-linux-amd64 start -p pause-707332 --alsologtostderr -v=1 --driver=docker  --container-runtime=docker
pause_test.go:92: (dbg) Done: out/minikube-linux-amd64 start -p pause-707332 --alsologtostderr -v=1 --driver=docker  --container-runtime=docker: (35.092182068s)
--- PASS: TestPause/serial/SecondStartNoReconfiguration (35.11s)

                                                
                                    
TestStartStop/group/no-preload/serial/FirstStart (70.03s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-linux-amd64 start -p no-preload-742238 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=docker --kubernetes-version=v1.31.1
E0912 22:16:40.615389   12518 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19616-5723/.minikube/profiles/skaffold-361990/client.crt: no such file or directory" logger="UnhandledError"
start_stop_delete_test.go:186: (dbg) Done: out/minikube-linux-amd64 start -p no-preload-742238 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=docker --kubernetes-version=v1.31.1: (1m10.028229644s)
--- PASS: TestStartStop/group/no-preload/serial/FirstStart (70.03s)

                                                
                                    
TestPause/serial/Pause (0.65s)

                                                
                                                
=== RUN   TestPause/serial/Pause
pause_test.go:110: (dbg) Run:  out/minikube-linux-amd64 pause -p pause-707332 --alsologtostderr -v=5
--- PASS: TestPause/serial/Pause (0.65s)

TestPause/serial/VerifyStatus (0.35s)

=== RUN   TestPause/serial/VerifyStatus
status_test.go:76: (dbg) Run:  out/minikube-linux-amd64 status -p pause-707332 --output=json --layout=cluster
status_test.go:76: (dbg) Non-zero exit: out/minikube-linux-amd64 status -p pause-707332 --output=json --layout=cluster: exit status 2 (346.905802ms)
-- stdout --
	{"Name":"pause-707332","StatusCode":418,"StatusName":"Paused","Step":"Done","StepDetail":"* Paused 12 containers in: kube-system, kubernetes-dashboard, storage-gluster, istio-operator","BinaryVersion":"v1.34.0","Components":{"kubeconfig":{"Name":"kubeconfig","StatusCode":200,"StatusName":"OK"}},"Nodes":[{"Name":"pause-707332","StatusCode":200,"StatusName":"OK","Components":{"apiserver":{"Name":"apiserver","StatusCode":418,"StatusName":"Paused"},"kubelet":{"Name":"kubelet","StatusCode":405,"StatusName":"Stopped"}}}]}
-- /stdout --
--- PASS: TestPause/serial/VerifyStatus (0.35s)
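
Note on the status payload above: the single JSON line in the stdout block is what `status --output=json --layout=cluster` reports for a paused profile, and the non-zero exit (status 2) is the expected signal here rather than a failure. As a minimal, hedged sketch (the struct fields below simply mirror the keys visible in this output and are not minikube's internal types), the component states could be decoded like this:

package main

import (
	"encoding/json"
	"fmt"
)

// Field names mirror the keys visible in the stdout above; this is a
// sketch, not minikube's actual schema.
type Component struct {
	Name       string
	StatusCode int
	StatusName string
}

type Node struct {
	Name       string
	StatusCode int
	StatusName string
	Components map[string]Component
}

type ClusterStatus struct {
	Name       string
	StatusCode int
	StatusName string
	Nodes      []Node
}

func main() {
	// Abbreviated copy of the captured status JSON.
	raw := `{"Name":"pause-707332","StatusCode":418,"StatusName":"Paused",
	         "Nodes":[{"Name":"pause-707332","StatusCode":200,"StatusName":"OK",
	         "Components":{"apiserver":{"Name":"apiserver","StatusCode":418,"StatusName":"Paused"},
	         "kubelet":{"Name":"kubelet","StatusCode":405,"StatusName":"Stopped"}}}]}`
	var st ClusterStatus
	if err := json.Unmarshal([]byte(raw), &st); err != nil {
		panic(err)
	}
	fmt.Printf("%s: %s (%d)\n", st.Name, st.StatusName, st.StatusCode)
	for _, n := range st.Nodes {
		for name, c := range n.Components {
			fmt.Printf("  %s/%s: %s (%d)\n", n.Name, name, c.StatusName, c.StatusCode)
		}
	}
}

Decoded this way, the report shows the cluster-level status 418 ("Paused") alongside kubelet at 405 ("Stopped"), which is exactly the split the VerifyStatus step asserts on.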

TestPause/serial/Unpause (0.49s)

=== RUN   TestPause/serial/Unpause
pause_test.go:121: (dbg) Run:  out/minikube-linux-amd64 unpause -p pause-707332 --alsologtostderr -v=5
--- PASS: TestPause/serial/Unpause (0.49s)

TestPause/serial/PauseAgain (0.66s)

=== RUN   TestPause/serial/PauseAgain
pause_test.go:110: (dbg) Run:  out/minikube-linux-amd64 pause -p pause-707332 --alsologtostderr -v=5
--- PASS: TestPause/serial/PauseAgain (0.66s)

TestPause/serial/DeletePaused (2.19s)

=== RUN   TestPause/serial/DeletePaused
pause_test.go:132: (dbg) Run:  out/minikube-linux-amd64 delete -p pause-707332 --alsologtostderr -v=5
pause_test.go:132: (dbg) Done: out/minikube-linux-amd64 delete -p pause-707332 --alsologtostderr -v=5: (2.190859457s)
--- PASS: TestPause/serial/DeletePaused (2.19s)

TestPause/serial/VerifyDeletedResources (0.67s)

=== RUN   TestPause/serial/VerifyDeletedResources
pause_test.go:142: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
pause_test.go:168: (dbg) Run:  docker ps -a
pause_test.go:173: (dbg) Run:  docker volume inspect pause-707332
pause_test.go:173: (dbg) Non-zero exit: docker volume inspect pause-707332: exit status 1 (16.821204ms)

-- stdout --
	[]

-- /stdout --
** stderr ** 
	Error response from daemon: get pause-707332: no such volume

** /stderr **
pause_test.go:178: (dbg) Run:  docker network ls
--- PASS: TestPause/serial/VerifyDeletedResources (0.67s)
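
The deletion check above leans on Docker's exit code: once the profile is deleted, `docker volume inspect pause-707332` fails with "no such volume", and that non-zero exit is the passing outcome. A minimal Go sketch of the same pattern (profile name taken from this log; plain os/exec, nothing minikube-specific):

package main

import (
	"fmt"
	"os/exec"
)

func main() {
	// After `minikube delete`, the profile's Docker volume should be
	// gone, so a failing inspect is the expected (passing) result.
	if err := exec.Command("docker", "volume", "inspect", "pause-707332").Run(); err != nil {
		fmt.Println("volume absent, as expected after delete:", err)
		return
	}
	fmt.Println("volume still present: deletion incomplete")
}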

TestStartStop/group/embed-certs/serial/FirstStart (65.02s)

=== RUN   TestStartStop/group/embed-certs/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-linux-amd64 start -p embed-certs-427272 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=docker --kubernetes-version=v1.31.1
E0912 22:17:21.576945   12518 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19616-5723/.minikube/profiles/skaffold-361990/client.crt: no such file or directory" logger="UnhandledError"
start_stop_delete_test.go:186: (dbg) Done: out/minikube-linux-amd64 start -p embed-certs-427272 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=docker --kubernetes-version=v1.31.1: (1m5.019685291s)
--- PASS: TestStartStop/group/embed-certs/serial/FirstStart (65.02s)

TestStartStop/group/no-preload/serial/DeployApp (10.26s)

=== RUN   TestStartStop/group/no-preload/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context no-preload-742238 create -f testdata/busybox.yaml
start_stop_delete_test.go:196: (dbg) TestStartStop/group/no-preload/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:344: "busybox" [968acb1b-0714-4238-8a6a-d37aa5434286] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:344: "busybox" [968acb1b-0714-4238-8a6a-d37aa5434286] Running
start_stop_delete_test.go:196: (dbg) TestStartStop/group/no-preload/serial/DeployApp: integration-test=busybox healthy within 10.004474748s
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context no-preload-742238 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/no-preload/serial/DeployApp (10.26s)

TestStartStop/group/no-preload/serial/EnableAddonWhileActive (0.85s)

=== RUN   TestStartStop/group/no-preload/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p no-preload-742238 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context no-preload-742238 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/no-preload/serial/EnableAddonWhileActive (0.85s)

TestStartStop/group/no-preload/serial/Stop (10.73s)

=== RUN   TestStartStop/group/no-preload/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-linux-amd64 stop -p no-preload-742238 --alsologtostderr -v=3
start_stop_delete_test.go:228: (dbg) Done: out/minikube-linux-amd64 stop -p no-preload-742238 --alsologtostderr -v=3: (10.73309115s)
--- PASS: TestStartStop/group/no-preload/serial/Stop (10.73s)

TestStartStop/group/no-preload/serial/EnableAddonAfterStop (0.21s)

=== RUN   TestStartStop/group/no-preload/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-742238 -n no-preload-742238
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-742238 -n no-preload-742238: exit status 7 (108.577664ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p no-preload-742238 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/no-preload/serial/EnableAddonAfterStop (0.21s)
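
The "exit status 7 (may be ok)" note above is the harness tolerating a non-zero exit from `minikube status` while the host is deliberately stopped (the stdout block reads "Stopped"). A hedged Go sketch of that tolerance, reusing the command line from this step:

package main

import (
	"errors"
	"fmt"
	"os/exec"
	"strings"
)

func main() {
	// Mirror the test's tolerance: `minikube status` exits 7 here
	// because the host is stopped, and that is acceptable before the
	// offline addon-enable step that follows.
	cmd := exec.Command("out/minikube-linux-amd64", "status",
		"--format={{.Host}}", "-p", "no-preload-742238", "-n", "no-preload-742238")
	out, err := cmd.Output()
	var exitErr *exec.ExitError
	if errors.As(err, &exitErr) && exitErr.ExitCode() == 7 {
		fmt.Printf("host %q: exit 7 is acceptable before restart\n", strings.TrimSpace(string(out)))
		return
	}
	if err != nil {
		panic(err) // any other failure is a real error
	}
	fmt.Printf("host %q is up\n", strings.TrimSpace(string(out)))
}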

TestStartStop/group/no-preload/serial/SecondStart (263.31s)

=== RUN   TestStartStop/group/no-preload/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-linux-amd64 start -p no-preload-742238 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=docker --kubernetes-version=v1.31.1
E0912 22:17:56.744443   12518 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19616-5723/.minikube/profiles/addons-207808/client.crt: no such file or directory" logger="UnhandledError"
start_stop_delete_test.go:256: (dbg) Done: out/minikube-linux-amd64 start -p no-preload-742238 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=docker --kubernetes-version=v1.31.1: (4m22.975152567s)
start_stop_delete_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-742238 -n no-preload-742238
--- PASS: TestStartStop/group/no-preload/serial/SecondStart (263.31s)

TestStartStop/group/embed-certs/serial/DeployApp (9.3s)

=== RUN   TestStartStop/group/embed-certs/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context embed-certs-427272 create -f testdata/busybox.yaml
start_stop_delete_test.go:196: (dbg) TestStartStop/group/embed-certs/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:344: "busybox" [082f2edd-d17d-4450-8b10-091985a09202] Pending
helpers_test.go:344: "busybox" [082f2edd-d17d-4450-8b10-091985a09202] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:344: "busybox" [082f2edd-d17d-4450-8b10-091985a09202] Running
start_stop_delete_test.go:196: (dbg) TestStartStop/group/embed-certs/serial/DeployApp: integration-test=busybox healthy within 9.00413087s
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context embed-certs-427272 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/embed-certs/serial/DeployApp (9.30s)

TestStartStop/group/embed-certs/serial/EnableAddonWhileActive (0.93s)

=== RUN   TestStartStop/group/embed-certs/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p embed-certs-427272 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context embed-certs-427272 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/embed-certs/serial/EnableAddonWhileActive (0.93s)

TestStartStop/group/default-k8s-diff-port/serial/FirstStart (68.55s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-linux-amd64 start -p default-k8s-diff-port-447273 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=docker --kubernetes-version=v1.31.1
start_stop_delete_test.go:186: (dbg) Done: out/minikube-linux-amd64 start -p default-k8s-diff-port-447273 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=docker --kubernetes-version=v1.31.1: (1m8.548631067s)
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/FirstStart (68.55s)

TestStartStop/group/embed-certs/serial/Stop (10.73s)

=== RUN   TestStartStop/group/embed-certs/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-linux-amd64 stop -p embed-certs-427272 --alsologtostderr -v=3
start_stop_delete_test.go:228: (dbg) Done: out/minikube-linux-amd64 stop -p embed-certs-427272 --alsologtostderr -v=3: (10.734026349s)
--- PASS: TestStartStop/group/embed-certs/serial/Stop (10.73s)

TestStartStop/group/embed-certs/serial/EnableAddonAfterStop (0.22s)

=== RUN   TestStartStop/group/embed-certs/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-427272 -n embed-certs-427272
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-427272 -n embed-certs-427272: exit status 7 (112.887028ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p embed-certs-427272 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/embed-certs/serial/EnableAddonAfterStop (0.22s)

TestStartStop/group/embed-certs/serial/SecondStart (263.85s)

=== RUN   TestStartStop/group/embed-certs/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-linux-amd64 start -p embed-certs-427272 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=docker --kubernetes-version=v1.31.1
start_stop_delete_test.go:256: (dbg) Done: out/minikube-linux-amd64 start -p embed-certs-427272 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=docker --kubernetes-version=v1.31.1: (4m23.517715946s)
start_stop_delete_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-427272 -n embed-certs-427272
--- PASS: TestStartStop/group/embed-certs/serial/SecondStart (263.85s)

TestStartStop/group/old-k8s-version/serial/DeployApp (9.51s)

=== RUN   TestStartStop/group/old-k8s-version/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context old-k8s-version-903670 create -f testdata/busybox.yaml
start_stop_delete_test.go:196: (dbg) TestStartStop/group/old-k8s-version/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:344: "busybox" [2b77a0f9-a445-46ea-8e36-9603bdc4a642] Pending
helpers_test.go:344: "busybox" [2b77a0f9-a445-46ea-8e36-9603bdc4a642] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:344: "busybox" [2b77a0f9-a445-46ea-8e36-9603bdc4a642] Running
start_stop_delete_test.go:196: (dbg) TestStartStop/group/old-k8s-version/serial/DeployApp: integration-test=busybox healthy within 9.004371347s
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context old-k8s-version-903670 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/old-k8s-version/serial/DeployApp (9.51s)

TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive (1.03s)

=== RUN   TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p old-k8s-version-903670 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context old-k8s-version-903670 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive (1.03s)

TestStartStop/group/old-k8s-version/serial/Stop (10.92s)

=== RUN   TestStartStop/group/old-k8s-version/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-linux-amd64 stop -p old-k8s-version-903670 --alsologtostderr -v=3
E0912 22:18:43.499085   12518 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19616-5723/.minikube/profiles/skaffold-361990/client.crt: no such file or directory" logger="UnhandledError"
start_stop_delete_test.go:228: (dbg) Done: out/minikube-linux-amd64 stop -p old-k8s-version-903670 --alsologtostderr -v=3: (10.919208512s)
--- PASS: TestStartStop/group/old-k8s-version/serial/Stop (10.92s)

TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop (0.17s)

=== RUN   TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-903670 -n old-k8s-version-903670
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-903670 -n old-k8s-version-903670: exit status 7 (63.474204ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p old-k8s-version-903670 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop (0.17s)

TestStartStop/group/old-k8s-version/serial/SecondStart (23.68s)

=== RUN   TestStartStop/group/old-k8s-version/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-linux-amd64 start -p old-k8s-version-903670 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=docker --kubernetes-version=v1.20.0
start_stop_delete_test.go:256: (dbg) Done: out/minikube-linux-amd64 start -p old-k8s-version-903670 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=docker --kubernetes-version=v1.20.0: (23.373568713s)
start_stop_delete_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-903670 -n old-k8s-version-903670
--- PASS: TestStartStop/group/old-k8s-version/serial/SecondStart (23.68s)

TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop (26.01s)

=== RUN   TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop
start_stop_delete_test.go:274: (dbg) TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-cd95d586-w2s7b" [306b0b55-1dc4-41ad-941d-0c3de53ca128] Pending / Ready:ContainersNotReady (containers with unready status: [kubernetes-dashboard]) / ContainersReady:ContainersNotReady (containers with unready status: [kubernetes-dashboard])
helpers_test.go:344: "kubernetes-dashboard-cd95d586-w2s7b" [306b0b55-1dc4-41ad-941d-0c3de53ca128] Running
start_stop_delete_test.go:274: (dbg) TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 26.00404805s
--- PASS: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop (26.01s)

TestStartStop/group/default-k8s-diff-port/serial/DeployApp (8.26s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context default-k8s-diff-port-447273 create -f testdata/busybox.yaml
start_stop_delete_test.go:196: (dbg) TestStartStop/group/default-k8s-diff-port/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:344: "busybox" [33177c97-3304-429e-8083-35567465a681] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:344: "busybox" [33177c97-3304-429e-8083-35567465a681] Running
start_stop_delete_test.go:196: (dbg) TestStartStop/group/default-k8s-diff-port/serial/DeployApp: integration-test=busybox healthy within 8.004450989s
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context default-k8s-diff-port-447273 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/DeployApp (8.26s)

TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive (0.97s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p default-k8s-diff-port-447273 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context default-k8s-diff-port-447273 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive (0.97s)

TestStartStop/group/default-k8s-diff-port/serial/Stop (10.86s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-linux-amd64 stop -p default-k8s-diff-port-447273 --alsologtostderr -v=3
start_stop_delete_test.go:228: (dbg) Done: out/minikube-linux-amd64 stop -p default-k8s-diff-port-447273 --alsologtostderr -v=3: (10.861766385s)
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/Stop (10.86s)

TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop (5.07s)

=== RUN   TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop
start_stop_delete_test.go:287: (dbg) TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-cd95d586-w2s7b" [306b0b55-1dc4-41ad-941d-0c3de53ca128] Running
start_stop_delete_test.go:287: (dbg) TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.004047473s
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context old-k8s-version-903670 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop (5.07s)

TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop (0.19s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-447273 -n default-k8s-diff-port-447273
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-447273 -n default-k8s-diff-port-447273: exit status 7 (89.495501ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p default-k8s-diff-port-447273 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop (0.19s)

TestStartStop/group/default-k8s-diff-port/serial/SecondStart (264.28s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-linux-amd64 start -p default-k8s-diff-port-447273 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=docker --kubernetes-version=v1.31.1
start_stop_delete_test.go:256: (dbg) Done: out/minikube-linux-amd64 start -p default-k8s-diff-port-447273 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=docker --kubernetes-version=v1.31.1: (4m23.891135635s)
start_stop_delete_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-447273 -n default-k8s-diff-port-447273
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/SecondStart (264.28s)

TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages (0.19s)

=== RUN   TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-linux-amd64 -p old-k8s-version-903670 image list --format=json
start_stop_delete_test.go:304: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
--- PASS: TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages (0.19s)

TestStartStop/group/old-k8s-version/serial/Pause (2.31s)

=== RUN   TestStartStop/group/old-k8s-version/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 pause -p old-k8s-version-903670 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-903670 -n old-k8s-version-903670
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-903670 -n old-k8s-version-903670: exit status 2 (292.371695ms)

-- stdout --
	Paused

-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p old-k8s-version-903670 -n old-k8s-version-903670
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Kubelet}} -p old-k8s-version-903670 -n old-k8s-version-903670: exit status 2 (285.610719ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 unpause -p old-k8s-version-903670 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-903670 -n old-k8s-version-903670
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p old-k8s-version-903670 -n old-k8s-version-903670
--- PASS: TestStartStop/group/old-k8s-version/serial/Pause (2.31s)

TestStartStop/group/newest-cni/serial/FirstStart (31.89s)

=== RUN   TestStartStop/group/newest-cni/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-linux-amd64 start -p newest-cni-016741 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=docker --kubernetes-version=v1.31.1
start_stop_delete_test.go:186: (dbg) Done: out/minikube-linux-amd64 start -p newest-cni-016741 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=docker --kubernetes-version=v1.31.1: (31.885434491s)
--- PASS: TestStartStop/group/newest-cni/serial/FirstStart (31.89s)

TestStartStop/group/newest-cni/serial/DeployApp (0s)

=== RUN   TestStartStop/group/newest-cni/serial/DeployApp
--- PASS: TestStartStop/group/newest-cni/serial/DeployApp (0.00s)

TestStartStop/group/newest-cni/serial/EnableAddonWhileActive (1s)

=== RUN   TestStartStop/group/newest-cni/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p newest-cni-016741 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:205: (dbg) Done: out/minikube-linux-amd64 addons enable metrics-server -p newest-cni-016741 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: (1.004081204s)
start_stop_delete_test.go:211: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/EnableAddonWhileActive (1.00s)

TestStartStop/group/newest-cni/serial/Stop (10.74s)

=== RUN   TestStartStop/group/newest-cni/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-linux-amd64 stop -p newest-cni-016741 --alsologtostderr -v=3
start_stop_delete_test.go:228: (dbg) Done: out/minikube-linux-amd64 stop -p newest-cni-016741 --alsologtostderr -v=3: (10.740721161s)
--- PASS: TestStartStop/group/newest-cni/serial/Stop (10.74s)

TestStartStop/group/newest-cni/serial/EnableAddonAfterStop (0.23s)

=== RUN   TestStartStop/group/newest-cni/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p newest-cni-016741 -n newest-cni-016741
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p newest-cni-016741 -n newest-cni-016741: exit status 7 (121.067896ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p newest-cni-016741 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/newest-cni/serial/EnableAddonAfterStop (0.23s)

TestStartStop/group/newest-cni/serial/SecondStart (14.81s)

=== RUN   TestStartStop/group/newest-cni/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-linux-amd64 start -p newest-cni-016741 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=docker --kubernetes-version=v1.31.1
start_stop_delete_test.go:256: (dbg) Done: out/minikube-linux-amd64 start -p newest-cni-016741 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=docker --kubernetes-version=v1.31.1: (14.496277342s)
start_stop_delete_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p newest-cni-016741 -n newest-cni-016741
--- PASS: TestStartStop/group/newest-cni/serial/SecondStart (14.81s)

TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop (0s)

=== RUN   TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop
start_stop_delete_test.go:273: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop (0.00s)

TestStartStop/group/newest-cni/serial/AddonExistsAfterStop (0s)

=== RUN   TestStartStop/group/newest-cni/serial/AddonExistsAfterStop
start_stop_delete_test.go:284: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/AddonExistsAfterStop (0.00s)

TestStartStop/group/newest-cni/serial/VerifyKubernetesImages (0.2s)

=== RUN   TestStartStop/group/newest-cni/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-linux-amd64 -p newest-cni-016741 image list --format=json
--- PASS: TestStartStop/group/newest-cni/serial/VerifyKubernetesImages (0.20s)

TestStartStop/group/newest-cni/serial/Pause (2.53s)

=== RUN   TestStartStop/group/newest-cni/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 pause -p newest-cni-016741 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p newest-cni-016741 -n newest-cni-016741
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p newest-cni-016741 -n newest-cni-016741: exit status 2 (294.517095ms)

-- stdout --
	Paused

-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p newest-cni-016741 -n newest-cni-016741
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Kubelet}} -p newest-cni-016741 -n newest-cni-016741: exit status 2 (304.916741ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 unpause -p newest-cni-016741 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p newest-cni-016741 -n newest-cni-016741
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p newest-cni-016741 -n newest-cni-016741
--- PASS: TestStartStop/group/newest-cni/serial/Pause (2.53s)

TestNetworkPlugins/group/auto/Start (68.62s)

=== RUN   TestNetworkPlugins/group/auto/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p auto-889068 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --driver=docker  --container-runtime=docker
E0912 22:20:59.638019   12518 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19616-5723/.minikube/profiles/skaffold-361990/client.crt: no such file or directory" logger="UnhandledError"
E0912 22:20:59.810304   12518 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19616-5723/.minikube/profiles/addons-207808/client.crt: no such file or directory" logger="UnhandledError"
E0912 22:21:07.577696   12518 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19616-5723/.minikube/profiles/functional-896535/client.crt: no such file or directory" logger="UnhandledError"
E0912 22:21:27.340776   12518 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19616-5723/.minikube/profiles/skaffold-361990/client.crt: no such file or directory" logger="UnhandledError"
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p auto-889068 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --driver=docker  --container-runtime=docker: (1m8.621579045s)
--- PASS: TestNetworkPlugins/group/auto/Start (68.62s)

TestNetworkPlugins/group/auto/KubeletFlags (0.25s)

=== RUN   TestNetworkPlugins/group/auto/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p auto-889068 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/auto/KubeletFlags (0.25s)

TestNetworkPlugins/group/auto/NetCatPod (9.19s)

=== RUN   TestNetworkPlugins/group/auto/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context auto-889068 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/auto/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-6fc964789b-nb8l7" [4e86e255-ef8d-4fba-81c3-4f226af9af25] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-6fc964789b-nb8l7" [4e86e255-ef8d-4fba-81c3-4f226af9af25] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/auto/NetCatPod: app=netcat healthy within 9.003981252s
--- PASS: TestNetworkPlugins/group/auto/NetCatPod (9.19s)

TestNetworkPlugins/group/auto/DNS (0.12s)

=== RUN   TestNetworkPlugins/group/auto/DNS
net_test.go:175: (dbg) Run:  kubectl --context auto-889068 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/auto/DNS (0.12s)

TestNetworkPlugins/group/auto/Localhost (0.13s)

=== RUN   TestNetworkPlugins/group/auto/Localhost
net_test.go:194: (dbg) Run:  kubectl --context auto-889068 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/auto/Localhost (0.13s)

TestNetworkPlugins/group/auto/HairPin (0.11s)

=== RUN   TestNetworkPlugins/group/auto/HairPin
net_test.go:264: (dbg) Run:  kubectl --context auto-889068 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/auto/HairPin (0.11s)
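
The three probes above exercise the auto CNI from inside the netcat deployment: nslookup for cluster DNS, `nc` to localhost for in-pod connectivity, and `nc` to the service's own name for hairpin traffic (a pod reaching itself through its service). A minimal Go sketch of the hairpin probe only, wrapping the same kubectl invocation shown in the log:

package main

import (
	"fmt"
	"os/exec"
)

func main() {
	// Hairpin probe: from inside the netcat deployment, dial the pod's
	// own service name; success means the CNI forwards
	// pod -> service -> same-pod ("hairpin") traffic.
	out, err := exec.Command("kubectl", "--context", "auto-889068",
		"exec", "deployment/netcat", "--",
		"/bin/sh", "-c", "nc -w 5 -i 5 -z netcat 8080").CombinedOutput()
	if err != nil {
		fmt.Printf("hairpin check failed: %v\n%s", err, out)
		return
	}
	fmt.Println("hairpin connectivity OK")
}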

TestStartStop/group/no-preload/serial/UserAppExistsAfterStop (6.01s)

=== RUN   TestStartStop/group/no-preload/serial/UserAppExistsAfterStop
start_stop_delete_test.go:274: (dbg) TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-695b96c756-tpdjz" [3aca8235-c13f-4e13-9718-ff472169c442] Running
start_stop_delete_test.go:274: (dbg) TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 6.003363275s
--- PASS: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop (6.01s)

TestStartStop/group/no-preload/serial/AddonExistsAfterStop (5.08s)

=== RUN   TestStartStop/group/no-preload/serial/AddonExistsAfterStop
start_stop_delete_test.go:287: (dbg) TestStartStop/group/no-preload/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-695b96c756-tpdjz" [3aca8235-c13f-4e13-9718-ff472169c442] Running
start_stop_delete_test.go:287: (dbg) TestStartStop/group/no-preload/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.004099195s
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context no-preload-742238 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/no-preload/serial/AddonExistsAfterStop (5.08s)

TestNetworkPlugins/group/kindnet/Start (60.66s)

=== RUN   TestNetworkPlugins/group/kindnet/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p kindnet-889068 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=kindnet --driver=docker  --container-runtime=docker
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p kindnet-889068 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=kindnet --driver=docker  --container-runtime=docker: (1m0.662874557s)
--- PASS: TestNetworkPlugins/group/kindnet/Start (60.66s)

TestStartStop/group/no-preload/serial/VerifyKubernetesImages (0.21s)

=== RUN   TestStartStop/group/no-preload/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-linux-amd64 -p no-preload-742238 image list --format=json
start_stop_delete_test.go:304: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
--- PASS: TestStartStop/group/no-preload/serial/VerifyKubernetesImages (0.21s)

TestStartStop/group/no-preload/serial/Pause (2.46s)

=== RUN   TestStartStop/group/no-preload/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 pause -p no-preload-742238 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p no-preload-742238 -n no-preload-742238
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p no-preload-742238 -n no-preload-742238: exit status 2 (302.16986ms)

-- stdout --
	Paused

-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p no-preload-742238 -n no-preload-742238
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Kubelet}} -p no-preload-742238 -n no-preload-742238: exit status 2 (301.84558ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 unpause -p no-preload-742238 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p no-preload-742238 -n no-preload-742238
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p no-preload-742238 -n no-preload-742238
--- PASS: TestStartStop/group/no-preload/serial/Pause (2.46s)

TestNetworkPlugins/group/calico/Start (58.24s)

=== RUN   TestNetworkPlugins/group/calico/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p calico-889068 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=calico --driver=docker  --container-runtime=docker
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p calico-889068 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=calico --driver=docker  --container-runtime=docker: (58.240255182s)
--- PASS: TestNetworkPlugins/group/calico/Start (58.24s)

TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop (6.01s)

=== RUN   TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop
start_stop_delete_test.go:274: (dbg) TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-695b96c756-ms56n" [b2ff0756-9bae-4f5c-a625-dc57f6c15bfc] Running
start_stop_delete_test.go:274: (dbg) TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 6.004994929s
--- PASS: TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop (6.01s)

TestStartStop/group/embed-certs/serial/AddonExistsAfterStop (5.08s)

=== RUN   TestStartStop/group/embed-certs/serial/AddonExistsAfterStop
start_stop_delete_test.go:287: (dbg) TestStartStop/group/embed-certs/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-695b96c756-ms56n" [b2ff0756-9bae-4f5c-a625-dc57f6c15bfc] Running
start_stop_delete_test.go:287: (dbg) TestStartStop/group/embed-certs/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.004890282s
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context embed-certs-427272 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/embed-certs/serial/AddonExistsAfterStop (5.08s)

TestStartStop/group/embed-certs/serial/VerifyKubernetesImages (0.23s)

=== RUN   TestStartStop/group/embed-certs/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-linux-amd64 -p embed-certs-427272 image list --format=json
start_stop_delete_test.go:304: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
--- PASS: TestStartStop/group/embed-certs/serial/VerifyKubernetesImages (0.23s)

TestStartStop/group/embed-certs/serial/Pause (2.73s)

=== RUN   TestStartStop/group/embed-certs/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 pause -p embed-certs-427272 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p embed-certs-427272 -n embed-certs-427272
E0912 22:22:56.744676   12518 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19616-5723/.minikube/profiles/addons-207808/client.crt: no such file or directory" logger="UnhandledError"
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p embed-certs-427272 -n embed-certs-427272: exit status 2 (373.194697ms)

-- stdout --
	Paused

-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p embed-certs-427272 -n embed-certs-427272
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Kubelet}} -p embed-certs-427272 -n embed-certs-427272: exit status 2 (339.469924ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 unpause -p embed-certs-427272 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p embed-certs-427272 -n embed-certs-427272
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p embed-certs-427272 -n embed-certs-427272
--- PASS: TestStartStop/group/embed-certs/serial/Pause (2.73s)

TestNetworkPlugins/group/custom-flannel/Start (47.31s)

=== RUN   TestNetworkPlugins/group/custom-flannel/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p custom-flannel-889068 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=testdata/kube-flannel.yaml --driver=docker  --container-runtime=docker
E0912 22:23:24.732820   12518 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19616-5723/.minikube/profiles/old-k8s-version-903670/client.crt: no such file or directory" logger="UnhandledError"
E0912 22:23:24.739196   12518 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19616-5723/.minikube/profiles/old-k8s-version-903670/client.crt: no such file or directory" logger="UnhandledError"
E0912 22:23:24.750599   12518 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19616-5723/.minikube/profiles/old-k8s-version-903670/client.crt: no such file or directory" logger="UnhandledError"
E0912 22:23:24.772034   12518 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19616-5723/.minikube/profiles/old-k8s-version-903670/client.crt: no such file or directory" logger="UnhandledError"
E0912 22:23:24.813452   12518 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19616-5723/.minikube/profiles/old-k8s-version-903670/client.crt: no such file or directory" logger="UnhandledError"
E0912 22:23:24.894876   12518 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19616-5723/.minikube/profiles/old-k8s-version-903670/client.crt: no such file or directory" logger="UnhandledError"
E0912 22:23:25.056779   12518 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19616-5723/.minikube/profiles/old-k8s-version-903670/client.crt: no such file or directory" logger="UnhandledError"
E0912 22:23:25.378269   12518 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19616-5723/.minikube/profiles/old-k8s-version-903670/client.crt: no such file or directory" logger="UnhandledError"
E0912 22:23:26.019708   12518 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19616-5723/.minikube/profiles/old-k8s-version-903670/client.crt: no such file or directory" logger="UnhandledError"
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p custom-flannel-889068 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=testdata/kube-flannel.yaml --driver=docker  --container-runtime=docker: (47.30675168s)
--- PASS: TestNetworkPlugins/group/custom-flannel/Start (47.31s)
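
Note: this run exercises a custom CNI by passing a flannel manifest to --cni. The start invocation, reformatted from the log for hand reproduction:

	out/minikube-linux-amd64 start -p custom-flannel-889068 --memory=3072 --alsologtostderr \
		--wait=true --wait-timeout=15m --cni=testdata/kube-flannel.yaml \
		--driver=docker --container-runtime=docker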

TestNetworkPlugins/group/kindnet/ControllerPod (6.01s)

=== RUN   TestNetworkPlugins/group/kindnet/ControllerPod
net_test.go:120: (dbg) TestNetworkPlugins/group/kindnet/ControllerPod: waiting 10m0s for pods matching "app=kindnet" in namespace "kube-system" ...
helpers_test.go:344: "kindnet-hjnkv" [c0b0ae5d-a89a-4271-9592-92b23f9fa449] Running
E0912 22:23:27.301772   12518 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19616-5723/.minikube/profiles/old-k8s-version-903670/client.crt: no such file or directory" logger="UnhandledError"
E0912 22:23:29.863544   12518 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19616-5723/.minikube/profiles/old-k8s-version-903670/client.crt: no such file or directory" logger="UnhandledError"
net_test.go:120: (dbg) TestNetworkPlugins/group/kindnet/ControllerPod: app=kindnet healthy within 6.003597471s
--- PASS: TestNetworkPlugins/group/kindnet/ControllerPod (6.01s)

TestNetworkPlugins/group/calico/ControllerPod (6.01s)

=== RUN   TestNetworkPlugins/group/calico/ControllerPod
net_test.go:120: (dbg) TestNetworkPlugins/group/calico/ControllerPod: waiting 10m0s for pods matching "k8s-app=calico-node" in namespace "kube-system" ...
helpers_test.go:344: "calico-node-k646n" [3e0da67e-017c-4736-b417-d71f6ad1b5aa] Running
net_test.go:120: (dbg) TestNetworkPlugins/group/calico/ControllerPod: k8s-app=calico-node healthy within 6.004598004s
--- PASS: TestNetworkPlugins/group/calico/ControllerPod (6.01s)

TestNetworkPlugins/group/kindnet/KubeletFlags (0.28s)

=== RUN   TestNetworkPlugins/group/kindnet/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p kindnet-889068 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/kindnet/KubeletFlags (0.28s)
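
Note: the KubeletFlags checks ssh into the node and list the running kubelet with its command-line flags; by hand:

	out/minikube-linux-amd64 ssh -p kindnet-889068 "pgrep -a kubelet"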

TestNetworkPlugins/group/kindnet/NetCatPod (9.19s)

=== RUN   TestNetworkPlugins/group/kindnet/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context kindnet-889068 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/kindnet/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-6fc964789b-kt6jh" [136e6436-0fe9-4a1b-a936-d452ee2482e1] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
E0912 22:23:34.985481   12518 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19616-5723/.minikube/profiles/old-k8s-version-903670/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:344: "netcat-6fc964789b-kt6jh" [136e6436-0fe9-4a1b-a936-d452ee2482e1] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/kindnet/NetCatPod: app=netcat healthy within 9.004524464s
--- PASS: TestNetworkPlugins/group/kindnet/NetCatPod (9.19s)
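
Note: each NetCatPod step force-replaces the probe deployment and waits for its pod to reach Running. A hand-run sketch; the -w watch is an assumed stand-in for the test's programmatic wait:

	kubectl --context kindnet-889068 replace --force -f testdata/netcat-deployment.yaml
	kubectl --context kindnet-889068 get pods -l app=netcat -w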

TestNetworkPlugins/group/calico/KubeletFlags (0.3s)

=== RUN   TestNetworkPlugins/group/calico/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p calico-889068 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/calico/KubeletFlags (0.30s)

TestNetworkPlugins/group/calico/NetCatPod (10.19s)

=== RUN   TestNetworkPlugins/group/calico/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context calico-889068 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/calico/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-6fc964789b-mcqrm" [54dd1b47-85df-4ffe-9da0-5a0313e102b0] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-6fc964789b-mcqrm" [54dd1b47-85df-4ffe-9da0-5a0313e102b0] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/calico/NetCatPod: app=netcat healthy within 10.004243868s
--- PASS: TestNetworkPlugins/group/calico/NetCatPod (10.19s)

TestNetworkPlugins/group/kindnet/DNS (0.15s)

=== RUN   TestNetworkPlugins/group/kindnet/DNS
net_test.go:175: (dbg) Run:  kubectl --context kindnet-889068 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/kindnet/DNS (0.15s)

TestNetworkPlugins/group/kindnet/Localhost (0.14s)

=== RUN   TestNetworkPlugins/group/kindnet/Localhost
net_test.go:194: (dbg) Run:  kubectl --context kindnet-889068 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/kindnet/Localhost (0.14s)

TestNetworkPlugins/group/kindnet/HairPin (0.13s)

=== RUN   TestNetworkPlugins/group/kindnet/HairPin
net_test.go:264: (dbg) Run:  kubectl --context kindnet-889068 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/kindnet/HairPin (0.13s)
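
Note: the DNS/Localhost/HairPin trio probes, from inside the netcat pod, cluster DNS, the pod's own localhost, and the pod's own service name ("netcat"). The last case is the hairpin test: traffic sent to the service VIP must loop back to the sending pod. The logged commands, runnable by hand:

	kubectl --context kindnet-889068 exec deployment/netcat -- nslookup kubernetes.default
	kubectl --context kindnet-889068 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
	kubectl --context kindnet-889068 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"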

TestNetworkPlugins/group/calico/DNS (0.14s)

=== RUN   TestNetworkPlugins/group/calico/DNS
net_test.go:175: (dbg) Run:  kubectl --context calico-889068 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/calico/DNS (0.14s)

TestNetworkPlugins/group/calico/Localhost (0.13s)

=== RUN   TestNetworkPlugins/group/calico/Localhost
net_test.go:194: (dbg) Run:  kubectl --context calico-889068 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/calico/Localhost (0.13s)

TestNetworkPlugins/group/calico/HairPin (0.12s)

=== RUN   TestNetworkPlugins/group/calico/HairPin
net_test.go:264: (dbg) Run:  kubectl --context calico-889068 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/calico/HairPin (0.12s)

TestNetworkPlugins/group/custom-flannel/KubeletFlags (0.26s)

=== RUN   TestNetworkPlugins/group/custom-flannel/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p custom-flannel-889068 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/custom-flannel/KubeletFlags (0.26s)

TestNetworkPlugins/group/custom-flannel/NetCatPod (9.21s)

=== RUN   TestNetworkPlugins/group/custom-flannel/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context custom-flannel-889068 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/custom-flannel/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-6fc964789b-cj7pk" [0bb7a468-5293-44fd-8d22-d6901f70698d] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-6fc964789b-cj7pk" [0bb7a468-5293-44fd-8d22-d6901f70698d] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/custom-flannel/NetCatPod: app=netcat healthy within 9.005024709s
--- PASS: TestNetworkPlugins/group/custom-flannel/NetCatPod (9.21s)

TestNetworkPlugins/group/custom-flannel/DNS (0.16s)

=== RUN   TestNetworkPlugins/group/custom-flannel/DNS
net_test.go:175: (dbg) Run:  kubectl --context custom-flannel-889068 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/custom-flannel/DNS (0.16s)

TestNetworkPlugins/group/custom-flannel/Localhost (0.14s)

=== RUN   TestNetworkPlugins/group/custom-flannel/Localhost
net_test.go:194: (dbg) Run:  kubectl --context custom-flannel-889068 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/custom-flannel/Localhost (0.14s)

TestNetworkPlugins/group/custom-flannel/HairPin (0.13s)

=== RUN   TestNetworkPlugins/group/custom-flannel/HairPin
net_test.go:264: (dbg) Run:  kubectl --context custom-flannel-889068 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/custom-flannel/HairPin (0.13s)

TestNetworkPlugins/group/false/Start (66.52s)

=== RUN   TestNetworkPlugins/group/false/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p false-889068 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=false --driver=docker  --container-runtime=docker
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p false-889068 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=false --driver=docker  --container-runtime=docker: (1m6.519452148s)
--- PASS: TestNetworkPlugins/group/false/Start (66.52s)
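
Note: --cni=false starts the cluster with no CNI plugin at all; as the false/* probes later in this log show, the basic connectivity checks still pass. The logged start command:

	out/minikube-linux-amd64 start -p false-889068 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=false --driver=docker --container-runtime=docker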

TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop (6.01s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop
start_stop_delete_test.go:274: (dbg) TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-695b96c756-zmw8c" [6b461867-a1b5-4823-9b3e-6641c9d21752] Running
E0912 22:24:05.709033   12518 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19616-5723/.minikube/profiles/old-k8s-version-903670/client.crt: no such file or directory" logger="UnhandledError"
start_stop_delete_test.go:274: (dbg) TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 6.004533897s
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop (6.01s)
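
Note: UserAppExistsAfterStop verifies that a user workload (the dashboard) comes back after the stop/start cycle by waiting on its pod label. An assumed hand-run equivalent of that wait:

	kubectl --context default-k8s-diff-port-447273 -n kubernetes-dashboard get pods -l k8s-app=kubernetes-dashboard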

TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop (6.09s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop
start_stop_delete_test.go:287: (dbg) TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-695b96c756-zmw8c" [6b461867-a1b5-4823-9b3e-6641c9d21752] Running
start_stop_delete_test.go:287: (dbg) TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 6.004528275s
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context default-k8s-diff-port-447273 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop (6.09s)

TestNetworkPlugins/group/enable-default-cni/Start (64.77s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p enable-default-cni-889068 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --enable-default-cni=true --driver=docker  --container-runtime=docker
E0912 22:24:10.647084   12518 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19616-5723/.minikube/profiles/functional-896535/client.crt: no such file or directory" logger="UnhandledError"
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p enable-default-cni-889068 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --enable-default-cni=true --driver=docker  --container-runtime=docker: (1m4.774724521s)
--- PASS: TestNetworkPlugins/group/enable-default-cni/Start (64.77s)

TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages (0.23s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-linux-amd64 -p default-k8s-diff-port-447273 image list --format=json
start_stop_delete_test.go:304: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages (0.23s)
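
Note: VerifyKubernetesImages lists the images loaded in the profile and reports anything outside the expected minikube set, here the leftover busybox test image. By hand:

	out/minikube-linux-amd64 -p default-k8s-diff-port-447273 image list --format=json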

TestStartStop/group/default-k8s-diff-port/serial/Pause (2.58s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 pause -p default-k8s-diff-port-447273 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p default-k8s-diff-port-447273 -n default-k8s-diff-port-447273
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p default-k8s-diff-port-447273 -n default-k8s-diff-port-447273: exit status 2 (324.027698ms)

-- stdout --
	Paused

-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p default-k8s-diff-port-447273 -n default-k8s-diff-port-447273
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Kubelet}} -p default-k8s-diff-port-447273 -n default-k8s-diff-port-447273: exit status 2 (303.839024ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 unpause -p default-k8s-diff-port-447273 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p default-k8s-diff-port-447273 -n default-k8s-diff-port-447273
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p default-k8s-diff-port-447273 -n default-k8s-diff-port-447273
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/Pause (2.58s)

TestNetworkPlugins/group/flannel/Start (46.97s)

=== RUN   TestNetworkPlugins/group/flannel/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p flannel-889068 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=flannel --driver=docker  --container-runtime=docker
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p flannel-889068 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=flannel --driver=docker  --container-runtime=docker: (46.973487613s)
--- PASS: TestNetworkPlugins/group/flannel/Start (46.97s)

TestNetworkPlugins/group/bridge/Start (41.56s)

=== RUN   TestNetworkPlugins/group/bridge/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p bridge-889068 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=bridge --driver=docker  --container-runtime=docker
E0912 22:24:46.670697   12518 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19616-5723/.minikube/profiles/old-k8s-version-903670/client.crt: no such file or directory" logger="UnhandledError"
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p bridge-889068 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=bridge --driver=docker  --container-runtime=docker: (41.560569583s)
--- PASS: TestNetworkPlugins/group/bridge/Start (41.56s)

TestNetworkPlugins/group/bridge/KubeletFlags (0.3s)

=== RUN   TestNetworkPlugins/group/bridge/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p bridge-889068 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/bridge/KubeletFlags (0.30s)

TestNetworkPlugins/group/bridge/NetCatPod (9.2s)

=== RUN   TestNetworkPlugins/group/bridge/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context bridge-889068 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/bridge/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-6fc964789b-gjh2r" [d44b61d7-408a-4d21-9d96-4c52d2fde3e7] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-6fc964789b-gjh2r" [d44b61d7-408a-4d21-9d96-4c52d2fde3e7] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/bridge/NetCatPod: app=netcat healthy within 9.004665742s
--- PASS: TestNetworkPlugins/group/bridge/NetCatPod (9.20s)

TestNetworkPlugins/group/flannel/ControllerPod (6.01s)

=== RUN   TestNetworkPlugins/group/flannel/ControllerPod
net_test.go:120: (dbg) TestNetworkPlugins/group/flannel/ControllerPod: waiting 10m0s for pods matching "app=flannel" in namespace "kube-flannel" ...
helpers_test.go:344: "kube-flannel-ds-r6c7g" [fce35fc0-198e-4700-8754-89cd1f615931] Running
net_test.go:120: (dbg) TestNetworkPlugins/group/flannel/ControllerPod: app=flannel healthy within 6.003782736s
--- PASS: TestNetworkPlugins/group/flannel/ControllerPod (6.01s)
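
Note: ControllerPod steps wait for the CNI's controller pod by label; for flannel that is app=flannel in the kube-flannel namespace. An assumed hand-run equivalent:

	kubectl --context flannel-889068 -n kube-flannel get pods -l app=flannel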

TestNetworkPlugins/group/false/KubeletFlags (0.25s)

=== RUN   TestNetworkPlugins/group/false/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p false-889068 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/false/KubeletFlags (0.25s)

TestNetworkPlugins/group/false/NetCatPod (9.17s)

=== RUN   TestNetworkPlugins/group/false/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context false-889068 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/false/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-6fc964789b-n9hzn" [43211881-feb6-4aea-a021-264ecca96ff1] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-6fc964789b-n9hzn" [43211881-feb6-4aea-a021-264ecca96ff1] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/false/NetCatPod: app=netcat healthy within 9.003413816s
--- PASS: TestNetworkPlugins/group/false/NetCatPod (9.17s)

TestNetworkPlugins/group/bridge/DNS (0.13s)

=== RUN   TestNetworkPlugins/group/bridge/DNS
net_test.go:175: (dbg) Run:  kubectl --context bridge-889068 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/bridge/DNS (0.13s)

TestNetworkPlugins/group/bridge/Localhost (0.11s)

=== RUN   TestNetworkPlugins/group/bridge/Localhost
net_test.go:194: (dbg) Run:  kubectl --context bridge-889068 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/bridge/Localhost (0.11s)

TestNetworkPlugins/group/bridge/HairPin (0.11s)

=== RUN   TestNetworkPlugins/group/bridge/HairPin
net_test.go:264: (dbg) Run:  kubectl --context bridge-889068 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/bridge/HairPin (0.11s)

TestNetworkPlugins/group/flannel/KubeletFlags (0.25s)

=== RUN   TestNetworkPlugins/group/flannel/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p flannel-889068 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/flannel/KubeletFlags (0.25s)

TestNetworkPlugins/group/flannel/NetCatPod (9.19s)

=== RUN   TestNetworkPlugins/group/flannel/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context flannel-889068 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/flannel/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-6fc964789b-jrvc7" [454b8356-85cd-4efe-87ca-b0c64379166e] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-6fc964789b-jrvc7" [454b8356-85cd-4efe-87ca-b0c64379166e] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/flannel/NetCatPod: app=netcat healthy within 9.005051358s
--- PASS: TestNetworkPlugins/group/flannel/NetCatPod (9.19s)

TestNetworkPlugins/group/enable-default-cni/KubeletFlags (0.27s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p enable-default-cni-889068 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/enable-default-cni/KubeletFlags (0.27s)

TestNetworkPlugins/group/enable-default-cni/NetCatPod (9.22s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context enable-default-cni-889068 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/enable-default-cni/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-6fc964789b-c4wmq" [2dfb66ef-e820-4e4a-88f6-a7f657c243d3] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-6fc964789b-c4wmq" [2dfb66ef-e820-4e4a-88f6-a7f657c243d3] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/enable-default-cni/NetCatPod: app=netcat healthy within 9.004780633s
--- PASS: TestNetworkPlugins/group/enable-default-cni/NetCatPod (9.22s)

TestNetworkPlugins/group/false/DNS (0.15s)

=== RUN   TestNetworkPlugins/group/false/DNS
net_test.go:175: (dbg) Run:  kubectl --context false-889068 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/false/DNS (0.15s)

TestNetworkPlugins/group/false/Localhost (0.11s)

=== RUN   TestNetworkPlugins/group/false/Localhost
net_test.go:194: (dbg) Run:  kubectl --context false-889068 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/false/Localhost (0.11s)

TestNetworkPlugins/group/false/HairPin (0.13s)

=== RUN   TestNetworkPlugins/group/false/HairPin
net_test.go:264: (dbg) Run:  kubectl --context false-889068 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/false/HairPin (0.13s)

TestNetworkPlugins/group/flannel/DNS (0.13s)

=== RUN   TestNetworkPlugins/group/flannel/DNS
net_test.go:175: (dbg) Run:  kubectl --context flannel-889068 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/flannel/DNS (0.13s)

TestNetworkPlugins/group/flannel/Localhost (0.14s)

=== RUN   TestNetworkPlugins/group/flannel/Localhost
net_test.go:194: (dbg) Run:  kubectl --context flannel-889068 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/flannel/Localhost (0.14s)

TestNetworkPlugins/group/flannel/HairPin (0.11s)

=== RUN   TestNetworkPlugins/group/flannel/HairPin
net_test.go:264: (dbg) Run:  kubectl --context flannel-889068 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/flannel/HairPin (0.11s)

TestNetworkPlugins/group/enable-default-cni/DNS (0.16s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/DNS
net_test.go:175: (dbg) Run:  kubectl --context enable-default-cni-889068 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/enable-default-cni/DNS (0.16s)

TestNetworkPlugins/group/enable-default-cni/Localhost (0.13s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/Localhost
net_test.go:194: (dbg) Run:  kubectl --context enable-default-cni-889068 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/enable-default-cni/Localhost (0.13s)

TestNetworkPlugins/group/enable-default-cni/HairPin (0.14s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/HairPin
net_test.go:264: (dbg) Run:  kubectl --context enable-default-cni-889068 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/enable-default-cni/HairPin (0.14s)

TestNetworkPlugins/group/kubenet/Start (40.71s)

=== RUN   TestNetworkPlugins/group/kubenet/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p kubenet-889068 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --network-plugin=kubenet --driver=docker  --container-runtime=docker
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p kubenet-889068 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --network-plugin=kubenet --driver=docker  --container-runtime=docker: (40.708848239s)
--- PASS: TestNetworkPlugins/group/kubenet/Start (40.71s)
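
Note: kubenet is selected with --network-plugin=kubenet rather than --cni, since kubenet is a kubelet network plugin, not a CNI manifest. The logged start command:

	out/minikube-linux-amd64 start -p kubenet-889068 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --network-plugin=kubenet --driver=docker --container-runtime=docker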

TestNetworkPlugins/group/kubenet/KubeletFlags (0.25s)

=== RUN   TestNetworkPlugins/group/kubenet/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p kubenet-889068 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/kubenet/KubeletFlags (0.25s)

TestNetworkPlugins/group/kubenet/NetCatPod (10.17s)

=== RUN   TestNetworkPlugins/group/kubenet/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context kubenet-889068 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/kubenet/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-6fc964789b-298vj" [2aeeca4c-26e5-4073-a1b8-644da923ae0d] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-6fc964789b-298vj" [2aeeca4c-26e5-4073-a1b8-644da923ae0d] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/kubenet/NetCatPod: app=netcat healthy within 10.003876743s
--- PASS: TestNetworkPlugins/group/kubenet/NetCatPod (10.17s)

TestNetworkPlugins/group/kubenet/DNS (0.12s)

=== RUN   TestNetworkPlugins/group/kubenet/DNS
net_test.go:175: (dbg) Run:  kubectl --context kubenet-889068 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/kubenet/DNS (0.12s)

TestNetworkPlugins/group/kubenet/Localhost (0.1s)

=== RUN   TestNetworkPlugins/group/kubenet/Localhost
net_test.go:194: (dbg) Run:  kubectl --context kubenet-889068 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/kubenet/Localhost (0.10s)

TestNetworkPlugins/group/kubenet/HairPin (0.11s)

=== RUN   TestNetworkPlugins/group/kubenet/HairPin
net_test.go:264: (dbg) Run:  kubectl --context kubenet-889068 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/kubenet/HairPin (0.11s)

Test skip (20/343)

TestDownloadOnly/v1.20.0/cached-images (0s)

=== RUN   TestDownloadOnly/v1.20.0/cached-images
aaa_download_only_test.go:129: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.20.0/cached-images (0.00s)

TestDownloadOnly/v1.20.0/binaries (0s)

=== RUN   TestDownloadOnly/v1.20.0/binaries
aaa_download_only_test.go:151: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.20.0/binaries (0.00s)

TestDownloadOnly/v1.20.0/kubectl (0s)

=== RUN   TestDownloadOnly/v1.20.0/kubectl
aaa_download_only_test.go:167: Test for darwin and windows
--- SKIP: TestDownloadOnly/v1.20.0/kubectl (0.00s)

TestDownloadOnly/v1.31.1/cached-images (0s)

=== RUN   TestDownloadOnly/v1.31.1/cached-images
aaa_download_only_test.go:129: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.31.1/cached-images (0.00s)

TestDownloadOnly/v1.31.1/binaries (0s)

=== RUN   TestDownloadOnly/v1.31.1/binaries
aaa_download_only_test.go:151: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.31.1/binaries (0.00s)

TestDownloadOnly/v1.31.1/kubectl (0s)

=== RUN   TestDownloadOnly/v1.31.1/kubectl
aaa_download_only_test.go:167: Test for darwin and windows
--- SKIP: TestDownloadOnly/v1.31.1/kubectl (0.00s)

TestAddons/parallel/Olm (0s)

=== RUN   TestAddons/parallel/Olm
=== PAUSE TestAddons/parallel/Olm

=== CONT  TestAddons/parallel/Olm
addons_test.go:500: Skipping OLM addon test until https://github.com/operator-framework/operator-lifecycle-manager/issues/2534 is resolved
--- SKIP: TestAddons/parallel/Olm (0.00s)

TestDockerEnvContainerd (0s)

=== RUN   TestDockerEnvContainerd
docker_test.go:170: running with docker true linux amd64
docker_test.go:172: skipping: TestDockerEnvContainerd can only be run with the containerd runtime on Docker driver
--- SKIP: TestDockerEnvContainerd (0.00s)

TestHyperKitDriverInstallOrUpdate (0s)

=== RUN   TestHyperKitDriverInstallOrUpdate
driver_install_or_update_test.go:105: Skip if not darwin.
--- SKIP: TestHyperKitDriverInstallOrUpdate (0.00s)

TestHyperkitDriverSkipUpgrade (0s)

=== RUN   TestHyperkitDriverSkipUpgrade
driver_install_or_update_test.go:169: Skip if not darwin.
--- SKIP: TestHyperkitDriverSkipUpgrade (0.00s)

TestFunctional/parallel/PodmanEnv (0s)

=== RUN   TestFunctional/parallel/PodmanEnv
=== PAUSE TestFunctional/parallel/PodmanEnv

=== CONT  TestFunctional/parallel/PodmanEnv
functional_test.go:550: only validate podman env with docker container runtime, currently testing docker
--- SKIP: TestFunctional/parallel/PodmanEnv (0.00s)

TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig (0s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig
functional_test_tunnel_test.go:99: DNS forwarding is only supported for Hyperkit on Darwin, skipping test DNS forwarding
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig (0.00s)

TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil
functional_test_tunnel_test.go:99: DNS forwarding is only supported for Hyperkit on Darwin, skipping test DNS forwarding
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0.00s)

TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS (0s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS
functional_test_tunnel_test.go:99: DNS forwarding is only supported for Hyperkit on Darwin, skipping test DNS forwarding
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS (0.00s)

TestGvisorAddon (0s)

=== RUN   TestGvisorAddon
gvisor_addon_test.go:34: skipping test because --gvisor=false
--- SKIP: TestGvisorAddon (0.00s)

TestImageBuild/serial/validateImageBuildWithBuildEnv (0s)

=== RUN   TestImageBuild/serial/validateImageBuildWithBuildEnv
image_test.go:114: skipping due to https://github.com/kubernetes/minikube/issues/12431
--- SKIP: TestImageBuild/serial/validateImageBuildWithBuildEnv (0.00s)

TestChangeNoneUser (0s)

=== RUN   TestChangeNoneUser
none_test.go:38: Test requires none driver and SUDO_USER env to not be empty
--- SKIP: TestChangeNoneUser (0.00s)

TestScheduledStopWindows (0s)

=== RUN   TestScheduledStopWindows
scheduled_stop_test.go:42: test only runs on windows
--- SKIP: TestScheduledStopWindows (0.00s)

TestStartStop/group/disable-driver-mounts (0.19s)

=== RUN   TestStartStop/group/disable-driver-mounts
=== PAUSE TestStartStop/group/disable-driver-mounts

=== CONT  TestStartStop/group/disable-driver-mounts
start_stop_delete_test.go:103: skipping TestStartStop/group/disable-driver-mounts - only runs on virtualbox
helpers_test.go:175: Cleaning up "disable-driver-mounts-363264" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p disable-driver-mounts-363264
--- SKIP: TestStartStop/group/disable-driver-mounts (0.19s)

TestNetworkPlugins/group/cilium (3.6s)

=== RUN   TestNetworkPlugins/group/cilium
net_test.go:102: Skipping the test as it's interfering with other tests and is outdated
panic.go:626: 
----------------------- debugLogs start: cilium-889068 [pass: true] --------------------------------
>>> netcat: nslookup kubernetes.default:
Error in configuration: context was not found for specified context: cilium-889068

                                                
                                                

                                                
                                                
>>> netcat: nslookup debug kubernetes.default a-records:
Error in configuration: context was not found for specified context: cilium-889068

                                                
                                                

                                                
                                                
>>> netcat: dig search kubernetes.default:
Error in configuration: context was not found for specified context: cilium-889068

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local udp/53:
Error in configuration: context was not found for specified context: cilium-889068

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local tcp/53:
Error in configuration: context was not found for specified context: cilium-889068

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 udp/53:
Error in configuration: context was not found for specified context: cilium-889068

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 tcp/53:
Error in configuration: context was not found for specified context: cilium-889068

                                                
                                                

                                                
                                                
>>> netcat: /etc/nsswitch.conf:
Error in configuration: context was not found for specified context: cilium-889068

                                                
                                                

                                                
                                                
>>> netcat: /etc/hosts:
Error in configuration: context was not found for specified context: cilium-889068

                                                
                                                

                                                
                                                
>>> netcat: /etc/resolv.conf:
Error in configuration: context was not found for specified context: cilium-889068

                                                
                                                

                                                
                                                
>>> host: /etc/nsswitch.conf:
* Profile "cilium-889068" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-889068"

                                                
                                                

                                                
                                                
>>> host: /etc/hosts:
* Profile "cilium-889068" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-889068"

                                                
                                                

                                                
                                                
>>> host: /etc/resolv.conf:
* Profile "cilium-889068" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-889068"

                                                
                                                

                                                
                                                
>>> k8s: nodes, services, endpoints, daemon sets, deployments and pods, :
Error in configuration: context was not found for specified context: cilium-889068

                                                
                                                

                                                
                                                
>>> host: crictl pods:
* Profile "cilium-889068" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-889068"

                                                
                                                

                                                
                                                
>>> host: crictl containers:
* Profile "cilium-889068" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-889068"

                                                
                                                

                                                
                                                
>>> k8s: describe netcat deployment:
error: context "cilium-889068" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe netcat pod(s):
error: context "cilium-889068" does not exist

                                                
                                                

                                                
                                                
>>> k8s: netcat logs:
error: context "cilium-889068" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe coredns deployment:
error: context "cilium-889068" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe coredns pods:
error: context "cilium-889068" does not exist

                                                
                                                

                                                
                                                
>>> k8s: coredns logs:
error: context "cilium-889068" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe api server pod(s):
error: context "cilium-889068" does not exist

                                                
                                                

                                                
                                                
>>> k8s: api server logs:
error: context "cilium-889068" does not exist

                                                
                                                

                                                
                                                
>>> host: /etc/cni:
* Profile "cilium-889068" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-889068"

                                                
                                                

                                                
                                                
>>> host: ip a s:
* Profile "cilium-889068" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-889068"

                                                
                                                

                                                
                                                
>>> host: ip r s:
* Profile "cilium-889068" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-889068"

                                                
                                                

                                                
                                                
>>> host: iptables-save:
* Profile "cilium-889068" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-889068"
>>> host: iptables table nat:
* Profile "cilium-889068" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-889068"
>>> k8s: describe cilium daemon set:
Error in configuration: context was not found for specified context: cilium-889068
>>> k8s: describe cilium daemon set pod(s):
Error in configuration: context was not found for specified context: cilium-889068
>>> k8s: cilium daemon set container(s) logs (current):
error: context "cilium-889068" does not exist
>>> k8s: cilium daemon set container(s) logs (previous):
error: context "cilium-889068" does not exist
>>> k8s: describe cilium deployment:
Error in configuration: context was not found for specified context: cilium-889068
>>> k8s: describe cilium deployment pod(s):
Error in configuration: context was not found for specified context: cilium-889068
>>> k8s: cilium deployment container(s) logs (current):
error: context "cilium-889068" does not exist
>>> k8s: cilium deployment container(s) logs (previous):
error: context "cilium-889068" does not exist
>>> k8s: describe kube-proxy daemon set:
error: context "cilium-889068" does not exist
>>> k8s: describe kube-proxy pod(s):
error: context "cilium-889068" does not exist
>>> k8s: kube-proxy logs:
error: context "cilium-889068" does not exist
>>> host: kubelet daemon status:
* Profile "cilium-889068" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-889068"
>>> host: kubelet daemon config:
* Profile "cilium-889068" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-889068"
>>> k8s: kubelet logs:
* Profile "cilium-889068" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-889068"
>>> host: /etc/kubernetes/kubelet.conf:
* Profile "cilium-889068" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-889068"
>>> host: /var/lib/kubelet/config.yaml:
* Profile "cilium-889068" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-889068"
>>> k8s: kubectl config:
apiVersion: v1
clusters:
- cluster:
    certificate-authority: /home/jenkins/minikube-integration/19616-5723/.minikube/ca.crt
    extensions:
    - extension:
        last-update: Thu, 12 Sep 2024 22:13:31 UTC
        provider: minikube.sigs.k8s.io
        version: v1.34.0
      name: cluster_info
    server: https://192.168.85.2:8443
  name: kubernetes-upgrade-662986
- cluster:
    certificate-authority: /home/jenkins/minikube-integration/19616-5723/.minikube/ca.crt
    extensions:
    - extension:
        last-update: Thu, 12 Sep 2024 22:15:34 UTC
        provider: minikube.sigs.k8s.io
        version: v1.34.0
      name: cluster_info
    server: https://192.168.103.2:8443
  name: pause-707332
contexts:
- context:
    cluster: kubernetes-upgrade-662986
    user: kubernetes-upgrade-662986
  name: kubernetes-upgrade-662986
- context:
    cluster: pause-707332
    extensions:
    - extension:
        last-update: Thu, 12 Sep 2024 22:15:34 UTC
        provider: minikube.sigs.k8s.io
        version: v1.34.0
      name: context_info
    namespace: default
    user: pause-707332
  name: pause-707332
current-context: ""
kind: Config
preferences: {}
users:
- name: kubernetes-upgrade-662986
  user:
    client-certificate: /home/jenkins/minikube-integration/19616-5723/.minikube/profiles/kubernetes-upgrade-662986/client.crt
    client-key: /home/jenkins/minikube-integration/19616-5723/.minikube/profiles/kubernetes-upgrade-662986/client.key
- name: pause-707332
  user:
    client-certificate: /home/jenkins/minikube-integration/19616-5723/.minikube/profiles/pause-707332/client.crt
    client-key: /home/jenkins/minikube-integration/19616-5723/.minikube/profiles/pause-707332/client.key
>>> k8s: cms:
Error in configuration: context was not found for specified context: cilium-889068
>>> host: docker daemon status:
* Profile "cilium-889068" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-889068"
>>> host: docker daemon config:
* Profile "cilium-889068" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-889068"
>>> host: /etc/docker/daemon.json:
* Profile "cilium-889068" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-889068"
>>> host: docker system info:
* Profile "cilium-889068" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-889068"
>>> host: cri-docker daemon status:
* Profile "cilium-889068" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-889068"
>>> host: cri-docker daemon config:
* Profile "cilium-889068" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-889068"
>>> host: /etc/systemd/system/cri-docker.service.d/10-cni.conf:
* Profile "cilium-889068" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-889068"
>>> host: /usr/lib/systemd/system/cri-docker.service:
* Profile "cilium-889068" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-889068"
>>> host: cri-dockerd version:
* Profile "cilium-889068" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-889068"
>>> host: containerd daemon status:
* Profile "cilium-889068" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-889068"
>>> host: containerd daemon config:
* Profile "cilium-889068" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-889068"
>>> host: /lib/systemd/system/containerd.service:
* Profile "cilium-889068" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-889068"
>>> host: /etc/containerd/config.toml:
* Profile "cilium-889068" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-889068"
>>> host: containerd config dump:
* Profile "cilium-889068" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-889068"
>>> host: crio daemon status:
* Profile "cilium-889068" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-889068"
>>> host: crio daemon config:
* Profile "cilium-889068" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-889068"
>>> host: /etc/crio:
* Profile "cilium-889068" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-889068"
>>> host: crio config:
* Profile "cilium-889068" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-889068"
----------------------- debugLogs end: cilium-889068 [took: 3.425170971s] --------------------------------
helpers_test.go:175: Cleaning up "cilium-889068" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p cilium-889068
--- SKIP: TestNetworkPlugins/group/cilium (3.60s)