Test Report: Docker_Linux 19648

584241d6059a856bd6609ebe9456581adc627cea:2024-09-17:36253

Tests failed (1/343)

|-------|------------------------------|--------------|
| Order |         Failed test          | Duration (s) |
|-------|------------------------------|--------------|
|    33 | TestAddons/parallel/Registry |         73.4 |
|-------|------------------------------|--------------|
TestAddons/parallel/Registry (73.4s)

=== RUN   TestAddons/parallel/Registry
=== PAUSE TestAddons/parallel/Registry

=== CONT  TestAddons/parallel/Registry
addons_test.go:332: registry stabilized in 1.868167ms
addons_test.go:334: (dbg) TestAddons/parallel/Registry: waiting 6m0s for pods matching "actual-registry=true" in namespace "kube-system" ...
helpers_test.go:344: "registry-66c9cd494c-t5sv4" [2f41b6f7-f293-467f-8215-b24af50ec8ba] Running
addons_test.go:334: (dbg) TestAddons/parallel/Registry: actual-registry=true healthy within 6.002866149s
addons_test.go:337: (dbg) TestAddons/parallel/Registry: waiting 10m0s for pods matching "registry-proxy=true" in namespace "kube-system" ...
helpers_test.go:344: "registry-proxy-z9ss9" [29edb9a3-341b-486a-8045-5546e8911d8c] Running
addons_test.go:337: (dbg) TestAddons/parallel/Registry: registry-proxy=true healthy within 5.002559219s
addons_test.go:342: (dbg) Run:  kubectl --context addons-118348 delete po -l run=registry-test --now
addons_test.go:347: (dbg) Run:  kubectl --context addons-118348 run --rm registry-test --restart=Never --image=gcr.io/k8s-minikube/busybox -it -- sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local"
addons_test.go:347: (dbg) Non-zero exit: kubectl --context addons-118348 run --rm registry-test --restart=Never --image=gcr.io/k8s-minikube/busybox -it -- sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local": exit status 1 (1m0.070828696s)

-- stdout --
	pod "registry-test" deleted

-- /stdout --
** stderr ** 
	error: timed out waiting for the condition

** /stderr **
addons_test.go:349: failed to hit registry.kube-system.svc.cluster.local. args "kubectl --context addons-118348 run --rm registry-test --restart=Never --image=gcr.io/k8s-minikube/busybox -it -- sh -c \"wget --spider -S http://registry.kube-system.svc.cluster.local\"" failed: exit status 1
addons_test.go:353: expected curl response be "HTTP/1.1 200", but got *pod "registry-test" deleted
*
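The failure, then, is the in-cluster probe: the busybox pod's `wget --spider` never got an answer from the registry Service, so the test saw kubectl's timeout instead of the expected "HTTP/1.1 200". Below is a minimal Go sketch of the same check, for illustration only; it has to run inside a pod, where the Service DNS name resolves.

```go
// Hedged sketch of the in-cluster registry probe that the busybox pod
// performs with `wget --spider -S`. Run from inside a pod; the Service
// DNS name below only resolves in-cluster.
package main

import (
	"fmt"
	"log"
	"net/http"
	"time"
)

func main() {
	client := &http.Client{Timeout: 30 * time.Second}

	resp, err := client.Get("http://registry.kube-system.svc.cluster.local")
	if err != nil {
		// This is where the failing run hung until kubectl's timeout fired.
		log.Fatalf("registry not reachable: %v", err)
	}
	defer resp.Body.Close()

	// addons_test.go asserts the response line is "HTTP/1.1 200".
	fmt.Println(resp.Proto, resp.Status)
	if resp.StatusCode != http.StatusOK {
		log.Fatalf("expected 200 OK, got %s", resp.Status)
	}
}
```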
addons_test.go:361: (dbg) Run:  out/minikube-linux-amd64 -p addons-118348 ip
2024/09/17 08:51:37 [DEBUG] GET http://192.168.49.2:5000
addons_test.go:390: (dbg) Run:  out/minikube-linux-amd64 -p addons-118348 addons disable registry --alsologtostderr -v=1
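The [DEBUG] GET above is the complementary outside-the-cluster check: it hits the node IP on the registry addon's port 5000 directly, which separates "registry pod down" from "in-cluster DNS/Service broken". A minimal sketch of that host-side probe, with the address taken from the log:

```go
// Hedged sketch: probe the registry through the node IP, as the harness's
// [DEBUG] GET does. Address copied from the log above.
package main

import (
	"fmt"
	"net"
	"net/http"
	"time"
)

func main() {
	const nodeAddr = "192.168.49.2:5000" // `minikube ip` + registry addon port

	// Raw TCP reachability first: failure here points at the registry or
	// the registry-proxy hostPort rather than at cluster DNS.
	conn, err := net.DialTimeout("tcp", nodeAddr, 5*time.Second)
	if err != nil {
		fmt.Println("TCP dial failed:", err)
		return
	}
	conn.Close()

	// Then the same plain HTTP GET that appears in the report.
	resp, err := http.Get("http://" + nodeAddr)
	if err != nil {
		fmt.Println("HTTP GET failed:", err)
		return
	}
	resp.Body.Close()
	fmt.Println("registry answered:", resp.Status)
}
```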
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestAddons/parallel/Registry]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect addons-118348
helpers_test.go:235: (dbg) docker inspect addons-118348:

-- stdout --
	[
	    {
	        "Id": "446a733f1f3a5dd9a1b27181ad91f934e43da3a3c6e2c831d7491d51c849bb20",
	        "Created": "2024-09-17T08:38:35.768048295Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 16890,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2024-09-17T08:38:35.895667879Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:20d492278eed428d119466f58713403332b5d2ac1db7c6863f797e2406f2b671",
	        "ResolvConfPath": "/var/lib/docker/containers/446a733f1f3a5dd9a1b27181ad91f934e43da3a3c6e2c831d7491d51c849bb20/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/446a733f1f3a5dd9a1b27181ad91f934e43da3a3c6e2c831d7491d51c849bb20/hostname",
	        "HostsPath": "/var/lib/docker/containers/446a733f1f3a5dd9a1b27181ad91f934e43da3a3c6e2c831d7491d51c849bb20/hosts",
	        "LogPath": "/var/lib/docker/containers/446a733f1f3a5dd9a1b27181ad91f934e43da3a3c6e2c831d7491d51c849bb20/446a733f1f3a5dd9a1b27181ad91f934e43da3a3c6e2c831d7491d51c849bb20-json.log",
	        "Name": "/addons-118348",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "addons-118348:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {
	                    "max-size": "100m"
	                }
	            },
	            "NetworkMode": "addons-118348",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 4194304000,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 8388608000,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "LowerDir": "/var/lib/docker/overlay2/11013b4f49cb49fefe3bf53f86efce26167ad1c783a7d916f52cc097b212e611-init/diff:/var/lib/docker/overlay2/7da256a43b3639c4f92f439ecfea8165b0571eba2633ca08d3d0447ef408406e/diff",
	                "MergedDir": "/var/lib/docker/overlay2/11013b4f49cb49fefe3bf53f86efce26167ad1c783a7d916f52cc097b212e611/merged",
	                "UpperDir": "/var/lib/docker/overlay2/11013b4f49cb49fefe3bf53f86efce26167ad1c783a7d916f52cc097b212e611/diff",
	                "WorkDir": "/var/lib/docker/overlay2/11013b4f49cb49fefe3bf53f86efce26167ad1c783a7d916f52cc097b212e611/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "addons-118348",
	                "Source": "/var/lib/docker/volumes/addons-118348/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "addons-118348",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726358845-19644@sha256:4c67a32a16c2d4f824f00267c172fd225757ca75441e363d925dc9583137f0b0",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "addons-118348",
	                "name.minikube.sigs.k8s.io": "addons-118348",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "b41e9b99330a4fce7d668f0abcfb9cadff52dc936b35b5a72804519a3a3abdf8",
	            "SandboxKey": "/var/run/docker/netns/b41e9b99330a",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32768"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32769"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32772"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32770"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32771"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "addons-118348": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.49.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "02:42:c0:a8:31:02",
	                    "DriverOpts": null,
	                    "NetworkID": "309e07f873c6e539134ea98773c9a158e538a4844e94270780ae606901223d6d",
	                    "EndpointID": "2455298c01d72a47ee361ac85b347908cd7a74c9e987885310b37ff2d8893fab",
	                    "Gateway": "192.168.49.1",
	                    "IPAddress": "192.168.49.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "addons-118348",
	                        "446a733f1f3a"
	                    ]
	                }
	            }
	        }
	    }
	]

-- /stdout --
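The inspect output above is also where the published host ports live (NetworkSettings.Ports), which the minikube logs further down read back with `docker container inspect -f` Go templates. For illustration, the same lookup can be done by decoding the JSON directly; a sketch that models only the fields used here:

```go
// Hedged sketch: list published host ports from `docker inspect` output,
// i.e. the NetworkSettings.Ports structure shown above. Reads JSON on stdin:
//   docker inspect addons-118348 | go run inspectports.go
package main

import (
	"encoding/json"
	"fmt"
	"log"
	"os"
)

// Only the fields we need; docker inspect emits a JSON array of containers.
type container struct {
	NetworkSettings struct {
		Ports map[string][]struct {
			HostIp   string
			HostPort string
		}
	}
}

func main() {
	var containers []container
	if err := json.NewDecoder(os.Stdin).Decode(&containers); err != nil {
		log.Fatal(err)
	}
	for _, c := range containers {
		for proto, bindings := range c.NetworkSettings.Ports {
			for _, b := range bindings {
				fmt.Printf("%s -> %s:%s\n", proto, b.HostIp, b.HostPort)
			}
		}
	}
}
```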
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p addons-118348 -n addons-118348
helpers_test.go:244: <<< TestAddons/parallel/Registry FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestAddons/parallel/Registry]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p addons-118348 logs -n 25
helpers_test.go:252: TestAddons/parallel/Registry logs: 
-- stdout --
	
	==> Audit <==
	|---------|---------------------------------------------------------------------------------------------|------------------------|---------|---------|---------------------|---------------------|
	| Command |                                            Args                                             |        Profile         |  User   | Version |     Start Time      |      End Time       |
	|---------|---------------------------------------------------------------------------------------------|------------------------|---------|---------|---------------------|---------------------|
	| delete  | -p download-docker-450334                                                                   | download-docker-450334 | jenkins | v1.34.0 | 17 Sep 24 08:38 UTC | 17 Sep 24 08:38 UTC |
	| start   | --download-only -p                                                                          | binary-mirror-437165   | jenkins | v1.34.0 | 17 Sep 24 08:38 UTC |                     |
	|         | binary-mirror-437165                                                                        |                        |         |         |                     |                     |
	|         | --alsologtostderr                                                                           |                        |         |         |                     |                     |
	|         | --binary-mirror                                                                             |                        |         |         |                     |                     |
	|         | http://127.0.0.1:39083                                                                      |                        |         |         |                     |                     |
	|         | --driver=docker                                                                             |                        |         |         |                     |                     |
	|         | --container-runtime=docker                                                                  |                        |         |         |                     |                     |
	| delete  | -p binary-mirror-437165                                                                     | binary-mirror-437165   | jenkins | v1.34.0 | 17 Sep 24 08:38 UTC | 17 Sep 24 08:38 UTC |
	| addons  | disable dashboard -p                                                                        | addons-118348          | jenkins | v1.34.0 | 17 Sep 24 08:38 UTC |                     |
	|         | addons-118348                                                                               |                        |         |         |                     |                     |
	| addons  | enable dashboard -p                                                                         | addons-118348          | jenkins | v1.34.0 | 17 Sep 24 08:38 UTC |                     |
	|         | addons-118348                                                                               |                        |         |         |                     |                     |
	| start   | -p addons-118348 --wait=true                                                                | addons-118348          | jenkins | v1.34.0 | 17 Sep 24 08:38 UTC | 17 Sep 24 08:41 UTC |
	|         | --memory=4000 --alsologtostderr                                                             |                        |         |         |                     |                     |
	|         | --addons=registry                                                                           |                        |         |         |                     |                     |
	|         | --addons=metrics-server                                                                     |                        |         |         |                     |                     |
	|         | --addons=volumesnapshots                                                                    |                        |         |         |                     |                     |
	|         | --addons=csi-hostpath-driver                                                                |                        |         |         |                     |                     |
	|         | --addons=gcp-auth                                                                           |                        |         |         |                     |                     |
	|         | --addons=cloud-spanner                                                                      |                        |         |         |                     |                     |
	|         | --addons=inspektor-gadget                                                                   |                        |         |         |                     |                     |
	|         | --addons=storage-provisioner-rancher                                                        |                        |         |         |                     |                     |
	|         | --addons=nvidia-device-plugin                                                               |                        |         |         |                     |                     |
	|         | --addons=yakd --addons=volcano                                                              |                        |         |         |                     |                     |
	|         | --driver=docker                                                                             |                        |         |         |                     |                     |
	|         | --container-runtime=docker                                                                  |                        |         |         |                     |                     |
	|         | --addons=ingress                                                                            |                        |         |         |                     |                     |
	|         | --addons=ingress-dns                                                                        |                        |         |         |                     |                     |
	|         | --addons=helm-tiller                                                                        |                        |         |         |                     |                     |
	| addons  | addons-118348 addons disable                                                                | addons-118348          | jenkins | v1.34.0 | 17 Sep 24 08:42 UTC | 17 Sep 24 08:42 UTC |
	|         | volcano --alsologtostderr -v=1                                                              |                        |         |         |                     |                     |
	| addons  | disable inspektor-gadget -p                                                                 | addons-118348          | jenkins | v1.34.0 | 17 Sep 24 08:50 UTC | 17 Sep 24 08:50 UTC |
	|         | addons-118348                                                                               |                        |         |         |                     |                     |
	| addons  | addons-118348 addons                                                                        | addons-118348          | jenkins | v1.34.0 | 17 Sep 24 08:50 UTC | 17 Sep 24 08:50 UTC |
	|         | disable metrics-server                                                                      |                        |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                        |         |         |                     |                     |
	| addons  | addons-118348 addons disable                                                                | addons-118348          | jenkins | v1.34.0 | 17 Sep 24 08:50 UTC | 17 Sep 24 08:50 UTC |
	|         | helm-tiller --alsologtostderr                                                               |                        |         |         |                     |                     |
	|         | -v=1                                                                                        |                        |         |         |                     |                     |
	| ssh     | addons-118348 ssh cat                                                                       | addons-118348          | jenkins | v1.34.0 | 17 Sep 24 08:50 UTC | 17 Sep 24 08:50 UTC |
	|         | /opt/local-path-provisioner/pvc-55d397ea-86e9-4f5a-ae73-814393eaf4d2_default_test-pvc/file1 |                        |         |         |                     |                     |
	| addons  | addons-118348 addons disable                                                                | addons-118348          | jenkins | v1.34.0 | 17 Sep 24 08:50 UTC | 17 Sep 24 08:51 UTC |
	|         | storage-provisioner-rancher                                                                 |                        |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                        |         |         |                     |                     |
	| addons  | addons-118348 addons disable                                                                | addons-118348          | jenkins | v1.34.0 | 17 Sep 24 08:50 UTC | 17 Sep 24 08:50 UTC |
	|         | yakd --alsologtostderr -v=1                                                                 |                        |         |         |                     |                     |
	| addons  | disable nvidia-device-plugin                                                                | addons-118348          | jenkins | v1.34.0 | 17 Sep 24 08:50 UTC | 17 Sep 24 08:50 UTC |
	|         | -p addons-118348                                                                            |                        |         |         |                     |                     |
	| addons  | addons-118348 addons                                                                        | addons-118348          | jenkins | v1.34.0 | 17 Sep 24 08:51 UTC | 17 Sep 24 08:51 UTC |
	|         | disable csi-hostpath-driver                                                                 |                        |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                        |         |         |                     |                     |
	| ssh     | addons-118348 ssh curl -s                                                                   | addons-118348          | jenkins | v1.34.0 | 17 Sep 24 08:51 UTC | 17 Sep 24 08:51 UTC |
	|         | http://127.0.0.1/ -H 'Host:                                                                 |                        |         |         |                     |                     |
	|         | nginx.example.com'                                                                          |                        |         |         |                     |                     |
	| ip      | addons-118348 ip                                                                            | addons-118348          | jenkins | v1.34.0 | 17 Sep 24 08:51 UTC | 17 Sep 24 08:51 UTC |
	| addons  | addons-118348 addons                                                                        | addons-118348          | jenkins | v1.34.0 | 17 Sep 24 08:51 UTC | 17 Sep 24 08:51 UTC |
	|         | disable volumesnapshots                                                                     |                        |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                        |         |         |                     |                     |
	| addons  | addons-118348 addons disable                                                                | addons-118348          | jenkins | v1.34.0 | 17 Sep 24 08:51 UTC | 17 Sep 24 08:51 UTC |
	|         | ingress-dns --alsologtostderr                                                               |                        |         |         |                     |                     |
	|         | -v=1                                                                                        |                        |         |         |                     |                     |
	| addons  | addons-118348 addons disable                                                                | addons-118348          | jenkins | v1.34.0 | 17 Sep 24 08:51 UTC | 17 Sep 24 08:51 UTC |
	|         | ingress --alsologtostderr -v=1                                                              |                        |         |         |                     |                     |
	| addons  | disable cloud-spanner -p                                                                    | addons-118348          | jenkins | v1.34.0 | 17 Sep 24 08:51 UTC | 17 Sep 24 08:51 UTC |
	|         | addons-118348                                                                               |                        |         |         |                     |                     |
	| addons  | enable headlamp                                                                             | addons-118348          | jenkins | v1.34.0 | 17 Sep 24 08:51 UTC | 17 Sep 24 08:51 UTC |
	|         | -p addons-118348                                                                            |                        |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                        |         |         |                     |                     |
	| addons  | addons-118348 addons disable                                                                | addons-118348          | jenkins | v1.34.0 | 17 Sep 24 08:51 UTC | 17 Sep 24 08:51 UTC |
	|         | headlamp --alsologtostderr                                                                  |                        |         |         |                     |                     |
	|         | -v=1                                                                                        |                        |         |         |                     |                     |
	| ip      | addons-118348 ip                                                                            | addons-118348          | jenkins | v1.34.0 | 17 Sep 24 08:51 UTC | 17 Sep 24 08:51 UTC |
	| addons  | addons-118348 addons disable                                                                | addons-118348          | jenkins | v1.34.0 | 17 Sep 24 08:51 UTC | 17 Sep 24 08:51 UTC |
	|         | registry --alsologtostderr                                                                  |                        |         |         |                     |                     |
	|         | -v=1                                                                                        |                        |         |         |                     |                     |
	|---------|---------------------------------------------------------------------------------------------|------------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/09/17 08:38:12
	Running on machine: ubuntu-20-agent-14
	Binary: Built with gc go1.23.0 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0917 08:38:12.306438   16153 out.go:345] Setting OutFile to fd 1 ...
	I0917 08:38:12.306560   16153 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0917 08:38:12.306575   16153 out.go:358] Setting ErrFile to fd 2...
	I0917 08:38:12.306580   16153 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0917 08:38:12.306768   16153 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19648-8091/.minikube/bin
	I0917 08:38:12.307427   16153 out.go:352] Setting JSON to false
	I0917 08:38:12.308259   16153 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-14","uptime":1243,"bootTime":1726561049,"procs":173,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1069-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0917 08:38:12.308349   16153 start.go:139] virtualization: kvm guest
	I0917 08:38:12.310565   16153 out.go:177] * [addons-118348] minikube v1.34.0 on Ubuntu 20.04 (kvm/amd64)
	I0917 08:38:12.312108   16153 out.go:177]   - MINIKUBE_LOCATION=19648
	I0917 08:38:12.312112   16153 notify.go:220] Checking for updates...
	I0917 08:38:12.314632   16153 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0917 08:38:12.315863   16153 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19648-8091/kubeconfig
	I0917 08:38:12.317139   16153 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19648-8091/.minikube
	I0917 08:38:12.318165   16153 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0917 08:38:12.319245   16153 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0917 08:38:12.320401   16153 driver.go:394] Setting default libvirt URI to qemu:///system
	I0917 08:38:12.342410   16153 docker.go:123] docker version: linux-27.2.1:Docker Engine - Community
	I0917 08:38:12.342525   16153 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0917 08:38:12.390925   16153 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:26 OomKillDisable:true NGoroutines:45 SystemTime:2024-09-17 08:38:12.381794766 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1069-gcp OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:x86
_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33647939584 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-14 Labels:[] ExperimentalBuild:false ServerVersion:27.2.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:7f7fdf5fed64eb6a7caf99b3e12efcf9d60e311c Expected:7f7fdf5fed64eb6a7caf99b3e12efcf9d60e311c} RuncCommit:{ID:v1.1.14-0-g2c9f560 Expected:v1.1.14-0-g2c9f560} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerEr
rors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.16.2] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.29.2] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I0917 08:38:12.391047   16153 docker.go:318] overlay module found
	I0917 08:38:12.392959   16153 out.go:177] * Using the docker driver based on user configuration
	I0917 08:38:12.394095   16153 start.go:297] selected driver: docker
	I0917 08:38:12.394106   16153 start.go:901] validating driver "docker" against <nil>
	I0917 08:38:12.394117   16153 start.go:912] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0917 08:38:12.394855   16153 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0917 08:38:12.441823   16153 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:26 OomKillDisable:true NGoroutines:45 SystemTime:2024-09-17 08:38:12.432994306 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1069-gcp OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:x86
_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33647939584 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-14 Labels:[] ExperimentalBuild:false ServerVersion:27.2.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:7f7fdf5fed64eb6a7caf99b3e12efcf9d60e311c Expected:7f7fdf5fed64eb6a7caf99b3e12efcf9d60e311c} RuncCommit:{ID:v1.1.14-0-g2c9f560 Expected:v1.1.14-0-g2c9f560} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerEr
rors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.16.2] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.29.2] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I0917 08:38:12.441969   16153 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0917 08:38:12.442210   16153 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0917 08:38:12.443921   16153 out.go:177] * Using Docker driver with root privileges
	I0917 08:38:12.445517   16153 cni.go:84] Creating CNI manager for ""
	I0917 08:38:12.445586   16153 cni.go:158] "docker" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0917 08:38:12.445599   16153 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0917 08:38:12.445675   16153 start.go:340] cluster config:
	{Name:addons-118348 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726358845-19644@sha256:4c67a32a16c2d4f824f00267c172fd225757ca75441e363d925dc9583137f0b0 Memory:4000 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:addons-118348 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime
:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock:
SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0917 08:38:12.447372   16153 out.go:177] * Starting "addons-118348" primary control-plane node in "addons-118348" cluster
	I0917 08:38:12.448900   16153 cache.go:121] Beginning downloading kic base image for docker with docker
	I0917 08:38:12.450200   16153 out.go:177] * Pulling base image v0.0.45-1726358845-19644 ...
	I0917 08:38:12.451247   16153 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime docker
	I0917 08:38:12.451287   16153 image.go:79] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726358845-19644@sha256:4c67a32a16c2d4f824f00267c172fd225757ca75441e363d925dc9583137f0b0 in local docker daemon
	I0917 08:38:12.451293   16153 preload.go:146] Found local preload: /home/jenkins/minikube-integration/19648-8091/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-docker-overlay2-amd64.tar.lz4
	I0917 08:38:12.451311   16153 cache.go:56] Caching tarball of preloaded images
	I0917 08:38:12.451404   16153 preload.go:172] Found /home/jenkins/minikube-integration/19648-8091/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I0917 08:38:12.451416   16153 cache.go:59] Finished verifying existence of preloaded tar for v1.31.1 on docker
	I0917 08:38:12.451733   16153 profile.go:143] Saving config to /home/jenkins/minikube-integration/19648-8091/.minikube/profiles/addons-118348/config.json ...
	I0917 08:38:12.451757   16153 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19648-8091/.minikube/profiles/addons-118348/config.json: {Name:mk154b312e5e7a9628d069a344fd855bd4470df3 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0917 08:38:12.468389   16153 cache.go:149] Downloading gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726358845-19644@sha256:4c67a32a16c2d4f824f00267c172fd225757ca75441e363d925dc9583137f0b0 to local cache
	I0917 08:38:12.468512   16153 image.go:63] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726358845-19644@sha256:4c67a32a16c2d4f824f00267c172fd225757ca75441e363d925dc9583137f0b0 in local cache directory
	I0917 08:38:12.468528   16153 image.go:66] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726358845-19644@sha256:4c67a32a16c2d4f824f00267c172fd225757ca75441e363d925dc9583137f0b0 in local cache directory, skipping pull
	I0917 08:38:12.468533   16153 image.go:135] gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726358845-19644@sha256:4c67a32a16c2d4f824f00267c172fd225757ca75441e363d925dc9583137f0b0 exists in cache, skipping pull
	I0917 08:38:12.468543   16153 cache.go:152] successfully saved gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726358845-19644@sha256:4c67a32a16c2d4f824f00267c172fd225757ca75441e363d925dc9583137f0b0 as a tarball
	I0917 08:38:12.468550   16153 cache.go:162] Loading gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726358845-19644@sha256:4c67a32a16c2d4f824f00267c172fd225757ca75441e363d925dc9583137f0b0 from local cache
	I0917 08:38:24.257565   16153 cache.go:164] successfully loaded and using gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726358845-19644@sha256:4c67a32a16c2d4f824f00267c172fd225757ca75441e363d925dc9583137f0b0 from cached tarball
	I0917 08:38:24.257614   16153 cache.go:194] Successfully downloaded all kic artifacts
	I0917 08:38:24.257655   16153 start.go:360] acquireMachinesLock for addons-118348: {Name:mk092a2f95e180c254fe3c2f3a6c594e1014ed24 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0917 08:38:24.257751   16153 start.go:364] duration metric: took 78.173µs to acquireMachinesLock for "addons-118348"
	I0917 08:38:24.257773   16153 start.go:93] Provisioning new machine with config: &{Name:addons-118348 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726358845-19644@sha256:4c67a32a16c2d4f824f00267c172fd225757ca75441e363d925dc9583137f0b0 Memory:4000 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:addons-118348 Namespace:default APIServerHAVIP: APIServerName:min
ikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false Cust
omQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0917 08:38:24.257850   16153 start.go:125] createHost starting for "" (driver="docker")
	I0917 08:38:24.259806   16153 out.go:235] * Creating docker container (CPUs=2, Memory=4000MB) ...
	I0917 08:38:24.260072   16153 start.go:159] libmachine.API.Create for "addons-118348" (driver="docker")
	I0917 08:38:24.260103   16153 client.go:168] LocalClient.Create starting
	I0917 08:38:24.260229   16153 main.go:141] libmachine: Creating CA: /home/jenkins/minikube-integration/19648-8091/.minikube/certs/ca.pem
	I0917 08:38:24.371715   16153 main.go:141] libmachine: Creating client certificate: /home/jenkins/minikube-integration/19648-8091/.minikube/certs/cert.pem
	I0917 08:38:24.469429   16153 cli_runner.go:164] Run: docker network inspect addons-118348 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W0917 08:38:24.485011   16153 cli_runner.go:211] docker network inspect addons-118348 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I0917 08:38:24.485077   16153 network_create.go:284] running [docker network inspect addons-118348] to gather additional debugging logs...
	I0917 08:38:24.485097   16153 cli_runner.go:164] Run: docker network inspect addons-118348
	W0917 08:38:24.501204   16153 cli_runner.go:211] docker network inspect addons-118348 returned with exit code 1
	I0917 08:38:24.501259   16153 network_create.go:287] error running [docker network inspect addons-118348]: docker network inspect addons-118348: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network addons-118348 not found
	I0917 08:38:24.501279   16153 network_create.go:289] output of [docker network inspect addons-118348]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network addons-118348 not found
	
	** /stderr **
	I0917 08:38:24.501395   16153 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0917 08:38:24.517794   16153 network.go:206] using free private subnet 192.168.49.0/24: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc001a9e8e0}
	I0917 08:38:24.517836   16153 network_create.go:124] attempt to create docker network addons-118348 192.168.49.0/24 with gateway 192.168.49.1 and MTU of 1500 ...
	I0917 08:38:24.517882   16153 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.49.0/24 --gateway=192.168.49.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=addons-118348 addons-118348
	I0917 08:38:24.579581   16153 network_create.go:108] docker network addons-118348 192.168.49.0/24 created
	I0917 08:38:24.579615   16153 kic.go:121] calculated static IP "192.168.49.2" for the "addons-118348" container
	I0917 08:38:24.579678   16153 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I0917 08:38:24.595318   16153 cli_runner.go:164] Run: docker volume create addons-118348 --label name.minikube.sigs.k8s.io=addons-118348 --label created_by.minikube.sigs.k8s.io=true
	I0917 08:38:24.613336   16153 oci.go:103] Successfully created a docker volume addons-118348
	I0917 08:38:24.613403   16153 cli_runner.go:164] Run: docker run --rm --name addons-118348-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=addons-118348 --entrypoint /usr/bin/test -v addons-118348:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726358845-19644@sha256:4c67a32a16c2d4f824f00267c172fd225757ca75441e363d925dc9583137f0b0 -d /var/lib
	I0917 08:38:31.794593   16153 cli_runner.go:217] Completed: docker run --rm --name addons-118348-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=addons-118348 --entrypoint /usr/bin/test -v addons-118348:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726358845-19644@sha256:4c67a32a16c2d4f824f00267c172fd225757ca75441e363d925dc9583137f0b0 -d /var/lib: (7.181153302s)
	I0917 08:38:31.794624   16153 oci.go:107] Successfully prepared a docker volume addons-118348
	I0917 08:38:31.794649   16153 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime docker
	I0917 08:38:31.794672   16153 kic.go:194] Starting extracting preloaded images to volume ...
	I0917 08:38:31.794743   16153 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/19648-8091/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-docker-overlay2-amd64.tar.lz4:/preloaded.tar:ro -v addons-118348:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726358845-19644@sha256:4c67a32a16c2d4f824f00267c172fd225757ca75441e363d925dc9583137f0b0 -I lz4 -xf /preloaded.tar -C /extractDir
	I0917 08:38:35.705599   16153 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/19648-8091/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-docker-overlay2-amd64.tar.lz4:/preloaded.tar:ro -v addons-118348:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726358845-19644@sha256:4c67a32a16c2d4f824f00267c172fd225757ca75441e363d925dc9583137f0b0 -I lz4 -xf /preloaded.tar -C /extractDir: (3.910803356s)
	I0917 08:38:35.705630   16153 kic.go:203] duration metric: took 3.91095417s to extract preloaded images to volume ...
	W0917 08:38:35.705757   16153 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	I0917 08:38:35.705848   16153 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I0917 08:38:35.753901   16153 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname addons-118348 --name addons-118348 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=addons-118348 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=addons-118348 --network addons-118348 --ip 192.168.49.2 --volume addons-118348:/var --security-opt apparmor=unconfined --memory=4000mb --cpus=2 -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726358845-19644@sha256:4c67a32a16c2d4f824f00267c172fd225757ca75441e363d925dc9583137f0b0
	I0917 08:38:36.078035   16153 cli_runner.go:164] Run: docker container inspect addons-118348 --format={{.State.Running}}
	I0917 08:38:36.095751   16153 cli_runner.go:164] Run: docker container inspect addons-118348 --format={{.State.Status}}
	I0917 08:38:36.114738   16153 cli_runner.go:164] Run: docker exec addons-118348 stat /var/lib/dpkg/alternatives/iptables
	I0917 08:38:36.157132   16153 oci.go:144] the created container "addons-118348" has a running status.
	I0917 08:38:36.157159   16153 kic.go:225] Creating ssh key for kic: /home/jenkins/minikube-integration/19648-8091/.minikube/machines/addons-118348/id_rsa...
	I0917 08:38:36.263732   16153 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/19648-8091/.minikube/machines/addons-118348/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I0917 08:38:36.283327   16153 cli_runner.go:164] Run: docker container inspect addons-118348 --format={{.State.Status}}
	I0917 08:38:36.300181   16153 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I0917 08:38:36.300205   16153 kic_runner.go:114] Args: [docker exec --privileged addons-118348 chown docker:docker /home/docker/.ssh/authorized_keys]
	I0917 08:38:36.345923   16153 cli_runner.go:164] Run: docker container inspect addons-118348 --format={{.State.Status}}
	I0917 08:38:36.368316   16153 machine.go:93] provisionDockerMachine start ...
	I0917 08:38:36.368412   16153 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-118348
	I0917 08:38:36.387467   16153 main.go:141] libmachine: Using SSH client type: native
	I0917 08:38:36.387725   16153 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x86c560] 0x86f240 <nil>  [] 0s} 127.0.0.1 32768 <nil> <nil>}
	I0917 08:38:36.387739   16153 main.go:141] libmachine: About to run SSH command:
	hostname
	I0917 08:38:36.388456   16153 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:48316->127.0.0.1:32768: read: connection reset by peer
	I0917 08:38:39.515816   16153 main.go:141] libmachine: SSH cmd err, output: <nil>: addons-118348
	
	I0917 08:38:39.515843   16153 ubuntu.go:169] provisioning hostname "addons-118348"
	I0917 08:38:39.515895   16153 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-118348
	I0917 08:38:39.533039   16153 main.go:141] libmachine: Using SSH client type: native
	I0917 08:38:39.533222   16153 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x86c560] 0x86f240 <nil>  [] 0s} 127.0.0.1 32768 <nil> <nil>}
	I0917 08:38:39.533234   16153 main.go:141] libmachine: About to run SSH command:
	sudo hostname addons-118348 && echo "addons-118348" | sudo tee /etc/hostname
	I0917 08:38:39.674459   16153 main.go:141] libmachine: SSH cmd err, output: <nil>: addons-118348
	
	I0917 08:38:39.674536   16153 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-118348
	I0917 08:38:39.691347   16153 main.go:141] libmachine: Using SSH client type: native
	I0917 08:38:39.691516   16153 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x86c560] 0x86f240 <nil>  [] 0s} 127.0.0.1 32768 <nil> <nil>}
	I0917 08:38:39.691533   16153 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\saddons-118348' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 addons-118348/g' /etc/hosts;
				else 
					echo '127.0.1.1 addons-118348' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0917 08:38:39.820463   16153 main.go:141] libmachine: SSH cmd err, output: <nil>: 
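The SSH command above applies the standard Debian/Ubuntu convention of pointing the machine's hostname at 127.0.1.1, replacing any existing 127.0.1.1 entry rather than appending a duplicate. A simplified Go sketch of the same edit (hypothetical helper, and looser matching than the whole-line `grep -x` used above):

// hostspatch.go — a minimal, hypothetical equivalent of the shell snippet above.
// It operates on a string so it can be tested without root.
package main

import (
	"fmt"
	"regexp"
	"strings"
)

var hostEntryRe = regexp.MustCompile(`^127\.0\.1\.1\s.*$`)

func ensureHostname(hosts, name string) string {
	if strings.Contains(hosts, name) {
		return hosts // hostname already present (simplified check)
	}
	lines := strings.Split(hosts, "\n")
	for i, l := range lines {
		if hostEntryRe.MatchString(l) {
			lines[i] = "127.0.1.1 " + name // replace the existing entry
			return strings.Join(lines, "\n")
		}
	}
	return hosts + "\n127.0.1.1 " + name + "\n" // append a new entry
}

func main() {
	fmt.Print(ensureHostname("127.0.0.1 localhost\n", "addons-118348"))
}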
	I0917 08:38:39.820489   16153 ubuntu.go:175] set auth options {CertDir:/home/jenkins/minikube-integration/19648-8091/.minikube CaCertPath:/home/jenkins/minikube-integration/19648-8091/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19648-8091/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19648-8091/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19648-8091/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19648-8091/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19648-8091/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19648-8091/.minikube}
	I0917 08:38:39.820518   16153 ubuntu.go:177] setting up certificates
	I0917 08:38:39.820527   16153 provision.go:84] configureAuth start
	I0917 08:38:39.820576   16153 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" addons-118348
	I0917 08:38:39.837257   16153 provision.go:143] copyHostCerts
	I0917 08:38:39.837344   16153 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19648-8091/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19648-8091/.minikube/ca.pem (1082 bytes)
	I0917 08:38:39.837475   16153 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19648-8091/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19648-8091/.minikube/cert.pem (1123 bytes)
	I0917 08:38:39.837542   16153 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19648-8091/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19648-8091/.minikube/key.pem (1679 bytes)
	I0917 08:38:39.837597   16153 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19648-8091/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19648-8091/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19648-8091/.minikube/certs/ca-key.pem org=jenkins.addons-118348 san=[127.0.0.1 192.168.49.2 addons-118348 localhost minikube]
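The server certificate generated above must carry every name a client might use to reach the Docker daemon, hence the SAN list of IPs (127.0.0.1, 192.168.49.2) and DNS names (addons-118348, localhost, minikube). A minimal Go sketch of issuing such a SAN certificate with the standard library; unlike the log, it self-signs for brevity where minikube signs with its CA key:

// sancert.go — illustrative only, not minikube's provision code.
package main

import (
	"crypto/rand"
	"crypto/rsa"
	"crypto/x509"
	"crypto/x509/pkix"
	"encoding/pem"
	"math/big"
	"net"
	"os"
	"time"
)

func main() {
	key, err := rsa.GenerateKey(rand.Reader, 2048)
	if err != nil {
		panic(err)
	}
	tmpl := &x509.Certificate{
		SerialNumber: big.NewInt(1),
		Subject:      pkix.Name{Organization: []string{"jenkins.addons-118348"}},
		NotBefore:    time.Now(),
		NotAfter:     time.Now().Add(24 * time.Hour),
		KeyUsage:     x509.KeyUsageKeyEncipherment | x509.KeyUsageDigitalSignature,
		ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
		// SANs mirroring the log line: IPs plus DNS names.
		IPAddresses: []net.IP{net.ParseIP("127.0.0.1"), net.ParseIP("192.168.49.2")},
		DNSNames:    []string{"addons-118348", "localhost", "minikube"},
	}
	der, err := x509.CreateCertificate(rand.Reader, tmpl, tmpl, &key.PublicKey, key)
	if err != nil {
		panic(err)
	}
	pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: der})
}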
	I0917 08:38:39.937274   16153 provision.go:177] copyRemoteCerts
	I0917 08:38:39.937328   16153 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0917 08:38:39.937383   16153 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-118348
	I0917 08:38:39.954152   16153 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/19648-8091/.minikube/machines/addons-118348/id_rsa Username:docker}
	I0917 08:38:40.048990   16153 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19648-8091/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0917 08:38:40.070119   16153 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19648-8091/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I0917 08:38:40.091179   16153 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19648-8091/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0917 08:38:40.112372   16153 provision.go:87] duration metric: took 291.834318ms to configureAuth
	I0917 08:38:40.112399   16153 ubuntu.go:193] setting minikube options for container-runtime
	I0917 08:38:40.112586   16153 config.go:182] Loaded profile config "addons-118348": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I0917 08:38:40.112636   16153 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-118348
	I0917 08:38:40.129243   16153 main.go:141] libmachine: Using SSH client type: native
	I0917 08:38:40.129433   16153 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x86c560] 0x86f240 <nil>  [] 0s} 127.0.0.1 32768 <nil> <nil>}
	I0917 08:38:40.129449   16153 main.go:141] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I0917 08:38:40.260919   16153 main.go:141] libmachine: SSH cmd err, output: <nil>: overlay
	
	I0917 08:38:40.260943   16153 ubuntu.go:71] root file system type: overlay
	I0917 08:38:40.261042   16153 provision.go:314] Updating docker unit: /lib/systemd/system/docker.service ...
	I0917 08:38:40.261108   16153 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-118348
	I0917 08:38:40.278136   16153 main.go:141] libmachine: Using SSH client type: native
	I0917 08:38:40.278326   16153 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x86c560] 0x86f240 <nil>  [] 0s} 127.0.0.1 32768 <nil> <nil>}
	I0917 08:38:40.278414   16153 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %s "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	BindsTo=containerd.service
	After=network-online.target firewalld.service containerd.service
	Wants=network-online.target
	Requires=docker.socket
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I0917 08:38:40.422782   16153 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	BindsTo=containerd.service
	After=network-online.target firewalld.service containerd.service
	Wants=network-online.target
	Requires=docker.socket
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I0917 08:38:40.422874   16153 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-118348
	I0917 08:38:40.439499   16153 main.go:141] libmachine: Using SSH client type: native
	I0917 08:38:40.439699   16153 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x86c560] 0x86f240 <nil>  [] 0s} 127.0.0.1 32768 <nil> <nil>}
	I0917 08:38:40.439725   16153 main.go:141] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I0917 08:38:41.117254   16153 main.go:141] libmachine: SSH cmd err, output: <nil>: --- /lib/systemd/system/docker.service	2024-09-06 12:06:41.000000000 +0000
	+++ /lib/systemd/system/docker.service.new	2024-09-17 08:38:40.418272262 +0000
	@@ -1,46 +1,49 @@
	 [Unit]
	 Description=Docker Application Container Engine
	 Documentation=https://docs.docker.com
	-After=network-online.target docker.socket firewalld.service containerd.service time-set.target
	-Wants=network-online.target containerd.service
	+BindsTo=containerd.service
	+After=network-online.target firewalld.service containerd.service
	+Wants=network-online.target
	 Requires=docker.socket
	+StartLimitBurst=3
	+StartLimitIntervalSec=60
	 
	 [Service]
	 Type=notify
	-# the default is not to use systemd for cgroups because the delegate issues still
	-# exists and systemd currently does not support the cgroup feature set required
	-# for containers run by docker
	-ExecStart=/usr/bin/dockerd -H fd:// --containerd=/run/containerd/containerd.sock
	-ExecReload=/bin/kill -s HUP $MAINPID
	-TimeoutStartSec=0
	-RestartSec=2
	-Restart=always
	+Restart=on-failure
	 
	-# Note that StartLimit* options were moved from "Service" to "Unit" in systemd 229.
	-# Both the old, and new location are accepted by systemd 229 and up, so using the old location
	-# to make them work for either version of systemd.
	-StartLimitBurst=3
	 
	-# Note that StartLimitInterval was renamed to StartLimitIntervalSec in systemd 230.
	-# Both the old, and new name are accepted by systemd 230 and up, so using the old name to make
	-# this option work for either version of systemd.
	-StartLimitInterval=60s
	+
	+# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	+# The base configuration already specifies an 'ExecStart=...' command. The first directive
	+# here is to clear out that command inherited from the base configuration. Without this,
	+# the command from the base configuration and the command specified here are treated as
	+# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	+# will catch this invalid input and refuse to start the service with an error like:
	+#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	+
	+# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	+# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	+ExecStart=
	+ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	+ExecReload=/bin/kill -s HUP $MAINPID
	 
	 # Having non-zero Limit*s causes performance problems due to accounting overhead
	 # in the kernel. We recommend using cgroups to do container-local accounting.
	+LimitNOFILE=infinity
	 LimitNPROC=infinity
	 LimitCORE=infinity
	 
	-# Comment TasksMax if your systemd version does not support it.
	-# Only systemd 226 and above support this option.
	+# Uncomment TasksMax if your systemd version supports it.
	+# Only systemd 226 and above support this version.
	 TasksMax=infinity
	+TimeoutStartSec=0
	 
	 # set delegate yes so that systemd does not reset the cgroups of docker containers
	 Delegate=yes
	 
	 # kill only the docker process, not all processes in the cgroup
	 KillMode=process
	-OOMScoreAdjust=-500
	 
	 [Install]
	 WantedBy=multi-user.target
	Synchronizing state of docker.service with SysV service script with /lib/systemd/systemd-sysv-install.
	Executing: /lib/systemd/systemd-sysv-install enable docker
	
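The `diff -u ... || { mv ...; systemctl ...; }` command above is an idempotence guard: the new unit file is only installed, and Docker only restarted, when the rendered content differs from what is already on disk. A minimal Go sketch of the same write-if-changed pattern (hypothetical helper; the unit content is a placeholder):

// replaceifchanged.go — a minimal sketch of the idempotent update shown above.
package main

import (
	"bytes"
	"fmt"
	"os"
	"os/exec"
)

// replaceIfChanged writes newContent to path only when it differs.
func replaceIfChanged(path string, newContent []byte) (bool, error) {
	old, err := os.ReadFile(path)
	if err == nil && bytes.Equal(old, newContent) {
		return false, nil // identical: nothing to do
	}
	if err := os.WriteFile(path, newContent, 0o644); err != nil {
		return false, err
	}
	return true, nil
}

func main() {
	changed, err := replaceIfChanged("/lib/systemd/system/docker.service",
		[]byte("...new unit...\n")) // placeholder content for illustration
	if err != nil {
		fmt.Println(err)
		return
	}
	if changed {
		// Reload units and restart docker, as the SSH command above does.
		exec.Command("systemctl", "daemon-reload").Run()
		exec.Command("systemctl", "restart", "docker").Run()
	}
}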
	I0917 08:38:41.117287   16153 machine.go:96] duration metric: took 4.748944653s to provisionDockerMachine
	I0917 08:38:41.117299   16153 client.go:171] duration metric: took 16.857187768s to LocalClient.Create
	I0917 08:38:41.117314   16153 start.go:167] duration metric: took 16.857244012s to libmachine.API.Create "addons-118348"
	I0917 08:38:41.117322   16153 start.go:293] postStartSetup for "addons-118348" (driver="docker")
	I0917 08:38:41.117332   16153 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0917 08:38:41.117389   16153 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0917 08:38:41.117427   16153 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-118348
	I0917 08:38:41.133609   16153 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/19648-8091/.minikube/machines/addons-118348/id_rsa Username:docker}
	I0917 08:38:41.228941   16153 ssh_runner.go:195] Run: cat /etc/os-release
	I0917 08:38:41.231827   16153 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I0917 08:38:41.231854   16153 main.go:141] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
	I0917 08:38:41.231862   16153 main.go:141] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
	I0917 08:38:41.231868   16153 info.go:137] Remote host: Ubuntu 22.04.4 LTS
	I0917 08:38:41.231878   16153 filesync.go:126] Scanning /home/jenkins/minikube-integration/19648-8091/.minikube/addons for local assets ...
	I0917 08:38:41.231936   16153 filesync.go:126] Scanning /home/jenkins/minikube-integration/19648-8091/.minikube/files for local assets ...
	I0917 08:38:41.231958   16153 start.go:296] duration metric: took 114.630368ms for postStartSetup
	I0917 08:38:41.232229   16153 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" addons-118348
	I0917 08:38:41.248320   16153 profile.go:143] Saving config to /home/jenkins/minikube-integration/19648-8091/.minikube/profiles/addons-118348/config.json ...
	I0917 08:38:41.248642   16153 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0917 08:38:41.248737   16153 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-118348
	I0917 08:38:41.264623   16153 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/19648-8091/.minikube/machines/addons-118348/id_rsa Username:docker}
	I0917 08:38:41.353099   16153 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0917 08:38:41.356911   16153 start.go:128] duration metric: took 17.099047972s to createHost
	I0917 08:38:41.356931   16153 start.go:83] releasing machines lock for "addons-118348", held for 17.099169362s
	I0917 08:38:41.356981   16153 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" addons-118348
	I0917 08:38:41.373005   16153 ssh_runner.go:195] Run: cat /version.json
	I0917 08:38:41.373050   16153 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-118348
	I0917 08:38:41.373101   16153 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0917 08:38:41.373174   16153 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-118348
	I0917 08:38:41.390001   16153 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/19648-8091/.minikube/machines/addons-118348/id_rsa Username:docker}
	I0917 08:38:41.390261   16153 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/19648-8091/.minikube/machines/addons-118348/id_rsa Username:docker}
	I0917 08:38:41.480120   16153 ssh_runner.go:195] Run: systemctl --version
	I0917 08:38:41.572448   16153 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I0917 08:38:41.576513   16153 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *loopback.conf* -not -name *.mk_disabled -exec sh -c "grep -q loopback {} && ( grep -q name {} || sudo sed -i '/"type": "loopback"/i \ \ \ \ "name": "loopback",' {} ) && sudo sed -i 's|"cniVersion": ".*"|"cniVersion": "1.0.0"|g' {}" ;
	I0917 08:38:41.598233   16153 cni.go:230] loopback cni configuration patched: "/etc/cni/net.d/*loopback.conf*" found
	I0917 08:38:41.598292   16153 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0917 08:38:41.623399   16153 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist, /etc/cni/net.d/100-crio-bridge.conf] bridge cni config(s)
	I0917 08:38:41.623421   16153 start.go:495] detecting cgroup driver to use...
	I0917 08:38:41.623451   16153 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I0917 08:38:41.623552   16153 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0917 08:38:41.637945   16153 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.10"|' /etc/containerd/config.toml"
	I0917 08:38:41.646767   16153 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0917 08:38:41.655342   16153 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I0917 08:38:41.655406   16153 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I0917 08:38:41.663993   16153 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0917 08:38:41.672564   16153 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0917 08:38:41.681274   16153 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0917 08:38:41.689756   16153 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0917 08:38:41.697773   16153 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I0917 08:38:41.706361   16153 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I0917 08:38:41.715393   16153 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
	I0917 08:38:41.724141   16153 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0917 08:38:41.731534   16153 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0917 08:38:41.738995   16153 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0917 08:38:41.812796   16153 ssh_runner.go:195] Run: sudo systemctl restart containerd
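The run of `sed -i` edits above rewrites /etc/containerd/config.toml in place: pause image, OOM-score restriction, cgroup driver (`SystemdCgroup = false` to match the detected cgroupfs driver), runc runtime type, and CNI conf dir, followed by a daemon-reload and containerd restart. A minimal Go sketch of one of those edits, the cgroup-driver flip, using a regexp that preserves indentation the way the sed expression does:

// cgrouppatch.go — a minimal sketch of the SystemdCgroup sed edit above.
package main

import (
	"fmt"
	"regexp"
)

var systemdCgroup = regexp.MustCompile(`(?m)^( *)SystemdCgroup = .*$`)

// useCgroupfs forces SystemdCgroup = false, keeping each line's indentation.
func useCgroupfs(configTOML string) string {
	return systemdCgroup.ReplaceAllString(configTOML, "${1}SystemdCgroup = false")
}

func main() {
	in := "  [plugins.\"io.containerd.grpc.v1.cri\".containerd.runtimes.runc.options]\n    SystemdCgroup = true\n"
	fmt.Print(useCgroupfs(in))
}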
	I0917 08:38:41.899219   16153 start.go:495] detecting cgroup driver to use...
	I0917 08:38:41.899268   16153 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I0917 08:38:41.899327   16153 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I0917 08:38:41.910462   16153 cruntime.go:279] skipping containerd shutdown because we are bound to it
	I0917 08:38:41.910520   16153 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0917 08:38:41.920463   16153 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0917 08:38:41.934444   16153 ssh_runner.go:195] Run: which cri-dockerd
	I0917 08:38:41.937542   16153 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I0917 08:38:41.945753   16153 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (190 bytes)
	I0917 08:38:41.961435   16153 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I0917 08:38:42.044624   16153 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I0917 08:38:42.141996   16153 docker.go:574] configuring docker to use "cgroupfs" as cgroup driver...
	I0917 08:38:42.142133   16153 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (130 bytes)
	I0917 08:38:42.158856   16153 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0917 08:38:42.233124   16153 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0917 08:38:42.479936   16153 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.socket
	I0917 08:38:42.491044   16153 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0917 08:38:42.501365   16153 ssh_runner.go:195] Run: sudo systemctl unmask cri-docker.socket
	I0917 08:38:42.577057   16153 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I0917 08:38:42.651787   16153 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0917 08:38:42.725849   16153 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.socket
	I0917 08:38:42.737777   16153 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0917 08:38:42.747432   16153 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0917 08:38:42.817904   16153 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.service
	I0917 08:38:42.877001   16153 start.go:542] Will wait 60s for socket path /var/run/cri-dockerd.sock
	I0917 08:38:42.877082   16153 ssh_runner.go:195] Run: stat /var/run/cri-dockerd.sock
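"Will wait 60s for socket path /var/run/cri-dockerd.sock" above is a simple existence poll: stat the socket until it appears or the budget is exhausted. A minimal sketch, assuming the path from the log:

// waitsocket.go — a minimal sketch of polling for a socket file to appear.
package main

import (
	"fmt"
	"os"
	"time"
)

func waitForPath(path string, timeout time.Duration) error {
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		if _, err := os.Stat(path); err == nil {
			return nil // the socket exists
		}
		time.Sleep(250 * time.Millisecond)
	}
	return fmt.Errorf("timed out waiting for %s", path)
}

func main() {
	fmt.Println(waitForPath("/var/run/cri-dockerd.sock", 60*time.Second))
}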
	I0917 08:38:42.880348   16153 start.go:563] Will wait 60s for crictl version
	I0917 08:38:42.880397   16153 ssh_runner.go:195] Run: which crictl
	I0917 08:38:42.883417   16153 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0917 08:38:42.913967   16153 start.go:579] Version:  0.1.0
	RuntimeName:  docker
	RuntimeVersion:  27.2.1
	RuntimeApiVersion:  v1
	I0917 08:38:42.914058   16153 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0917 08:38:42.935849   16153 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0917 08:38:42.960600   16153 out.go:235] * Preparing Kubernetes v1.31.1 on Docker 27.2.1 ...
	I0917 08:38:42.960694   16153 cli_runner.go:164] Run: docker network inspect addons-118348 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0917 08:38:42.976623   16153 ssh_runner.go:195] Run: grep 192.168.49.1	host.minikube.internal$ /etc/hosts
	I0917 08:38:42.979950   16153 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.49.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0917 08:38:42.989610   16153 kubeadm.go:883] updating cluster {Name:addons-118348 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726358845-19644@sha256:4c67a32a16c2d4f824f00267c172fd225757ca75441e363d925dc9583137f0b0 Memory:4000 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:addons-118348 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0917 08:38:42.989714   16153 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime docker
	I0917 08:38:42.989759   16153 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0917 08:38:43.007240   16153 docker.go:685] Got preloaded images: -- stdout --
	registry.k8s.io/kube-apiserver:v1.31.1
	registry.k8s.io/kube-scheduler:v1.31.1
	registry.k8s.io/kube-controller-manager:v1.31.1
	registry.k8s.io/kube-proxy:v1.31.1
	registry.k8s.io/coredns/coredns:v1.11.3
	registry.k8s.io/etcd:3.5.15-0
	registry.k8s.io/pause:3.10
	gcr.io/k8s-minikube/storage-provisioner:v5
	
	-- /stdout --
	I0917 08:38:43.007263   16153 docker.go:615] Images already preloaded, skipping extraction
	I0917 08:38:43.007317   16153 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0917 08:38:43.025100   16153 docker.go:685] Got preloaded images: -- stdout --
	registry.k8s.io/kube-apiserver:v1.31.1
	registry.k8s.io/kube-scheduler:v1.31.1
	registry.k8s.io/kube-controller-manager:v1.31.1
	registry.k8s.io/kube-proxy:v1.31.1
	registry.k8s.io/coredns/coredns:v1.11.3
	registry.k8s.io/etcd:3.5.15-0
	registry.k8s.io/pause:3.10
	gcr.io/k8s-minikube/storage-provisioner:v5
	
	-- /stdout --
	I0917 08:38:43.025130   16153 cache_images.go:84] Images are preloaded, skipping loading
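Both `docker images` listings above return the full preloaded image set for v1.31.1, which is why the tarball extraction and image-load steps are skipped. A minimal Go sketch of that check (hypothetical helper; the required list below is abbreviated to two of the images shown):

// preloadcheck.go — a minimal sketch of the "Images are preloaded" check above.
package main

import (
	"fmt"
	"os/exec"
	"strings"
)

// haveImages lists the daemon's images and verifies the required ones exist.
func haveImages(required []string) (bool, error) {
	out, err := exec.Command("docker", "images", "--format", "{{.Repository}}:{{.Tag}}").Output()
	if err != nil {
		return false, err
	}
	have := map[string]bool{}
	for _, img := range strings.Fields(string(out)) {
		have[img] = true
	}
	for _, img := range required {
		if !have[img] {
			return false, nil
		}
	}
	return true, nil
}

func main() {
	ok, err := haveImages([]string{
		"registry.k8s.io/kube-apiserver:v1.31.1",
		"registry.k8s.io/pause:3.10",
	})
	fmt.Println(ok, err)
}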
	I0917 08:38:43.025140   16153 kubeadm.go:934] updating node { 192.168.49.2 8443 v1.31.1 docker true true} ...
	I0917 08:38:43.025253   16153 kubeadm.go:946] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=addons-118348 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.49.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.1 ClusterName:addons-118348 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0917 08:38:43.025321   16153 ssh_runner.go:195] Run: docker info --format {{.CgroupDriver}}
	I0917 08:38:43.068745   16153 cni.go:84] Creating CNI manager for ""
	I0917 08:38:43.068775   16153 cni.go:158] "docker" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0917 08:38:43.068787   16153 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0917 08:38:43.068812   16153 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.49.2 APIServerPort:8443 KubernetesVersion:v1.31.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:addons-118348 NodeName:addons-118348 DNSDomain:cluster.local CRISocket:/var/run/cri-dockerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.49.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.49.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/cri-dockerd.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0917 08:38:43.069004   16153 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.49.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/cri-dockerd.sock
	  name: "addons-118348"
	  kubeletExtraArgs:
	    node-ip: 192.168.49.2
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.49.2"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.31.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/cri-dockerd.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
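The generated kubeadm.yaml above is a single multi-document file: InitConfiguration, ClusterConfiguration, KubeletConfiguration, and KubeProxyConfiguration separated by `---`. A minimal Go sketch of splitting such a file and reading each document's kind using only the standard library (a real consumer would use a YAML parser):

// splityaml.go — a minimal sketch of handling a multi-document kubeadm.yaml.
package main

import (
	"fmt"
	"strings"
)

// kinds returns the `kind:` value of each YAML document in the input.
func kinds(multiDoc string) []string {
	var out []string
	for _, doc := range strings.Split(multiDoc, "\n---\n") {
		for _, line := range strings.Split(doc, "\n") {
			if strings.HasPrefix(line, "kind: ") {
				out = append(out, strings.TrimPrefix(line, "kind: "))
			}
		}
	}
	return out
}

func main() {
	sample := "apiVersion: kubeadm.k8s.io/v1beta3\nkind: InitConfiguration\n---\napiVersion: kubeadm.k8s.io/v1beta3\nkind: ClusterConfiguration\n"
	fmt.Println(kinds(sample)) // [InitConfiguration ClusterConfiguration]
}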
	I0917 08:38:43.069072   16153 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.1
	I0917 08:38:43.076969   16153 binaries.go:44] Found k8s binaries, skipping transfer
	I0917 08:38:43.077034   16153 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0917 08:38:43.084770   16153 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (312 bytes)
	I0917 08:38:43.100226   16153 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0917 08:38:43.115895   16153 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2155 bytes)
	I0917 08:38:43.131409   16153 ssh_runner.go:195] Run: grep 192.168.49.2	control-plane.minikube.internal$ /etc/hosts
	I0917 08:38:43.134406   16153 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.49.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0917 08:38:43.144055   16153 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0917 08:38:43.218261   16153 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0917 08:38:43.230466   16153 certs.go:68] Setting up /home/jenkins/minikube-integration/19648-8091/.minikube/profiles/addons-118348 for IP: 192.168.49.2
	I0917 08:38:43.230486   16153 certs.go:194] generating shared ca certs ...
	I0917 08:38:43.230501   16153 certs.go:226] acquiring lock for ca certs: {Name:mk3225b0343d01afc54a59e630093b9fbe48964d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0917 08:38:43.230607   16153 certs.go:240] generating "minikubeCA" ca cert: /home/jenkins/minikube-integration/19648-8091/.minikube/ca.key
	I0917 08:38:43.322611   16153 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19648-8091/.minikube/ca.crt ...
	I0917 08:38:43.322639   16153 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19648-8091/.minikube/ca.crt: {Name:mkcfc5e560f028dd268362f8000159a9120a365d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0917 08:38:43.322799   16153 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19648-8091/.minikube/ca.key ...
	I0917 08:38:43.322808   16153 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19648-8091/.minikube/ca.key: {Name:mk32d8e669bcada4813d2823d783c47616f0c295 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0917 08:38:43.322877   16153 certs.go:240] generating "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19648-8091/.minikube/proxy-client-ca.key
	I0917 08:38:43.507926   16153 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19648-8091/.minikube/proxy-client-ca.crt ...
	I0917 08:38:43.507954   16153 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19648-8091/.minikube/proxy-client-ca.crt: {Name:mk5f73d943ca8acd15c7a26e7442fcfedfc6ebde Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0917 08:38:43.508117   16153 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19648-8091/.minikube/proxy-client-ca.key ...
	I0917 08:38:43.508127   16153 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19648-8091/.minikube/proxy-client-ca.key: {Name:mk720d7c6f3866942f6fe9b02c1755e0caf82391 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0917 08:38:43.508194   16153 certs.go:256] generating profile certs ...
	I0917 08:38:43.508250   16153 certs.go:363] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/19648-8091/.minikube/profiles/addons-118348/client.key
	I0917 08:38:43.508273   16153 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19648-8091/.minikube/profiles/addons-118348/client.crt with IP's: []
	I0917 08:38:43.642041   16153 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19648-8091/.minikube/profiles/addons-118348/client.crt ...
	I0917 08:38:43.642074   16153 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19648-8091/.minikube/profiles/addons-118348/client.crt: {Name:mk2c49dedb09e040b5b5ecc4dae40d1925164c9b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0917 08:38:43.642274   16153 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19648-8091/.minikube/profiles/addons-118348/client.key ...
	I0917 08:38:43.642287   16153 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19648-8091/.minikube/profiles/addons-118348/client.key: {Name:mke78e69e6ae7bfab20a29e99161db5897f8c80a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0917 08:38:43.642381   16153 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/19648-8091/.minikube/profiles/addons-118348/apiserver.key.29a37712
	I0917 08:38:43.642403   16153 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19648-8091/.minikube/profiles/addons-118348/apiserver.crt.29a37712 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.49.2]
	I0917 08:38:43.801001   16153 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19648-8091/.minikube/profiles/addons-118348/apiserver.crt.29a37712 ...
	I0917 08:38:43.801032   16153 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19648-8091/.minikube/profiles/addons-118348/apiserver.crt.29a37712: {Name:mk277794572cd2b73424aaa5162dbcb3fb7932df Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0917 08:38:43.801216   16153 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19648-8091/.minikube/profiles/addons-118348/apiserver.key.29a37712 ...
	I0917 08:38:43.801231   16153 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19648-8091/.minikube/profiles/addons-118348/apiserver.key.29a37712: {Name:mkbc8142b378fbc7e450ffe2722399e15683315d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0917 08:38:43.801342   16153 certs.go:381] copying /home/jenkins/minikube-integration/19648-8091/.minikube/profiles/addons-118348/apiserver.crt.29a37712 -> /home/jenkins/minikube-integration/19648-8091/.minikube/profiles/addons-118348/apiserver.crt
	I0917 08:38:43.801414   16153 certs.go:385] copying /home/jenkins/minikube-integration/19648-8091/.minikube/profiles/addons-118348/apiserver.key.29a37712 -> /home/jenkins/minikube-integration/19648-8091/.minikube/profiles/addons-118348/apiserver.key
	I0917 08:38:43.801459   16153 certs.go:363] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/19648-8091/.minikube/profiles/addons-118348/proxy-client.key
	I0917 08:38:43.801473   16153 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19648-8091/.minikube/profiles/addons-118348/proxy-client.crt with IP's: []
	I0917 08:38:43.886609   16153 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19648-8091/.minikube/profiles/addons-118348/proxy-client.crt ...
	I0917 08:38:43.886641   16153 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19648-8091/.minikube/profiles/addons-118348/proxy-client.crt: {Name:mkfaada3eea6c9655d10a1543f1611779c4f9ba4 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0917 08:38:43.886822   16153 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19648-8091/.minikube/profiles/addons-118348/proxy-client.key ...
	I0917 08:38:43.886835   16153 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19648-8091/.minikube/profiles/addons-118348/proxy-client.key: {Name:mkfcbb701aaeaf6dda621c8f4d2e47391158cd34 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0917 08:38:43.887026   16153 certs.go:484] found cert: /home/jenkins/minikube-integration/19648-8091/.minikube/certs/ca-key.pem (1679 bytes)
	I0917 08:38:43.887060   16153 certs.go:484] found cert: /home/jenkins/minikube-integration/19648-8091/.minikube/certs/ca.pem (1082 bytes)
	I0917 08:38:43.887082   16153 certs.go:484] found cert: /home/jenkins/minikube-integration/19648-8091/.minikube/certs/cert.pem (1123 bytes)
	I0917 08:38:43.887103   16153 certs.go:484] found cert: /home/jenkins/minikube-integration/19648-8091/.minikube/certs/key.pem (1679 bytes)
	I0917 08:38:43.887655   16153 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19648-8091/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0917 08:38:43.909591   16153 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19648-8091/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0917 08:38:43.931201   16153 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19648-8091/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0917 08:38:43.952558   16153 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19648-8091/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0917 08:38:43.973925   16153 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19648-8091/.minikube/profiles/addons-118348/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1419 bytes)
	I0917 08:38:43.995669   16153 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19648-8091/.minikube/profiles/addons-118348/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0917 08:38:44.017073   16153 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19648-8091/.minikube/profiles/addons-118348/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0917 08:38:44.039192   16153 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19648-8091/.minikube/profiles/addons-118348/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0917 08:38:44.060394   16153 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19648-8091/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0917 08:38:44.082554   16153 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0917 08:38:44.098902   16153 ssh_runner.go:195] Run: openssl version
	I0917 08:38:44.103793   16153 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0917 08:38:44.113170   16153 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0917 08:38:44.116248   16153 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Sep 17 08:38 /usr/share/ca-certificates/minikubeCA.pem
	I0917 08:38:44.116308   16153 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0917 08:38:44.122465   16153 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0917 08:38:44.130892   16153 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0917 08:38:44.133922   16153 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0917 08:38:44.133978   16153 kubeadm.go:392] StartCluster: {Name:addons-118348 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726358845-19644@sha256:4c67a32a16c2d4f824f00267c172fd225757ca75441e363d925dc9583137f0b0 Memory:4000 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:addons-118348 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0917 08:38:44.134072   16153 ssh_runner.go:195] Run: docker ps --filter status=paused --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I0917 08:38:44.150773   16153 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0917 08:38:44.158746   16153 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0917 08:38:44.166506   16153 kubeadm.go:214] ignoring SystemVerification for kubeadm because of docker driver
	I0917 08:38:44.166565   16153 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0917 08:38:44.174171   16153 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0917 08:38:44.174187   16153 kubeadm.go:157] found existing configuration files:
	
	I0917 08:38:44.174221   16153 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0917 08:38:44.181699   16153 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0917 08:38:44.181750   16153 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0917 08:38:44.189264   16153 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0917 08:38:44.197011   16153 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0917 08:38:44.197064   16153 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0917 08:38:44.205191   16153 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0917 08:38:44.212779   16153 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0917 08:38:44.212828   16153 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0917 08:38:44.220338   16153 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0917 08:38:44.228286   16153 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0917 08:38:44.228339   16153 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
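The four grep-then-rm sequences above implement stale-config cleanup: each kubeconfig under /etc/kubernetes is kept only if it already points at https://control-plane.minikube.internal:8443; on this first start none exist, so each grep exits with status 2 and the removal is a no-op. A minimal Go sketch of the same idea (hypothetical helper, not minikube's code):

// staleconfig.go — a minimal sketch of the cleanup loop above.
package main

import (
	"fmt"
	"os"
	"strings"
)

// cleanStale removes each kubeconfig that is missing or points elsewhere,
// so kubeadm will regenerate it.
func cleanStale(endpoint string, paths []string) {
	for _, p := range paths {
		b, err := os.ReadFile(p)
		if err == nil && strings.Contains(string(b), endpoint) {
			continue // already points at the expected endpoint: keep it
		}
		os.Remove(p) // missing or stale: remove (errors ignored)
		fmt.Println("cleaned", p)
	}
}

func main() {
	cleanStale("https://control-plane.minikube.internal:8443", []string{
		"/etc/kubernetes/admin.conf",
		"/etc/kubernetes/kubelet.conf",
		"/etc/kubernetes/controller-manager.conf",
		"/etc/kubernetes/scheduler.conf",
	})
}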
	I0917 08:38:44.236089   16153 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I0917 08:38:44.270065   16153 kubeadm.go:310] [init] Using Kubernetes version: v1.31.1
	I0917 08:38:44.270145   16153 kubeadm.go:310] [preflight] Running pre-flight checks
	I0917 08:38:44.289454   16153 kubeadm.go:310] [preflight] The system verification failed. Printing the output from the verification:
	I0917 08:38:44.289543   16153 kubeadm.go:310] KERNEL_VERSION: 5.15.0-1069-gcp
	I0917 08:38:44.289588   16153 kubeadm.go:310] OS: Linux
	I0917 08:38:44.289646   16153 kubeadm.go:310] CGROUPS_CPU: enabled
	I0917 08:38:44.289708   16153 kubeadm.go:310] CGROUPS_CPUACCT: enabled
	I0917 08:38:44.289768   16153 kubeadm.go:310] CGROUPS_CPUSET: enabled
	I0917 08:38:44.289830   16153 kubeadm.go:310] CGROUPS_DEVICES: enabled
	I0917 08:38:44.289893   16153 kubeadm.go:310] CGROUPS_FREEZER: enabled
	I0917 08:38:44.289956   16153 kubeadm.go:310] CGROUPS_MEMORY: enabled
	I0917 08:38:44.290003   16153 kubeadm.go:310] CGROUPS_PIDS: enabled
	I0917 08:38:44.290069   16153 kubeadm.go:310] CGROUPS_HUGETLB: enabled
	I0917 08:38:44.290131   16153 kubeadm.go:310] CGROUPS_BLKIO: enabled
	I0917 08:38:44.338058   16153 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0917 08:38:44.338174   16153 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0917 08:38:44.338306   16153 kubeadm.go:310] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I0917 08:38:44.348322   16153 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0917 08:38:44.352031   16153 out.go:235]   - Generating certificates and keys ...
	I0917 08:38:44.352132   16153 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0917 08:38:44.352203   16153 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0917 08:38:44.392301   16153 kubeadm.go:310] [certs] Generating "apiserver-kubelet-client" certificate and key
	I0917 08:38:44.500596   16153 kubeadm.go:310] [certs] Generating "front-proxy-ca" certificate and key
	I0917 08:38:44.656895   16153 kubeadm.go:310] [certs] Generating "front-proxy-client" certificate and key
	I0917 08:38:44.858169   16153 kubeadm.go:310] [certs] Generating "etcd/ca" certificate and key
	I0917 08:38:45.200292   16153 kubeadm.go:310] [certs] Generating "etcd/server" certificate and key
	I0917 08:38:45.200427   16153 kubeadm.go:310] [certs] etcd/server serving cert is signed for DNS names [addons-118348 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
	I0917 08:38:45.292627   16153 kubeadm.go:310] [certs] Generating "etcd/peer" certificate and key
	I0917 08:38:45.292828   16153 kubeadm.go:310] [certs] etcd/peer serving cert is signed for DNS names [addons-118348 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
	I0917 08:38:45.930538   16153 kubeadm.go:310] [certs] Generating "etcd/healthcheck-client" certificate and key
	I0917 08:38:46.146360   16153 kubeadm.go:310] [certs] Generating "apiserver-etcd-client" certificate and key
	I0917 08:38:46.259979   16153 kubeadm.go:310] [certs] Generating "sa" key and public key
	I0917 08:38:46.260095   16153 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0917 08:38:46.502352   16153 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0917 08:38:46.552326   16153 kubeadm.go:310] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I0917 08:38:46.679109   16153 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0917 08:38:47.179140   16153 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0917 08:38:47.339579   16153 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0917 08:38:47.340068   16153 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0917 08:38:47.342651   16153 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0917 08:38:47.344585   16153 out.go:235]   - Booting up control plane ...
	I0917 08:38:47.344679   16153 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0917 08:38:47.344778   16153 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0917 08:38:47.344855   16153 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0917 08:38:47.353131   16153 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0917 08:38:47.357910   16153 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0917 08:38:47.357974   16153 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0917 08:38:47.440992   16153 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I0917 08:38:47.441123   16153 kubeadm.go:310] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I0917 08:38:48.442298   16153 kubeadm.go:310] [kubelet-check] The kubelet is healthy after 1.001379947s
	I0917 08:38:48.442387   16153 kubeadm.go:310] [api-check] Waiting for a healthy API server. This can take up to 4m0s
	I0917 08:38:52.944454   16153 kubeadm.go:310] [api-check] The API server is healthy after 4.502184668s
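The [kubelet-check] and [api-check] phases above are plain health-endpoint polls. A rough manual equivalent, assuming the ports shown in this run (10248 for the kubelet, 8443 for the API server):

    # Kubelet serves healthz over plain HTTP on 10248 (see the kubelet-check line).
    until curl -sf http://127.0.0.1:10248/healthz >/dev/null; do sleep 1; done
    # API server uses a self-signed cert (hence -k); /livez is the modern probe path.
    until curl -skf https://127.0.0.1:8443/livez >/dev/null; do sleep 1; done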
	I0917 08:38:52.956023   16153 kubeadm.go:310] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0917 08:38:52.967288   16153 kubeadm.go:310] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0917 08:38:52.984977   16153 kubeadm.go:310] [upload-certs] Skipping phase. Please see --upload-certs
	I0917 08:38:52.985204   16153 kubeadm.go:310] [mark-control-plane] Marking the node addons-118348 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0917 08:38:52.993123   16153 kubeadm.go:310] [bootstrap-token] Using token: 6dqjjf.f0nxv5t7pz6fpmqz
	I0917 08:38:52.994681   16153 out.go:235]   - Configuring RBAC rules ...
	I0917 08:38:52.994786   16153 kubeadm.go:310] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0917 08:38:52.998509   16153 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0917 08:38:53.007331   16153 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0917 08:38:53.010197   16153 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0917 08:38:53.012942   16153 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0917 08:38:53.015405   16153 kubeadm.go:310] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0917 08:38:53.350764   16153 kubeadm.go:310] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0917 08:38:53.796981   16153 kubeadm.go:310] [addons] Applied essential addon: CoreDNS
	I0917 08:38:54.352177   16153 kubeadm.go:310] [addons] Applied essential addon: kube-proxy
	I0917 08:38:54.353007   16153 kubeadm.go:310] 
	I0917 08:38:54.353080   16153 kubeadm.go:310] Your Kubernetes control-plane has initialized successfully!
	I0917 08:38:54.353112   16153 kubeadm.go:310] 
	I0917 08:38:54.353241   16153 kubeadm.go:310] To start using your cluster, you need to run the following as a regular user:
	I0917 08:38:54.353252   16153 kubeadm.go:310] 
	I0917 08:38:54.353310   16153 kubeadm.go:310]   mkdir -p $HOME/.kube
	I0917 08:38:54.353422   16153 kubeadm.go:310]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0917 08:38:54.353499   16153 kubeadm.go:310]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0917 08:38:54.353514   16153 kubeadm.go:310] 
	I0917 08:38:54.353589   16153 kubeadm.go:310] Alternatively, if you are the root user, you can run:
	I0917 08:38:54.353599   16153 kubeadm.go:310] 
	I0917 08:38:54.353661   16153 kubeadm.go:310]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0917 08:38:54.353670   16153 kubeadm.go:310] 
	I0917 08:38:54.353739   16153 kubeadm.go:310] You should now deploy a pod network to the cluster.
	I0917 08:38:54.353850   16153 kubeadm.go:310] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0917 08:38:54.353948   16153 kubeadm.go:310]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0917 08:38:54.353957   16153 kubeadm.go:310] 
	I0917 08:38:54.354079   16153 kubeadm.go:310] You can now join any number of control-plane nodes by copying certificate authorities
	I0917 08:38:54.354160   16153 kubeadm.go:310] and service account keys on each node and then running the following as root:
	I0917 08:38:54.354166   16153 kubeadm.go:310] 
	I0917 08:38:54.354258   16153 kubeadm.go:310]   kubeadm join control-plane.minikube.internal:8443 --token 6dqjjf.f0nxv5t7pz6fpmqz \
	I0917 08:38:54.354377   16153 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:f39726bcd9567e166cfea3356daada82e147516a8cdf8435266f67c8416305f5 \
	I0917 08:38:54.354403   16153 kubeadm.go:310] 	--control-plane 
	I0917 08:38:54.354411   16153 kubeadm.go:310] 
	I0917 08:38:54.354516   16153 kubeadm.go:310] Then you can join any number of worker nodes by running the following on each as root:
	I0917 08:38:54.354525   16153 kubeadm.go:310] 
	I0917 08:38:54.354639   16153 kubeadm.go:310] kubeadm join control-plane.minikube.internal:8443 --token 6dqjjf.f0nxv5t7pz6fpmqz \
	I0917 08:38:54.354771   16153 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:f39726bcd9567e166cfea3356daada82e147516a8cdf8435266f67c8416305f5 
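The --discovery-token-ca-cert-hash printed above is the SHA-256 of the cluster CA certificate's SubjectPublicKeyInfo. It can be recomputed on the control plane with the standard openssl pipeline from the kubeadm docs, and should match the value in both join commands:

    openssl x509 -pubkey -in /etc/kubernetes/pki/ca.crt \
      | openssl rsa -pubin -outform der 2>/dev/null \
      | openssl dgst -sha256 -hex | sed 's/^.* //'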
	I0917 08:38:54.356650   16153 kubeadm.go:310] W0917 08:38:44.267501    1920 common.go:101] your configuration file uses a deprecated API spec: "kubeadm.k8s.io/v1beta3" (kind: "ClusterConfiguration"). Please use 'kubeadm config migrate --old-config old.yaml --new-config new.yaml', which will write the new, similar spec using a newer API version.
	I0917 08:38:54.356957   16153 kubeadm.go:310] W0917 08:38:44.268119    1920 common.go:101] your configuration file uses a deprecated API spec: "kubeadm.k8s.io/v1beta3" (kind: "InitConfiguration"). Please use 'kubeadm config migrate --old-config old.yaml --new-config new.yaml', which will write the new, similar spec using a newer API version.
	I0917 08:38:54.357203   16153 kubeadm.go:310] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1069-gcp\n", err: exit status 1
	I0917 08:38:54.357399   16153 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0917 08:38:54.357419   16153 cni.go:84] Creating CNI manager for ""
	I0917 08:38:54.357431   16153 cni.go:158] "docker" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0917 08:38:54.359202   16153 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0917 08:38:54.360435   16153 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0917 08:38:54.368904   16153 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
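The 496-byte /etc/cni/net.d/1-k8s.conflist copied here is minikube's bridge CNI config; the log does not show its contents. The sketch below writes an illustrative conflist of the same general shape (the plugin fields and subnet are assumptions, not a dump of minikube's actual file):

    sudo tee /etc/cni/net.d/1-k8s.conflist >/dev/null <<'EOF'
    {
      "cniVersion": "0.3.1",
      "name": "bridge",
      "plugins": [
        {
          "type": "bridge",
          "bridge": "bridge",
          "isDefaultGateway": true,
          "ipMasq": true,
          "hairpinMode": true,
          "ipam": { "type": "host-local", "subnet": "10.244.0.0/16" }
        },
        { "type": "portmap", "capabilities": { "portMappings": true } }
      ]
    }
    EOF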
	I0917 08:38:54.385480   16153 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0917 08:38:54.385617   16153 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0917 08:38:54.385646   16153 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes addons-118348 minikube.k8s.io/updated_at=2024_09_17T08_38_54_0700 minikube.k8s.io/version=v1.34.0 minikube.k8s.io/commit=9256ba43b41ea130fa48757ddb8d93db00574f61 minikube.k8s.io/name=addons-118348 minikube.k8s.io/primary=true
	I0917 08:38:54.476282   16153 ops.go:34] apiserver oom_adj: -16
	I0917 08:38:54.476389   16153 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0917 08:38:54.976985   16153 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0917 08:38:55.476465   16153 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0917 08:38:55.976820   16153 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0917 08:38:56.477361   16153 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0917 08:38:56.976761   16153 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0917 08:38:57.477019   16153 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0917 08:38:57.977010   16153 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0917 08:38:58.476795   16153 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0917 08:38:58.976866   16153 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0917 08:38:59.039660   16153 kubeadm.go:1113] duration metric: took 4.654098927s to wait for elevateKubeSystemPrivileges
	I0917 08:38:59.039698   16153 kubeadm.go:394] duration metric: took 14.905723458s to StartCluster
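The 4.65s elevateKubeSystemPrivileges metric above is mostly a poll: the same 'kubectl get sa default' is re-run on a ~500ms cadence (visible in the timestamps above) until the controller manager has created the default ServiceAccount, so the cluster-admin binding has something to bind to. By hand:

    # Poll until the 'default' ServiceAccount exists; ~500ms mirrors the log cadence.
    until sudo /var/lib/minikube/binaries/v1.31.1/kubectl \
        --kubeconfig=/var/lib/minikube/kubeconfig get sa default >/dev/null 2>&1; do
      sleep 0.5
    done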
	I0917 08:38:59.039720   16153 settings.go:142] acquiring lock: {Name:mk862a5c46e81240a806a4c66f0c2efde4cdc586 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0917 08:38:59.039845   16153 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/19648-8091/kubeconfig
	I0917 08:38:59.040228   16153 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19648-8091/kubeconfig: {Name:mk91a6d671e6a7ab453b1c24cade89fd7db9b782 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0917 08:38:59.040431   16153 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0917 08:38:59.040449   16153 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0917 08:38:59.040517   16153 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:true csi-hostpath-driver:true dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:true gvisor:false headlamp:false helm-tiller:true inaccel:false ingress:true ingress-dns:true inspektor-gadget:true istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:true nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:true registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:true volcano:true volumesnapshots:true yakd:true]
	I0917 08:38:59.040628   16153 addons.go:69] Setting yakd=true in profile "addons-118348"
	I0917 08:38:59.040639   16153 addons.go:69] Setting cloud-spanner=true in profile "addons-118348"
	I0917 08:38:59.040633   16153 addons.go:69] Setting default-storageclass=true in profile "addons-118348"
	I0917 08:38:59.040653   16153 addons.go:234] Setting addon cloud-spanner=true in "addons-118348"
	I0917 08:38:59.040661   16153 addons.go:69] Setting nvidia-device-plugin=true in profile "addons-118348"
	I0917 08:38:59.040662   16153 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "addons-118348"
	I0917 08:38:59.040666   16153 addons.go:69] Setting metrics-server=true in profile "addons-118348"
	I0917 08:38:59.040678   16153 addons.go:234] Setting addon nvidia-device-plugin=true in "addons-118348"
	I0917 08:38:59.040706   16153 addons.go:234] Setting addon metrics-server=true in "addons-118348"
	I0917 08:38:59.040708   16153 host.go:66] Checking if "addons-118348" exists ...
	I0917 08:38:59.040701   16153 addons.go:69] Setting ingress=true in profile "addons-118348"
	I0917 08:38:59.040719   16153 host.go:66] Checking if "addons-118348" exists ...
	I0917 08:38:59.040728   16153 addons.go:234] Setting addon ingress=true in "addons-118348"
	I0917 08:38:59.040736   16153 host.go:66] Checking if "addons-118348" exists ...
	I0917 08:38:59.040742   16153 config.go:182] Loaded profile config "addons-118348": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I0917 08:38:59.040771   16153 host.go:66] Checking if "addons-118348" exists ...
	I0917 08:38:59.040791   16153 addons.go:69] Setting gcp-auth=true in profile "addons-118348"
	I0917 08:38:59.040807   16153 mustload.go:65] Loading cluster: addons-118348
	I0917 08:38:59.040835   16153 addons.go:69] Setting registry=true in profile "addons-118348"
	I0917 08:38:59.040845   16153 addons.go:234] Setting addon registry=true in "addons-118348"
	I0917 08:38:59.040862   16153 host.go:66] Checking if "addons-118348" exists ...
	I0917 08:38:59.040942   16153 config.go:182] Loaded profile config "addons-118348": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I0917 08:38:59.041053   16153 cli_runner.go:164] Run: docker container inspect addons-118348 --format={{.State.Status}}
	I0917 08:38:59.041172   16153 cli_runner.go:164] Run: docker container inspect addons-118348 --format={{.State.Status}}
	I0917 08:38:59.041182   16153 addons.go:69] Setting helm-tiller=true in profile "addons-118348"
	I0917 08:38:59.041192   16153 addons.go:234] Setting addon helm-tiller=true in "addons-118348"
	I0917 08:38:59.041210   16153 host.go:66] Checking if "addons-118348" exists ...
	I0917 08:38:59.041234   16153 cli_runner.go:164] Run: docker container inspect addons-118348 --format={{.State.Status}}
	I0917 08:38:59.041266   16153 cli_runner.go:164] Run: docker container inspect addons-118348 --format={{.State.Status}}
	I0917 08:38:59.041286   16153 cli_runner.go:164] Run: docker container inspect addons-118348 --format={{.State.Status}}
	I0917 08:38:59.041405   16153 addons.go:69] Setting storage-provisioner-rancher=true in profile "addons-118348"
	I0917 08:38:59.041421   16153 addons_storage_classes.go:33] enableOrDisableStorageClasses storage-provisioner-rancher=true on "addons-118348"
	I0917 08:38:59.041494   16153 addons.go:69] Setting volcano=true in profile "addons-118348"
	I0917 08:38:59.041514   16153 addons.go:234] Setting addon volcano=true in "addons-118348"
	I0917 08:38:59.041540   16153 host.go:66] Checking if "addons-118348" exists ...
	I0917 08:38:59.041622   16153 cli_runner.go:164] Run: docker container inspect addons-118348 --format={{.State.Status}}
	I0917 08:38:59.041714   16153 cli_runner.go:164] Run: docker container inspect addons-118348 --format={{.State.Status}}
	I0917 08:38:59.040654   16153 addons.go:234] Setting addon yakd=true in "addons-118348"
	I0917 08:38:59.042186   16153 cli_runner.go:164] Run: docker container inspect addons-118348 --format={{.State.Status}}
	I0917 08:38:59.042204   16153 host.go:66] Checking if "addons-118348" exists ...
	I0917 08:38:59.042499   16153 addons.go:69] Setting volumesnapshots=true in profile "addons-118348"
	I0917 08:38:59.042516   16153 addons.go:234] Setting addon volumesnapshots=true in "addons-118348"
	I0917 08:38:59.042537   16153 host.go:66] Checking if "addons-118348" exists ...
	I0917 08:38:59.043112   16153 cli_runner.go:164] Run: docker container inspect addons-118348 --format={{.State.Status}}
	I0917 08:38:59.041173   16153 cli_runner.go:164] Run: docker container inspect addons-118348 --format={{.State.Status}}
	I0917 08:38:59.043893   16153 cli_runner.go:164] Run: docker container inspect addons-118348 --format={{.State.Status}}
	I0917 08:38:59.045790   16153 out.go:177] * Verifying Kubernetes components...
	I0917 08:38:59.043913   16153 addons.go:69] Setting storage-provisioner=true in profile "addons-118348"
	I0917 08:38:59.047858   16153 addons.go:234] Setting addon storage-provisioner=true in "addons-118348"
	I0917 08:38:59.047909   16153 host.go:66] Checking if "addons-118348" exists ...
	I0917 08:38:59.043953   16153 addons.go:69] Setting csi-hostpath-driver=true in profile "addons-118348"
	I0917 08:38:59.048133   16153 addons.go:234] Setting addon csi-hostpath-driver=true in "addons-118348"
	I0917 08:38:59.043991   16153 addons.go:69] Setting ingress-dns=true in profile "addons-118348"
	I0917 08:38:59.048159   16153 host.go:66] Checking if "addons-118348" exists ...
	I0917 08:38:59.048180   16153 addons.go:234] Setting addon ingress-dns=true in "addons-118348"
	I0917 08:38:59.048272   16153 host.go:66] Checking if "addons-118348" exists ...
	I0917 08:38:59.044043   16153 addons.go:69] Setting inspektor-gadget=true in profile "addons-118348"
	I0917 08:38:59.048365   16153 addons.go:234] Setting addon inspektor-gadget=true in "addons-118348"
	I0917 08:38:59.048443   16153 host.go:66] Checking if "addons-118348" exists ...
	I0917 08:38:59.048600   16153 cli_runner.go:164] Run: docker container inspect addons-118348 --format={{.State.Status}}
	I0917 08:38:59.048822   16153 cli_runner.go:164] Run: docker container inspect addons-118348 --format={{.State.Status}}
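Each addon goroutine above starts by checking the node container's state; that is the repeated 'docker container inspect ... --format={{.State.Status}}' call. Outside minikube, the same probe looks like:

    # Prints running/exited/...; loop until the node container is up.
    until [ "$(docker container inspect addons-118348 --format '{{.State.Status}}')" = running ]; do
      sleep 1
    done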
	I0917 08:38:59.050189   16153 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0917 08:38:59.074360   16153 out.go:177]   - Using image ghcr.io/helm/tiller:v2.17.0
	I0917 08:38:59.075732   16153 addons.go:431] installing /etc/kubernetes/addons/helm-tiller-dp.yaml
	I0917 08:38:59.075793   16153 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/helm-tiller-dp.yaml (2422 bytes)
	I0917 08:38:59.075884   16153 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-118348
	I0917 08:38:59.077062   16153 cli_runner.go:164] Run: docker container inspect addons-118348 --format={{.State.Status}}
	I0917 08:38:59.077063   16153 cli_runner.go:164] Run: docker container inspect addons-118348 --format={{.State.Status}}
	I0917 08:38:59.077536   16153 cli_runner.go:164] Run: docker container inspect addons-118348 --format={{.State.Status}}
	I0917 08:38:59.080536   16153 addons.go:234] Setting addon default-storageclass=true in "addons-118348"
	I0917 08:38:59.080584   16153 host.go:66] Checking if "addons-118348" exists ...
	I0917 08:38:59.081293   16153 cli_runner.go:164] Run: docker container inspect addons-118348 --format={{.State.Status}}
	I0917 08:38:59.083350   16153 host.go:66] Checking if "addons-118348" exists ...
	I0917 08:38:59.086783   16153 out.go:177]   - Using image gcr.io/cloud-spanner-emulator/emulator:1.5.23
	I0917 08:38:59.088333   16153 out.go:177]   - Using image docker.io/registry:2.8.3
	I0917 08:38:59.088665   16153 addons.go:431] installing /etc/kubernetes/addons/deployment.yaml
	I0917 08:38:59.088705   16153 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/deployment.yaml (1004 bytes)
	I0917 08:38:59.088759   16153 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-118348
	I0917 08:38:59.098760   16153 out.go:177]   - Using image gcr.io/k8s-minikube/kube-registry-proxy:0.0.6
	I0917 08:38:59.103323   16153 addons.go:431] installing /etc/kubernetes/addons/registry-rc.yaml
	I0917 08:38:59.103354   16153 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-rc.yaml (860 bytes)
	I0917 08:38:59.103413   16153 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-118348
	I0917 08:38:59.116027   16153 out.go:177]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.4.3
	I0917 08:38:59.117768   16153 out.go:177]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.4.3
	I0917 08:38:59.119003   16153 out.go:177]   - Using image registry.k8s.io/ingress-nginx/controller:v1.11.2
	I0917 08:38:59.120374   16153 addons.go:431] installing /etc/kubernetes/addons/ingress-deploy.yaml
	I0917 08:38:59.120395   16153 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-deploy.yaml (16078 bytes)
	I0917 08:38:59.120456   16153 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-118348
	I0917 08:38:59.128859   16153 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I0917 08:38:59.128896   16153 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0917 08:38:59.128964   16153 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-118348
	I0917 08:38:59.131084   16153 out.go:177]   - Using image registry.k8s.io/metrics-server/metrics-server:v0.7.2
	I0917 08:38:59.132421   16153 addons.go:431] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0917 08:38:59.132449   16153 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0917 08:38:59.132518   16153 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-118348
	I0917 08:38:59.137018   16153 out.go:177]   - Using image nvcr.io/nvidia/k8s-device-plugin:v0.16.2
	I0917 08:38:59.138509   16153 addons.go:431] installing /etc/kubernetes/addons/nvidia-device-plugin.yaml
	I0917 08:38:59.138537   16153 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/nvidia-device-plugin.yaml (1966 bytes)
	I0917 08:38:59.138604   16153 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-118348
	I0917 08:38:59.141184   16153 out.go:177]   - Using image registry.k8s.io/sig-storage/snapshot-controller:v6.1.0
	I0917 08:38:59.142583   16153 out.go:177]   - Using image docker.io/marcnuri/yakd:0.0.5
	I0917 08:38:59.142586   16153 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml
	I0917 08:38:59.142697   16153 ssh_runner.go:362] scp volumesnapshots/csi-hostpath-snapshotclass.yaml --> /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml (934 bytes)
	I0917 08:38:59.142775   16153 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-118348
	I0917 08:38:59.143969   16153 addons.go:431] installing /etc/kubernetes/addons/yakd-ns.yaml
	I0917 08:38:59.143989   16153 ssh_runner.go:362] scp yakd/yakd-ns.yaml --> /etc/kubernetes/addons/yakd-ns.yaml (171 bytes)
	I0917 08:38:59.144042   16153 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-118348
	I0917 08:38:59.148643   16153 out.go:177]   - Using image docker.io/volcanosh/vc-webhook-manager:v1.9.0
	I0917 08:38:59.150999   16153 out.go:177]   - Using image docker.io/volcanosh/vc-controller-manager:v1.9.0
	I0917 08:38:59.152301   16153 out.go:177]   - Using image docker.io/volcanosh/vc-scheduler:v1.9.0
	I0917 08:38:59.155205   16153 addons.go:431] installing /etc/kubernetes/addons/volcano-deployment.yaml
	I0917 08:38:59.155228   16153 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/volcano-deployment.yaml (434001 bytes)
	I0917 08:38:59.155280   16153 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-118348
	I0917 08:38:59.158119   16153 addons.go:234] Setting addon storage-provisioner-rancher=true in "addons-118348"
	I0917 08:38:59.158168   16153 host.go:66] Checking if "addons-118348" exists ...
	I0917 08:38:59.158644   16153 cli_runner.go:164] Run: docker container inspect addons-118348 --format={{.State.Status}}
	I0917 08:38:59.160443   16153 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0917 08:38:59.161682   16153 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-provisioner:v3.3.0
	I0917 08:38:59.161810   16153 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0917 08:38:59.161835   16153 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0917 08:38:59.161907   16153 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-118348
	I0917 08:38:59.163771   16153 out.go:177]   - Using image ghcr.io/inspektor-gadget/inspektor-gadget:v0.32.0
	I0917 08:38:59.164887   16153 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-attacher:v4.0.0
	I0917 08:38:59.165013   16153 addons.go:431] installing /etc/kubernetes/addons/ig-namespace.yaml
	I0917 08:38:59.165233   16153 ssh_runner.go:362] scp inspektor-gadget/ig-namespace.yaml --> /etc/kubernetes/addons/ig-namespace.yaml (55 bytes)
	I0917 08:38:59.165314   16153 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-118348
	I0917 08:38:59.167546   16153 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-external-health-monitor-controller:v0.7.0
	I0917 08:38:59.171149   16153 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-node-driver-registrar:v2.6.0
	I0917 08:38:59.189535   16153 out.go:177]   - Using image registry.k8s.io/sig-storage/hostpathplugin:v1.9.0
	I0917 08:38:59.190143   16153 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/19648-8091/.minikube/machines/addons-118348/id_rsa Username:docker}
	I0917 08:38:59.195295   16153 out.go:177]   - Using image registry.k8s.io/sig-storage/livenessprobe:v2.8.0
	I0917 08:38:59.195514   16153 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/19648-8091/.minikube/machines/addons-118348/id_rsa Username:docker}
	I0917 08:38:59.196473   16153 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/19648-8091/.minikube/machines/addons-118348/id_rsa Username:docker}
	I0917 08:38:59.196572   16153 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/19648-8091/.minikube/machines/addons-118348/id_rsa Username:docker}
	I0917 08:38:59.196864   16153 out.go:177]   - Using image gcr.io/k8s-minikube/minikube-ingress-dns:0.0.3
	I0917 08:38:59.198382   16153 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/19648-8091/.minikube/machines/addons-118348/id_rsa Username:docker}
	I0917 08:38:59.198512   16153 addons.go:431] installing /etc/kubernetes/addons/ingress-dns-pod.yaml
	I0917 08:38:59.198532   16153 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-dns-pod.yaml (2442 bytes)
	I0917 08:38:59.198598   16153 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-118348
	I0917 08:38:59.198762   16153 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-resizer:v1.6.0
	I0917 08:38:59.200219   16153 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-snapshotter:v6.1.0
	I0917 08:38:59.201421   16153 addons.go:431] installing /etc/kubernetes/addons/rbac-external-attacher.yaml
	I0917 08:38:59.201441   16153 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-attacher.yaml --> /etc/kubernetes/addons/rbac-external-attacher.yaml (3073 bytes)
	I0917 08:38:59.201512   16153 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-118348
	I0917 08:38:59.219412   16153 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/19648-8091/.minikube/machines/addons-118348/id_rsa Username:docker}
	I0917 08:38:59.222531   16153 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/19648-8091/.minikube/machines/addons-118348/id_rsa Username:docker}
	I0917 08:38:59.228414   16153 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/19648-8091/.minikube/machines/addons-118348/id_rsa Username:docker}
	I0917 08:38:59.228414   16153 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/19648-8091/.minikube/machines/addons-118348/id_rsa Username:docker}
	I0917 08:38:59.235142   16153 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/19648-8091/.minikube/machines/addons-118348/id_rsa Username:docker}
	I0917 08:38:59.236852   16153 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/19648-8091/.minikube/machines/addons-118348/id_rsa Username:docker}
	I0917 08:38:59.239129   16153 out.go:177]   - Using image docker.io/rancher/local-path-provisioner:v0.0.22
	I0917 08:38:59.239975   16153 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/19648-8091/.minikube/machines/addons-118348/id_rsa Username:docker}
	I0917 08:38:59.241610   16153 out.go:177]   - Using image docker.io/busybox:stable
	I0917 08:38:59.242843   16153 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner-rancher.yaml
	I0917 08:38:59.242858   16153 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner-rancher.yaml (3113 bytes)
	I0917 08:38:59.242905   16153 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-118348
	I0917 08:38:59.247419   16153 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/19648-8091/.minikube/machines/addons-118348/id_rsa Username:docker}
	I0917 08:38:59.248960   16153 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/19648-8091/.minikube/machines/addons-118348/id_rsa Username:docker}
	I0917 08:38:59.260062   16153 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/19648-8091/.minikube/machines/addons-118348/id_rsa Username:docker}
	W0917 08:38:59.276011   16153 sshutil.go:64] dial failure (will retry): ssh: handshake failed: EOF
	I0917 08:38:59.276054   16153 retry.go:31] will retry after 166.260019ms: ssh: handshake failed: EOF
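The handshake EOF above is benign: the node's sshd is still coming up, and sshutil retries after a short randomized delay. The forwarded SSH port for this run (32768, per the sshutil lines) can be probed the same way:

    # Wait until the node's forwarded SSH port accepts TCP connections.
    until nc -z 127.0.0.1 32768; do sleep 0.2; done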
	I0917 08:38:59.505713   16153 addons.go:431] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0917 08:38:59.505732   16153 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1907 bytes)
	I0917 08:38:59.573417   16153 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0917 08:38:59.573441   16153 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.49.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I0917 08:38:59.600205   16153 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0917 08:38:59.674847   16153 addons.go:431] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0917 08:38:59.674933   16153 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0917 08:38:59.676026   16153 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0917 08:38:59.693102   16153 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/nvidia-device-plugin.yaml
	I0917 08:38:59.695785   16153 addons.go:431] installing /etc/kubernetes/addons/registry-svc.yaml
	I0917 08:38:59.695858   16153 ssh_runner.go:362] scp registry/registry-svc.yaml --> /etc/kubernetes/addons/registry-svc.yaml (398 bytes)
	I0917 08:38:59.781745   16153 addons.go:431] installing /etc/kubernetes/addons/ig-serviceaccount.yaml
	I0917 08:38:59.781779   16153 ssh_runner.go:362] scp inspektor-gadget/ig-serviceaccount.yaml --> /etc/kubernetes/addons/ig-serviceaccount.yaml (80 bytes)
	I0917 08:38:59.789613   16153 addons.go:431] installing /etc/kubernetes/addons/helm-tiller-rbac.yaml
	I0917 08:38:59.789704   16153 ssh_runner.go:362] scp helm-tiller/helm-tiller-rbac.yaml --> /etc/kubernetes/addons/helm-tiller-rbac.yaml (1188 bytes)
	I0917 08:38:59.791786   16153 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml
	I0917 08:38:59.873323   16153 addons.go:431] installing /etc/kubernetes/addons/yakd-sa.yaml
	I0917 08:38:59.873413   16153 ssh_runner.go:362] scp yakd/yakd-sa.yaml --> /etc/kubernetes/addons/yakd-sa.yaml (247 bytes)
	I0917 08:38:59.882197   16153 addons.go:431] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml
	I0917 08:38:59.882284   16153 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshotclasses.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml (6471 bytes)
	I0917 08:38:59.890265   16153 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/deployment.yaml
	I0917 08:38:59.974407   16153 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner-rancher.yaml
	I0917 08:38:59.976672   16153 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml
	I0917 08:38:59.993100   16153 addons.go:431] installing /etc/kubernetes/addons/registry-proxy.yaml
	I0917 08:38:59.993130   16153 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-proxy.yaml (947 bytes)
	I0917 08:38:59.994854   16153 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/volcano-deployment.yaml
	I0917 08:39:00.076404   16153 addons.go:431] installing /etc/kubernetes/addons/yakd-crb.yaml
	I0917 08:39:00.076495   16153 ssh_runner.go:362] scp yakd/yakd-crb.yaml --> /etc/kubernetes/addons/yakd-crb.yaml (422 bytes)
	I0917 08:39:00.076749   16153 addons.go:431] installing /etc/kubernetes/addons/ig-role.yaml
	I0917 08:39:00.076791   16153 ssh_runner.go:362] scp inspektor-gadget/ig-role.yaml --> /etc/kubernetes/addons/ig-role.yaml (210 bytes)
	I0917 08:39:00.083234   16153 addons.go:431] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0917 08:39:00.083260   16153 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0917 08:39:00.086509   16153 addons.go:431] installing /etc/kubernetes/addons/helm-tiller-svc.yaml
	I0917 08:39:00.086592   16153 ssh_runner.go:362] scp helm-tiller/helm-tiller-svc.yaml --> /etc/kubernetes/addons/helm-tiller-svc.yaml (951 bytes)
	I0917 08:39:00.174851   16153 addons.go:431] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml
	I0917 08:39:00.174944   16153 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshotcontents.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml (23126 bytes)
	I0917 08:39:00.175368   16153 addons.go:431] installing /etc/kubernetes/addons/rbac-hostpath.yaml
	I0917 08:39:00.175430   16153 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-hostpath.yaml --> /etc/kubernetes/addons/rbac-hostpath.yaml (4266 bytes)
	I0917 08:39:00.387881   16153 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0917 08:39:00.391769   16153 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml
	I0917 08:39:00.490067   16153 addons.go:431] installing /etc/kubernetes/addons/ig-rolebinding.yaml
	I0917 08:39:00.490099   16153 ssh_runner.go:362] scp inspektor-gadget/ig-rolebinding.yaml --> /etc/kubernetes/addons/ig-rolebinding.yaml (244 bytes)
	I0917 08:39:00.580527   16153 addons.go:431] installing /etc/kubernetes/addons/yakd-svc.yaml
	I0917 08:39:00.580569   16153 ssh_runner.go:362] scp yakd/yakd-svc.yaml --> /etc/kubernetes/addons/yakd-svc.yaml (412 bytes)
	I0917 08:39:00.673435   16153 addons.go:431] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml
	I0917 08:39:00.673467   16153 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshots.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml (19582 bytes)
	I0917 08:39:00.681909   16153 addons.go:431] installing /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml
	I0917 08:39:00.681938   16153 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-health-monitor-controller.yaml --> /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml (3038 bytes)
	I0917 08:39:00.784617   16153 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/helm-tiller-dp.yaml -f /etc/kubernetes/addons/helm-tiller-rbac.yaml -f /etc/kubernetes/addons/helm-tiller-svc.yaml
	I0917 08:39:00.992678   16153 addons.go:431] installing /etc/kubernetes/addons/ig-clusterrole.yaml
	I0917 08:39:00.992735   16153 ssh_runner.go:362] scp inspektor-gadget/ig-clusterrole.yaml --> /etc/kubernetes/addons/ig-clusterrole.yaml (1485 bytes)
	I0917 08:39:00.995959   16153 addons.go:431] installing /etc/kubernetes/addons/yakd-dp.yaml
	I0917 08:39:00.995988   16153 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/yakd-dp.yaml (2017 bytes)
	I0917 08:39:01.279999   16153 ssh_runner.go:235] Completed: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.49.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -": (1.706361447s)
	I0917 08:39:01.280042   16153 start.go:971] {"host.minikube.internal": 192.168.49.1} host record injected into CoreDNS's ConfigMap
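The 1.7s 'replace' completed above is the Corefile rewrite: sed inserts a hosts block (resolving host.minikube.internal to the gateway, 192.168.49.1) before the forward directive and a log directive before errors, then pipes the result back through kubectl replace. Reconstructed from the sed expressions, the ConfigMap should now contain, inside Corefile (assuming kubectl points at this cluster):

    kubectl -n kube-system get configmap coredns -o yaml
    # ...
    #        log
    #        errors
    #        hosts {
    #           192.168.49.1 host.minikube.internal
    #           fallthrough
    #        }
    #        forward . /etc/resolv.conf ...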
	I0917 08:39:01.280298   16153 ssh_runner.go:235] Completed: sudo systemctl start kubelet: (1.7067792s)
	I0917 08:39:01.281318   16153 node_ready.go:35] waiting up to 6m0s for node "addons-118348" to be "Ready" ...
	I0917 08:39:01.284205   16153 node_ready.go:49] node "addons-118348" has status "Ready":"True"
	I0917 08:39:01.284235   16153 node_ready.go:38] duration metric: took 2.882443ms for node "addons-118348" to be "Ready" ...
	I0917 08:39:01.284246   16153 pod_ready.go:36] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0917 08:39:01.294259   16153 pod_ready.go:79] waiting up to 6m0s for pod "coredns-7c65d6cfc9-25csd" in "kube-system" namespace to be "Ready" ...
	I0917 08:39:01.297441   16153 addons.go:431] installing /etc/kubernetes/addons/rbac-external-provisioner.yaml
	I0917 08:39:01.297474   16153 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-provisioner.yaml --> /etc/kubernetes/addons/rbac-external-provisioner.yaml (4442 bytes)
	I0917 08:39:01.491379   16153 addons.go:431] installing /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml
	I0917 08:39:01.491410   16153 ssh_runner.go:362] scp volumesnapshots/rbac-volume-snapshot-controller.yaml --> /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml (3545 bytes)
	I0917 08:39:01.678875   16153 addons.go:431] installing /etc/kubernetes/addons/ig-clusterrolebinding.yaml
	I0917 08:39:01.678906   16153 ssh_runner.go:362] scp inspektor-gadget/ig-clusterrolebinding.yaml --> /etc/kubernetes/addons/ig-clusterrolebinding.yaml (274 bytes)
	I0917 08:39:01.691507   16153 addons.go:431] installing /etc/kubernetes/addons/rbac-external-resizer.yaml
	I0917 08:39:01.691535   16153 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-resizer.yaml --> /etc/kubernetes/addons/rbac-external-resizer.yaml (2943 bytes)
	I0917 08:39:01.787825   16153 kapi.go:214] "coredns" deployment in "kube-system" namespace and "addons-118348" context rescaled to 1 replicas
	I0917 08:39:01.878797   16153 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/yakd-ns.yaml -f /etc/kubernetes/addons/yakd-sa.yaml -f /etc/kubernetes/addons/yakd-crb.yaml -f /etc/kubernetes/addons/yakd-svc.yaml -f /etc/kubernetes/addons/yakd-dp.yaml
	I0917 08:39:02.379637   16153 addons.go:431] installing /etc/kubernetes/addons/ig-crd.yaml
	I0917 08:39:02.379727   16153 ssh_runner.go:362] scp inspektor-gadget/ig-crd.yaml --> /etc/kubernetes/addons/ig-crd.yaml (5216 bytes)
	I0917 08:39:02.590631   16153 addons.go:431] installing /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I0917 08:39:02.590724   16153 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml (1475 bytes)
	I0917 08:39:02.982952   16153 addons.go:431] installing /etc/kubernetes/addons/rbac-external-snapshotter.yaml
	I0917 08:39:02.982982   16153 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-snapshotter.yaml --> /etc/kubernetes/addons/rbac-external-snapshotter.yaml (3149 bytes)
	I0917 08:39:03.375628   16153 pod_ready.go:103] pod "coredns-7c65d6cfc9-25csd" in "kube-system" namespace has status "Ready":"False"
	I0917 08:39:03.576041   16153 addons.go:431] installing /etc/kubernetes/addons/ig-daemonset.yaml
	I0917 08:39:03.576118   16153 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-daemonset.yaml (7735 bytes)
	I0917 08:39:03.576375   16153 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (3.976139353s)
	I0917 08:39:03.576468   16153 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (3.900368954s)
	I0917 08:39:03.576791   16153 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/nvidia-device-plugin.yaml: (3.88359943s)
	I0917 08:39:03.576883   16153 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml: (3.785026119s)
	I0917 08:39:03.576955   16153 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/deployment.yaml: (3.686609221s)
	I0917 08:39:03.588988   16153 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-attacher.yaml
	I0917 08:39:03.589093   16153 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-attacher.yaml (2143 bytes)
	I0917 08:39:03.783768   16153 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I0917 08:39:03.793536   16153 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml
	I0917 08:39:03.793618   16153 ssh_runner.go:362] scp csi-hostpath-driver/deploy/csi-hostpath-driverinfo.yaml --> /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml (1274 bytes)
	I0917 08:39:03.873440   16153 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/ig-namespace.yaml -f /etc/kubernetes/addons/ig-serviceaccount.yaml -f /etc/kubernetes/addons/ig-role.yaml -f /etc/kubernetes/addons/ig-rolebinding.yaml -f /etc/kubernetes/addons/ig-clusterrole.yaml -f /etc/kubernetes/addons/ig-clusterrolebinding.yaml -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-daemonset.yaml
	I0917 08:39:04.184222   16153 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-plugin.yaml
	I0917 08:39:04.184256   16153 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-plugin.yaml (8201 bytes)
	I0917 08:39:04.584066   16153 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-resizer.yaml
	I0917 08:39:04.584096   16153 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-resizer.yaml (2191 bytes)
	I0917 08:39:05.183635   16153 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
	I0917 08:39:05.183729   16153 ssh_runner.go:362] scp csi-hostpath-driver/deploy/csi-hostpath-storageclass.yaml --> /etc/kubernetes/addons/csi-hostpath-storageclass.yaml (846 bytes)
	I0917 08:39:05.378493   16153 pod_ready.go:103] pod "coredns-7c65d6cfc9-25csd" in "kube-system" namespace has status "Ready":"False"
	I0917 08:39:05.483860   16153 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
	I0917 08:39:05.488267   16153 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner-rancher.yaml: (5.513774328s)
	I0917 08:39:06.091339   16153 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_application_credentials.json (162 bytes)
	I0917 08:39:06.091417   16153 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-118348
	I0917 08:39:06.111586   16153 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/19648-8091/.minikube/machines/addons-118348/id_rsa Username:docker}
	I0917 08:39:06.979746   16153 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_cloud_project (12 bytes)
	I0917 08:39:07.382452   16153 pod_ready.go:103] pod "coredns-7c65d6cfc9-25csd" in "kube-system" namespace has status "Ready":"False"
	I0917 08:39:07.476557   16153 addons.go:234] Setting addon gcp-auth=true in "addons-118348"
	I0917 08:39:07.476740   16153 host.go:66] Checking if "addons-118348" exists ...
	I0917 08:39:07.477304   16153 cli_runner.go:164] Run: docker container inspect addons-118348 --format={{.State.Status}}
	I0917 08:39:07.505101   16153 ssh_runner.go:195] Run: cat /var/lib/minikube/google_application_credentials.json
	I0917 08:39:07.505159   16153 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-118348
	I0917 08:39:07.522879   16153 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/19648-8091/.minikube/machines/addons-118348/id_rsa Username:docker}
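The gcp-auth setup above copies a credentials file onto the node over scp and then sanity-checks it with cat. The same check from the host, using the minikube CLI and this run's profile name:

    # Confirm the credentials landed on the node (162 bytes per the scp line above).
    minikube -p addons-118348 ssh -- sudo cat /var/lib/minikube/google_application_credentials.json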
	I0917 08:39:08.791402   16153 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml: (8.814611517s)
	I0917 08:39:08.791526   16153 addons.go:475] Verifying addon ingress=true in "addons-118348"
	I0917 08:39:08.794143   16153 out.go:177] * Verifying ingress addon...
	I0917 08:39:08.797657   16153 kapi.go:75] Waiting for pod with label "app.kubernetes.io/name=ingress-nginx" in ns "ingress-nginx" ...
	I0917 08:39:08.801435   16153 kapi.go:86] Found 3 Pods for label selector app.kubernetes.io/name=ingress-nginx
	I0917 08:39:08.801494   16153 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0917 08:39:09.378157   16153 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0917 08:39:09.882501   16153 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0917 08:39:09.889642   16153 pod_ready.go:103] pod "coredns-7c65d6cfc9-25csd" in "kube-system" namespace has status "Ready":"False"
	I0917 08:39:10.383665   16153 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0917 08:39:10.881625   16153 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0917 08:39:11.377600   16153 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0917 08:39:11.803330   16153 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0917 08:39:11.890541   16153 pod_ready.go:103] pod "coredns-7c65d6cfc9-25csd" in "kube-system" namespace has status "Ready":"False"
	I0917 08:39:11.895607   16153 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/volcano-deployment.yaml: (11.90071276s)
	I0917 08:39:11.895789   16153 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (11.507869181s)
	I0917 08:39:11.895816   16153 addons.go:475] Verifying addon metrics-server=true in "addons-118348"
	I0917 08:39:11.895910   16153 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml: (11.504082402s)
	I0917 08:39:11.895944   16153 addons.go:475] Verifying addon registry=true in "addons-118348"
	I0917 08:39:11.896030   16153 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/helm-tiller-dp.yaml -f /etc/kubernetes/addons/helm-tiller-rbac.yaml -f /etc/kubernetes/addons/helm-tiller-svc.yaml: (11.111369791s)
	I0917 08:39:11.896072   16153 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/yakd-ns.yaml -f /etc/kubernetes/addons/yakd-sa.yaml -f /etc/kubernetes/addons/yakd-crb.yaml -f /etc/kubernetes/addons/yakd-svc.yaml -f /etc/kubernetes/addons/yakd-dp.yaml: (10.017176357s)
	I0917 08:39:11.896308   16153 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (8.11243331s)
	W0917 08:39:11.896404   16153 addons.go:457] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
	stdout:
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
	serviceaccount/snapshot-controller created
	clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
	clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
	role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	deployment.apps/snapshot-controller created
	
	stderr:
	error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
	ensure CRDs are installed first
	I0917 08:39:11.896427   16153 retry.go:31] will retry after 322.153826ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
	stdout:
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
	serviceaccount/snapshot-controller created
	clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
	clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
	role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	deployment.apps/snapshot-controller created
	
	stderr:
	error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
	ensure CRDs are installed first
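
[Editor's note] The failure and retry above are the usual CRD establishment race: the manifests create the VolumeSnapshot CRDs and a VolumeSnapshotClass custom resource in the same kubectl apply, and the custom resource is rejected because the just-created CRDs are not yet registered in API discovery. minikube simply retries with backoff (here after ~322ms). A minimal sketch of that retry-with-backoff shape, in the spirit of retry.go but not its actual code (applyAddon-style callback, attempt limit, and doubling delay are illustrative assumptions):

// Sketch only: generic retry-with-backoff, not minikube's retry.go.
package main

import (
	"fmt"
	"time"
)

// retryWithBackoff re-runs fn until it succeeds or attempts run out,
// doubling the wait between tries (cf. "will retry after 322.153826ms"
// in the log; the real delays are jittered).
func retryWithBackoff(fn func() error, maxAttempts int, initial time.Duration) error {
	delay := initial
	var err error
	for i := 0; i < maxAttempts; i++ {
		if err = fn(); err == nil {
			return nil
		}
		time.Sleep(delay)
		delay *= 2
	}
	return fmt.Errorf("all %d attempts failed: %w", maxAttempts, err)
}

func main() {
	attempt := 0
	err := retryWithBackoff(func() error {
		attempt++
		if attempt < 3 {
			// Simulates the CRD not yet being registered in discovery.
			return fmt.Errorf("no matches for kind \"VolumeSnapshotClass\"")
		}
		return nil // CRDs established; apply succeeds
	}, 5, 322*time.Millisecond)
	fmt.Println("result:", err, "after attempts:", attempt)
}

In the log, the re-apply with --force issued at 08:39:12.219 completes at 08:39:14.805, once the CRDs have been established.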
	I0917 08:39:11.896433   16153 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/ig-namespace.yaml -f /etc/kubernetes/addons/ig-serviceaccount.yaml -f /etc/kubernetes/addons/ig-role.yaml -f /etc/kubernetes/addons/ig-rolebinding.yaml -f /etc/kubernetes/addons/ig-clusterrole.yaml -f /etc/kubernetes/addons/ig-clusterrolebinding.yaml -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-daemonset.yaml: (8.02289686s)
	I0917 08:39:11.897620   16153 out.go:177] * To access YAKD - Kubernetes Dashboard, wait for Pod to be ready and run the following command:
	
		minikube -p addons-118348 service yakd-dashboard -n yakd-dashboard
	
	I0917 08:39:11.897734   16153 out.go:177] * Verifying registry addon...
	I0917 08:39:11.899790   16153 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=registry" in ns "kube-system" ...
	I0917 08:39:11.976198   16153 kapi.go:86] Found 2 Pods for label selector kubernetes.io/minikube-addons=registry
	I0917 08:39:11.976232   16153 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
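
[Editor's note] The kapi.go entries above (and the long runs of them below) are a plain poll loop: list pods in the namespace by label selector roughly every half second and report their phase until they leave Pending. A sketch of the same idea with client-go, assuming a configured *kubernetes.Clientset and k8s.io/apimachinery ≥ v0.27 for PollUntilContextTimeout; this is not minikube's exact implementation:

// Sketch only: poll pods by label until all are Running.
package kapiwait

import (
	"context"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/util/wait"
	"k8s.io/client-go/kubernetes"
)

func waitForLabeledPods(ctx context.Context, cs *kubernetes.Clientset, ns, selector string, timeout time.Duration) error {
	return wait.PollUntilContextTimeout(ctx, 500*time.Millisecond, timeout, true,
		func(ctx context.Context) (bool, error) {
			pods, err := cs.CoreV1().Pods(ns).List(ctx, metav1.ListOptions{LabelSelector: selector})
			if err != nil || len(pods.Items) == 0 {
				return false, nil // keep polling; transient errors are retried
			}
			for _, p := range pods.Items {
				if p.Status.Phase != corev1.PodRunning {
					return false, nil // still Pending, as in the log lines here
				}
			}
			return true, nil
		})
}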
	I0917 08:39:12.219600   16153 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I0917 08:39:12.302804   16153 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0917 08:39:12.491547   16153 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0917 08:39:12.801971   16153 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0917 08:39:12.904268   16153 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0917 08:39:13.186191   16153 ssh_runner.go:235] Completed: cat /var/lib/minikube/google_application_credentials.json: (5.681058063s)
	I0917 08:39:13.186448   16153 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml: (7.7022062s)
	I0917 08:39:13.186488   16153 addons.go:475] Verifying addon csi-hostpath-driver=true in "addons-118348"
	I0917 08:39:13.188467   16153 out.go:177]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.4.3
	I0917 08:39:13.188469   16153 out.go:177] * Verifying csi-hostpath-driver addon...
	I0917 08:39:13.189927   16153 out.go:177]   - Using image gcr.io/k8s-minikube/gcp-auth-webhook:v0.1.2
	I0917 08:39:13.190864   16153 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=csi-hostpath-driver" in ns "kube-system" ...
	I0917 08:39:13.191217   16153 addons.go:431] installing /etc/kubernetes/addons/gcp-auth-ns.yaml
	I0917 08:39:13.191240   16153 ssh_runner.go:362] scp gcp-auth/gcp-auth-ns.yaml --> /etc/kubernetes/addons/gcp-auth-ns.yaml (700 bytes)
	I0917 08:39:13.197784   16153 kapi.go:86] Found 3 Pods for label selector kubernetes.io/minikube-addons=csi-hostpath-driver
	I0917 08:39:13.197815   16153 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0917 08:39:13.286180   16153 addons.go:431] installing /etc/kubernetes/addons/gcp-auth-service.yaml
	I0917 08:39:13.286207   16153 ssh_runner.go:362] scp gcp-auth/gcp-auth-service.yaml --> /etc/kubernetes/addons/gcp-auth-service.yaml (788 bytes)
	I0917 08:39:13.301069   16153 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0917 08:39:13.374751   16153 addons.go:431] installing /etc/kubernetes/addons/gcp-auth-webhook.yaml
	I0917 08:39:13.374829   16153 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/gcp-auth-webhook.yaml (5421 bytes)
	I0917 08:39:13.398688   16153 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/gcp-auth-ns.yaml -f /etc/kubernetes/addons/gcp-auth-service.yaml -f /etc/kubernetes/addons/gcp-auth-webhook.yaml
	I0917 08:39:13.407581   16153 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0917 08:39:13.697225   16153 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0917 08:39:13.876265   16153 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0917 08:39:13.976651   16153 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0917 08:39:14.195368   16153 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0917 08:39:14.301336   16153 pod_ready.go:103] pod "coredns-7c65d6cfc9-25csd" in "kube-system" namespace has status "Ready":"False"
	I0917 08:39:14.374564   16153 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0917 08:39:14.476158   16153 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0917 08:39:14.696223   16153 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0917 08:39:14.801195   16153 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0917 08:39:14.805561   16153 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (2.585910854s)
	I0917 08:39:14.903124   16153 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0917 08:39:14.989553   16153 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/gcp-auth-ns.yaml -f /etc/kubernetes/addons/gcp-auth-service.yaml -f /etc/kubernetes/addons/gcp-auth-webhook.yaml: (1.590823205s)
	I0917 08:39:14.990773   16153 addons.go:475] Verifying addon gcp-auth=true in "addons-118348"
	I0917 08:39:14.992673   16153 out.go:177] * Verifying gcp-auth addon...
	I0917 08:39:14.995175   16153 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=gcp-auth" in ns "gcp-auth" ...
	I0917 08:39:15.003474   16153 kapi.go:86] Found 0 Pods for label selector kubernetes.io/minikube-addons=gcp-auth
	I0917 08:39:15.196047   16153 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0917 08:39:15.301091   16153 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0917 08:39:15.403998   16153 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0917 08:39:15.695926   16153 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0917 08:39:15.800975   16153 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0917 08:39:15.903517   16153 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0917 08:39:16.196433   16153 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0917 08:39:16.300641   16153 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0917 08:39:16.403886   16153 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0917 08:39:16.695937   16153 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0917 08:39:16.800195   16153 pod_ready.go:103] pod "coredns-7c65d6cfc9-25csd" in "kube-system" namespace has status "Ready":"False"
	I0917 08:39:16.801159   16153 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0917 08:39:16.903787   16153 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0917 08:39:17.195800   16153 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0917 08:39:17.300614   16153 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0917 08:39:17.404447   16153 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0917 08:39:17.695753   16153 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0917 08:39:17.801089   16153 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0917 08:39:17.903504   16153 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0917 08:39:18.240207   16153 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0917 08:39:18.300778   16153 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0917 08:39:18.403201   16153 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0917 08:39:18.694615   16153 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0917 08:39:18.800937   16153 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0917 08:39:18.902955   16153 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0917 08:39:19.195447   16153 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0917 08:39:19.300374   16153 pod_ready.go:103] pod "coredns-7c65d6cfc9-25csd" in "kube-system" namespace has status "Ready":"False"
	I0917 08:39:19.301314   16153 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0917 08:39:19.404212   16153 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0917 08:39:19.695526   16153 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0917 08:39:19.801550   16153 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0917 08:39:19.903670   16153 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0917 08:39:20.196313   16153 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0917 08:39:20.301097   16153 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0917 08:39:20.403515   16153 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0917 08:39:20.695785   16153 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0917 08:39:20.801697   16153 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0917 08:39:20.903754   16153 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0917 08:39:21.196407   16153 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0917 08:39:21.300444   16153 pod_ready.go:103] pod "coredns-7c65d6cfc9-25csd" in "kube-system" namespace has status "Ready":"False"
	I0917 08:39:21.301884   16153 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0917 08:39:21.403475   16153 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0917 08:39:21.696976   16153 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0917 08:39:21.801174   16153 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0917 08:39:21.903760   16153 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0917 08:39:22.195237   16153 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0917 08:39:22.302075   16153 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0917 08:39:22.403564   16153 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0917 08:39:22.695762   16153 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0917 08:39:22.801078   16153 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0917 08:39:22.903394   16153 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0917 08:39:23.195852   16153 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0917 08:39:23.300985   16153 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0917 08:39:23.403672   16153 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0917 08:39:23.695507   16153 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0917 08:39:23.799824   16153 pod_ready.go:103] pod "coredns-7c65d6cfc9-25csd" in "kube-system" namespace has status "Ready":"False"
	I0917 08:39:23.801331   16153 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0917 08:39:23.903684   16153 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0917 08:39:24.195953   16153 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0917 08:39:24.301016   16153 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0917 08:39:24.403280   16153 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0917 08:39:24.695823   16153 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0917 08:39:24.801714   16153 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0917 08:39:24.903787   16153 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0917 08:39:25.196103   16153 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0917 08:39:25.301728   16153 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0917 08:39:25.403545   16153 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0917 08:39:25.696288   16153 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0917 08:39:25.800494   16153 pod_ready.go:103] pod "coredns-7c65d6cfc9-25csd" in "kube-system" namespace has status "Ready":"False"
	I0917 08:39:25.801279   16153 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0917 08:39:25.903645   16153 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0917 08:39:26.195640   16153 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0917 08:39:26.301106   16153 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0917 08:39:26.403181   16153 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0917 08:39:26.695080   16153 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0917 08:39:26.801986   16153 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0917 08:39:26.904869   16153 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0917 08:39:27.196098   16153 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0917 08:39:27.300924   16153 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0917 08:39:27.403834   16153 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0917 08:39:27.703389   16153 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0917 08:39:27.801061   16153 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0917 08:39:27.905185   16153 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0917 08:39:28.195535   16153 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0917 08:39:28.300471   16153 pod_ready.go:103] pod "coredns-7c65d6cfc9-25csd" in "kube-system" namespace has status "Ready":"False"
	I0917 08:39:28.301181   16153 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0917 08:39:28.403736   16153 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0917 08:39:28.695445   16153 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0917 08:39:28.806196   16153 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0917 08:39:28.977196   16153 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0917 08:39:29.195010   16153 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0917 08:39:29.300944   16153 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0917 08:39:29.403058   16153 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0917 08:39:29.695632   16153 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0917 08:39:29.801045   16153 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0917 08:39:29.903518   16153 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0917 08:39:30.235969   16153 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0917 08:39:30.337102   16153 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0917 08:39:30.402952   16153 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0917 08:39:30.695584   16153 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0917 08:39:30.800195   16153 pod_ready.go:103] pod "coredns-7c65d6cfc9-25csd" in "kube-system" namespace has status "Ready":"False"
	I0917 08:39:30.801344   16153 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0917 08:39:30.903928   16153 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0917 08:39:31.196099   16153 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0917 08:39:31.300553   16153 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0917 08:39:31.402910   16153 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0917 08:39:31.696553   16153 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0917 08:39:31.801477   16153 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0917 08:39:31.903989   16153 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0917 08:39:32.196221   16153 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0917 08:39:32.301328   16153 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0917 08:39:32.404350   16153 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0917 08:39:32.695867   16153 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0917 08:39:32.801174   16153 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0917 08:39:32.903454   16153 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0917 08:39:33.196052   16153 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0917 08:39:33.300720   16153 pod_ready.go:103] pod "coredns-7c65d6cfc9-25csd" in "kube-system" namespace has status "Ready":"False"
	I0917 08:39:33.301711   16153 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0917 08:39:33.404101   16153 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0917 08:39:33.696008   16153 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0917 08:39:33.801185   16153 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0917 08:39:33.904266   16153 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0917 08:39:34.195896   16153 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0917 08:39:34.300747   16153 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0917 08:39:34.403215   16153 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0917 08:39:34.695586   16153 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0917 08:39:34.800528   16153 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0917 08:39:34.902669   16153 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0917 08:39:35.195847   16153 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0917 08:39:35.300842   16153 pod_ready.go:103] pod "coredns-7c65d6cfc9-25csd" in "kube-system" namespace has status "Ready":"False"
	I0917 08:39:35.301012   16153 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0917 08:39:35.403775   16153 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0917 08:39:35.696024   16153 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0917 08:39:35.801514   16153 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0917 08:39:35.903751   16153 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0917 08:39:36.195985   16153 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0917 08:39:36.300801   16153 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0917 08:39:36.403241   16153 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0917 08:39:36.695555   16153 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0917 08:39:36.800821   16153 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0917 08:39:36.903255   16153 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0917 08:39:37.197023   16153 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0917 08:39:37.300348   16153 pod_ready.go:93] pod "coredns-7c65d6cfc9-25csd" in "kube-system" namespace has status "Ready":"True"
	I0917 08:39:37.300374   16153 pod_ready.go:82] duration metric: took 36.006078672s for pod "coredns-7c65d6cfc9-25csd" in "kube-system" namespace to be "Ready" ...
	I0917 08:39:37.300387   16153 pod_ready.go:79] waiting up to 6m0s for pod "coredns-7c65d6cfc9-fd7jf" in "kube-system" namespace to be "Ready" ...
	I0917 08:39:37.301524   16153 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0917 08:39:37.302193   16153 pod_ready.go:98] error getting pod "coredns-7c65d6cfc9-fd7jf" in "kube-system" namespace (skipping!): pods "coredns-7c65d6cfc9-fd7jf" not found
	I0917 08:39:37.302215   16153 pod_ready.go:82] duration metric: took 1.819791ms for pod "coredns-7c65d6cfc9-fd7jf" in "kube-system" namespace to be "Ready" ...
	E0917 08:39:37.302227   16153 pod_ready.go:67] WaitExtra: waitPodCondition: error getting pod "coredns-7c65d6cfc9-fd7jf" in "kube-system" namespace (skipping!): pods "coredns-7c65d6cfc9-fd7jf" not found
	I0917 08:39:37.302236   16153 pod_ready.go:79] waiting up to 6m0s for pod "etcd-addons-118348" in "kube-system" namespace to be "Ready" ...
	I0917 08:39:37.306831   16153 pod_ready.go:93] pod "etcd-addons-118348" in "kube-system" namespace has status "Ready":"True"
	I0917 08:39:37.306853   16153 pod_ready.go:82] duration metric: took 4.609235ms for pod "etcd-addons-118348" in "kube-system" namespace to be "Ready" ...
	I0917 08:39:37.306866   16153 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-addons-118348" in "kube-system" namespace to be "Ready" ...
	I0917 08:39:37.311486   16153 pod_ready.go:93] pod "kube-apiserver-addons-118348" in "kube-system" namespace has status "Ready":"True"
	I0917 08:39:37.311511   16153 pod_ready.go:82] duration metric: took 4.63615ms for pod "kube-apiserver-addons-118348" in "kube-system" namespace to be "Ready" ...
	I0917 08:39:37.311524   16153 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-addons-118348" in "kube-system" namespace to be "Ready" ...
	I0917 08:39:37.316429   16153 pod_ready.go:93] pod "kube-controller-manager-addons-118348" in "kube-system" namespace has status "Ready":"True"
	I0917 08:39:37.316450   16153 pod_ready.go:82] duration metric: took 4.917819ms for pod "kube-controller-manager-addons-118348" in "kube-system" namespace to be "Ready" ...
	I0917 08:39:37.316459   16153 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-kbbwc" in "kube-system" namespace to be "Ready" ...
	I0917 08:39:37.403933   16153 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0917 08:39:37.498298   16153 pod_ready.go:93] pod "kube-proxy-kbbwc" in "kube-system" namespace has status "Ready":"True"
	I0917 08:39:37.498331   16153 pod_ready.go:82] duration metric: took 181.864769ms for pod "kube-proxy-kbbwc" in "kube-system" namespace to be "Ready" ...
	I0917 08:39:37.498344   16153 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-addons-118348" in "kube-system" namespace to be "Ready" ...
	I0917 08:39:37.695245   16153 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0917 08:39:37.801443   16153 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0917 08:39:37.898198   16153 pod_ready.go:93] pod "kube-scheduler-addons-118348" in "kube-system" namespace has status "Ready":"True"
	I0917 08:39:37.898221   16153 pod_ready.go:82] duration metric: took 399.869471ms for pod "kube-scheduler-addons-118348" in "kube-system" namespace to be "Ready" ...
	I0917 08:39:37.898232   16153 pod_ready.go:79] waiting up to 6m0s for pod "nvidia-device-plugin-daemonset-sghds" in "kube-system" namespace to be "Ready" ...
	I0917 08:39:37.902525   16153 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0917 08:39:38.301302   16153 pod_ready.go:93] pod "nvidia-device-plugin-daemonset-sghds" in "kube-system" namespace has status "Ready":"True"
	I0917 08:39:38.301335   16153 pod_ready.go:82] duration metric: took 403.095412ms for pod "nvidia-device-plugin-daemonset-sghds" in "kube-system" namespace to be "Ready" ...
	I0917 08:39:38.301345   16153 pod_ready.go:39] duration metric: took 37.017085561s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
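
[Editor's note] In the pod_ready entries above, "Ready":"True" refers to the PodReady condition in the pod status, which is stricter than Phase == Running: every container must also be passing its readiness probe. The check reduces to something like this (sketch, not minikube's pod_ready.go):

// Sketch only: test the PodReady condition on a pod.
package podready

import corev1 "k8s.io/api/core/v1"

// isPodReady reports whether the PodReady condition is True.
func isPodReady(p *corev1.Pod) bool {
	for _, c := range p.Status.Conditions {
		if c.Type == corev1.PodReady {
			return c.Status == corev1.ConditionTrue
		}
	}
	return false
}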
	I0917 08:39:38.301368   16153 api_server.go:52] waiting for apiserver process to appear ...
	I0917 08:39:38.301428   16153 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0917 08:39:38.302262   16153 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0917 08:39:38.302424   16153 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0917 08:39:38.315708   16153 api_server.go:72] duration metric: took 39.275221698s to wait for apiserver process to appear ...
	I0917 08:39:38.315733   16153 api_server.go:88] waiting for apiserver healthz status ...
	I0917 08:39:38.315756   16153 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I0917 08:39:38.319961   16153 api_server.go:279] https://192.168.49.2:8443/healthz returned 200:
	ok
	I0917 08:39:38.320776   16153 api_server.go:141] control plane version: v1.31.1
	I0917 08:39:38.320797   16153 api_server.go:131] duration metric: took 5.057934ms to wait for apiserver health ...
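
[Editor's note] The healthz probe above is a GET against the apiserver that returns 200 with body "ok"; default RBAC (the system:public-info-viewer binding) typically leaves /healthz, /livez and /readyz readable even unauthenticated. An equivalent standalone probe (sketch; minikube drives this through its own API-server client, and skipping TLS verification here is purely for illustration):

// Sketch only: bare HTTPS GET of the apiserver healthz endpoint.
package main

import (
	"crypto/tls"
	"fmt"
	"io"
	"net/http"
)

func main() {
	client := &http.Client{Transport: &http.Transport{
		TLSClientConfig: &tls.Config{InsecureSkipVerify: true}, // illustration only
	}}
	resp, err := client.Get("https://192.168.49.2:8443/healthz")
	if err != nil {
		fmt.Println("healthz unreachable:", err)
		return
	}
	defer resp.Body.Close()
	body, _ := io.ReadAll(resp.Body)
	fmt.Printf("%d %s\n", resp.StatusCode, body) // expect: 200 ok
}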
	I0917 08:39:38.320804   16153 system_pods.go:43] waiting for kube-system pods to appear ...
	I0917 08:39:38.403547   16153 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0917 08:39:38.504076   16153 system_pods.go:59] 18 kube-system pods found
	I0917 08:39:38.504116   16153 system_pods.go:61] "coredns-7c65d6cfc9-25csd" [74c6e7e6-faf9-4bf7-9fa9-534033b67fba] Running
	I0917 08:39:38.504127   16153 system_pods.go:61] "csi-hostpath-attacher-0" [73ce30f8-48fd-4ad4-baf1-931bcb63ef19] Pending / Ready:ContainersNotReady (containers with unready status: [csi-attacher]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-attacher])
	I0917 08:39:38.504135   16153 system_pods.go:61] "csi-hostpath-resizer-0" [f17ba8bf-d93d-4035-a806-a89e1efd8207] Pending / Ready:ContainersNotReady (containers with unready status: [csi-resizer]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-resizer])
	I0917 08:39:38.504145   16153 system_pods.go:61] "csi-hostpathplugin-fdkjh" [3613a36d-0f3a-4229-9fa1-dd07229fc18e] Pending / Ready:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter])
	I0917 08:39:38.504159   16153 system_pods.go:61] "etcd-addons-118348" [c46ee4c9-ef8b-4058-9b7d-68c96e840ec2] Running
	I0917 08:39:38.504165   16153 system_pods.go:61] "kube-apiserver-addons-118348" [b3a3e80d-da7d-4156-8dd0-175c76c46ee1] Running
	I0917 08:39:38.504171   16153 system_pods.go:61] "kube-controller-manager-addons-118348" [abd17890-f547-4618-82eb-0e928bc66c10] Running
	I0917 08:39:38.504176   16153 system_pods.go:61] "kube-ingress-dns-minikube" [62aae420-556f-422c-92b2-ee34cf2cf9df] Running
	I0917 08:39:38.504181   16153 system_pods.go:61] "kube-proxy-kbbwc" [0ad6532a-3039-47bf-9e87-8cb8503cb75b] Running
	I0917 08:39:38.504185   16153 system_pods.go:61] "kube-scheduler-addons-118348" [86664b5d-c607-4a88-be3e-8839052425e3] Running
	I0917 08:39:38.504192   16153 system_pods.go:61] "metrics-server-84c5f94fbc-9dxps" [3646ec2c-2273-4bf7-af3e-a3dfe0d91552] Running
	I0917 08:39:38.504200   16153 system_pods.go:61] "nvidia-device-plugin-daemonset-sghds" [1dd15af2-9e1e-4296-99f7-992a66fc0483] Running
	I0917 08:39:38.504205   16153 system_pods.go:61] "registry-66c9cd494c-t5sv4" [2f41b6f7-f293-467f-8215-b24af50ec8ba] Running
	I0917 08:39:38.504213   16153 system_pods.go:61] "registry-proxy-z9ss9" [29edb9a3-341b-486a-8045-5546e8911d8c] Pending / Ready:ContainersNotReady (containers with unready status: [registry-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-proxy])
	I0917 08:39:38.504228   16153 system_pods.go:61] "snapshot-controller-56fcc65765-bdpj5" [a10b3217-9bdc-40fd-8d71-3b72e5228e60] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I0917 08:39:38.504238   16153 system_pods.go:61] "snapshot-controller-56fcc65765-t4r9g" [d3c2bbc8-bdca-4d5d-bed7-90bc3f95662b] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I0917 08:39:38.504247   16153 system_pods.go:61] "storage-provisioner" [7704bba1-fdaa-4b34-8549-a04eb5b45b4a] Running
	I0917 08:39:38.504255   16153 system_pods.go:61] "tiller-deploy-b48cc5f79-ng9ss" [cad55614-1ecf-4037-9bff-258c6c00984a] Running
	I0917 08:39:38.504266   16153 system_pods.go:74] duration metric: took 183.454802ms to wait for pod list to return data ...
	I0917 08:39:38.504278   16153 default_sa.go:34] waiting for default service account to be created ...
	I0917 08:39:38.697019   16153 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0917 08:39:38.697288   16153 default_sa.go:45] found service account: "default"
	I0917 08:39:38.697310   16153 default_sa.go:55] duration metric: took 193.024971ms for default service account to be created ...
	I0917 08:39:38.697320   16153 system_pods.go:116] waiting for k8s-apps to be running ...
	I0917 08:39:38.801769   16153 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0917 08:39:38.903378   16153 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0917 08:39:38.904118   16153 system_pods.go:86] 18 kube-system pods found
	I0917 08:39:38.904139   16153 system_pods.go:89] "coredns-7c65d6cfc9-25csd" [74c6e7e6-faf9-4bf7-9fa9-534033b67fba] Running
	I0917 08:39:38.904149   16153 system_pods.go:89] "csi-hostpath-attacher-0" [73ce30f8-48fd-4ad4-baf1-931bcb63ef19] Pending / Ready:ContainersNotReady (containers with unready status: [csi-attacher]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-attacher])
	I0917 08:39:38.904155   16153 system_pods.go:89] "csi-hostpath-resizer-0" [f17ba8bf-d93d-4035-a806-a89e1efd8207] Pending / Ready:ContainersNotReady (containers with unready status: [csi-resizer]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-resizer])
	I0917 08:39:38.904163   16153 system_pods.go:89] "csi-hostpathplugin-fdkjh" [3613a36d-0f3a-4229-9fa1-dd07229fc18e] Pending / Ready:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter])
	I0917 08:39:38.904171   16153 system_pods.go:89] "etcd-addons-118348" [c46ee4c9-ef8b-4058-9b7d-68c96e840ec2] Running
	I0917 08:39:38.904176   16153 system_pods.go:89] "kube-apiserver-addons-118348" [b3a3e80d-da7d-4156-8dd0-175c76c46ee1] Running
	I0917 08:39:38.904180   16153 system_pods.go:89] "kube-controller-manager-addons-118348" [abd17890-f547-4618-82eb-0e928bc66c10] Running
	I0917 08:39:38.904187   16153 system_pods.go:89] "kube-ingress-dns-minikube" [62aae420-556f-422c-92b2-ee34cf2cf9df] Running
	I0917 08:39:38.904193   16153 system_pods.go:89] "kube-proxy-kbbwc" [0ad6532a-3039-47bf-9e87-8cb8503cb75b] Running
	I0917 08:39:38.904199   16153 system_pods.go:89] "kube-scheduler-addons-118348" [86664b5d-c607-4a88-be3e-8839052425e3] Running
	I0917 08:39:38.904203   16153 system_pods.go:89] "metrics-server-84c5f94fbc-9dxps" [3646ec2c-2273-4bf7-af3e-a3dfe0d91552] Running
	I0917 08:39:38.904207   16153 system_pods.go:89] "nvidia-device-plugin-daemonset-sghds" [1dd15af2-9e1e-4296-99f7-992a66fc0483] Running
	I0917 08:39:38.904210   16153 system_pods.go:89] "registry-66c9cd494c-t5sv4" [2f41b6f7-f293-467f-8215-b24af50ec8ba] Running
	I0917 08:39:38.904216   16153 system_pods.go:89] "registry-proxy-z9ss9" [29edb9a3-341b-486a-8045-5546e8911d8c] Pending / Ready:ContainersNotReady (containers with unready status: [registry-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-proxy])
	I0917 08:39:38.904225   16153 system_pods.go:89] "snapshot-controller-56fcc65765-bdpj5" [a10b3217-9bdc-40fd-8d71-3b72e5228e60] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I0917 08:39:38.904232   16153 system_pods.go:89] "snapshot-controller-56fcc65765-t4r9g" [d3c2bbc8-bdca-4d5d-bed7-90bc3f95662b] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I0917 08:39:38.904238   16153 system_pods.go:89] "storage-provisioner" [7704bba1-fdaa-4b34-8549-a04eb5b45b4a] Running
	I0917 08:39:38.904242   16153 system_pods.go:89] "tiller-deploy-b48cc5f79-ng9ss" [cad55614-1ecf-4037-9bff-258c6c00984a] Running
	I0917 08:39:38.904248   16153 system_pods.go:126] duration metric: took 206.922779ms to wait for k8s-apps to be running ...
	I0917 08:39:38.904258   16153 system_svc.go:44] waiting for kubelet service to be running ....
	I0917 08:39:38.904339   16153 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0917 08:39:38.917345   16153 system_svc.go:56] duration metric: took 13.075694ms WaitForService to wait for kubelet
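
[Editor's note] The kubelet check above shells out to systemd; is-active --quiet exits 0 when the unit is active and prints nothing. A standalone equivalent (sketch; the log's actual invocation runs under sudo and includes the literal token "service"):

// Sketch only: ask systemd whether kubelet is active.
package main

import (
	"fmt"
	"os/exec"
)

func main() {
	// is-active --quiet produces no output; the exit code is the answer.
	err := exec.Command("systemctl", "is-active", "--quiet", "kubelet").Run()
	fmt.Println("kubelet active:", err == nil)
}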
	I0917 08:39:38.917378   16153 kubeadm.go:582] duration metric: took 39.876897298s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0917 08:39:38.917401   16153 node_conditions.go:102] verifying NodePressure condition ...
	I0917 08:39:39.098073   16153 node_conditions.go:122] node storage ephemeral capacity is 304681132Ki
	I0917 08:39:39.098105   16153 node_conditions.go:123] node cpu capacity is 8
	I0917 08:39:39.098123   16153 node_conditions.go:105] duration metric: took 180.71542ms to run NodePressure ...
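
[Editor's note] The NodePressure step reads capacity straight off the node object: the figures above (8 CPUs, 304681132Ki ephemeral storage) come from node.Status.Capacity. A sketch with client-go, clientset wiring assumed and the node name taken from this run:

// Sketch only: read capacity figures from a Node's status.
package nodeinfo

import (
	"context"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
)

func printCapacity(ctx context.Context, cs *kubernetes.Clientset, name string) error {
	node, err := cs.CoreV1().Nodes().Get(ctx, name, metav1.GetOptions{})
	if err != nil {
		return err
	}
	cpu := node.Status.Capacity[corev1.ResourceCPU]
	storage := node.Status.Capacity[corev1.ResourceEphemeralStorage]
	// e.g. cpu=8 ephemeral-storage=304681132Ki for "addons-118348"
	fmt.Printf("cpu=%s ephemeral-storage=%s\n", cpu.String(), storage.String())
	return nil
}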
	I0917 08:39:39.098138   16153 start.go:241] waiting for startup goroutines ...
	I0917 08:39:39.195791   16153 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0917 08:39:39.302791   16153 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0917 08:39:39.403872   16153 kapi.go:107] duration metric: took 27.504080322s to wait for kubernetes.io/minikube-addons=registry ...
	I0917 08:39:39.695436   16153 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0917 08:39:39.802393   16153 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0917 08:39:40.195103   16153 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0917 08:39:40.301401   16153 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0917 08:39:40.730516   16153 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0917 08:39:40.831208   16153 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0917 08:39:41.195300   16153 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0917 08:39:41.302491   16153 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0917 08:39:41.696171   16153 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0917 08:39:41.829296   16153 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0917 08:39:42.195488   16153 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0917 08:39:42.301552   16153 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0917 08:39:42.697259   16153 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0917 08:39:42.801660   16153 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0917 08:39:43.195280   16153 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0917 08:39:43.301546   16153 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0917 08:39:43.695979   16153 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0917 08:39:43.801682   16153 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0917 08:39:44.196023   16153 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0917 08:39:44.301946   16153 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0917 08:39:44.696613   16153 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0917 08:39:44.802429   16153 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0917 08:39:45.195525   16153 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0917 08:39:45.301467   16153 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0917 08:39:45.695966   16153 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0917 08:39:45.802561   16153 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0917 08:39:46.196104   16153 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0917 08:39:46.301945   16153 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0917 08:39:46.696438   16153 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0917 08:39:46.825571   16153 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0917 08:39:47.195080   16153 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0917 08:39:47.301720   16153 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0917 08:39:47.696178   16153 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0917 08:39:47.802045   16153 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0917 08:39:48.195035   16153 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0917 08:39:48.301631   16153 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0917 08:39:48.694986   16153 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0917 08:39:48.801976   16153 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0917 08:39:49.195128   16153 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0917 08:39:49.302646   16153 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0917 08:39:49.696354   16153 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0917 08:39:49.802446   16153 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0917 08:39:50.196238   16153 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0917 08:39:50.301860   16153 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0917 08:39:50.696185   16153 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0917 08:39:50.802423   16153 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0917 08:39:51.195880   16153 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0917 08:39:51.302498   16153 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0917 08:39:51.696568   16153 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0917 08:39:51.806601   16153 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0917 08:39:52.195437   16153 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0917 08:39:52.302602   16153 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0917 08:39:52.695992   16153 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0917 08:39:52.802752   16153 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0917 08:39:53.195662   16153 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0917 08:39:53.301968   16153 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0917 08:39:53.695313   16153 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0917 08:39:53.802173   16153 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0917 08:39:54.203372   16153 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0917 08:39:54.301608   16153 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0917 08:39:54.695195   16153 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0917 08:39:54.801893   16153 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0917 08:39:55.195571   16153 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0917 08:39:55.302370   16153 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0917 08:39:55.695188   16153 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0917 08:39:55.802351   16153 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0917 08:39:56.195986   16153 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0917 08:39:56.301661   16153 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0917 08:39:56.694791   16153 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0917 08:39:56.801629   16153 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0917 08:39:57.195174   16153 kapi.go:107] duration metric: took 44.004310731s to wait for kubernetes.io/minikube-addons=csi-hostpath-driver ...
	I0917 08:39:57.301598   16153 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	[... 42 near-identical "waiting for pod \"app.kubernetes.io/name=ingress-nginx\", current state: Pending" poll lines (~500ms interval, 08:39:57-08:40:18) condensed ...]
	I0917 08:40:18.800977   16153 kapi.go:107] duration metric: took 1m10.003320251s to wait for app.kubernetes.io/name=ingress-nginx ...
	I0917 08:40:37.000995   16153 kapi.go:86] Found 1 Pods for label selector kubernetes.io/minikube-addons=gcp-auth
	I0917 08:40:37.001020   16153 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	[... 134 near-identical "waiting for pod \"kubernetes.io/minikube-addons=gcp-auth\", current state: Pending" poll lines (~500ms interval, 08:40:37-08:41:43) condensed ...]
	I0917 08:41:44.498642   16153 kapi.go:107] duration metric: took 2m29.50346527s to wait for kubernetes.io/minikube-addons=gcp-auth ...
	I0917 08:41:44.500599   16153 out.go:177] * Your GCP credentials will now be mounted into every pod created in the addons-118348 cluster.
	I0917 08:41:44.502087   16153 out.go:177] * If you don't want your credentials mounted into a specific pod, add a label with the `gcp-auth-skip-secret` key to your pod configuration.
	I0917 08:41:44.503315   16153 out.go:177] * If you want existing pods to be mounted with credentials, either recreate them or rerun addons enable with --refresh.
	I0917 08:41:44.504843   16153 out.go:177] * Enabled addons: storage-provisioner, nvidia-device-plugin, ingress-dns, cloud-spanner, default-storageclass, storage-provisioner-rancher, volcano, metrics-server, helm-tiller, inspektor-gadget, yakd, volumesnapshots, registry, csi-hostpath-driver, ingress, gcp-auth
	I0917 08:41:44.506400   16153 addons.go:510] duration metric: took 2m45.465894838s for enable addons: enabled=[storage-provisioner nvidia-device-plugin ingress-dns cloud-spanner default-storageclass storage-provisioner-rancher volcano metrics-server helm-tiller inspektor-gadget yakd volumesnapshots registry csi-hostpath-driver ingress gcp-auth]
	I0917 08:41:44.506453   16153 start.go:246] waiting for cluster config update ...
	I0917 08:41:44.506486   16153 start.go:255] writing updated cluster config ...
	I0917 08:41:44.506795   16153 ssh_runner.go:195] Run: rm -f paused
	I0917 08:41:44.554434   16153 start.go:600] kubectl: 1.31.1, cluster: 1.31.1 (minor skew: 0)
	I0917 08:41:44.556262   16153 out.go:177] * Done! kubectl is now configured to use "addons-118348" cluster and "default" namespace by default
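	[editor's note] The two gcp-auth messages above (08:41:44) describe an opt-out label and a refresh path. A minimal sketch of both against this run's cluster, kept hedged: the label key `gcp-auth-skip-secret` and the `--refresh` flag come straight from the messages above, while the pod name, image, and the label value "true" are illustrative assumptions only.
	
	  # Hypothetical pod that should NOT get GCP credentials mounted:
	  # the gcp-auth-skip-secret label (value assumed to be "true") asks the
	  # gcp-auth webhook to skip this pod, per the note above.
	  kubectl --context addons-118348 run no-creds-demo --image=busybox \
	    --labels=gcp-auth-skip-secret=true --restart=Never -- sleep 3600
	
	  # Re-mount credentials into pods created before the addon was enabled,
	  # per the "rerun addons enable with --refresh" note above.
	  minikube -p addons-118348 addons enable gcp-auth --refresh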
	
	
	==> Docker <==
	Sep 17 08:51:10 addons-118348 dockerd[1336]: time="2024-09-17T08:51:10.973742658Z" level=info msg="ignoring event" container=cc3fbeb2435cc6c4788b18c17bae5c68840eccb43158fabff407fe3a0b337d01 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Sep 17 08:51:11 addons-118348 dockerd[1336]: time="2024-09-17T08:51:11.112830939Z" level=info msg="ignoring event" container=5bea26d60bddb65cf782a4e101a6ce375d64ca9756ed19dfefcc38a3d05423bc module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Sep 17 08:51:11 addons-118348 dockerd[1336]: time="2024-09-17T08:51:11.199655217Z" level=info msg="ignoring event" container=ce2f10f9d98f6c27aedb86605306b03715140f3e48c59d1310e41747b4a7dc69 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Sep 17 08:51:11 addons-118348 dockerd[1336]: time="2024-09-17T08:51:11.274295423Z" level=info msg="ignoring event" container=288210703f58c42010f08593054289bf2d51e7672da207c991e080596682d56c module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Sep 17 08:51:11 addons-118348 dockerd[1336]: time="2024-09-17T08:51:11.274597037Z" level=info msg="ignoring event" container=f92be85642967b20987efd3b92788cbde05ac27f455ca9dd4b2f987761334625 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Sep 17 08:51:11 addons-118348 cri-dockerd[1601]: time="2024-09-17T08:51:11Z" level=info msg="Stop pulling image docker.io/kicbase/echo-server:1.0: Status: Downloaded newer image for kicbase/echo-server:1.0"
	Sep 17 08:51:15 addons-118348 dockerd[1336]: time="2024-09-17T08:51:15.525865827Z" level=info msg="Container failed to exit within 2s of signal 15 - using the force" container=229912276e6a1ada3f17bfdb913a4925e0c346a45c48d462fc006bfe59ce64c0
	Sep 17 08:51:15 addons-118348 dockerd[1336]: time="2024-09-17T08:51:15.569849678Z" level=info msg="ignoring event" container=229912276e6a1ada3f17bfdb913a4925e0c346a45c48d462fc006bfe59ce64c0 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Sep 17 08:51:15 addons-118348 dockerd[1336]: time="2024-09-17T08:51:15.715058693Z" level=info msg="ignoring event" container=ef9a28f1d568c6e8e2b29ccf81df016e1930071eb0423a09c7099c8989b59077 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Sep 17 08:51:16 addons-118348 dockerd[1336]: time="2024-09-17T08:51:16.404089885Z" level=info msg="Container failed to exit within 30s of signal 15 - using the force" container=34ebf1568823a9abdb506aad920db2aa7508f251a8feebd84bcef40f9f0a176c
	Sep 17 08:51:16 addons-118348 dockerd[1336]: time="2024-09-17T08:51:16.425897851Z" level=info msg="ignoring event" container=34ebf1568823a9abdb506aad920db2aa7508f251a8feebd84bcef40f9f0a176c module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Sep 17 08:51:16 addons-118348 dockerd[1336]: time="2024-09-17T08:51:16.533659046Z" level=info msg="ignoring event" container=17107eab7e2c053d5f508267e4216e7b2f2748ed46998fc843aa6ce26e9e6cc0 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Sep 17 08:51:16 addons-118348 dockerd[1336]: time="2024-09-17T08:51:16.586035038Z" level=info msg="ignoring event" container=383440bc3e922d5393564a2b25f52b2ee4c79a29d7fee1ffd176edbd3e181e8c module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Sep 17 08:51:16 addons-118348 dockerd[1336]: time="2024-09-17T08:51:16.694998563Z" level=info msg="ignoring event" container=737e486e424f2851ff8113b42d56ee69ac9a3096a57d1f86af8ac8490993a4ce module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Sep 17 08:51:17 addons-118348 cri-dockerd[1601]: time="2024-09-17T08:51:17Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/7e79906ca769bfe62340e57232ca756180eaf5fd86fec639addce8332a05a29b/resolv.conf as [nameserver 10.96.0.10 search headlamp.svc.cluster.local svc.cluster.local cluster.local us-east4-a.c.k8s-minikube.internal c.k8s-minikube.internal google.internal options ndots:5]"
	Sep 17 08:51:18 addons-118348 dockerd[1336]: time="2024-09-17T08:51:18.048764263Z" level=warning msg="reference for unknown type: " digest="sha256:8825bb13459c64dcf9503d836b94b49c97dc831aff7c325a6eed68961388cf9c" remote="ghcr.io/headlamp-k8s/headlamp@sha256:8825bb13459c64dcf9503d836b94b49c97dc831aff7c325a6eed68961388cf9c"
	Sep 17 08:51:20 addons-118348 cri-dockerd[1601]: time="2024-09-17T08:51:20Z" level=info msg="Stop pulling image ghcr.io/headlamp-k8s/headlamp:v0.25.1@sha256:8825bb13459c64dcf9503d836b94b49c97dc831aff7c325a6eed68961388cf9c: Status: Downloaded newer image for ghcr.io/headlamp-k8s/headlamp@sha256:8825bb13459c64dcf9503d836b94b49c97dc831aff7c325a6eed68961388cf9c"
	Sep 17 08:51:20 addons-118348 dockerd[1336]: time="2024-09-17T08:51:20.262319605Z" level=info msg="Attempting next endpoint for pull after error: Head \"https://gcr.io/v2/k8s-minikube/busybox/manifests/latest\": unauthorized: authentication failed"
	Sep 17 08:51:20 addons-118348 dockerd[1336]: time="2024-09-17T08:51:20.360786649Z" level=error msg="Handler for POST /v1.43/images/create returned error: Head \"https://gcr.io/v2/k8s-minikube/busybox/manifests/latest\": unauthorized: authentication failed"
	Sep 17 08:51:37 addons-118348 dockerd[1336]: time="2024-09-17T08:51:37.705734164Z" level=info msg="ignoring event" container=af3707355027c5e154a2c2cbc6a1009f448196a1e252c3b0617fe33f494b9547 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Sep 17 08:51:38 addons-118348 dockerd[1336]: time="2024-09-17T08:51:38.180066631Z" level=info msg="ignoring event" container=2142006050fe3d6472faea03ddf0ca8a2c6684055d4a029a2fe32467d6e90142 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Sep 17 08:51:38 addons-118348 dockerd[1336]: time="2024-09-17T08:51:38.244368670Z" level=info msg="ignoring event" container=59629b206e29a2b335b54b2083abd3fdb2030d9f8f50c8e75301fce625a96c34 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Sep 17 08:51:38 addons-118348 dockerd[1336]: time="2024-09-17T08:51:38.310873870Z" level=info msg="ignoring event" container=ce7007c1fbe071bf8be031cab9076d8b43d8ae3b6fb04886de006ba667f7a4e0 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Sep 17 08:51:38 addons-118348 cri-dockerd[1601]: time="2024-09-17T08:51:38Z" level=info msg="Failed to read pod IP from plugin/docker: networkPlugin cni failed on the status hook for pod \"registry-proxy-z9ss9_kube-system\": unexpected command output nsenter: cannot open /proc/4131/ns/net: No such file or directory\n with error: exit status 1"
	Sep 17 08:51:38 addons-118348 dockerd[1336]: time="2024-09-17T08:51:38.395824559Z" level=info msg="ignoring event" container=4f57112a8c3563bd5083401d9c67511eb4e68785c64dc9aeb47adef7c5aec4af module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                                        CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	4b522f18d071e       ghcr.io/headlamp-k8s/headlamp@sha256:8825bb13459c64dcf9503d836b94b49c97dc831aff7c325a6eed68961388cf9c                        19 seconds ago      Running             headlamp                  0                   7e79906ca769b       headlamp-7b5c95b59d-7tpl7
	d35cd61b18ff7       kicbase/echo-server@sha256:127ac38a2bb9537b7f252addff209ea6801edcac8a92c8b1104dacd66a583ed6                                  28 seconds ago      Running             hello-world-app           0                   0a4f4694eba34       hello-world-app-55bf9c44b4-m6vxk
	4f1f9e1fad01f       nginx@sha256:a5127daff3d6f4606be3100a252419bfa84fd6ee5cd74d0feaca1a5068f97dcf                                                38 seconds ago      Running             nginx                     0                   52e40a84304f3       nginx
	b1ba8e0c3a913       gcr.io/k8s-minikube/gcp-auth-webhook@sha256:e6c5b3bc32072ea370d34c27836efd11b3519d25bd444c2a8efc339cff0e20fb                 9 minutes ago       Running             gcp-auth                  0                   961018bc6d86e       gcp-auth-89d5ffd79-5cd98
	fde3cb4cbfcda       ce263a8653f9c                                                                                                                11 minutes ago      Exited              patch                     1                   941c928e6fdd0       ingress-nginx-admission-patch-zmqg2
	175f41e84f172       registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:a320a50cc91bd15fd2d6fa6de58bd98c1bd64b9a6f926ce23a600d87043455a3   11 minutes ago      Exited              create                    0                   0cdd60007cdb1       ingress-nginx-admission-create-qdddq
	59629b206e29a       gcr.io/k8s-minikube/kube-registry-proxy@sha256:b3fa0b2df8737fdb85ad5918a7e2652527463e357afff83a5e5bb966bcedc367              12 minutes ago      Exited              registry-proxy            0                   4f57112a8c356       registry-proxy-z9ss9
	d9dc593c31cfe       6e38f40d628db                                                                                                                12 minutes ago      Running             storage-provisioner       0                   5aef938fa147f       storage-provisioner
	b282494940988       c69fa2e9cbf5f                                                                                                                12 minutes ago      Running             coredns                   0                   1571a212d6e0e       coredns-7c65d6cfc9-25csd
	bbf2f71545987       60c005f310ff3                                                                                                                12 minutes ago      Running             kube-proxy                0                   0ca66b293eb23       kube-proxy-kbbwc
	632fef3907db7       175ffd71cce3d                                                                                                                12 minutes ago      Running             kube-controller-manager   0                   b8c068cbe92d4       kube-controller-manager-addons-118348
	3b86720cba6d5       6bab7719df100                                                                                                                12 minutes ago      Running             kube-apiserver            0                   44b0deecee93b       kube-apiserver-addons-118348
	a87794d0bfcfc       9aa1fad941575                                                                                                                12 minutes ago      Running             kube-scheduler            0                   eb92559371368       kube-scheduler-addons-118348
	f68d38443e29f       2e96e5913fc06                                                                                                                12 minutes ago      Running             etcd                      0                   dbdce7e538d98       etcd-addons-118348
	
	
	==> coredns [b28249494098] <==
	[INFO] 10.244.0.22:33250 - 31260 "AAAA IN hello-world-app.default.svc.cluster.local.google.internal. udp 75 false 512" NXDOMAIN qr,rd,ra 75 0.003903857s
	[INFO] 10.244.0.22:34780 - 39607 "AAAA IN hello-world-app.default.svc.cluster.local.google.internal. udp 75 false 512" NXDOMAIN qr,rd,ra 75 0.003875029s
	[INFO] 10.244.0.22:44235 - 33312 "A IN hello-world-app.default.svc.cluster.local. udp 59 false 512" NOERROR qr,aa,rd 116 0.000078672s
	[INFO] 10.244.0.22:33250 - 51374 "A IN hello-world-app.default.svc.cluster.local. udp 59 false 512" NOERROR qr,aa,rd 116 0.000061981s
	[INFO] 10.244.0.22:34780 - 21876 "A IN hello-world-app.default.svc.cluster.local. udp 59 false 512" NOERROR qr,aa,rd 116 0.000044802s
	[INFO] 10.244.0.22:58125 - 47928 "AAAA IN hello-world-app.default.svc.cluster.local.c.k8s-minikube.internal. udp 83 false 512" NXDOMAIN qr,rd,ra 83 0.00211762s
	[INFO] 10.244.0.22:45801 - 16343 "AAAA IN hello-world-app.default.svc.cluster.local.c.k8s-minikube.internal. udp 83 false 512" NXDOMAIN qr,rd,ra 83 0.007450403s
	[INFO] 10.244.0.22:43230 - 5612 "AAAA IN hello-world-app.default.svc.cluster.local.c.k8s-minikube.internal. udp 83 false 512" NXDOMAIN qr,rd,ra 83 0.002275494s
	[INFO] 10.244.0.22:56104 - 39511 "AAAA IN hello-world-app.default.svc.cluster.local.c.k8s-minikube.internal. udp 83 false 512" NXDOMAIN qr,rd,ra 83 0.002220231s
	[INFO] 10.244.0.22:43861 - 33385 "AAAA IN hello-world-app.default.svc.cluster.local.c.k8s-minikube.internal. udp 83 false 512" NXDOMAIN qr,rd,ra 83 0.002209809s
	[INFO] 10.244.0.22:45801 - 19158 "A IN hello-world-app.default.svc.cluster.local.google.internal. udp 75 false 512" NXDOMAIN qr,rd,ra 75 0.005381891s
	[INFO] 10.244.0.22:58125 - 34386 "A IN hello-world-app.default.svc.cluster.local.google.internal. udp 75 false 512" NXDOMAIN qr,rd,ra 75 0.005537849s
	[INFO] 10.244.0.22:43861 - 1243 "A IN hello-world-app.default.svc.cluster.local.google.internal. udp 75 false 512" NXDOMAIN qr,rd,ra 75 0.005358587s
	[INFO] 10.244.0.22:56104 - 49318 "A IN hello-world-app.default.svc.cluster.local.google.internal. udp 75 false 512" NXDOMAIN qr,rd,ra 75 0.005503151s
	[INFO] 10.244.0.22:43230 - 17559 "A IN hello-world-app.default.svc.cluster.local.google.internal. udp 75 false 512" NXDOMAIN qr,rd,ra 75 0.005489859s
	[INFO] 10.244.0.22:56104 - 60953 "AAAA IN hello-world-app.default.svc.cluster.local.google.internal. udp 75 false 512" NXDOMAIN qr,rd,ra 75 0.004506226s
	[INFO] 10.244.0.22:45801 - 22069 "AAAA IN hello-world-app.default.svc.cluster.local.google.internal. udp 75 false 512" NXDOMAIN qr,rd,ra 75 0.00484831s
	[INFO] 10.244.0.22:58125 - 49347 "AAAA IN hello-world-app.default.svc.cluster.local.google.internal. udp 75 false 512" NXDOMAIN qr,rd,ra 75 0.00251105s
	[INFO] 10.244.0.22:56104 - 16363 "A IN hello-world-app.default.svc.cluster.local. udp 59 false 512" NOERROR qr,aa,rd 116 0.000047949s
	[INFO] 10.244.0.22:43230 - 29112 "AAAA IN hello-world-app.default.svc.cluster.local.google.internal. udp 75 false 512" NXDOMAIN qr,rd,ra 75 0.002655104s
	[INFO] 10.244.0.22:43861 - 38546 "AAAA IN hello-world-app.default.svc.cluster.local.google.internal. udp 75 false 512" NXDOMAIN qr,rd,ra 75 0.004782989s
	[INFO] 10.244.0.22:43861 - 46633 "A IN hello-world-app.default.svc.cluster.local. udp 59 false 512" NOERROR qr,aa,rd 116 0.000075892s
	[INFO] 10.244.0.22:58125 - 52218 "A IN hello-world-app.default.svc.cluster.local. udp 59 false 512" NOERROR qr,aa,rd 116 0.000073019s
	[INFO] 10.244.0.22:45801 - 20959 "A IN hello-world-app.default.svc.cluster.local. udp 59 false 512" NOERROR qr,aa,rd 116 0.000091201s
	[INFO] 10.244.0.22:43230 - 53652 "A IN hello-world-app.default.svc.cluster.local. udp 59 false 512" NOERROR qr,aa,rd 116 0.000088634s
	
	
	==> describe nodes <==
	Name:               addons-118348
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=addons-118348
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=9256ba43b41ea130fa48757ddb8d93db00574f61
	                    minikube.k8s.io/name=addons-118348
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_09_17T08_38_54_0700
	                    minikube.k8s.io/version=v1.34.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	                    topology.hostpath.csi/node=addons-118348
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/cri-dockerd.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Tue, 17 Sep 2024 08:38:51 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  addons-118348
	  AcquireTime:     <unset>
	  RenewTime:       Tue, 17 Sep 2024 08:51:37 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Tue, 17 Sep 2024 08:51:29 +0000   Tue, 17 Sep 2024 08:38:49 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Tue, 17 Sep 2024 08:51:29 +0000   Tue, 17 Sep 2024 08:38:49 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Tue, 17 Sep 2024 08:51:29 +0000   Tue, 17 Sep 2024 08:38:49 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Tue, 17 Sep 2024 08:51:29 +0000   Tue, 17 Sep 2024 08:38:51 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.49.2
	  Hostname:    addons-118348
	Capacity:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32859316Ki
	  pods:               110
	Allocatable:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32859316Ki
	  pods:               110
	System Info:
	  Machine ID:                 014fe64c43e8487e9a13747dfcac2bbe
	  System UUID:                6e85f00e-e142-4af7-9baf-424820c40175
	  Boot ID:                    56c7860f-74df-456c-8d25-e851e670c43e
	  Kernel Version:             5.15.0-1069-gcp
	  OS Image:                   Ubuntu 22.04.4 LTS
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  docker://27.2.1
	  Kubelet Version:            v1.31.1
	  Kube-Proxy Version:         v1.31.1
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (12 in total)
	  Namespace                   Name                                     CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                     ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         9m15s
	  default                     hello-world-app-55bf9c44b4-m6vxk         0 (0%)        0 (0%)      0 (0%)           0 (0%)         30s
	  default                     nginx                                    0 (0%)        0 (0%)      0 (0%)           0 (0%)         39s
	  gcp-auth                    gcp-auth-89d5ffd79-5cd98                 0 (0%)        0 (0%)      0 (0%)           0 (0%)         11m
	  headlamp                    headlamp-7b5c95b59d-7tpl7                0 (0%)        0 (0%)      0 (0%)           0 (0%)         22s
	  kube-system                 coredns-7c65d6cfc9-25csd                 100m (1%)     0 (0%)      70Mi (0%)        170Mi (0%)     12m
	  kube-system                 etcd-addons-118348                       100m (1%)     0 (0%)      100Mi (0%)       0 (0%)         12m
	  kube-system                 kube-apiserver-addons-118348             250m (3%)     0 (0%)      0 (0%)           0 (0%)         12m
	  kube-system                 kube-controller-manager-addons-118348    200m (2%)     0 (0%)      0 (0%)           0 (0%)         12m
	  kube-system                 kube-proxy-kbbwc                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         12m
	  kube-system                 kube-scheduler-addons-118348             100m (1%)     0 (0%)      0 (0%)           0 (0%)         12m
	  kube-system                 storage-provisioner                      0 (0%)        0 (0%)      0 (0%)           0 (0%)         12m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (9%)   0 (0%)
	  memory             170Mi (0%)  170Mi (0%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type     Reason                   Age                From             Message
	  ----     ------                   ----               ----             -------
	  Normal   Starting                 12m                kube-proxy       
	  Normal   NodeAllocatableEnforced  12m                kubelet          Updated Node Allocatable limit across pods
	  Warning  CgroupV1                 12m                kubelet          Cgroup v1 support is in maintenance mode, please migrate to Cgroup v2.
	  Normal   NodeHasSufficientMemory  12m (x7 over 12m)  kubelet          Node addons-118348 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    12m (x6 over 12m)  kubelet          Node addons-118348 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     12m (x6 over 12m)  kubelet          Node addons-118348 status is now: NodeHasSufficientPID
	  Normal   Starting                 12m                kubelet          Starting kubelet.
	  Normal   Starting                 12m                kubelet          Starting kubelet.
	  Warning  CgroupV1                 12m                kubelet          Cgroup v1 support is in maintenance mode, please migrate to Cgroup v2.
	  Normal   NodeAllocatableEnforced  12m                kubelet          Updated Node Allocatable limit across pods
	  Normal   NodeHasSufficientMemory  12m                kubelet          Node addons-118348 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    12m                kubelet          Node addons-118348 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     12m                kubelet          Node addons-118348 status is now: NodeHasSufficientPID
	  Normal   RegisteredNode           12m                node-controller  Node addons-118348 event: Registered Node addons-118348 in Controller
	
	
	==> dmesg <==
	[  +0.000009] ll header: 00000000: ff ff ff ff ff ff 0a 37 71 78 45 4a 08 06
	[  +1.313513] IPv4: martian source 10.244.0.1 from 10.244.0.18, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff 1a c5 48 0e 64 c7 08 06
	[  +5.940819] IPv4: martian source 10.244.0.1 from 10.244.0.21, on dev eth0
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff 1a 57 66 18 23 35 08 06
	[  +0.240878] IPv4: martian source 10.244.0.1 from 10.244.0.19, on dev eth0
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff 72 f2 ed 40 5d 18 08 06
	[  +0.059493] IPv4: martian source 10.244.0.1 from 10.244.0.20, on dev eth0
	[  +0.000008] ll header: 00000000: ff ff ff ff ff ff 46 25 f0 cc 1d 5d 08 06
	[Sep17 08:40] IPv4: martian source 10.244.0.1 from 10.244.0.22, on dev eth0
	[  +0.000005] ll header: 00000000: ff ff ff ff ff ff 9e 62 9c 3d 95 97 08 06
	[Sep17 08:41] IPv4: martian source 10.244.0.1 from 10.244.0.24, on dev eth0
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff 96 ec f4 5f 7b e9 08 06
	[  +0.074008] IPv4: martian source 10.244.0.1 from 10.244.0.25, on dev eth0
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff 02 e2 37 f0 e8 fc 08 06
	[ +24.575841] IPv4: martian source 10.244.0.1 from 10.244.0.26, on dev eth0
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff e6 2c 62 9b b4 56 08 06
	[  +0.000483] IPv4: martian source 10.244.0.26 from 10.244.0.2, on dev eth0
	[  +0.000005] ll header: 00000000: ff ff ff ff ff ff f2 64 4b 88 7b 14 08 06
	[Sep17 08:50] IPv4: martian source 10.244.0.1 from 10.244.0.32, on dev eth0
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff 5e 66 c8 65 f7 ef 08 06
	[Sep17 08:51] IPv4: martian source 10.244.0.36 from 10.244.0.22, on dev eth0
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff 9e 62 9c 3d 95 97 08 06
	[  +0.674983] IPv4: martian source 10.244.0.22 from 10.244.0.2, on dev eth0
	[  +0.000008] ll header: 00000000: ff ff ff ff ff ff f2 64 4b 88 7b 14 08 06
	
	
	==> etcd [f68d38443e29] <==
	{"level":"info","ts":"2024-09-17T08:38:49.298199Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aec36adc501070cc received MsgVoteResp from aec36adc501070cc at term 2"}
	{"level":"info","ts":"2024-09-17T08:38:49.298217Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aec36adc501070cc became leader at term 2"}
	{"level":"info","ts":"2024-09-17T08:38:49.298229Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: aec36adc501070cc elected leader aec36adc501070cc at term 2"}
	{"level":"info","ts":"2024-09-17T08:38:49.299275Z","caller":"etcdserver/server.go:2118","msg":"published local member to cluster through raft","local-member-id":"aec36adc501070cc","local-member-attributes":"{Name:addons-118348 ClientURLs:[https://192.168.49.2:2379]}","request-path":"/0/members/aec36adc501070cc/attributes","cluster-id":"fa54960ea34d58be","publish-timeout":"7s"}
	{"level":"info","ts":"2024-09-17T08:38:49.299275Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-09-17T08:38:49.299281Z","caller":"etcdserver/server.go:2629","msg":"setting up initial cluster version using v2 API","cluster-version":"3.5"}
	{"level":"info","ts":"2024-09-17T08:38:49.299305Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-09-17T08:38:49.299577Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2024-09-17T08:38:49.299598Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2024-09-17T08:38:49.299976Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"fa54960ea34d58be","local-member-id":"aec36adc501070cc","cluster-version":"3.5"}
	{"level":"info","ts":"2024-09-17T08:38:49.300085Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2024-09-17T08:38:49.300105Z","caller":"etcdserver/server.go:2653","msg":"cluster version is updated","cluster-version":"3.5"}
	{"level":"info","ts":"2024-09-17T08:38:49.300392Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2024-09-17T08:38:49.300476Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2024-09-17T08:38:49.302401Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2024-09-17T08:38:49.302795Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.49.2:2379"}
	{"level":"warn","ts":"2024-09-17T08:39:18.237066Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"122.49774ms","expected-duration":"100ms","prefix":"","request":"header:<ID:8128031952476417337 > lease_revoke:<id:70cc91ff2261a8bc>","response":"size:29"}
	{"level":"warn","ts":"2024-09-17T08:39:38.299554Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"120.002915ms","expected-duration":"100ms","prefix":"","request":"header:<ID:8128031952476417815 > lease_revoke:<id:70cc91ff2261ad8f>","response":"size:29"}
	{"level":"info","ts":"2024-09-17T08:39:38.299645Z","caller":"traceutil/trace.go:171","msg":"trace[918659701] linearizableReadLoop","detail":"{readStateIndex:1110; appliedIndex:1109; }","duration":"106.639758ms","start":"2024-09-17T08:39:38.192993Z","end":"2024-09-17T08:39:38.299632Z","steps":["trace[918659701] 'read index received'  (duration: 24.71µs)","trace[918659701] 'applied index is now lower than readState.Index'  (duration: 106.61398ms)"],"step_count":2}
	{"level":"warn","ts":"2024-09-17T08:39:38.299760Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"106.733359ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods\" limit:1 ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2024-09-17T08:39:38.299797Z","caller":"traceutil/trace.go:171","msg":"trace[785720166] range","detail":"{range_begin:/registry/pods; range_end:; response_count:0; response_revision:1089; }","duration":"106.799404ms","start":"2024-09-17T08:39:38.192982Z","end":"2024-09-17T08:39:38.299782Z","steps":["trace[785720166] 'agreement among raft nodes before linearized reading'  (duration: 106.712141ms)"],"step_count":1}
	{"level":"info","ts":"2024-09-17T08:48:49.916136Z","caller":"mvcc/index.go:214","msg":"compact tree index","revision":1906}
	{"level":"info","ts":"2024-09-17T08:48:49.940539Z","caller":"mvcc/kvstore_compaction.go:69","msg":"finished scheduled compaction","compact-revision":1906,"took":"23.890243ms","hash":3063729370,"current-db-size-bytes":9035776,"current-db-size":"9.0 MB","current-db-size-in-use-bytes":4993024,"current-db-size-in-use":"5.0 MB"}
	{"level":"info","ts":"2024-09-17T08:48:49.940580Z","caller":"mvcc/hash.go:137","msg":"storing new hash","hash":3063729370,"revision":1906,"compact-revision":-1}
	{"level":"info","ts":"2024-09-17T08:51:20.531262Z","caller":"traceutil/trace.go:171","msg":"trace[1068536447] transaction","detail":"{read_only:false; response_revision:3101; number_of_response:1; }","duration":"110.306434ms","start":"2024-09-17T08:51:20.420936Z","end":"2024-09-17T08:51:20.531242Z","steps":["trace[1068536447] 'process raft request'  (duration: 54.119236ms)","trace[1068536447] 'compare'  (duration: 56.106033ms)"],"step_count":2}
	
	
	==> gcp-auth [b1ba8e0c3a91] <==
	2024/09/17 08:42:24 Ready to write response ...
	2024/09/17 08:50:35 Ready to marshal response ...
	2024/09/17 08:50:35 Ready to write response ...
	2024/09/17 08:50:37 Ready to marshal response ...
	2024/09/17 08:50:37 Ready to write response ...
	2024/09/17 08:50:37 Ready to marshal response ...
	2024/09/17 08:50:37 Ready to write response ...
	2024/09/17 08:50:37 Ready to marshal response ...
	2024/09/17 08:50:37 Ready to write response ...
	2024/09/17 08:50:38 Ready to marshal response ...
	2024/09/17 08:50:38 Ready to write response ...
	2024/09/17 08:50:45 Ready to marshal response ...
	2024/09/17 08:50:45 Ready to write response ...
	2024/09/17 08:50:55 Ready to marshal response ...
	2024/09/17 08:50:55 Ready to write response ...
	2024/09/17 08:51:00 Ready to marshal response ...
	2024/09/17 08:51:00 Ready to write response ...
	2024/09/17 08:51:09 Ready to marshal response ...
	2024/09/17 08:51:09 Ready to write response ...
	2024/09/17 08:51:17 Ready to marshal response ...
	2024/09/17 08:51:17 Ready to write response ...
	2024/09/17 08:51:17 Ready to marshal response ...
	2024/09/17 08:51:17 Ready to write response ...
	2024/09/17 08:51:17 Ready to marshal response ...
	2024/09/17 08:51:17 Ready to write response ...
	
	
	==> kernel <==
	 08:51:39 up 34 min,  0 users,  load average: 0.76, 0.67, 0.68
	Linux addons-118348 5.15.0-1069-gcp #77~20.04.1-Ubuntu SMP Sun Sep 1 19:39:16 UTC 2024 x86_64 x86_64 x86_64 GNU/Linux
	PRETTY_NAME="Ubuntu 22.04.4 LTS"
	
	
	==> kube-apiserver [3b86720cba6d] <==
	W0917 08:42:16.297723       1 cacher.go:171] Terminating all watchers from cacher jobs.batch.volcano.sh
	W0917 08:42:16.398666       1 cacher.go:171] Terminating all watchers from cacher jobflows.flow.volcano.sh
	W0917 08:42:16.807001       1 cacher.go:171] Terminating all watchers from cacher jobtemplates.flow.volcano.sh
	I0917 08:50:31.855096       1 handler.go:286] Adding GroupVersion gadget.kinvolk.io v1alpha1 to ResourceManager
	W0917 08:50:32.868534       1 cacher.go:171] Terminating all watchers from cacher traces.gadget.kinvolk.io
	I0917 08:50:33.218096       1 controller.go:129] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Nothing (removed from the queue).
	I0917 08:50:43.797133       1 controller.go:615] quota admission added evaluator for: volumesnapshots.snapshot.storage.k8s.io
	I0917 08:51:00.184194       1 controller.go:615] quota admission added evaluator for: ingresses.networking.k8s.io
	I0917 08:51:00.387876       1 alloc.go:330] "allocated clusterIPs" service="default/nginx" clusterIPs={"IPv4":"10.101.47.223"}
	E0917 08:51:01.782150       1 authentication.go:73] "Unable to authenticate the request" err="[invalid bearer token, serviceaccounts \"local-path-provisioner-service-account\" not found]"
	I0917 08:51:09.866188       1 alloc.go:330] "allocated clusterIPs" service="default/hello-world-app" clusterIPs={"IPv4":"10.108.181.132"}
	I0917 08:51:10.513082       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0917 08:51:10.513130       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I0917 08:51:10.584958       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0917 08:51:10.585002       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I0917 08:51:10.588373       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0917 08:51:10.588416       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I0917 08:51:10.602760       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0917 08:51:10.602804       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I0917 08:51:10.779402       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0917 08:51:10.779456       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	W0917 08:51:11.585787       1 cacher.go:171] Terminating all watchers from cacher volumesnapshotclasses.snapshot.storage.k8s.io
	W0917 08:51:11.780434       1 cacher.go:171] Terminating all watchers from cacher volumesnapshotcontents.snapshot.storage.k8s.io
	W0917 08:51:11.790445       1 cacher.go:171] Terminating all watchers from cacher volumesnapshots.snapshot.storage.k8s.io
	I0917 08:51:17.204148       1 alloc.go:330] "allocated clusterIPs" service="headlamp/headlamp" clusterIPs={"IPv4":"10.103.251.186"}
	
	
	==> kube-controller-manager [632fef3907db] <==
	W0917 08:51:26.280028       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0917 08:51:26.280073       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W0917 08:51:28.128424       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0917 08:51:28.128460       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	I0917 08:51:28.746043       1 shared_informer.go:313] Waiting for caches to sync for resource quota
	I0917 08:51:28.746084       1 shared_informer.go:320] Caches are synced for resource quota
	W0917 08:51:28.779384       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0917 08:51:28.779428       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	I0917 08:51:28.977484       1 shared_informer.go:313] Waiting for caches to sync for garbage collector
	I0917 08:51:28.977526       1 shared_informer.go:320] Caches are synced for garbage collector
	I0917 08:51:29.312708       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="addons-118348"
	W0917 08:51:31.932154       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0917 08:51:31.932193       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W0917 08:51:33.221823       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0917 08:51:33.221861       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	I0917 08:51:33.780766       1 namespace_controller.go:187] "Namespace has been deleted" logger="namespace-controller" namespace="local-path-storage"
	W0917 08:51:34.312702       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0917 08:51:34.312740       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W0917 08:51:36.655037       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0917 08:51:36.655074       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W0917 08:51:36.921510       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0917 08:51:36.921556       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W0917 08:51:37.569250       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0917 08:51:37.569291       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	I0917 08:51:38.134158       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/registry-66c9cd494c" duration="7.075µs"
	
	
	==> kube-proxy [bbf2f7154598] <==
	I0917 08:39:02.680177       1 server_linux.go:66] "Using iptables proxy"
	I0917 08:39:03.279821       1 server.go:677] "Successfully retrieved node IP(s)" IPs=["192.168.49.2"]
	E0917 08:39:03.279899       1 server.go:234] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I0917 08:39:03.477044       1 server.go:243] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I0917 08:39:03.477105       1 server_linux.go:169] "Using iptables Proxier"
	I0917 08:39:03.482215       1 proxier.go:255] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I0917 08:39:03.482630       1 server.go:483] "Version info" version="v1.31.1"
	I0917 08:39:03.482657       1 server.go:485] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0917 08:39:03.578163       1 config.go:199] "Starting service config controller"
	I0917 08:39:03.578193       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0917 08:39:03.578217       1 config.go:105] "Starting endpoint slice config controller"
	I0917 08:39:03.578221       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0917 08:39:03.578694       1 config.go:328] "Starting node config controller"
	I0917 08:39:03.578703       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0917 08:39:03.679344       1 shared_informer.go:320] Caches are synced for node config
	I0917 08:39:03.679380       1 shared_informer.go:320] Caches are synced for service config
	I0917 08:39:03.679409       1 shared_informer.go:320] Caches are synced for endpoint slice config
	
	
	==> kube-scheduler [a87794d0bfcf] <==
	W0917 08:38:51.482118       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E0917 08:38:51.482134       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0917 08:38:51.482149       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E0917 08:38:51.482169       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0917 08:38:51.482201       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	E0917 08:38:51.482217       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0917 08:38:51.482245       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E0917 08:38:51.482269       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError"
	W0917 08:38:51.482336       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E0917 08:38:51.482360       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError"
	W0917 08:38:51.482553       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E0917 08:38:51.482579       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0917 08:38:52.293377       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	E0917 08:38:52.293413       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0917 08:38:52.377731       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E0917 08:38:52.377771       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W0917 08:38:52.489375       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E0917 08:38:52.489412       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W0917 08:38:52.527687       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E0917 08:38:52.527732       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W0917 08:38:52.551119       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E0917 08:38:52.551166       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError"
	W0917 08:38:52.564769       1 reflector.go:561] runtime/asm_amd64.s:1695: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E0917 08:38:52.564813       1 reflector.go:158] "Unhandled Error" err="runtime/asm_amd64.s:1695: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError"
	I0917 08:38:54.678654       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kubelet <==
	Sep 17 08:51:17 addons-118348 kubelet[2448]: I0917 08:51:17.806918    2448 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c49fa658-74f7-46a0-9b08-17722aeffa19" path="/var/lib/kubelet/pods/c49fa658-74f7-46a0-9b08-17722aeffa19/volumes"
	Sep 17 08:51:20 addons-118348 kubelet[2448]: E0917 08:51:20.361283    2448 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = Error response from daemon: Head \"https://gcr.io/v2/k8s-minikube/busybox/manifests/latest\": unauthorized: authentication failed" image="gcr.io/k8s-minikube/busybox:latest"
	Sep 17 08:51:20 addons-118348 kubelet[2448]: E0917 08:51:20.361468    2448 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:registry-test,Image:gcr.io/k8s-minikube/busybox,Command:[],Args:[sh -c wget --spider -S http://registry.kube-system.svc.cluster.local],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:GOOGLE_APPLICATION_CREDENTIALS,Value:/google-app-creds.json,ValueFrom:nil,},EnvVar{Name:PROJECT_ID,Value:this_is_fake,ValueFrom:nil,},EnvVar{Name:GCP_PROJECT,Value:this_is_fake,ValueFrom:nil,},EnvVar{Name:GCLOUD_PROJECT,Value:this_is_fake,ValueFrom:nil,},EnvVar{Name:GOOGLE_CLOUD_PROJECT,Value:this_is_fake,ValueFrom:nil,},EnvVar{Name:CLOUDSDK_CORE_PROJECT,Value:this_is_fake,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-mnhdj,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:gcp-creds,ReadOnly:true,MountPath:/google-app-creds.json,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:nil,Stdin:true,StdinOnce:true,TTY:true,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod registry-test_default(1fb4a1fe-7975-4387-bbb2-4911ca88db0b): ErrImagePull: Error response from daemon: Head \"https://gcr.io/v2/k8s-minikube/busybox/manifests/latest\": unauthorized: authentication failed" logger="UnhandledError"
	Sep 17 08:51:20 addons-118348 kubelet[2448]: E0917 08:51:20.362658    2448 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"registry-test\" with ErrImagePull: \"Error response from daemon: Head \\\"https://gcr.io/v2/k8s-minikube/busybox/manifests/latest\\\": unauthorized: authentication failed\"" pod="default/registry-test" podUID="1fb4a1fe-7975-4387-bbb2-4911ca88db0b"
	Sep 17 08:51:20 addons-118348 kubelet[2448]: E0917 08:51:20.799882    2448 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"busybox\" with ImagePullBackOff: \"Back-off pulling image \\\"gcr.io/k8s-minikube/busybox:1.28.4-glibc\\\"\"" pod="default/busybox" podUID="b26b0151-c2f7-46cb-a21c-97d6bc6db827"
	Sep 17 08:51:21 addons-118348 kubelet[2448]: I0917 08:51:21.215714    2448 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="headlamp/headlamp-7b5c95b59d-7tpl7" podStartSLOduration=2.023894749 podStartE2EDuration="4.215695131s" podCreationTimestamp="2024-09-17 08:51:17 +0000 UTC" firstStartedPulling="2024-09-17 08:51:18.019028834 +0000 UTC m=+744.427559713" lastFinishedPulling="2024-09-17 08:51:20.210829227 +0000 UTC m=+746.619360095" observedRunningTime="2024-09-17 08:51:21.215565487 +0000 UTC m=+747.624096393" watchObservedRunningTime="2024-09-17 08:51:21.215695131 +0000 UTC m=+747.624226013"
	Sep 17 08:51:31 addons-118348 kubelet[2448]: E0917 08:51:31.800507    2448 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"busybox\" with ImagePullBackOff: \"Back-off pulling image \\\"gcr.io/k8s-minikube/busybox:1.28.4-glibc\\\"\"" pod="default/busybox" podUID="b26b0151-c2f7-46cb-a21c-97d6bc6db827"
	Sep 17 08:51:34 addons-118348 kubelet[2448]: E0917 08:51:34.800580    2448 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"registry-test\" with ImagePullBackOff: \"Back-off pulling image \\\"gcr.io/k8s-minikube/busybox\\\"\"" pod="default/registry-test" podUID="1fb4a1fe-7975-4387-bbb2-4911ca88db0b"
	Sep 17 08:51:35 addons-118348 kubelet[2448]: I0917 08:51:35.799348    2448 kubelet_pods.go:1007] "Unable to retrieve pull secret, the image pull may not succeed." pod="kube-system/registry-proxy-z9ss9" secret="" err="secret \"gcp-auth\" not found"
	Sep 17 08:51:37 addons-118348 kubelet[2448]: I0917 08:51:37.891514    2448 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mnhdj\" (UniqueName: \"kubernetes.io/projected/1fb4a1fe-7975-4387-bbb2-4911ca88db0b-kube-api-access-mnhdj\") pod \"1fb4a1fe-7975-4387-bbb2-4911ca88db0b\" (UID: \"1fb4a1fe-7975-4387-bbb2-4911ca88db0b\") "
	Sep 17 08:51:37 addons-118348 kubelet[2448]: I0917 08:51:37.891584    2448 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"gcp-creds\" (UniqueName: \"kubernetes.io/host-path/1fb4a1fe-7975-4387-bbb2-4911ca88db0b-gcp-creds\") pod \"1fb4a1fe-7975-4387-bbb2-4911ca88db0b\" (UID: \"1fb4a1fe-7975-4387-bbb2-4911ca88db0b\") "
	Sep 17 08:51:37 addons-118348 kubelet[2448]: I0917 08:51:37.891670    2448 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/1fb4a1fe-7975-4387-bbb2-4911ca88db0b-gcp-creds" (OuterVolumeSpecName: "gcp-creds") pod "1fb4a1fe-7975-4387-bbb2-4911ca88db0b" (UID: "1fb4a1fe-7975-4387-bbb2-4911ca88db0b"). InnerVolumeSpecName "gcp-creds". PluginName "kubernetes.io/host-path", VolumeGidValue ""
	Sep 17 08:51:37 addons-118348 kubelet[2448]: I0917 08:51:37.893615    2448 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1fb4a1fe-7975-4387-bbb2-4911ca88db0b-kube-api-access-mnhdj" (OuterVolumeSpecName: "kube-api-access-mnhdj") pod "1fb4a1fe-7975-4387-bbb2-4911ca88db0b" (UID: "1fb4a1fe-7975-4387-bbb2-4911ca88db0b"). InnerVolumeSpecName "kube-api-access-mnhdj". PluginName "kubernetes.io/projected", VolumeGidValue ""
	Sep 17 08:51:37 addons-118348 kubelet[2448]: I0917 08:51:37.991819    2448 reconciler_common.go:288] "Volume detached for volume \"gcp-creds\" (UniqueName: \"kubernetes.io/host-path/1fb4a1fe-7975-4387-bbb2-4911ca88db0b-gcp-creds\") on node \"addons-118348\" DevicePath \"\""
	Sep 17 08:51:37 addons-118348 kubelet[2448]: I0917 08:51:37.991852    2448 reconciler_common.go:288] "Volume detached for volume \"kube-api-access-mnhdj\" (UniqueName: \"kubernetes.io/projected/1fb4a1fe-7975-4387-bbb2-4911ca88db0b-kube-api-access-mnhdj\") on node \"addons-118348\" DevicePath \"\""
	Sep 17 08:51:38 addons-118348 kubelet[2448]: I0917 08:51:38.397153    2448 scope.go:117] "RemoveContainer" containerID="2142006050fe3d6472faea03ddf0ca8a2c6684055d4a029a2fe32467d6e90142"
	Sep 17 08:51:38 addons-118348 kubelet[2448]: I0917 08:51:38.414580    2448 scope.go:117] "RemoveContainer" containerID="2142006050fe3d6472faea03ddf0ca8a2c6684055d4a029a2fe32467d6e90142"
	Sep 17 08:51:38 addons-118348 kubelet[2448]: E0917 08:51:38.415407    2448 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = Unknown desc = Error response from daemon: No such container: 2142006050fe3d6472faea03ddf0ca8a2c6684055d4a029a2fe32467d6e90142" containerID="2142006050fe3d6472faea03ddf0ca8a2c6684055d4a029a2fe32467d6e90142"
	Sep 17 08:51:38 addons-118348 kubelet[2448]: I0917 08:51:38.415448    2448 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"docker","ID":"2142006050fe3d6472faea03ddf0ca8a2c6684055d4a029a2fe32467d6e90142"} err="failed to get container status \"2142006050fe3d6472faea03ddf0ca8a2c6684055d4a029a2fe32467d6e90142\": rpc error: code = Unknown desc = Error response from daemon: No such container: 2142006050fe3d6472faea03ddf0ca8a2c6684055d4a029a2fe32467d6e90142"
	Sep 17 08:51:38 addons-118348 kubelet[2448]: I0917 08:51:38.495838    2448 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-gtpqj\" (UniqueName: \"kubernetes.io/projected/2f41b6f7-f293-467f-8215-b24af50ec8ba-kube-api-access-gtpqj\") pod \"2f41b6f7-f293-467f-8215-b24af50ec8ba\" (UID: \"2f41b6f7-f293-467f-8215-b24af50ec8ba\") "
	Sep 17 08:51:38 addons-118348 kubelet[2448]: I0917 08:51:38.495898    2448 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-bzmhw\" (UniqueName: \"kubernetes.io/projected/29edb9a3-341b-486a-8045-5546e8911d8c-kube-api-access-bzmhw\") pod \"29edb9a3-341b-486a-8045-5546e8911d8c\" (UID: \"29edb9a3-341b-486a-8045-5546e8911d8c\") "
	Sep 17 08:51:38 addons-118348 kubelet[2448]: I0917 08:51:38.497882    2448 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2f41b6f7-f293-467f-8215-b24af50ec8ba-kube-api-access-gtpqj" (OuterVolumeSpecName: "kube-api-access-gtpqj") pod "2f41b6f7-f293-467f-8215-b24af50ec8ba" (UID: "2f41b6f7-f293-467f-8215-b24af50ec8ba"). InnerVolumeSpecName "kube-api-access-gtpqj". PluginName "kubernetes.io/projected", VolumeGidValue ""
	Sep 17 08:51:38 addons-118348 kubelet[2448]: I0917 08:51:38.497929    2448 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/29edb9a3-341b-486a-8045-5546e8911d8c-kube-api-access-bzmhw" (OuterVolumeSpecName: "kube-api-access-bzmhw") pod "29edb9a3-341b-486a-8045-5546e8911d8c" (UID: "29edb9a3-341b-486a-8045-5546e8911d8c"). InnerVolumeSpecName "kube-api-access-bzmhw". PluginName "kubernetes.io/projected", VolumeGidValue ""
	Sep 17 08:51:38 addons-118348 kubelet[2448]: I0917 08:51:38.596273    2448 reconciler_common.go:288] "Volume detached for volume \"kube-api-access-gtpqj\" (UniqueName: \"kubernetes.io/projected/2f41b6f7-f293-467f-8215-b24af50ec8ba-kube-api-access-gtpqj\") on node \"addons-118348\" DevicePath \"\""
	Sep 17 08:51:38 addons-118348 kubelet[2448]: I0917 08:51:38.596306    2448 reconciler_common.go:288] "Volume detached for volume \"kube-api-access-bzmhw\" (UniqueName: \"kubernetes.io/projected/29edb9a3-341b-486a-8045-5546e8911d8c-kube-api-access-bzmhw\") on node \"addons-118348\" DevicePath \"\""
	
	
	==> storage-provisioner [d9dc593c31cf] <==
	I0917 08:39:06.877802       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I0917 08:39:06.894568       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I0917 08:39:06.894620       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I0917 08:39:06.986255       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I0917 08:39:06.986504       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_addons-118348_561973a3-012f-4d8f-a372-39a4ab88c714!
	I0917 08:39:06.987752       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"436ede53-6305-498f-8f3a-584f57c88cbf", APIVersion:"v1", ResourceVersion:"622", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' addons-118348_561973a3-012f-4d8f-a372-39a4ab88c714 became leader
	I0917 08:39:07.092767       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_addons-118348_561973a3-012f-4d8f-a372-39a4ab88c714!
	

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p addons-118348 -n addons-118348
helpers_test.go:261: (dbg) Run:  kubectl --context addons-118348 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:272: non-running pods: busybox
helpers_test.go:274: ======> post-mortem[TestAddons/parallel/Registry]: describe non-running pods <======
helpers_test.go:277: (dbg) Run:  kubectl --context addons-118348 describe pod busybox
helpers_test.go:282: (dbg) kubectl --context addons-118348 describe pod busybox:

                                                
                                                
-- stdout --
	Name:             busybox
	Namespace:        default
	Priority:         0
	Service Account:  default
	Node:             addons-118348/192.168.49.2
	Start Time:       Tue, 17 Sep 2024 08:42:24 +0000
	Labels:           integration-test=busybox
	Annotations:      <none>
	Status:           Pending
	IP:               10.244.0.28
	IPs:
	  IP:  10.244.0.28
	Containers:
	  busybox:
	    Container ID:  
	    Image:         gcr.io/k8s-minikube/busybox:1.28.4-glibc
	    Image ID:      
	    Port:          <none>
	    Host Port:     <none>
	    Command:
	      sleep
	      3600
	    State:          Waiting
	      Reason:       ImagePullBackOff
	    Ready:          False
	    Restart Count:  0
	    Environment:
	      GOOGLE_APPLICATION_CREDENTIALS:  /google-app-creds.json
	      PROJECT_ID:                      this_is_fake
	      GCP_PROJECT:                     this_is_fake
	      GCLOUD_PROJECT:                  this_is_fake
	      GOOGLE_CLOUD_PROJECT:            this_is_fake
	      CLOUDSDK_CORE_PROJECT:           this_is_fake
	    Mounts:
	      /google-app-creds.json from gcp-creds (ro)
	      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-j84qr (ro)
	Conditions:
	  Type                        Status
	  PodReadyToStartContainers   True 
	  Initialized                 True 
	  Ready                       False 
	  ContainersReady             False 
	  PodScheduled                True 
	Volumes:
	  kube-api-access-j84qr:
	    Type:                    Projected (a volume that contains injected data from multiple sources)
	    TokenExpirationSeconds:  3607
	    ConfigMapName:           kube-root-ca.crt
	    ConfigMapOptional:       <nil>
	    DownwardAPI:             true
	  gcp-creds:
	    Type:          HostPath (bare host directory volume)
	    Path:          /var/lib/minikube/google_application_credentials.json
	    HostPathType:  File
	QoS Class:         BestEffort
	Node-Selectors:    <none>
	Tolerations:       node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
	                   node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
	Events:
	  Type     Reason     Age                     From               Message
	  ----     ------     ----                    ----               -------
	  Normal   Scheduled  9m15s                   default-scheduler  Successfully assigned default/busybox to addons-118348
	  Normal   Pulling    7m37s (x4 over 9m14s)   kubelet            Pulling image "gcr.io/k8s-minikube/busybox:1.28.4-glibc"
	  Warning  Failed     7m37s (x4 over 9m14s)   kubelet            Failed to pull image "gcr.io/k8s-minikube/busybox:1.28.4-glibc": Error response from daemon: Head "https://gcr.io/v2/k8s-minikube/busybox/manifests/1.28.4-glibc": unauthorized: authentication failed
	  Warning  Failed     7m37s (x4 over 9m14s)   kubelet            Error: ErrImagePull
	  Warning  Failed     7m24s (x6 over 9m13s)   kubelet            Error: ImagePullBackOff
	  Normal   BackOff    4m12s (x20 over 9m13s)  kubelet            Back-off pulling image "gcr.io/k8s-minikube/busybox:1.28.4-glibc"

                                                
                                                
-- /stdout --
helpers_test.go:285: <<< TestAddons/parallel/Registry FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
--- FAIL: TestAddons/parallel/Registry (73.40s)

                                                
                                    

Test pass (322/343)

Order passed test Duration
3 TestDownloadOnly/v1.20.0/json-events 5.01
4 TestDownloadOnly/v1.20.0/preload-exists 0
8 TestDownloadOnly/v1.20.0/LogsDuration 0.06
9 TestDownloadOnly/v1.20.0/DeleteAll 0.2
10 TestDownloadOnly/v1.20.0/DeleteAlwaysSucceeds 0.12
12 TestDownloadOnly/v1.31.1/json-events 3.71
13 TestDownloadOnly/v1.31.1/preload-exists 0
17 TestDownloadOnly/v1.31.1/LogsDuration 0.06
18 TestDownloadOnly/v1.31.1/DeleteAll 0.2
19 TestDownloadOnly/v1.31.1/DeleteAlwaysSucceeds 0.12
20 TestDownloadOnlyKic 0.96
21 TestBinaryMirror 0.75
22 TestOffline 79.96
25 TestAddons/PreSetup/EnablingAddonOnNonExistingCluster 0.05
26 TestAddons/PreSetup/DisablingAddonOnNonExistingCluster 0.05
27 TestAddons/Setup 212.3
29 TestAddons/serial/Volcano 39.52
31 TestAddons/serial/GCPAuth/Namespaces 0.13
34 TestAddons/parallel/Ingress 19.69
35 TestAddons/parallel/InspektorGadget 10.59
36 TestAddons/parallel/MetricsServer 6.56
37 TestAddons/parallel/HelmTiller 8.99
39 TestAddons/parallel/CSI 44.66
40 TestAddons/parallel/Headlamp 10.9
41 TestAddons/parallel/CloudSpanner 5.41
42 TestAddons/parallel/LocalPath 51.77
43 TestAddons/parallel/NvidiaDevicePlugin 6.38
44 TestAddons/parallel/Yakd 11.53
45 TestAddons/StoppedEnableDisable 5.86
46 TestCertOptions 28.47
47 TestCertExpiration 227.34
48 TestDockerFlags 27.27
49 TestForceSystemdFlag 30.17
50 TestForceSystemdEnv 27.56
52 TestKVMDriverInstallOrUpdate 1.14
56 TestErrorSpam/setup 23.48
57 TestErrorSpam/start 0.54
58 TestErrorSpam/status 0.82
59 TestErrorSpam/pause 1.11
60 TestErrorSpam/unpause 1.27
61 TestErrorSpam/stop 10.79
64 TestFunctional/serial/CopySyncFile 0
65 TestFunctional/serial/StartWithProxy 62.93
66 TestFunctional/serial/AuditLog 0
67 TestFunctional/serial/SoftStart 32.14
68 TestFunctional/serial/KubeContext 0.04
69 TestFunctional/serial/KubectlGetPods 0.06
72 TestFunctional/serial/CacheCmd/cache/add_remote 2.14
73 TestFunctional/serial/CacheCmd/cache/add_local 0.65
74 TestFunctional/serial/CacheCmd/cache/CacheDelete 0.04
75 TestFunctional/serial/CacheCmd/cache/list 0.04
76 TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node 0.26
77 TestFunctional/serial/CacheCmd/cache/cache_reload 1.21
78 TestFunctional/serial/CacheCmd/cache/delete 0.09
79 TestFunctional/serial/MinikubeKubectlCmd 0.1
80 TestFunctional/serial/MinikubeKubectlCmdDirectly 0.09
81 TestFunctional/serial/ExtraConfig 40.53
82 TestFunctional/serial/ComponentHealth 0.06
83 TestFunctional/serial/LogsCmd 0.91
84 TestFunctional/serial/LogsFileCmd 0.92
85 TestFunctional/serial/InvalidService 4.47
87 TestFunctional/parallel/ConfigCmd 0.37
88 TestFunctional/parallel/DashboardCmd 10.62
89 TestFunctional/parallel/DryRun 0.39
90 TestFunctional/parallel/InternationalLanguage 0.18
91 TestFunctional/parallel/StatusCmd 1.09
95 TestFunctional/parallel/ServiceCmdConnect 7.77
96 TestFunctional/parallel/AddonsCmd 0.13
97 TestFunctional/parallel/PersistentVolumeClaim 31.48
99 TestFunctional/parallel/SSHCmd 0.5
100 TestFunctional/parallel/CpCmd 1.78
101 TestFunctional/parallel/MySQL 25.29
102 TestFunctional/parallel/FileSync 0.28
103 TestFunctional/parallel/CertSync 1.72
107 TestFunctional/parallel/NodeLabels 0.07
109 TestFunctional/parallel/NonActiveRuntimeDisabled 0.35
111 TestFunctional/parallel/License 0.16
112 TestFunctional/parallel/ServiceCmd/DeployApp 10.2
113 TestFunctional/parallel/ProfileCmd/profile_not_create 0.5
114 TestFunctional/parallel/ProfileCmd/profile_list 0.36
115 TestFunctional/parallel/Version/short 0.05
116 TestFunctional/parallel/Version/components 0.61
117 TestFunctional/parallel/ProfileCmd/profile_json_output 0.4
118 TestFunctional/parallel/ImageCommands/ImageListShort 0.21
119 TestFunctional/parallel/ImageCommands/ImageListTable 0.3
120 TestFunctional/parallel/ImageCommands/ImageListJson 0.29
121 TestFunctional/parallel/ImageCommands/ImageListYaml 0.22
122 TestFunctional/parallel/ImageCommands/ImageBuild 3.93
123 TestFunctional/parallel/ImageCommands/Setup 0.49
124 TestFunctional/parallel/MountCmd/any-port 11.96
125 TestFunctional/parallel/ImageCommands/ImageLoadDaemon 1.23
126 TestFunctional/parallel/ImageCommands/ImageReloadDaemon 0.8
127 TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon 1
128 TestFunctional/parallel/ImageCommands/ImageSaveToFile 0.4
129 TestFunctional/parallel/ImageCommands/ImageRemove 0.42
130 TestFunctional/parallel/ImageCommands/ImageLoadFromFile 0.78
131 TestFunctional/parallel/ImageCommands/ImageSaveDaemon 0.43
132 TestFunctional/parallel/DockerEnv/bash 0.89
133 TestFunctional/parallel/ServiceCmd/List 0.5
134 TestFunctional/parallel/UpdateContextCmd/no_changes 0.21
135 TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster 0.14
136 TestFunctional/parallel/UpdateContextCmd/no_clusters 0.15
137 TestFunctional/parallel/ServiceCmd/JSONOutput 0.48
138 TestFunctional/parallel/ServiceCmd/HTTPS 0.31
139 TestFunctional/parallel/ServiceCmd/Format 0.32
140 TestFunctional/parallel/ServiceCmd/URL 0.35
142 TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel 0.41
143 TestFunctional/parallel/TunnelCmd/serial/StartTunnel 0
145 TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup 9.2
146 TestFunctional/parallel/MountCmd/specific-port 1.89
147 TestFunctional/parallel/MountCmd/VerifyCleanup 1.86
148 TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP 0.06
149 TestFunctional/parallel/TunnelCmd/serial/AccessDirect 0
153 TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel 0.11
154 TestFunctional/delete_echo-server_images 0.03
155 TestFunctional/delete_my-image_image 0.01
156 TestFunctional/delete_minikube_cached_images 0.01
160 TestMultiControlPlane/serial/StartCluster 93.25
161 TestMultiControlPlane/serial/DeployApp 4.42
162 TestMultiControlPlane/serial/PingHostFromPods 0.99
163 TestMultiControlPlane/serial/AddWorkerNode 20.09
164 TestMultiControlPlane/serial/NodeLabels 0.06
165 TestMultiControlPlane/serial/HAppyAfterClusterStart 0.62
166 TestMultiControlPlane/serial/CopyFile 15.26
167 TestMultiControlPlane/serial/StopSecondaryNode 11.26
168 TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop 0.47
169 TestMultiControlPlane/serial/RestartSecondaryNode 22.11
170 TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart 2.88
171 TestMultiControlPlane/serial/RestartClusterKeepsNodes 225.7
172 TestMultiControlPlane/serial/DeleteSecondaryNode 9.09
173 TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete 0.44
174 TestMultiControlPlane/serial/StopCluster 32.34
175 TestMultiControlPlane/serial/RestartCluster 80.09
176 TestMultiControlPlane/serial/DegradedAfterClusterRestart 0.44
177 TestMultiControlPlane/serial/AddSecondaryNode 31.49
178 TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd 0.6
181 TestImageBuild/serial/Setup 20.48
182 TestImageBuild/serial/NormalBuild 1.11
183 TestImageBuild/serial/BuildWithBuildArg 0.71
184 TestImageBuild/serial/BuildWithDockerIgnore 0.52
185 TestImageBuild/serial/BuildWithSpecifiedDockerfile 0.55
189 TestJSONOutput/start/Command 33.18
190 TestJSONOutput/start/Audit 0
192 TestJSONOutput/start/parallel/DistinctCurrentSteps 0
193 TestJSONOutput/start/parallel/IncreasingCurrentSteps 0
195 TestJSONOutput/pause/Command 0.52
196 TestJSONOutput/pause/Audit 0
198 TestJSONOutput/pause/parallel/DistinctCurrentSteps 0
199 TestJSONOutput/pause/parallel/IncreasingCurrentSteps 0
201 TestJSONOutput/unpause/Command 0.41
202 TestJSONOutput/unpause/Audit 0
204 TestJSONOutput/unpause/parallel/DistinctCurrentSteps 0
205 TestJSONOutput/unpause/parallel/IncreasingCurrentSteps 0
207 TestJSONOutput/stop/Command 10.71
208 TestJSONOutput/stop/Audit 0
210 TestJSONOutput/stop/parallel/DistinctCurrentSteps 0
211 TestJSONOutput/stop/parallel/IncreasingCurrentSteps 0
212 TestErrorJSONOutput 0.18
214 TestKicCustomNetwork/create_custom_network 25.54
215 TestKicCustomNetwork/use_default_bridge_network 25.28
216 TestKicExistingNetwork 21.9
217 TestKicCustomSubnet 22.56
218 TestKicStaticIP 22.38
219 TestMainNoArgs 0.04
220 TestMinikubeProfile 47
223 TestMountStart/serial/StartWithMountFirst 9.21
224 TestMountStart/serial/VerifyMountFirst 0.24
225 TestMountStart/serial/StartWithMountSecond 9.17
226 TestMountStart/serial/VerifyMountSecond 0.23
227 TestMountStart/serial/DeleteFirst 1.43
228 TestMountStart/serial/VerifyMountPostDelete 0.22
229 TestMountStart/serial/Stop 1.16
230 TestMountStart/serial/RestartStopped 7.71
231 TestMountStart/serial/VerifyMountPostStop 0.23
234 TestMultiNode/serial/FreshStart2Nodes 58.29
235 TestMultiNode/serial/DeployApp2Nodes 51.74
236 TestMultiNode/serial/PingHostFrom2Pods 0.66
237 TestMultiNode/serial/AddNode 16.05
238 TestMultiNode/serial/MultiNodeLabels 0.06
239 TestMultiNode/serial/ProfileList 0.27
240 TestMultiNode/serial/CopyFile 8.55
241 TestMultiNode/serial/StopNode 2.03
242 TestMultiNode/serial/StartAfterStop 9.77
243 TestMultiNode/serial/RestartKeepsNodes 101.05
244 TestMultiNode/serial/DeleteNode 5.06
245 TestMultiNode/serial/StopMultiNode 21.29
246 TestMultiNode/serial/RestartMultiNode 50.99
247 TestMultiNode/serial/ValidateNameConflict 26.07
252 TestPreload 119.34
254 TestScheduledStopUnix 93.91
255 TestSkaffold 94.23
257 TestInsufficientStorage 9.49
258 TestRunningBinaryUpgrade 65.04
260 TestKubernetesUpgrade 344.5
261 TestMissingContainerUpgrade 136.93
262 TestStoppedBinaryUpgrade/Setup 0.44
263 TestStoppedBinaryUpgrade/Upgrade 120.68
264 TestStoppedBinaryUpgrade/MinikubeLogs 1.21
273 TestPause/serial/Start 40.36
275 TestNoKubernetes/serial/StartNoK8sWithVersion 0.07
276 TestNoKubernetes/serial/StartWithK8s 26.23
277 TestNoKubernetes/serial/StartWithStopK8s 17.24
278 TestPause/serial/SecondStartNoReconfiguration 35.63
279 TestNoKubernetes/serial/Start 8.71
280 TestNoKubernetes/serial/VerifyK8sNotRunning 0.28
281 TestNoKubernetes/serial/ProfileList 16
282 TestPause/serial/Pause 0.48
283 TestPause/serial/VerifyStatus 0.3
284 TestPause/serial/Unpause 0.44
285 TestPause/serial/PauseAgain 0.69
286 TestNoKubernetes/serial/Stop 1.26
287 TestPause/serial/DeletePaused 2.12
288 TestNoKubernetes/serial/StartNoArgs 7.67
289 TestPause/serial/VerifyDeletedResources 0.61
290 TestNoKubernetes/serial/VerifyK8sNotRunningSecond 0.29
303 TestStartStop/group/old-k8s-version/serial/FirstStart 128.86
305 TestStartStop/group/no-preload/serial/FirstStart 69.32
306 TestStartStop/group/no-preload/serial/DeployApp 10.27
308 TestStartStop/group/embed-certs/serial/FirstStart 64.32
309 TestStartStop/group/no-preload/serial/EnableAddonWhileActive 0.82
310 TestStartStop/group/no-preload/serial/Stop 10.77
311 TestStartStop/group/no-preload/serial/EnableAddonAfterStop 0.21
312 TestStartStop/group/no-preload/serial/SecondStart 262.74
313 TestStartStop/group/old-k8s-version/serial/DeployApp 9.4
314 TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive 0.74
315 TestStartStop/group/old-k8s-version/serial/Stop 10.79
316 TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop 0.16
317 TestStartStop/group/old-k8s-version/serial/SecondStart 23.3
319 TestStartStop/group/default-k8s-diff-port/serial/FirstStart 36.96
320 TestStartStop/group/embed-certs/serial/DeployApp 8.38
321 TestStartStop/group/embed-certs/serial/EnableAddonWhileActive 0.87
322 TestStartStop/group/embed-certs/serial/Stop 10.79
323 TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop 29
324 TestStartStop/group/embed-certs/serial/EnableAddonAfterStop 0.22
325 TestStartStop/group/embed-certs/serial/SecondStart 271.11
326 TestStartStop/group/default-k8s-diff-port/serial/DeployApp 9.31
327 TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive 0.91
328 TestStartStop/group/default-k8s-diff-port/serial/Stop 10.81
329 TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop 5.07
330 TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages 0.2
331 TestStartStop/group/old-k8s-version/serial/Pause 2.33
332 TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop 0.24
333 TestStartStop/group/default-k8s-diff-port/serial/SecondStart 264.78
335 TestStartStop/group/newest-cni/serial/FirstStart 31.01
336 TestStartStop/group/newest-cni/serial/DeployApp 0
337 TestStartStop/group/newest-cni/serial/EnableAddonWhileActive 0.87
338 TestStartStop/group/newest-cni/serial/Stop 10.09
339 TestStartStop/group/newest-cni/serial/EnableAddonAfterStop 0.21
340 TestStartStop/group/newest-cni/serial/SecondStart 13.99
341 TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop 0
342 TestStartStop/group/newest-cni/serial/AddonExistsAfterStop 0
343 TestStartStop/group/newest-cni/serial/VerifyKubernetesImages 0.22
344 TestStartStop/group/newest-cni/serial/Pause 2.48
345 TestNetworkPlugins/group/auto/Start 64.92
346 TestNetworkPlugins/group/auto/KubeletFlags 0.26
347 TestNetworkPlugins/group/auto/NetCatPod 10.2
348 TestNetworkPlugins/group/auto/DNS 0.13
349 TestNetworkPlugins/group/auto/Localhost 0.11
350 TestNetworkPlugins/group/auto/HairPin 0.11
351 TestNetworkPlugins/group/kindnet/Start 57.8
352 TestStartStop/group/no-preload/serial/UserAppExistsAfterStop 6.01
353 TestStartStop/group/no-preload/serial/AddonExistsAfterStop 5.08
354 TestStartStop/group/no-preload/serial/VerifyKubernetesImages 0.21
355 TestStartStop/group/no-preload/serial/Pause 2.36
356 TestNetworkPlugins/group/calico/Start 58.62
357 TestNetworkPlugins/group/kindnet/ControllerPod 6.01
358 TestNetworkPlugins/group/kindnet/KubeletFlags 0.29
359 TestNetworkPlugins/group/kindnet/NetCatPod 10.2
360 TestNetworkPlugins/group/kindnet/DNS 0.16
361 TestNetworkPlugins/group/kindnet/Localhost 0.16
362 TestNetworkPlugins/group/kindnet/HairPin 0.14
363 TestNetworkPlugins/group/calico/ControllerPod 6.01
364 TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop 6.01
365 TestNetworkPlugins/group/calico/KubeletFlags 0.26
366 TestNetworkPlugins/group/calico/NetCatPod 8.2
367 TestStartStop/group/embed-certs/serial/AddonExistsAfterStop 5.09
368 TestNetworkPlugins/group/custom-flannel/Start 51.51
369 TestNetworkPlugins/group/calico/DNS 0.14
370 TestNetworkPlugins/group/calico/Localhost 0.12
371 TestNetworkPlugins/group/calico/HairPin 0.13
372 TestStartStop/group/embed-certs/serial/VerifyKubernetesImages 0.23
373 TestStartStop/group/embed-certs/serial/Pause 3.14
374 TestNetworkPlugins/group/false/Start 63.85
375 TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop 6.01
376 TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop 5.07
377 TestNetworkPlugins/group/enable-default-cni/Start 65.39
378 TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages 0.26
379 TestStartStop/group/default-k8s-diff-port/serial/Pause 3.31
380 TestNetworkPlugins/group/flannel/Start 43.36
381 TestNetworkPlugins/group/custom-flannel/KubeletFlags 0.29
382 TestNetworkPlugins/group/custom-flannel/NetCatPod 9.21
383 TestNetworkPlugins/group/custom-flannel/DNS 0.14
384 TestNetworkPlugins/group/custom-flannel/Localhost 0.12
385 TestNetworkPlugins/group/custom-flannel/HairPin 0.12
386 TestNetworkPlugins/group/false/KubeletFlags 0.3
387 TestNetworkPlugins/group/false/NetCatPod 10.19
388 TestNetworkPlugins/group/flannel/ControllerPod 6.01
389 TestNetworkPlugins/group/bridge/Start 69.23
390 TestNetworkPlugins/group/flannel/KubeletFlags 0.3
391 TestNetworkPlugins/group/flannel/NetCatPod 10.2
392 TestNetworkPlugins/group/false/DNS 0.15
393 TestNetworkPlugins/group/false/Localhost 0.13
394 TestNetworkPlugins/group/false/HairPin 0.13
395 TestNetworkPlugins/group/enable-default-cni/KubeletFlags 0.27
396 TestNetworkPlugins/group/enable-default-cni/NetCatPod 8.21
397 TestNetworkPlugins/group/flannel/DNS 0.25
398 TestNetworkPlugins/group/flannel/Localhost 0.13
399 TestNetworkPlugins/group/flannel/HairPin 0.11
400 TestNetworkPlugins/group/enable-default-cni/DNS 0.16
401 TestNetworkPlugins/group/enable-default-cni/Localhost 0.17
402 TestNetworkPlugins/group/enable-default-cni/HairPin 0.16
403 TestNetworkPlugins/group/kubenet/Start 66.31
404 TestNetworkPlugins/group/bridge/KubeletFlags 0.25
405 TestNetworkPlugins/group/bridge/NetCatPod 9.16
406 TestNetworkPlugins/group/bridge/DNS 0.12
407 TestNetworkPlugins/group/bridge/Localhost 0.1
408 TestNetworkPlugins/group/bridge/HairPin 0.1
409 TestNetworkPlugins/group/kubenet/KubeletFlags 0.26
410 TestNetworkPlugins/group/kubenet/NetCatPod 10.18
411 TestNetworkPlugins/group/kubenet/DNS 0.12
412 TestNetworkPlugins/group/kubenet/Localhost 0.1
413 TestNetworkPlugins/group/kubenet/HairPin 0.1
TestDownloadOnly/v1.20.0/json-events (5.01s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.20.0/json-events
aaa_download_only_test.go:81: (dbg) Run:  out/minikube-linux-amd64 start -o=json --download-only -p download-only-982637 --force --alsologtostderr --kubernetes-version=v1.20.0 --container-runtime=docker --driver=docker  --container-runtime=docker
aaa_download_only_test.go:81: (dbg) Done: out/minikube-linux-amd64 start -o=json --download-only -p download-only-982637 --force --alsologtostderr --kubernetes-version=v1.20.0 --container-runtime=docker --driver=docker  --container-runtime=docker: (5.011726321s)
--- PASS: TestDownloadOnly/v1.20.0/json-events (5.01s)

                                                
                                    
TestDownloadOnly/v1.20.0/preload-exists (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.20.0/preload-exists
--- PASS: TestDownloadOnly/v1.20.0/preload-exists (0.00s)

                                                
                                    
TestDownloadOnly/v1.20.0/LogsDuration (0.06s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.20.0/LogsDuration
aaa_download_only_test.go:184: (dbg) Run:  out/minikube-linux-amd64 logs -p download-only-982637
aaa_download_only_test.go:184: (dbg) Non-zero exit: out/minikube-linux-amd64 logs -p download-only-982637: exit status 85 (56.469999ms)

                                                
                                                
-- stdout --
	
	==> Audit <==
	|---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	| Command |              Args              |       Profile        |  User   | Version |     Start Time      | End Time |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	| start   | -o=json --download-only        | download-only-982637 | jenkins | v1.34.0 | 17 Sep 24 08:38 UTC |          |
	|         | -p download-only-982637        |                      |         |         |                     |          |
	|         | --force --alsologtostderr      |                      |         |         |                     |          |
	|         | --kubernetes-version=v1.20.0   |                      |         |         |                     |          |
	|         | --container-runtime=docker     |                      |         |         |                     |          |
	|         | --driver=docker                |                      |         |         |                     |          |
	|         | --container-runtime=docker     |                      |         |         |                     |          |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	
	
	==> Last Start <==
	Log file created at: 2024/09/17 08:38:00
	Running on machine: ubuntu-20-agent-14
	Binary: Built with gc go1.23.0 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0917 08:38:00.812273   14852 out.go:345] Setting OutFile to fd 1 ...
	I0917 08:38:00.812395   14852 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0917 08:38:00.812403   14852 out.go:358] Setting ErrFile to fd 2...
	I0917 08:38:00.812408   14852 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0917 08:38:00.812612   14852 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19648-8091/.minikube/bin
	W0917 08:38:00.812778   14852 root.go:314] Error reading config file at /home/jenkins/minikube-integration/19648-8091/.minikube/config/config.json: open /home/jenkins/minikube-integration/19648-8091/.minikube/config/config.json: no such file or directory
	I0917 08:38:00.813370   14852 out.go:352] Setting JSON to true
	I0917 08:38:00.814325   14852 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-14","uptime":1232,"bootTime":1726561049,"procs":175,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1069-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0917 08:38:00.814429   14852 start.go:139] virtualization: kvm guest
	I0917 08:38:00.816877   14852 out.go:97] [download-only-982637] minikube v1.34.0 on Ubuntu 20.04 (kvm/amd64)
	W0917 08:38:00.817010   14852 preload.go:293] Failed to list preload files: open /home/jenkins/minikube-integration/19648-8091/.minikube/cache/preloaded-tarball: no such file or directory
	I0917 08:38:00.817101   14852 notify.go:220] Checking for updates...
	I0917 08:38:00.818549   14852 out.go:169] MINIKUBE_LOCATION=19648
	I0917 08:38:00.819892   14852 out.go:169] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0917 08:38:00.821137   14852 out.go:169] KUBECONFIG=/home/jenkins/minikube-integration/19648-8091/kubeconfig
	I0917 08:38:00.822404   14852 out.go:169] MINIKUBE_HOME=/home/jenkins/minikube-integration/19648-8091/.minikube
	I0917 08:38:00.823738   14852 out.go:169] MINIKUBE_BIN=out/minikube-linux-amd64
	W0917 08:38:00.825943   14852 out.go:321] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I0917 08:38:00.826192   14852 driver.go:394] Setting default libvirt URI to qemu:///system
	I0917 08:38:00.847975   14852 docker.go:123] docker version: linux-27.2.1:Docker Engine - Community
	I0917 08:38:00.848049   14852 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0917 08:38:01.202956   14852 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:30 OomKillDisable:true NGoroutines:53 SystemTime:2024-09-17 08:38:01.193703985 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1069-gcp OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33647939584 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-14 Labels:[] ExperimentalBuild:false ServerVersion:27.2.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:7f7fdf5fed64eb6a7caf99b3e12efcf9d60e311c Expected:7f7fdf5fed64eb6a7caf99b3e12efcf9d60e311c} RuncCommit:{ID:v1.1.14-0-g2c9f560 Expected:v1.1.14-0-g2c9f560} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.16.2] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.29.2] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I0917 08:38:01.203060   14852 docker.go:318] overlay module found
	I0917 08:38:01.204884   14852 out.go:97] Using the docker driver based on user configuration
	I0917 08:38:01.204910   14852 start.go:297] selected driver: docker
	I0917 08:38:01.204916   14852 start.go:901] validating driver "docker" against <nil>
	I0917 08:38:01.204998   14852 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0917 08:38:01.254767   14852 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:30 OomKillDisable:true NGoroutines:53 SystemTime:2024-09-17 08:38:01.245574639 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1069-gcp OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33647939584 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-14 Labels:[] ExperimentalBuild:false ServerVersion:27.2.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:7f7fdf5fed64eb6a7caf99b3e12efcf9d60e311c Expected:7f7fdf5fed64eb6a7caf99b3e12efcf9d60e311c} RuncCommit:{ID:v1.1.14-0-g2c9f560 Expected:v1.1.14-0-g2c9f560} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.16.2] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.29.2] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I0917 08:38:01.254910   14852 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0917 08:38:01.255397   14852 start_flags.go:393] Using suggested 8000MB memory alloc based on sys=32089MB, container=32089MB
	I0917 08:38:01.255543   14852 start_flags.go:929] Wait components to verify : map[apiserver:true system_pods:true]
	I0917 08:38:01.257433   14852 out.go:169] Using Docker driver with root privileges
	
	
	* The control-plane node download-only-982637 host does not exist
	  To start a cluster, run: "minikube start -p download-only-982637"

                                                
                                                
-- /stdout --
aaa_download_only_test.go:185: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.20.0/LogsDuration (0.06s)
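Note: exit status 85 is the expected outcome here, not a defect. A --download-only start caches binaries and images but never creates a host, so "minikube logs" has nothing to read and the test asserts the failure. A minimal shell sketch of the sequence, condensed from the audit log below (the duplicate --container-runtime flag in the recorded invocation is dropped; a local build at out/minikube-linux-amd64 is assumed):

  # Download-only start: caches the v1.20.0 artifacts but never boots a node.
  out/minikube-linux-amd64 start -o=json --download-only -p download-only-982637 \
    --force --alsologtostderr --kubernetes-version=v1.20.0 \
    --container-runtime=docker --driver=docker

  # No host exists, so this fails; the test treats exit status 85 as a pass.
  out/minikube-linux-amd64 logs -p download-only-982637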

                                                
                                    
TestDownloadOnly/v1.20.0/DeleteAll (0.2s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.20.0/DeleteAll
aaa_download_only_test.go:197: (dbg) Run:  out/minikube-linux-amd64 delete --all
--- PASS: TestDownloadOnly/v1.20.0/DeleteAll (0.20s)

                                                
                                    
TestDownloadOnly/v1.20.0/DeleteAlwaysSucceeds (0.12s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.20.0/DeleteAlwaysSucceeds
aaa_download_only_test.go:208: (dbg) Run:  out/minikube-linux-amd64 delete -p download-only-982637
--- PASS: TestDownloadOnly/v1.20.0/DeleteAlwaysSucceeds (0.12s)

                                                
                                    
TestDownloadOnly/v1.31.1/json-events (3.71s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.31.1/json-events
aaa_download_only_test.go:81: (dbg) Run:  out/minikube-linux-amd64 start -o=json --download-only -p download-only-601049 --force --alsologtostderr --kubernetes-version=v1.31.1 --container-runtime=docker --driver=docker  --container-runtime=docker
aaa_download_only_test.go:81: (dbg) Done: out/minikube-linux-amd64 start -o=json --download-only -p download-only-601049 --force --alsologtostderr --kubernetes-version=v1.31.1 --container-runtime=docker --driver=docker  --container-runtime=docker: (3.706064256s)
--- PASS: TestDownloadOnly/v1.31.1/json-events (3.71s)

                                                
                                    
TestDownloadOnly/v1.31.1/preload-exists (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.31.1/preload-exists
--- PASS: TestDownloadOnly/v1.31.1/preload-exists (0.00s)

                                                
                                    
TestDownloadOnly/v1.31.1/LogsDuration (0.06s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.31.1/LogsDuration
aaa_download_only_test.go:184: (dbg) Run:  out/minikube-linux-amd64 logs -p download-only-601049
aaa_download_only_test.go:184: (dbg) Non-zero exit: out/minikube-linux-amd64 logs -p download-only-601049: exit status 85 (57.189581ms)

                                                
                                                
-- stdout --
	
	==> Audit <==
	|---------|--------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| Command |              Args              |       Profile        |  User   | Version |     Start Time      |      End Time       |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| start   | -o=json --download-only        | download-only-982637 | jenkins | v1.34.0 | 17 Sep 24 08:38 UTC |                     |
	|         | -p download-only-982637        |                      |         |         |                     |                     |
	|         | --force --alsologtostderr      |                      |         |         |                     |                     |
	|         | --kubernetes-version=v1.20.0   |                      |         |         |                     |                     |
	|         | --container-runtime=docker     |                      |         |         |                     |                     |
	|         | --driver=docker                |                      |         |         |                     |                     |
	|         | --container-runtime=docker     |                      |         |         |                     |                     |
	| delete  | --all                          | minikube             | jenkins | v1.34.0 | 17 Sep 24 08:38 UTC | 17 Sep 24 08:38 UTC |
	| delete  | -p download-only-982637        | download-only-982637 | jenkins | v1.34.0 | 17 Sep 24 08:38 UTC | 17 Sep 24 08:38 UTC |
	| start   | -o=json --download-only        | download-only-601049 | jenkins | v1.34.0 | 17 Sep 24 08:38 UTC |                     |
	|         | -p download-only-601049        |                      |         |         |                     |                     |
	|         | --force --alsologtostderr      |                      |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.1   |                      |         |         |                     |                     |
	|         | --container-runtime=docker     |                      |         |         |                     |                     |
	|         | --driver=docker                |                      |         |         |                     |                     |
	|         | --container-runtime=docker     |                      |         |         |                     |                     |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/09/17 08:38:06
	Running on machine: ubuntu-20-agent-14
	Binary: Built with gc go1.23.0 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0917 08:38:06.209976   15198 out.go:345] Setting OutFile to fd 1 ...
	I0917 08:38:06.210230   15198 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0917 08:38:06.210239   15198 out.go:358] Setting ErrFile to fd 2...
	I0917 08:38:06.210243   15198 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0917 08:38:06.210456   15198 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19648-8091/.minikube/bin
	I0917 08:38:06.211055   15198 out.go:352] Setting JSON to true
	I0917 08:38:06.211901   15198 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-14","uptime":1237,"bootTime":1726561049,"procs":173,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1069-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0917 08:38:06.212000   15198 start.go:139] virtualization: kvm guest
	I0917 08:38:06.214089   15198 out.go:97] [download-only-601049] minikube v1.34.0 on Ubuntu 20.04 (kvm/amd64)
	I0917 08:38:06.214216   15198 notify.go:220] Checking for updates...
	I0917 08:38:06.215454   15198 out.go:169] MINIKUBE_LOCATION=19648
	I0917 08:38:06.216615   15198 out.go:169] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0917 08:38:06.218014   15198 out.go:169] KUBECONFIG=/home/jenkins/minikube-integration/19648-8091/kubeconfig
	I0917 08:38:06.219312   15198 out.go:169] MINIKUBE_HOME=/home/jenkins/minikube-integration/19648-8091/.minikube
	I0917 08:38:06.220418   15198 out.go:169] MINIKUBE_BIN=out/minikube-linux-amd64
	W0917 08:38:06.222491   15198 out.go:321] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I0917 08:38:06.222749   15198 driver.go:394] Setting default libvirt URI to qemu:///system
	I0917 08:38:06.245548   15198 docker.go:123] docker version: linux-27.2.1:Docker Engine - Community
	I0917 08:38:06.245653   15198 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0917 08:38:06.289842   15198 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:30 OomKillDisable:true NGoroutines:46 SystemTime:2024-09-17 08:38:06.281248091 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1069-gcp OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33647939584 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-14 Labels:[] ExperimentalBuild:false ServerVersion:27.2.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:7f7fdf5fed64eb6a7caf99b3e12efcf9d60e311c Expected:7f7fdf5fed64eb6a7caf99b3e12efcf9d60e311c} RuncCommit:{ID:v1.1.14-0-g2c9f560 Expected:v1.1.14-0-g2c9f560} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.16.2] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.29.2] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I0917 08:38:06.289938   15198 docker.go:318] overlay module found
	I0917 08:38:06.291671   15198 out.go:97] Using the docker driver based on user configuration
	I0917 08:38:06.291701   15198 start.go:297] selected driver: docker
	I0917 08:38:06.291706   15198 start.go:901] validating driver "docker" against <nil>
	I0917 08:38:06.291782   15198 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0917 08:38:06.335461   15198 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:30 OomKillDisable:true NGoroutines:46 SystemTime:2024-09-17 08:38:06.327063048 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1069-gcp OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33647939584 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-14 Labels:[] ExperimentalBuild:false ServerVersion:27.2.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:7f7fdf5fed64eb6a7caf99b3e12efcf9d60e311c Expected:7f7fdf5fed64eb6a7caf99b3e12efcf9d60e311c} RuncCommit:{ID:v1.1.14-0-g2c9f560 Expected:v1.1.14-0-g2c9f560} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.16.2] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.29.2] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I0917 08:38:06.335622   15198 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0917 08:38:06.336083   15198 start_flags.go:393] Using suggested 8000MB memory alloc based on sys=32089MB, container=32089MB
	I0917 08:38:06.336207   15198 start_flags.go:929] Wait components to verify : map[apiserver:true system_pods:true]
	I0917 08:38:06.338040   15198 out.go:169] Using Docker driver with root privileges
	
	
	* The control-plane node download-only-601049 host does not exist
	  To start a cluster, run: "minikube start -p download-only-601049"

                                                
                                                
-- /stdout --
aaa_download_only_test.go:185: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.31.1/LogsDuration (0.06s)

                                                
                                    
TestDownloadOnly/v1.31.1/DeleteAll (0.2s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.31.1/DeleteAll
aaa_download_only_test.go:197: (dbg) Run:  out/minikube-linux-amd64 delete --all
--- PASS: TestDownloadOnly/v1.31.1/DeleteAll (0.20s)

                                                
                                    
TestDownloadOnly/v1.31.1/DeleteAlwaysSucceeds (0.12s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.31.1/DeleteAlwaysSucceeds
aaa_download_only_test.go:208: (dbg) Run:  out/minikube-linux-amd64 delete -p download-only-601049
--- PASS: TestDownloadOnly/v1.31.1/DeleteAlwaysSucceeds (0.12s)

                                                
                                    
TestDownloadOnlyKic (0.96s)

                                                
                                                
=== RUN   TestDownloadOnlyKic
aaa_download_only_test.go:232: (dbg) Run:  out/minikube-linux-amd64 start --download-only -p download-docker-450334 --alsologtostderr --driver=docker  --container-runtime=docker
helpers_test.go:175: Cleaning up "download-docker-450334" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p download-docker-450334
--- PASS: TestDownloadOnlyKic (0.96s)

                                                
                                    
TestBinaryMirror (0.75s)

                                                
                                                
=== RUN   TestBinaryMirror
aaa_download_only_test.go:314: (dbg) Run:  out/minikube-linux-amd64 start --download-only -p binary-mirror-437165 --alsologtostderr --binary-mirror http://127.0.0.1:39083 --driver=docker  --container-runtime=docker
helpers_test.go:175: Cleaning up "binary-mirror-437165" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p binary-mirror-437165
--- PASS: TestBinaryMirror (0.75s)
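Note: --binary-mirror redirects minikube's kubectl/kubelet/kubeadm downloads to an alternate host; the 127.0.0.1:39083 endpoint is presumably a throwaway server started by the test itself. A sketch of the exercised flow (substitute a real mirror for the test's local one):

  out/minikube-linux-amd64 start --download-only -p binary-mirror-437165 \
    --alsologtostderr --binary-mirror http://127.0.0.1:39083 \
    --driver=docker --container-runtime=docker
  out/minikube-linux-amd64 delete -p binary-mirror-437165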

                                                
                                    
TestOffline (79.96s)

                                                
                                                
=== RUN   TestOffline
=== PAUSE TestOffline

                                                
                                                

                                                
                                                
=== CONT  TestOffline
aab_offline_test.go:55: (dbg) Run:  out/minikube-linux-amd64 start -p offline-docker-036869 --alsologtostderr -v=1 --memory=2048 --wait=true --driver=docker  --container-runtime=docker
aab_offline_test.go:55: (dbg) Done: out/minikube-linux-amd64 start -p offline-docker-036869 --alsologtostderr -v=1 --memory=2048 --wait=true --driver=docker  --container-runtime=docker: (1m17.883665348s)
helpers_test.go:175: Cleaning up "offline-docker-036869" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p offline-docker-036869
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p offline-docker-036869: (2.076180104s)
--- PASS: TestOffline (79.96s)
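Note: as the name suggests, this start is expected to succeed from local caches without fetching fresh artifacts (the offline simulation mechanics live in aab_offline_test.go). The logged invocation, reflowed:

  out/minikube-linux-amd64 start -p offline-docker-036869 --alsologtostderr -v=1 \
    --memory=2048 --wait=true --driver=docker --container-runtime=docker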

                                                
                                    
TestAddons/PreSetup/EnablingAddonOnNonExistingCluster (0.05s)

                                                
                                                
=== RUN   TestAddons/PreSetup/EnablingAddonOnNonExistingCluster
=== PAUSE TestAddons/PreSetup/EnablingAddonOnNonExistingCluster

                                                
                                                

                                                
                                                
=== CONT  TestAddons/PreSetup/EnablingAddonOnNonExistingCluster
addons_test.go:1037: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p addons-118348
addons_test.go:1037: (dbg) Non-zero exit: out/minikube-linux-amd64 addons enable dashboard -p addons-118348: exit status 85 (48.094116ms)

                                                
                                                
-- stdout --
	* Profile "addons-118348" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p addons-118348"

                                                
                                                
-- /stdout --
--- PASS: TestAddons/PreSetup/EnablingAddonOnNonExistingCluster (0.05s)

                                                
                                    
TestAddons/PreSetup/DisablingAddonOnNonExistingCluster (0.05s)

                                                
                                                
=== RUN   TestAddons/PreSetup/DisablingAddonOnNonExistingCluster
=== PAUSE TestAddons/PreSetup/DisablingAddonOnNonExistingCluster

                                                
                                                

                                                
                                                
=== CONT  TestAddons/PreSetup/DisablingAddonOnNonExistingCluster
addons_test.go:1048: (dbg) Run:  out/minikube-linux-amd64 addons disable dashboard -p addons-118348
addons_test.go:1048: (dbg) Non-zero exit: out/minikube-linux-amd64 addons disable dashboard -p addons-118348: exit status 85 (48.586551ms)

                                                
                                                
-- stdout --
	* Profile "addons-118348" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p addons-118348"

                                                
                                                
-- /stdout --
--- PASS: TestAddons/PreSetup/DisablingAddonOnNonExistingCluster (0.05s)

                                                
                                    
TestAddons/Setup (212.3s)

                                                
                                                
=== RUN   TestAddons/Setup
addons_test.go:110: (dbg) Run:  out/minikube-linux-amd64 start -p addons-118348 --wait=true --memory=4000 --alsologtostderr --addons=registry --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=storage-provisioner-rancher --addons=nvidia-device-plugin --addons=yakd --addons=volcano --driver=docker  --container-runtime=docker --addons=ingress --addons=ingress-dns --addons=helm-tiller
addons_test.go:110: (dbg) Done: out/minikube-linux-amd64 start -p addons-118348 --wait=true --memory=4000 --alsologtostderr --addons=registry --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=storage-provisioner-rancher --addons=nvidia-device-plugin --addons=yakd --addons=volcano --driver=docker  --container-runtime=docker --addons=ingress --addons=ingress-dns --addons=helm-tiller: (3m32.298098989s)
--- PASS: TestAddons/Setup (212.30s)
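Note: every parallel subtest below runs against this one addons-118348 profile, so the start line above can be replayed verbatim to rebuild the environment. A quick way to confirm which addons came up (standard minikube subcommand, not part of the logged run):

  out/minikube-linux-amd64 -p addons-118348 addons list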

                                                
                                    
TestAddons/serial/Volcano (39.52s)

                                                
                                                
=== RUN   TestAddons/serial/Volcano
addons_test.go:913: volcano-controller stabilized in 11.477523ms
addons_test.go:905: volcano-admission stabilized in 11.51402ms
addons_test.go:897: volcano-scheduler stabilized in 11.538626ms
addons_test.go:919: (dbg) TestAddons/serial/Volcano: waiting 6m0s for pods matching "app=volcano-scheduler" in namespace "volcano-system" ...
helpers_test.go:344: "volcano-scheduler-576bc46687-wxsvv" [8e12b9bf-1572-46fa-9949-d6c4f2e087d3] Running
addons_test.go:919: (dbg) TestAddons/serial/Volcano: app=volcano-scheduler healthy within 6.003569523s
addons_test.go:923: (dbg) TestAddons/serial/Volcano: waiting 6m0s for pods matching "app=volcano-admission" in namespace "volcano-system" ...
helpers_test.go:344: "volcano-admission-77d7d48b68-lkphk" [9f4d9bc1-06dc-4316-b5f9-225aafd501e9] Running
addons_test.go:923: (dbg) TestAddons/serial/Volcano: app=volcano-admission healthy within 5.003411257s
addons_test.go:927: (dbg) TestAddons/serial/Volcano: waiting 6m0s for pods matching "app=volcano-controller" in namespace "volcano-system" ...
helpers_test.go:344: "volcano-controllers-56675bb4d5-5mswf" [489728d7-6b1a-4b0d-8032-05f09ed88362] Running
addons_test.go:927: (dbg) TestAddons/serial/Volcano: app=volcano-controller healthy within 5.003389683s
addons_test.go:932: (dbg) Run:  kubectl --context addons-118348 delete -n volcano-system job volcano-admission-init
addons_test.go:938: (dbg) Run:  kubectl --context addons-118348 create -f testdata/vcjob.yaml
addons_test.go:946: (dbg) Run:  kubectl --context addons-118348 get vcjob -n my-volcano
addons_test.go:964: (dbg) TestAddons/serial/Volcano: waiting 3m0s for pods matching "volcano.sh/job-name=test-job" in namespace "my-volcano" ...
helpers_test.go:344: "test-job-nginx-0" [242028c1-b455-4c80-89f7-89520a3d2b16] Pending
helpers_test.go:344: "test-job-nginx-0" [242028c1-b455-4c80-89f7-89520a3d2b16] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
helpers_test.go:344: "test-job-nginx-0" [242028c1-b455-4c80-89f7-89520a3d2b16] Running
addons_test.go:964: (dbg) TestAddons/serial/Volcano: volcano.sh/job-name=test-job healthy within 13.003824421s
addons_test.go:968: (dbg) Run:  out/minikube-linux-amd64 -p addons-118348 addons disable volcano --alsologtostderr -v=1
addons_test.go:968: (dbg) Done: out/minikube-linux-amd64 -p addons-118348 addons disable volcano --alsologtostderr -v=1: (10.18461304s)
--- PASS: TestAddons/serial/Volcano (39.52s)
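The submission path the test walks once the three volcano components are healthy, condensed from the log: clear the admission-init job, create the sample VolcanoJob from testdata, and confirm it registered before waiting on its pod.

  kubectl --context addons-118348 delete -n volcano-system job volcano-admission-init
  kubectl --context addons-118348 create -f testdata/vcjob.yaml
  kubectl --context addons-118348 get vcjob -n my-volcano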

                                                
                                    
TestAddons/serial/GCPAuth/Namespaces (0.13s)

                                                
                                                
=== RUN   TestAddons/serial/GCPAuth/Namespaces
addons_test.go:656: (dbg) Run:  kubectl --context addons-118348 create ns new-namespace
addons_test.go:670: (dbg) Run:  kubectl --context addons-118348 get secret gcp-auth -n new-namespace
--- PASS: TestAddons/serial/GCPAuth/Namespaces (0.13s)

                                                
                                    
TestAddons/parallel/Ingress (19.69s)

                                                
                                                
=== RUN   TestAddons/parallel/Ingress
=== PAUSE TestAddons/parallel/Ingress

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/Ingress
addons_test.go:209: (dbg) Run:  kubectl --context addons-118348 wait --for=condition=ready --namespace=ingress-nginx pod --selector=app.kubernetes.io/component=controller --timeout=90s
addons_test.go:234: (dbg) Run:  kubectl --context addons-118348 replace --force -f testdata/nginx-ingress-v1.yaml
addons_test.go:247: (dbg) Run:  kubectl --context addons-118348 replace --force -f testdata/nginx-pod-svc.yaml
addons_test.go:252: (dbg) TestAddons/parallel/Ingress: waiting 8m0s for pods matching "run=nginx" in namespace "default" ...
helpers_test.go:344: "nginx" [1d7142f6-0144-45e6-a516-f036be4b72e6] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
helpers_test.go:344: "nginx" [1d7142f6-0144-45e6-a516-f036be4b72e6] Running
addons_test.go:252: (dbg) TestAddons/parallel/Ingress: run=nginx healthy within 9.002638109s
addons_test.go:264: (dbg) Run:  out/minikube-linux-amd64 -p addons-118348 ssh "curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'"
addons_test.go:288: (dbg) Run:  kubectl --context addons-118348 replace --force -f testdata/ingress-dns-example-v1.yaml
addons_test.go:293: (dbg) Run:  out/minikube-linux-amd64 -p addons-118348 ip
addons_test.go:299: (dbg) Run:  nslookup hello-john.test 192.168.49.2
addons_test.go:308: (dbg) Run:  out/minikube-linux-amd64 -p addons-118348 addons disable ingress-dns --alsologtostderr -v=1
addons_test.go:308: (dbg) Done: out/minikube-linux-amd64 -p addons-118348 addons disable ingress-dns --alsologtostderr -v=1: (1.98491867s)
addons_test.go:313: (dbg) Run:  out/minikube-linux-amd64 -p addons-118348 addons disable ingress --alsologtostderr -v=1
addons_test.go:313: (dbg) Done: out/minikube-linux-amd64 -p addons-118348 addons disable ingress --alsologtostderr -v=1: (7.552253067s)
--- PASS: TestAddons/parallel/Ingress (19.69s)
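The two assertions behind this pass, condensed from the log: HTTP routing by Host header through the nginx ingress, and name resolution through the ingress-dns addon against the node IP (192.168.49.2 in this run).

  # Route by Host header through the ingress controller inside the node.
  out/minikube-linux-amd64 -p addons-118348 ssh \
    "curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'"

  # Resolve a test record via ingress-dns on the node IP.
  nslookup hello-john.test "$(out/minikube-linux-amd64 -p addons-118348 ip)"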

                                                
                                    
TestAddons/parallel/InspektorGadget (10.59s)

                                                
                                                
=== RUN   TestAddons/parallel/InspektorGadget
=== PAUSE TestAddons/parallel/InspektorGadget

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/InspektorGadget
addons_test.go:848: (dbg) TestAddons/parallel/InspektorGadget: waiting 8m0s for pods matching "k8s-app=gadget" in namespace "gadget" ...
helpers_test.go:344: "gadget-nzs5l" [170c0116-9e01-46c6-a67b-4a304010a3f7] Running / Ready:ContainersNotReady (containers with unready status: [gadget]) / ContainersReady:ContainersNotReady (containers with unready status: [gadget])
addons_test.go:848: (dbg) TestAddons/parallel/InspektorGadget: k8s-app=gadget healthy within 5.004627583s
addons_test.go:851: (dbg) Run:  out/minikube-linux-amd64 addons disable inspektor-gadget -p addons-118348
addons_test.go:851: (dbg) Done: out/minikube-linux-amd64 addons disable inspektor-gadget -p addons-118348: (5.583067025s)
--- PASS: TestAddons/parallel/InspektorGadget (10.59s)

                                                
                                    
TestAddons/parallel/MetricsServer (6.56s)

                                                
                                                
=== RUN   TestAddons/parallel/MetricsServer
=== PAUSE TestAddons/parallel/MetricsServer

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/MetricsServer
addons_test.go:409: metrics-server stabilized in 2.37183ms
addons_test.go:411: (dbg) TestAddons/parallel/MetricsServer: waiting 6m0s for pods matching "k8s-app=metrics-server" in namespace "kube-system" ...
helpers_test.go:344: "metrics-server-84c5f94fbc-9dxps" [3646ec2c-2273-4bf7-af3e-a3dfe0d91552] Running
addons_test.go:411: (dbg) TestAddons/parallel/MetricsServer: k8s-app=metrics-server healthy within 6.00293057s
addons_test.go:417: (dbg) Run:  kubectl --context addons-118348 top pods -n kube-system
addons_test.go:434: (dbg) Run:  out/minikube-linux-amd64 -p addons-118348 addons disable metrics-server --alsologtostderr -v=1
--- PASS: TestAddons/parallel/MetricsServer (6.56s)
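The functional check here is simply that the metrics pipeline answers a top query once the deployment reports healthy:

  kubectl --context addons-118348 top pods -n kube-system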

                                                
                                    
TestAddons/parallel/HelmTiller (8.99s)

                                                
                                                
=== RUN   TestAddons/parallel/HelmTiller
=== PAUSE TestAddons/parallel/HelmTiller

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/HelmTiller
addons_test.go:458: tiller-deploy stabilized in 1.996374ms
addons_test.go:460: (dbg) TestAddons/parallel/HelmTiller: waiting 6m0s for pods matching "app=helm" in namespace "kube-system" ...
helpers_test.go:344: "tiller-deploy-b48cc5f79-ng9ss" [cad55614-1ecf-4037-9bff-258c6c00984a] Running
addons_test.go:460: (dbg) TestAddons/parallel/HelmTiller: app=helm healthy within 5.004141731s
addons_test.go:475: (dbg) Run:  kubectl --context addons-118348 run --rm helm-test --restart=Never --image=docker.io/alpine/helm:2.16.3 -it --namespace=kube-system -- version
addons_test.go:475: (dbg) Done: kubectl --context addons-118348 run --rm helm-test --restart=Never --image=docker.io/alpine/helm:2.16.3 -it --namespace=kube-system -- version: (3.553757823s)
addons_test.go:492: (dbg) Run:  out/minikube-linux-amd64 -p addons-118348 addons disable helm-tiller --alsologtostderr -v=1
--- PASS: TestAddons/parallel/HelmTiller (8.99s)
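The tiller check is a one-shot helm 2.x client pod run against the in-cluster tiller; success means the client reached tiller-deploy and got a version answer back.

  kubectl --context addons-118348 run --rm helm-test --restart=Never \
    --image=docker.io/alpine/helm:2.16.3 -it --namespace=kube-system -- version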

                                                
                                    
TestAddons/parallel/CSI (44.66s)

                                                
                                                
=== RUN   TestAddons/parallel/CSI
=== PAUSE TestAddons/parallel/CSI

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/CSI
addons_test.go:567: csi-hostpath-driver pods stabilized in 4.479178ms
addons_test.go:570: (dbg) Run:  kubectl --context addons-118348 create -f testdata/csi-hostpath-driver/pvc.yaml
addons_test.go:575: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pvc "hpvc" in namespace "default" ...
helpers_test.go:394: (dbg) Run:  kubectl --context addons-118348 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-118348 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-118348 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-118348 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-118348 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-118348 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-118348 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-118348 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-118348 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-118348 get pvc hpvc -o jsonpath={.status.phase} -n default
addons_test.go:580: (dbg) Run:  kubectl --context addons-118348 create -f testdata/csi-hostpath-driver/pv-pod.yaml
addons_test.go:585: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pods matching "app=task-pv-pod" in namespace "default" ...
helpers_test.go:344: "task-pv-pod" [390b4b5f-05f2-49cd-ae88-b9fc85ffcc6e] Pending
helpers_test.go:344: "task-pv-pod" [390b4b5f-05f2-49cd-ae88-b9fc85ffcc6e] Pending / Ready:ContainersNotReady (containers with unready status: [task-pv-container]) / ContainersReady:ContainersNotReady (containers with unready status: [task-pv-container])
helpers_test.go:344: "task-pv-pod" [390b4b5f-05f2-49cd-ae88-b9fc85ffcc6e] Running
addons_test.go:585: (dbg) TestAddons/parallel/CSI: app=task-pv-pod healthy within 8.003950867s
addons_test.go:590: (dbg) Run:  kubectl --context addons-118348 create -f testdata/csi-hostpath-driver/snapshot.yaml
addons_test.go:595: (dbg) TestAddons/parallel/CSI: waiting 6m0s for volume snapshot "new-snapshot-demo" in namespace "default" ...
helpers_test.go:419: (dbg) Run:  kubectl --context addons-118348 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
helpers_test.go:419: (dbg) Run:  kubectl --context addons-118348 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
addons_test.go:600: (dbg) Run:  kubectl --context addons-118348 delete pod task-pv-pod
addons_test.go:606: (dbg) Run:  kubectl --context addons-118348 delete pvc hpvc
addons_test.go:612: (dbg) Run:  kubectl --context addons-118348 create -f testdata/csi-hostpath-driver/pvc-restore.yaml
addons_test.go:617: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pvc "hpvc-restore" in namespace "default" ...
helpers_test.go:394: (dbg) Run:  kubectl --context addons-118348 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-118348 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-118348 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-118348 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-118348 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-118348 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-118348 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-118348 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-118348 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-118348 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-118348 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
addons_test.go:622: (dbg) Run:  kubectl --context addons-118348 create -f testdata/csi-hostpath-driver/pv-pod-restore.yaml
addons_test.go:627: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pods matching "app=task-pv-pod-restore" in namespace "default" ...
helpers_test.go:344: "task-pv-pod-restore" [a6188133-b74b-4784-a2d6-d00bc83e7bf9] Pending
helpers_test.go:344: "task-pv-pod-restore" [a6188133-b74b-4784-a2d6-d00bc83e7bf9] Running
addons_test.go:627: (dbg) TestAddons/parallel/CSI: app=task-pv-pod-restore healthy within 6.003659615s
addons_test.go:632: (dbg) Run:  kubectl --context addons-118348 delete pod task-pv-pod-restore
addons_test.go:636: (dbg) Run:  kubectl --context addons-118348 delete pvc hpvc-restore
addons_test.go:640: (dbg) Run:  kubectl --context addons-118348 delete volumesnapshot new-snapshot-demo
addons_test.go:644: (dbg) Run:  out/minikube-linux-amd64 -p addons-118348 addons disable csi-hostpath-driver --alsologtostderr -v=1
addons_test.go:644: (dbg) Done: out/minikube-linux-amd64 -p addons-118348 addons disable csi-hostpath-driver --alsologtostderr -v=1: (6.728821376s)
addons_test.go:648: (dbg) Run:  out/minikube-linux-amd64 -p addons-118348 addons disable volumesnapshots --alsologtostderr -v=1
addons_test.go:648: (dbg) Done: out/minikube-linux-amd64 -p addons-118348 addons disable volumesnapshots --alsologtostderr -v=1: (1.224123435s)
--- PASS: TestAddons/parallel/CSI (44.66s)
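Condensed from the log, the provision/snapshot/restore round trip this test drives through the csi-hostpath driver (manifest paths are repo-relative testdata):

  kubectl --context addons-118348 create -f testdata/csi-hostpath-driver/pvc.yaml
  kubectl --context addons-118348 create -f testdata/csi-hostpath-driver/pv-pod.yaml
  kubectl --context addons-118348 create -f testdata/csi-hostpath-driver/snapshot.yaml

  # Poll until the snapshot is usable, then restore it into a fresh claim and pod.
  kubectl --context addons-118348 get volumesnapshot new-snapshot-demo \
    -o jsonpath={.status.readyToUse} -n default
  kubectl --context addons-118348 create -f testdata/csi-hostpath-driver/pvc-restore.yaml
  kubectl --context addons-118348 create -f testdata/csi-hostpath-driver/pv-pod-restore.yaml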

                                                
                                    
TestAddons/parallel/Headlamp (10.9s)

                                                
                                                
=== RUN   TestAddons/parallel/Headlamp
=== PAUSE TestAddons/parallel/Headlamp

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/Headlamp
addons_test.go:830: (dbg) Run:  out/minikube-linux-amd64 addons enable headlamp -p addons-118348 --alsologtostderr -v=1
addons_test.go:835: (dbg) TestAddons/parallel/Headlamp: waiting 8m0s for pods matching "app.kubernetes.io/name=headlamp" in namespace "headlamp" ...
helpers_test.go:344: "headlamp-7b5c95b59d-7tpl7" [75f2d000-7dd4-4778-8371-26f5e205b007] Pending / Ready:ContainersNotReady (containers with unready status: [headlamp]) / ContainersReady:ContainersNotReady (containers with unready status: [headlamp])
helpers_test.go:344: "headlamp-7b5c95b59d-7tpl7" [75f2d000-7dd4-4778-8371-26f5e205b007] Running
addons_test.go:835: (dbg) TestAddons/parallel/Headlamp: app.kubernetes.io/name=headlamp healthy within 10.003113064s
addons_test.go:839: (dbg) Run:  out/minikube-linux-amd64 -p addons-118348 addons disable headlamp --alsologtostderr -v=1
--- PASS: TestAddons/parallel/Headlamp (10.90s)

                                                
                                    
TestAddons/parallel/CloudSpanner (5.41s)

                                                
                                                
=== RUN   TestAddons/parallel/CloudSpanner
=== PAUSE TestAddons/parallel/CloudSpanner

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/CloudSpanner
addons_test.go:867: (dbg) TestAddons/parallel/CloudSpanner: waiting 6m0s for pods matching "app=cloud-spanner-emulator" in namespace "default" ...
helpers_test.go:344: "cloud-spanner-emulator-769b77f747-rlwjs" [c49fa658-74f7-46a0-9b08-17722aeffa19] Running
addons_test.go:867: (dbg) TestAddons/parallel/CloudSpanner: app=cloud-spanner-emulator healthy within 5.003689642s
addons_test.go:870: (dbg) Run:  out/minikube-linux-amd64 addons disable cloud-spanner -p addons-118348
--- PASS: TestAddons/parallel/CloudSpanner (5.41s)

                                                
                                    
TestAddons/parallel/LocalPath (51.77s)

                                                
                                                
=== RUN   TestAddons/parallel/LocalPath
=== PAUSE TestAddons/parallel/LocalPath

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/LocalPath
addons_test.go:982: (dbg) Run:  kubectl --context addons-118348 apply -f testdata/storage-provisioner-rancher/pvc.yaml
addons_test.go:988: (dbg) Run:  kubectl --context addons-118348 apply -f testdata/storage-provisioner-rancher/pod.yaml
addons_test.go:992: (dbg) TestAddons/parallel/LocalPath: waiting 5m0s for pvc "test-pvc" in namespace "default" ...
helpers_test.go:394: (dbg) Run:  kubectl --context addons-118348 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-118348 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-118348 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-118348 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-118348 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-118348 get pvc test-pvc -o jsonpath={.status.phase} -n default
addons_test.go:995: (dbg) TestAddons/parallel/LocalPath: waiting 3m0s for pods matching "run=test-local-path" in namespace "default" ...
helpers_test.go:344: "test-local-path" [5d0e69a2-f0f7-45df-adc8-d4a2d4fe1d34] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:344: "test-local-path" [5d0e69a2-f0f7-45df-adc8-d4a2d4fe1d34] Pending / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
helpers_test.go:344: "test-local-path" [5d0e69a2-f0f7-45df-adc8-d4a2d4fe1d34] Succeeded / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
addons_test.go:995: (dbg) TestAddons/parallel/LocalPath: run=test-local-path healthy within 3.003630685s
addons_test.go:1000: (dbg) Run:  kubectl --context addons-118348 get pvc test-pvc -o=json
addons_test.go:1009: (dbg) Run:  out/minikube-linux-amd64 -p addons-118348 ssh "cat /opt/local-path-provisioner/pvc-55d397ea-86e9-4f5a-ae73-814393eaf4d2_default_test-pvc/file1"
addons_test.go:1021: (dbg) Run:  kubectl --context addons-118348 delete pod test-local-path
addons_test.go:1025: (dbg) Run:  kubectl --context addons-118348 delete pvc test-pvc
addons_test.go:1029: (dbg) Run:  out/minikube-linux-amd64 -p addons-118348 addons disable storage-provisioner-rancher --alsologtostderr -v=1
addons_test.go:1029: (dbg) Done: out/minikube-linux-amd64 -p addons-118348 addons disable storage-provisioner-rancher --alsologtostderr -v=1: (42.962821892s)
--- PASS: TestAddons/parallel/LocalPath (51.77s)

                                                
                                    
TestAddons/parallel/NvidiaDevicePlugin (6.38s)

                                                
                                                
=== RUN   TestAddons/parallel/NvidiaDevicePlugin
=== PAUSE TestAddons/parallel/NvidiaDevicePlugin

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/NvidiaDevicePlugin
addons_test.go:1061: (dbg) TestAddons/parallel/NvidiaDevicePlugin: waiting 6m0s for pods matching "name=nvidia-device-plugin-ds" in namespace "kube-system" ...
helpers_test.go:344: "nvidia-device-plugin-daemonset-sghds" [1dd15af2-9e1e-4296-99f7-992a66fc0483] Running
addons_test.go:1061: (dbg) TestAddons/parallel/NvidiaDevicePlugin: name=nvidia-device-plugin-ds healthy within 6.003033381s
addons_test.go:1064: (dbg) Run:  out/minikube-linux-amd64 addons disable nvidia-device-plugin -p addons-118348
--- PASS: TestAddons/parallel/NvidiaDevicePlugin (6.38s)

                                                
                                    
TestAddons/parallel/Yakd (11.53s)

                                                
                                                
=== RUN   TestAddons/parallel/Yakd
=== PAUSE TestAddons/parallel/Yakd

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/Yakd
addons_test.go:1072: (dbg) TestAddons/parallel/Yakd: waiting 2m0s for pods matching "app.kubernetes.io/name=yakd-dashboard" in namespace "yakd-dashboard" ...
helpers_test.go:344: "yakd-dashboard-67d98fc6b-jgrtm" [d38c7d89-f2d9-418f-a1a6-4211bbc6c451] Running
addons_test.go:1072: (dbg) TestAddons/parallel/Yakd: app.kubernetes.io/name=yakd-dashboard healthy within 6.003534001s
addons_test.go:1076: (dbg) Run:  out/minikube-linux-amd64 -p addons-118348 addons disable yakd --alsologtostderr -v=1
addons_test.go:1076: (dbg) Done: out/minikube-linux-amd64 -p addons-118348 addons disable yakd --alsologtostderr -v=1: (5.523615668s)
--- PASS: TestAddons/parallel/Yakd (11.53s)

                                                
                                    
TestAddons/StoppedEnableDisable (5.86s)

                                                
                                                
=== RUN   TestAddons/StoppedEnableDisable
addons_test.go:174: (dbg) Run:  out/minikube-linux-amd64 stop -p addons-118348
addons_test.go:174: (dbg) Done: out/minikube-linux-amd64 stop -p addons-118348: (5.64120231s)
addons_test.go:178: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p addons-118348
addons_test.go:182: (dbg) Run:  out/minikube-linux-amd64 addons disable dashboard -p addons-118348
addons_test.go:187: (dbg) Run:  out/minikube-linux-amd64 addons disable gvisor -p addons-118348
--- PASS: TestAddons/StoppedEnableDisable (5.86s)
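The sequence under test: stop the profile, then verify addon enable/disable still works against a stopped cluster (gvisor, never enabled during Setup, covers the not-enabled case). From the log:

  out/minikube-linux-amd64 stop -p addons-118348
  out/minikube-linux-amd64 addons enable dashboard -p addons-118348
  out/minikube-linux-amd64 addons disable dashboard -p addons-118348
  out/minikube-linux-amd64 addons disable gvisor -p addons-118348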

                                                
                                    
TestCertOptions (28.47s)

                                                
                                                
=== RUN   TestCertOptions
=== PAUSE TestCertOptions

                                                
                                                

                                                
                                                
=== CONT  TestCertOptions
cert_options_test.go:49: (dbg) Run:  out/minikube-linux-amd64 start -p cert-options-333417 --memory=2048 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=docker  --container-runtime=docker
cert_options_test.go:49: (dbg) Done: out/minikube-linux-amd64 start -p cert-options-333417 --memory=2048 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=docker  --container-runtime=docker: (25.757911652s)
cert_options_test.go:60: (dbg) Run:  out/minikube-linux-amd64 -p cert-options-333417 ssh "openssl x509 -text -noout -in /var/lib/minikube/certs/apiserver.crt"
cert_options_test.go:88: (dbg) Run:  kubectl --context cert-options-333417 config view
cert_options_test.go:100: (dbg) Run:  out/minikube-linux-amd64 ssh -p cert-options-333417 -- "sudo cat /etc/kubernetes/admin.conf"
helpers_test.go:175: Cleaning up "cert-options-333417" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p cert-options-333417
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p cert-options-333417: (2.091985731s)
--- PASS: TestCertOptions (28.47s)
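The assertions here: every --apiserver-ips/--apiserver-names value must land in the apiserver serving certificate, and the custom --apiserver-port must show up in the kubeconfig. The checks as run in the log, reflowed:

  out/minikube-linux-amd64 start -p cert-options-333417 --memory=2048 \
    --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 \
    --apiserver-names=localhost --apiserver-names=www.google.com \
    --apiserver-port=8555 --driver=docker --container-runtime=docker

  # Inspect the generated certificate for the requested IPs/names in its SANs.
  out/minikube-linux-amd64 -p cert-options-333417 ssh \
    "openssl x509 -text -noout -in /var/lib/minikube/certs/apiserver.crt"

  # Port 8555 should appear in the kubeconfig's server URL.
  kubectl --context cert-options-333417 config view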

                                                
                                    
TestCertExpiration (227.34s)

                                                
                                                
=== RUN   TestCertExpiration
=== PAUSE TestCertExpiration

                                                
                                                

                                                
                                                
=== CONT  TestCertExpiration
cert_options_test.go:123: (dbg) Run:  out/minikube-linux-amd64 start -p cert-expiration-657463 --memory=2048 --cert-expiration=3m --driver=docker  --container-runtime=docker
cert_options_test.go:123: (dbg) Done: out/minikube-linux-amd64 start -p cert-expiration-657463 --memory=2048 --cert-expiration=3m --driver=docker  --container-runtime=docker: (25.423227001s)
cert_options_test.go:131: (dbg) Run:  out/minikube-linux-amd64 start -p cert-expiration-657463 --memory=2048 --cert-expiration=8760h --driver=docker  --container-runtime=docker
cert_options_test.go:131: (dbg) Done: out/minikube-linux-amd64 start -p cert-expiration-657463 --memory=2048 --cert-expiration=8760h --driver=docker  --container-runtime=docker: (19.707783268s)
helpers_test.go:175: Cleaning up "cert-expiration-657463" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p cert-expiration-657463
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p cert-expiration-657463: (2.208952226s)
--- PASS: TestCertExpiration (227.34s)
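The 227s wall time is mostly spent waiting out the 3-minute certificate lifetime: the first start issues short-lived certs, and the second start must cope with the now-expired certs and still come up. Reflowed from the log:

  out/minikube-linux-amd64 start -p cert-expiration-657463 --memory=2048 \
    --cert-expiration=3m --driver=docker --container-runtime=docker
  # ... wait for the certs to lapse, then restart with a one-year lifetime.
  out/minikube-linux-amd64 start -p cert-expiration-657463 --memory=2048 \
    --cert-expiration=8760h --driver=docker --container-runtime=docker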

                                                
                                    
TestDockerFlags (27.27s)

                                                
                                                
=== RUN   TestDockerFlags
=== PAUSE TestDockerFlags

                                                
                                                

                                                
                                                
=== CONT  TestDockerFlags
docker_test.go:51: (dbg) Run:  out/minikube-linux-amd64 start -p docker-flags-387502 --cache-images=false --memory=2048 --install-addons=false --wait=false --docker-env=FOO=BAR --docker-env=BAZ=BAT --docker-opt=debug --docker-opt=icc=true --alsologtostderr -v=5 --driver=docker  --container-runtime=docker
docker_test.go:51: (dbg) Done: out/minikube-linux-amd64 start -p docker-flags-387502 --cache-images=false --memory=2048 --install-addons=false --wait=false --docker-env=FOO=BAR --docker-env=BAZ=BAT --docker-opt=debug --docker-opt=icc=true --alsologtostderr -v=5 --driver=docker  --container-runtime=docker: (24.721745536s)
docker_test.go:56: (dbg) Run:  out/minikube-linux-amd64 -p docker-flags-387502 ssh "sudo systemctl show docker --property=Environment --no-pager"
docker_test.go:67: (dbg) Run:  out/minikube-linux-amd64 -p docker-flags-387502 ssh "sudo systemctl show docker --property=ExecStart --no-pager"
helpers_test.go:175: Cleaning up "docker-flags-387502" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p docker-flags-387502
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p docker-flags-387502: (2.036243007s)
--- PASS: TestDockerFlags (27.27s)
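What the two ssh probes verify, matching the flags passed at start: each --docker-env should surface in the docker unit's Environment property, and each --docker-opt should be reflected on the daemon's ExecStart line. Condensed from the log:

  # Expect the FOO=BAR and BAZ=BAT entries here.
  out/minikube-linux-amd64 -p docker-flags-387502 ssh \
    "sudo systemctl show docker --property=Environment --no-pager"
  # Expect the debug and icc=true options here.
  out/minikube-linux-amd64 -p docker-flags-387502 ssh \
    "sudo systemctl show docker --property=ExecStart --no-pager"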

                                                
                                    
TestForceSystemdFlag (30.17s)

                                                
                                                
=== RUN   TestForceSystemdFlag
=== PAUSE TestForceSystemdFlag

                                                
                                                

                                                
                                                
=== CONT  TestForceSystemdFlag
docker_test.go:91: (dbg) Run:  out/minikube-linux-amd64 start -p force-systemd-flag-463322 --memory=2048 --force-systemd --alsologtostderr -v=5 --driver=docker  --container-runtime=docker
docker_test.go:91: (dbg) Done: out/minikube-linux-amd64 start -p force-systemd-flag-463322 --memory=2048 --force-systemd --alsologtostderr -v=5 --driver=docker  --container-runtime=docker: (27.768818255s)
docker_test.go:110: (dbg) Run:  out/minikube-linux-amd64 -p force-systemd-flag-463322 ssh "docker info --format {{.CgroupDriver}}"
helpers_test.go:175: Cleaning up "force-systemd-flag-463322" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p force-systemd-flag-463322
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p force-systemd-flag-463322: (2.089645265s)
--- PASS: TestForceSystemdFlag (30.17s)
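The single assertion is the cgroup driver reported by docker inside the node: with --force-systemd it should be systemd rather than the default cgroupfs. From the log:

  out/minikube-linux-amd64 -p force-systemd-flag-463322 ssh \
    "docker info --format {{.CgroupDriver}}"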

                                                
                                    
TestForceSystemdEnv (27.56s)

                                                
                                                
=== RUN   TestForceSystemdEnv
=== PAUSE TestForceSystemdEnv

                                                
                                                

                                                
                                                
=== CONT  TestForceSystemdEnv
docker_test.go:155: (dbg) Run:  out/minikube-linux-amd64 start -p force-systemd-env-778062 --memory=2048 --alsologtostderr -v=5 --driver=docker  --container-runtime=docker
docker_test.go:155: (dbg) Done: out/minikube-linux-amd64 start -p force-systemd-env-778062 --memory=2048 --alsologtostderr -v=5 --driver=docker  --container-runtime=docker: (25.18557853s)
docker_test.go:110: (dbg) Run:  out/minikube-linux-amd64 -p force-systemd-env-778062 ssh "docker info --format {{.CgroupDriver}}"
helpers_test.go:175: Cleaning up "force-systemd-env-778062" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p force-systemd-env-778062
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p force-systemd-env-778062: (2.077744236s)
--- PASS: TestForceSystemdEnv (27.56s)

                                                
                                    
TestKVMDriverInstallOrUpdate (1.14s)

                                                
                                                
=== RUN   TestKVMDriverInstallOrUpdate
=== PAUSE TestKVMDriverInstallOrUpdate
=== CONT  TestKVMDriverInstallOrUpdate
--- PASS: TestKVMDriverInstallOrUpdate (1.14s)

TestErrorSpam/setup (23.48s)

=== RUN   TestErrorSpam/setup
error_spam_test.go:81: (dbg) Run:  out/minikube-linux-amd64 start -p nospam-095123 -n=1 --memory=2250 --wait=false --log_dir=/tmp/nospam-095123 --driver=docker  --container-runtime=docker
error_spam_test.go:81: (dbg) Done: out/minikube-linux-amd64 start -p nospam-095123 -n=1 --memory=2250 --wait=false --log_dir=/tmp/nospam-095123 --driver=docker  --container-runtime=docker: (23.479914037s)
--- PASS: TestErrorSpam/setup (23.48s)

TestErrorSpam/start (0.54s)

=== RUN   TestErrorSpam/start
error_spam_test.go:216: Cleaning up 1 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-095123 --log_dir /tmp/nospam-095123 start --dry-run
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-095123 --log_dir /tmp/nospam-095123 start --dry-run
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-amd64 -p nospam-095123 --log_dir /tmp/nospam-095123 start --dry-run
--- PASS: TestErrorSpam/start (0.54s)

TestErrorSpam/status (0.82s)

=== RUN   TestErrorSpam/status
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-095123 --log_dir /tmp/nospam-095123 status
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-095123 --log_dir /tmp/nospam-095123 status
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-amd64 -p nospam-095123 --log_dir /tmp/nospam-095123 status
--- PASS: TestErrorSpam/status (0.82s)

TestErrorSpam/pause (1.11s)

=== RUN   TestErrorSpam/pause
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-095123 --log_dir /tmp/nospam-095123 pause
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-095123 --log_dir /tmp/nospam-095123 pause
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-amd64 -p nospam-095123 --log_dir /tmp/nospam-095123 pause
--- PASS: TestErrorSpam/pause (1.11s)

TestErrorSpam/unpause (1.27s)

=== RUN   TestErrorSpam/unpause
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-095123 --log_dir /tmp/nospam-095123 unpause
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-095123 --log_dir /tmp/nospam-095123 unpause
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-amd64 -p nospam-095123 --log_dir /tmp/nospam-095123 unpause
--- PASS: TestErrorSpam/unpause (1.27s)

TestErrorSpam/stop (10.79s)

=== RUN   TestErrorSpam/stop
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-095123 --log_dir /tmp/nospam-095123 stop
error_spam_test.go:159: (dbg) Done: out/minikube-linux-amd64 -p nospam-095123 --log_dir /tmp/nospam-095123 stop: (10.625441928s)
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-095123 --log_dir /tmp/nospam-095123 stop
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-amd64 -p nospam-095123 --log_dir /tmp/nospam-095123 stop
--- PASS: TestErrorSpam/stop (10.79s)

TestFunctional/serial/CopySyncFile (0s)

=== RUN   TestFunctional/serial/CopySyncFile
functional_test.go:1855: local sync path: /home/jenkins/minikube-integration/19648-8091/.minikube/files/etc/test/nested/copy/14840/hosts
--- PASS: TestFunctional/serial/CopySyncFile (0.00s)

TestFunctional/serial/StartWithProxy (62.93s)

=== RUN   TestFunctional/serial/StartWithProxy
functional_test.go:2234: (dbg) Run:  out/minikube-linux-amd64 start -p functional-229304 --memory=4000 --apiserver-port=8441 --wait=all --driver=docker  --container-runtime=docker
functional_test.go:2234: (dbg) Done: out/minikube-linux-amd64 start -p functional-229304 --memory=4000 --apiserver-port=8441 --wait=all --driver=docker  --container-runtime=docker: (1m2.931866113s)
--- PASS: TestFunctional/serial/StartWithProxy (62.93s)

TestFunctional/serial/AuditLog (0s)

=== RUN   TestFunctional/serial/AuditLog
--- PASS: TestFunctional/serial/AuditLog (0.00s)

TestFunctional/serial/SoftStart (32.14s)

=== RUN   TestFunctional/serial/SoftStart
functional_test.go:659: (dbg) Run:  out/minikube-linux-amd64 start -p functional-229304 --alsologtostderr -v=8
functional_test.go:659: (dbg) Done: out/minikube-linux-amd64 start -p functional-229304 --alsologtostderr -v=8: (32.13632283s)
functional_test.go:663: soft start took 32.137080547s for "functional-229304" cluster.
--- PASS: TestFunctional/serial/SoftStart (32.14s)

TestFunctional/serial/KubeContext (0.04s)

=== RUN   TestFunctional/serial/KubeContext
functional_test.go:681: (dbg) Run:  kubectl config current-context
--- PASS: TestFunctional/serial/KubeContext (0.04s)

TestFunctional/serial/KubectlGetPods (0.06s)

=== RUN   TestFunctional/serial/KubectlGetPods
functional_test.go:696: (dbg) Run:  kubectl --context functional-229304 get po -A
--- PASS: TestFunctional/serial/KubectlGetPods (0.06s)

TestFunctional/serial/CacheCmd/cache/add_remote (2.14s)

=== RUN   TestFunctional/serial/CacheCmd/cache/add_remote
functional_test.go:1049: (dbg) Run:  out/minikube-linux-amd64 -p functional-229304 cache add registry.k8s.io/pause:3.1
functional_test.go:1049: (dbg) Run:  out/minikube-linux-amd64 -p functional-229304 cache add registry.k8s.io/pause:3.3
functional_test.go:1049: (dbg) Run:  out/minikube-linux-amd64 -p functional-229304 cache add registry.k8s.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/add_remote (2.14s)

TestFunctional/serial/CacheCmd/cache/add_local (0.65s)

=== RUN   TestFunctional/serial/CacheCmd/cache/add_local
functional_test.go:1077: (dbg) Run:  docker build -t minikube-local-cache-test:functional-229304 /tmp/TestFunctionalserialCacheCmdcacheadd_local919590191/001
functional_test.go:1089: (dbg) Run:  out/minikube-linux-amd64 -p functional-229304 cache add minikube-local-cache-test:functional-229304
functional_test.go:1094: (dbg) Run:  out/minikube-linux-amd64 -p functional-229304 cache delete minikube-local-cache-test:functional-229304
functional_test.go:1083: (dbg) Run:  docker rmi minikube-local-cache-test:functional-229304
--- PASS: TestFunctional/serial/CacheCmd/cache/add_local (0.65s)

TestFunctional/serial/CacheCmd/cache/CacheDelete (0.04s)

=== RUN   TestFunctional/serial/CacheCmd/cache/CacheDelete
functional_test.go:1102: (dbg) Run:  out/minikube-linux-amd64 cache delete registry.k8s.io/pause:3.3
--- PASS: TestFunctional/serial/CacheCmd/cache/CacheDelete (0.04s)

TestFunctional/serial/CacheCmd/cache/list (0.04s)

=== RUN   TestFunctional/serial/CacheCmd/cache/list
functional_test.go:1110: (dbg) Run:  out/minikube-linux-amd64 cache list
--- PASS: TestFunctional/serial/CacheCmd/cache/list (0.04s)

TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node (0.26s)

=== RUN   TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node
functional_test.go:1124: (dbg) Run:  out/minikube-linux-amd64 -p functional-229304 ssh sudo crictl images
--- PASS: TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node (0.26s)

TestFunctional/serial/CacheCmd/cache/cache_reload (1.21s)

=== RUN   TestFunctional/serial/CacheCmd/cache/cache_reload
functional_test.go:1147: (dbg) Run:  out/minikube-linux-amd64 -p functional-229304 ssh sudo docker rmi registry.k8s.io/pause:latest
functional_test.go:1153: (dbg) Run:  out/minikube-linux-amd64 -p functional-229304 ssh sudo crictl inspecti registry.k8s.io/pause:latest
functional_test.go:1153: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-229304 ssh sudo crictl inspecti registry.k8s.io/pause:latest: exit status 1 (252.032285ms)
-- stdout --
	FATA[0000] no such image "registry.k8s.io/pause:latest" present 
-- /stdout --
** stderr ** 
	ssh: Process exited with status 1
** /stderr **
functional_test.go:1158: (dbg) Run:  out/minikube-linux-amd64 -p functional-229304 cache reload
functional_test.go:1163: (dbg) Run:  out/minikube-linux-amd64 -p functional-229304 ssh sudo crictl inspecti registry.k8s.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/cache_reload (1.21s)
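
The cache_reload sequence above is a useful recipe in its own right: delete an image inside the node, confirm "crictl inspecti" now fails, run "cache reload", and confirm the image is back. A minimal sketch of that round trip follows; it is not the test's code, "minikube" is assumed to be on PATH, and the profile name is hypothetical.

	// cachereload_sketch.go: image removed -> inspecti fails; cache reload -> inspecti succeeds.
	package main

	import (
		"log"
		"os/exec"
	)

	func run(args ...string) error {
		out, err := exec.Command("minikube", args...).CombinedOutput()
		if err != nil {
			log.Printf("minikube %v: %v\n%s", args, err, out)
		}
		return err
	}

	func main() {
		p := "functional-demo" // hypothetical profile name
		run("-p", p, "ssh", "sudo docker rmi registry.k8s.io/pause:latest")
		// The image is gone from the node, so inspecti must exit non-zero.
		if run("-p", p, "ssh", "sudo crictl inspecti registry.k8s.io/pause:latest") == nil {
			log.Fatal("image still present after rmi")
		}
		run("-p", p, "cache", "reload")
		// After reloading from minikube's cache, inspecti must succeed again.
		if run("-p", p, "ssh", "sudo crictl inspecti registry.k8s.io/pause:latest") != nil {
			log.Fatal("image missing after cache reload")
		}
	}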

TestFunctional/serial/CacheCmd/cache/delete (0.09s)

=== RUN   TestFunctional/serial/CacheCmd/cache/delete
functional_test.go:1172: (dbg) Run:  out/minikube-linux-amd64 cache delete registry.k8s.io/pause:3.1
functional_test.go:1172: (dbg) Run:  out/minikube-linux-amd64 cache delete registry.k8s.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/delete (0.09s)

TestFunctional/serial/MinikubeKubectlCmd (0.1s)

=== RUN   TestFunctional/serial/MinikubeKubectlCmd
functional_test.go:716: (dbg) Run:  out/minikube-linux-amd64 -p functional-229304 kubectl -- --context functional-229304 get pods
--- PASS: TestFunctional/serial/MinikubeKubectlCmd (0.10s)

TestFunctional/serial/MinikubeKubectlCmdDirectly (0.09s)

=== RUN   TestFunctional/serial/MinikubeKubectlCmdDirectly
functional_test.go:741: (dbg) Run:  out/kubectl --context functional-229304 get pods
--- PASS: TestFunctional/serial/MinikubeKubectlCmdDirectly (0.09s)

TestFunctional/serial/ExtraConfig (40.53s)

=== RUN   TestFunctional/serial/ExtraConfig
functional_test.go:757: (dbg) Run:  out/minikube-linux-amd64 start -p functional-229304 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all
functional_test.go:757: (dbg) Done: out/minikube-linux-amd64 start -p functional-229304 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all: (40.525046053s)
functional_test.go:761: restart took 40.52517397s for "functional-229304" cluster.
--- PASS: TestFunctional/serial/ExtraConfig (40.53s)

TestFunctional/serial/ComponentHealth (0.06s)

=== RUN   TestFunctional/serial/ComponentHealth
functional_test.go:810: (dbg) Run:  kubectl --context functional-229304 get po -l tier=control-plane -n kube-system -o=json
functional_test.go:825: etcd phase: Running
functional_test.go:835: etcd status: Ready
functional_test.go:825: kube-apiserver phase: Running
functional_test.go:835: kube-apiserver status: Ready
functional_test.go:825: kube-controller-manager phase: Running
functional_test.go:835: kube-controller-manager status: Ready
functional_test.go:825: kube-scheduler phase: Running
functional_test.go:835: kube-scheduler status: Ready
--- PASS: TestFunctional/serial/ComponentHealth (0.06s)

TestFunctional/serial/LogsCmd (0.91s)

=== RUN   TestFunctional/serial/LogsCmd
functional_test.go:1236: (dbg) Run:  out/minikube-linux-amd64 -p functional-229304 logs
--- PASS: TestFunctional/serial/LogsCmd (0.91s)

TestFunctional/serial/LogsFileCmd (0.92s)

=== RUN   TestFunctional/serial/LogsFileCmd
functional_test.go:1250: (dbg) Run:  out/minikube-linux-amd64 -p functional-229304 logs --file /tmp/TestFunctionalserialLogsFileCmd3682854469/001/logs.txt
--- PASS: TestFunctional/serial/LogsFileCmd (0.92s)

TestFunctional/serial/InvalidService (4.47s)

=== RUN   TestFunctional/serial/InvalidService
functional_test.go:2321: (dbg) Run:  kubectl --context functional-229304 apply -f testdata/invalidsvc.yaml
functional_test.go:2335: (dbg) Run:  out/minikube-linux-amd64 service invalid-svc -p functional-229304
functional_test.go:2335: (dbg) Non-zero exit: out/minikube-linux-amd64 service invalid-svc -p functional-229304: exit status 115 (307.010085ms)
-- stdout --
	|-----------|-------------|-------------|---------------------------|
	| NAMESPACE |    NAME     | TARGET PORT |            URL            |
	|-----------|-------------|-------------|---------------------------|
	| default   | invalid-svc |          80 | http://192.168.49.2:31250 |
	|-----------|-------------|-------------|---------------------------|
	
	
-- /stdout --
** stderr ** 
	X Exiting due to SVC_UNREACHABLE: service not available: no running pod for service invalid-svc found
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_service_96b204199e3191fa1740d4430b018a3c8028d52d_0.log                 │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
** /stderr **
functional_test.go:2327: (dbg) Run:  kubectl --context functional-229304 delete -f testdata/invalidsvc.yaml
functional_test.go:2327: (dbg) Done: kubectl --context functional-229304 delete -f testdata/invalidsvc.yaml: (1.001609551s)
--- PASS: TestFunctional/serial/InvalidService (4.47s)
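
Exit status 115 is minikube's SVC_UNREACHABLE reason code: the Service object exists, but no running pod backs it. A hedged sketch of asserting exactly that outcome (the service name is taken from the log; the profile name is hypothetical and "minikube" on PATH is assumed):

	// invalidsvc_sketch.go: expect "minikube service" to fail with exit status 115.
	package main

	import (
		"errors"
		"log"
		"os/exec"
	)

	func main() {
		err := exec.Command("minikube", "-p", "functional-demo", "service", "invalid-svc").Run()
		var ee *exec.ExitError
		if errors.As(err, &ee) && ee.ExitCode() == 115 {
			log.Println("got exit status 115 (SVC_UNREACHABLE), as expected")
			return
		}
		log.Fatalf("expected exit status 115, got %v", err)
	}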

TestFunctional/parallel/ConfigCmd (0.37s)

=== RUN   TestFunctional/parallel/ConfigCmd
=== PAUSE TestFunctional/parallel/ConfigCmd
=== CONT  TestFunctional/parallel/ConfigCmd
functional_test.go:1199: (dbg) Run:  out/minikube-linux-amd64 -p functional-229304 config unset cpus
functional_test.go:1199: (dbg) Run:  out/minikube-linux-amd64 -p functional-229304 config get cpus
functional_test.go:1199: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-229304 config get cpus: exit status 14 (96.220737ms)
** stderr ** 
	Error: specified key could not be found in config
** /stderr **
functional_test.go:1199: (dbg) Run:  out/minikube-linux-amd64 -p functional-229304 config set cpus 2
functional_test.go:1199: (dbg) Run:  out/minikube-linux-amd64 -p functional-229304 config get cpus
functional_test.go:1199: (dbg) Run:  out/minikube-linux-amd64 -p functional-229304 config unset cpus
functional_test.go:1199: (dbg) Run:  out/minikube-linux-amd64 -p functional-229304 config get cpus
functional_test.go:1199: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-229304 config get cpus: exit status 14 (48.953808ms)
** stderr ** 
	Error: specified key could not be found in config
** /stderr **
--- PASS: TestFunctional/parallel/ConfigCmd (0.37s)
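
Note the pass condition here: "config get" on an unset key is expected to fail with exit status 14, so the non-zero exits above are deliberate. A small sketch of the same unset/get/set/get cycle, with the usual caveats (illustrative profile name, "minikube" on PATH):

	// configcmd_sketch.go: exit status 14 marks a key absent from the config.
	package main

	import (
		"errors"
		"log"
		"os/exec"
	)

	func main() {
		p := "functional-demo" // hypothetical profile name
		exec.Command("minikube", "-p", p, "config", "unset", "cpus").Run()
		err := exec.Command("minikube", "-p", p, "config", "get", "cpus").Run()
		var ee *exec.ExitError
		if !errors.As(err, &ee) || ee.ExitCode() != 14 {
			log.Fatalf("expected exit status 14 for an unset key, got %v", err)
		}
		exec.Command("minikube", "-p", p, "config", "set", "cpus", "2").Run()
		if err := exec.Command("minikube", "-p", p, "config", "get", "cpus").Run(); err != nil {
			log.Fatalf("get after set should succeed: %v", err)
		}
		log.Println("config get/set/unset behaved as the test expects")
	}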

TestFunctional/parallel/DashboardCmd (10.62s)

=== RUN   TestFunctional/parallel/DashboardCmd
=== PAUSE TestFunctional/parallel/DashboardCmd
=== CONT  TestFunctional/parallel/DashboardCmd
functional_test.go:905: (dbg) daemon: [out/minikube-linux-amd64 dashboard --url --port 36195 -p functional-229304 --alsologtostderr -v=1]
functional_test.go:910: (dbg) stopping [out/minikube-linux-amd64 dashboard --url --port 36195 -p functional-229304 --alsologtostderr -v=1] ...
helpers_test.go:508: unable to kill pid 65929: os: process already finished
--- PASS: TestFunctional/parallel/DashboardCmd (10.62s)

TestFunctional/parallel/DryRun (0.39s)

=== RUN   TestFunctional/parallel/DryRun
=== PAUSE TestFunctional/parallel/DryRun
=== CONT  TestFunctional/parallel/DryRun
functional_test.go:974: (dbg) Run:  out/minikube-linux-amd64 start -p functional-229304 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=docker
functional_test.go:974: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p functional-229304 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=docker: exit status 23 (175.050158ms)
-- stdout --
	* [functional-229304] minikube v1.34.0 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=19648
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/19648-8091/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/19648-8091/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the docker driver based on existing profile
	
	
-- /stdout --
** stderr ** 
	I0917 08:54:55.201559   64339 out.go:345] Setting OutFile to fd 1 ...
	I0917 08:54:55.201922   64339 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0917 08:54:55.201938   64339 out.go:358] Setting ErrFile to fd 2...
	I0917 08:54:55.201944   64339 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0917 08:54:55.202214   64339 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19648-8091/.minikube/bin
	I0917 08:54:55.202906   64339 out.go:352] Setting JSON to false
	I0917 08:54:55.204274   64339 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-14","uptime":2246,"bootTime":1726561049,"procs":346,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1069-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0917 08:54:55.204401   64339 start.go:139] virtualization: kvm guest
	I0917 08:54:55.206544   64339 out.go:177] * [functional-229304] minikube v1.34.0 on Ubuntu 20.04 (kvm/amd64)
	I0917 08:54:55.207894   64339 out.go:177]   - MINIKUBE_LOCATION=19648
	I0917 08:54:55.207904   64339 notify.go:220] Checking for updates...
	I0917 08:54:55.210136   64339 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0917 08:54:55.211658   64339 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19648-8091/kubeconfig
	I0917 08:54:55.213267   64339 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19648-8091/.minikube
	I0917 08:54:55.214461   64339 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0917 08:54:55.216332   64339 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0917 08:54:55.218528   64339 config.go:182] Loaded profile config "functional-229304": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I0917 08:54:55.219240   64339 driver.go:394] Setting default libvirt URI to qemu:///system
	I0917 08:54:55.249596   64339 docker.go:123] docker version: linux-27.2.1:Docker Engine - Community
	I0917 08:54:55.249681   64339 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0917 08:54:55.319982   64339 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:2 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:33 OomKillDisable:true NGoroutines:53 SystemTime:2024-09-17 08:54:55.308930375 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1069-gcp OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:x86
_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33647939584 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-14 Labels:[] ExperimentalBuild:false ServerVersion:27.2.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:7f7fdf5fed64eb6a7caf99b3e12efcf9d60e311c Expected:7f7fdf5fed64eb6a7caf99b3e12efcf9d60e311c} RuncCommit:{ID:v1.1.14-0-g2c9f560 Expected:v1.1.14-0-g2c9f560} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerEr
rors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.16.2] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.29.2] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I0917 08:54:55.320115   64339 docker.go:318] overlay module found
	I0917 08:54:55.322225   64339 out.go:177] * Using the docker driver based on existing profile
	I0917 08:54:55.323578   64339 start.go:297] selected driver: docker
	I0917 08:54:55.323593   64339 start.go:901] validating driver "docker" against &{Name:functional-229304 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726358845-19644@sha256:4c67a32a16c2d4f824f00267c172fd225757ca75441e363d925dc9583137f0b0 Memory:4000 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:functional-229304 Namespace:default APIServerHAVIP: APIServerNa
me:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker Mou
ntIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0917 08:54:55.323697   64339 start.go:912] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0917 08:54:55.326035   64339 out.go:201] 
	W0917 08:54:55.327336   64339 out.go:270] X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	I0917 08:54:55.328572   64339 out.go:201] 
** /stderr **
functional_test.go:991: (dbg) Run:  out/minikube-linux-amd64 start -p functional-229304 --dry-run --alsologtostderr -v=1 --driver=docker  --container-runtime=docker
--- PASS: TestFunctional/parallel/DryRun (0.39s)
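
The dry run fails by design: 250MiB is below minikube's usable minimum of 1800MB, which surfaces as exit status 23 with reason RSRC_INSUFFICIENT_REQ_MEMORY. The validation rule itself is simple; a sketch follows (the threshold is copied from the message above, and the function is illustrative, not minikube's own):

	// memcheck_sketch.go: the memory floor behind RSRC_INSUFFICIENT_REQ_MEMORY.
	package main

	import "fmt"

	const minUsableMemoryMB = 1800 // per the error text above

	func validateMemory(requestedMB int) error {
		if requestedMB < minUsableMemoryMB {
			return fmt.Errorf("requested memory allocation %dMiB is less than the usable minimum of %dMB",
				requestedMB, minUsableMemoryMB)
		}
		return nil
	}

	func main() {
		if err := validateMemory(250); err != nil {
			fmt.Println("X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY:", err)
		}
	}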

TestFunctional/parallel/InternationalLanguage (0.18s)

=== RUN   TestFunctional/parallel/InternationalLanguage
=== PAUSE TestFunctional/parallel/InternationalLanguage
=== CONT  TestFunctional/parallel/InternationalLanguage
functional_test.go:1020: (dbg) Run:  out/minikube-linux-amd64 start -p functional-229304 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=docker
functional_test.go:1020: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p functional-229304 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=docker: exit status 23 (176.664605ms)
-- stdout --
	* [functional-229304] minikube v1.34.0 sur Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=19648
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/19648-8091/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/19648-8091/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Utilisation du pilote docker basé sur le profil existant
	
	
-- /stdout --
** stderr ** 
	I0917 08:54:55.036666   64113 out.go:345] Setting OutFile to fd 1 ...
	I0917 08:54:55.036862   64113 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0917 08:54:55.036874   64113 out.go:358] Setting ErrFile to fd 2...
	I0917 08:54:55.036880   64113 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0917 08:54:55.037271   64113 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19648-8091/.minikube/bin
	I0917 08:54:55.037847   64113 out.go:352] Setting JSON to false
	I0917 08:54:55.038945   64113 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-14","uptime":2246,"bootTime":1726561049,"procs":341,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1069-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0917 08:54:55.039049   64113 start.go:139] virtualization: kvm guest
	I0917 08:54:55.040747   64113 out.go:177] * [functional-229304] minikube v1.34.0 sur Ubuntu 20.04 (kvm/amd64)
	I0917 08:54:55.042784   64113 out.go:177]   - MINIKUBE_LOCATION=19648
	I0917 08:54:55.042806   64113 notify.go:220] Checking for updates...
	I0917 08:54:55.045756   64113 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0917 08:54:55.047296   64113 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19648-8091/kubeconfig
	I0917 08:54:55.048577   64113 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19648-8091/.minikube
	I0917 08:54:55.049866   64113 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0917 08:54:55.050995   64113 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0917 08:54:55.052536   64113 config.go:182] Loaded profile config "functional-229304": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I0917 08:54:55.053162   64113 driver.go:394] Setting default libvirt URI to qemu:///system
	I0917 08:54:55.091880   64113 docker.go:123] docker version: linux-27.2.1:Docker Engine - Community
	I0917 08:54:55.092031   64113 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0917 08:54:55.147287   64113 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:2 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:33 OomKillDisable:true NGoroutines:53 SystemTime:2024-09-17 08:54:55.137970037 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1069-gcp OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:x86
_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33647939584 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-14 Labels:[] ExperimentalBuild:false ServerVersion:27.2.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:7f7fdf5fed64eb6a7caf99b3e12efcf9d60e311c Expected:7f7fdf5fed64eb6a7caf99b3e12efcf9d60e311c} RuncCommit:{ID:v1.1.14-0-g2c9f560 Expected:v1.1.14-0-g2c9f560} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerEr
rors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.16.2] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.29.2] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I0917 08:54:55.147383   64113 docker.go:318] overlay module found
	I0917 08:54:55.149160   64113 out.go:177] * Utilisation du pilote docker basé sur le profil existant
	I0917 08:54:55.150198   64113 start.go:297] selected driver: docker
	I0917 08:54:55.150210   64113 start.go:901] validating driver "docker" against &{Name:functional-229304 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726358845-19644@sha256:4c67a32a16c2d4f824f00267c172fd225757ca75441e363d925dc9583137f0b0 Memory:4000 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:functional-229304 Namespace:default APIServerHAVIP: APIServerNa
me:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker Mou
ntIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0917 08:54:55.150282   64113 start.go:912] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0917 08:54:55.152351   64113 out.go:201] 
	W0917 08:54:55.153579   64113 out.go:270] X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	I0917 08:54:55.154779   64113 out.go:201] 
** /stderr **
--- PASS: TestFunctional/parallel/InternationalLanguage (0.18s)

TestFunctional/parallel/StatusCmd (1.09s)

=== RUN   TestFunctional/parallel/StatusCmd
=== PAUSE TestFunctional/parallel/StatusCmd
=== CONT  TestFunctional/parallel/StatusCmd
functional_test.go:854: (dbg) Run:  out/minikube-linux-amd64 -p functional-229304 status
functional_test.go:860: (dbg) Run:  out/minikube-linux-amd64 -p functional-229304 status -f host:{{.Host}},kublet:{{.Kubelet}},apiserver:{{.APIServer}},kubeconfig:{{.Kubeconfig}}
functional_test.go:872: (dbg) Run:  out/minikube-linux-amd64 -p functional-229304 status -o json
--- PASS: TestFunctional/parallel/StatusCmd (1.09s)
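
The second invocation above formats status through a Go template (the "kublet" label is a typo preserved verbatim in the test's format string; the field it reads is .Kubelet), and the third emits JSON. A sketch of consuming the JSON form; the struct fields mirror the template's .Host/.Kubelet/.APIServer/.Kubeconfig references, and the exact JSON shape of "minikube status -o json" is an assumption here, not verified:

	// statusjson_sketch.go: parse "minikube status -o json" (field names assumed).
	package main

	import (
		"encoding/json"
		"fmt"
		"log"
		"os/exec"
	)

	type Status struct {
		Host, Kubelet, APIServer, Kubeconfig string // assumed JSON field names
	}

	func main() {
		out, err := exec.Command("minikube", "-p", "functional-demo", "status", "-o", "json").Output()
		if err != nil {
			log.Fatal(err)
		}
		var st Status
		if err := json.Unmarshal(out, &st); err != nil {
			log.Fatal(err)
		}
		fmt.Printf("host:%s, kubelet:%s, apiserver:%s, kubeconfig:%s\n",
			st.Host, st.Kubelet, st.APIServer, st.Kubeconfig)
	}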

TestFunctional/parallel/ServiceCmdConnect (7.77s)

=== RUN   TestFunctional/parallel/ServiceCmdConnect
=== PAUSE TestFunctional/parallel/ServiceCmdConnect
=== CONT  TestFunctional/parallel/ServiceCmdConnect
functional_test.go:1629: (dbg) Run:  kubectl --context functional-229304 create deployment hello-node-connect --image=registry.k8s.io/echoserver:1.8
functional_test.go:1635: (dbg) Run:  kubectl --context functional-229304 expose deployment hello-node-connect --type=NodePort --port=8080
functional_test.go:1640: (dbg) TestFunctional/parallel/ServiceCmdConnect: waiting 10m0s for pods matching "app=hello-node-connect" in namespace "default" ...
helpers_test.go:344: "hello-node-connect-67bdd5bbb4-cll8n" [132c254a-500a-4593-891a-d6d40e316f52] Pending / Ready:ContainersNotReady (containers with unready status: [echoserver]) / ContainersReady:ContainersNotReady (containers with unready status: [echoserver])
helpers_test.go:344: "hello-node-connect-67bdd5bbb4-cll8n" [132c254a-500a-4593-891a-d6d40e316f52] Running
functional_test.go:1640: (dbg) TestFunctional/parallel/ServiceCmdConnect: app=hello-node-connect healthy within 7.003304824s
functional_test.go:1649: (dbg) Run:  out/minikube-linux-amd64 -p functional-229304 service hello-node-connect --url
functional_test.go:1655: found endpoint for hello-node-connect: http://192.168.49.2:30134
functional_test.go:1675: http://192.168.49.2:30134: success! body:

Hostname: hello-node-connect-67bdd5bbb4-cll8n

Pod Information:
	-no pod information available-

Server values:
	server_version=nginx: 1.13.3 - lua: 10008

Request Information:
	client_address=10.244.0.1
	method=GET
	real path=/
	query=
	request_version=1.1
	request_uri=http://192.168.49.2:8080/

Request Headers:
	accept-encoding=gzip
	host=192.168.49.2:30134
	user-agent=Go-http-client/1.1

Request Body:
	-no body in request-

--- PASS: TestFunctional/parallel/ServiceCmdConnect (7.77s)
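
The connectivity check boils down to: resolve the NodePort URL with "minikube service --url", then GET it and read the echoserver reply shown above. A sketch under the usual assumptions (hypothetical profile name, "minikube" on PATH, the hello-node-connect deployment already exposed):

	// svcconnect_sketch.go: fetch the service URL and hit it over HTTP.
	package main

	import (
		"fmt"
		"io"
		"log"
		"net/http"
		"os/exec"
		"strings"
	)

	func main() {
		out, err := exec.Command("minikube", "-p", "functional-demo",
			"service", "hello-node-connect", "--url").Output()
		if err != nil {
			log.Fatal(err)
		}
		url := strings.TrimSpace(string(out))
		resp, err := http.Get(url)
		if err != nil {
			log.Fatal(err)
		}
		defer resp.Body.Close()
		body, _ := io.ReadAll(resp.Body)
		fmt.Printf("GET %s -> %d\n%s", url, resp.StatusCode, body)
	}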

TestFunctional/parallel/AddonsCmd (0.13s)

=== RUN   TestFunctional/parallel/AddonsCmd
=== PAUSE TestFunctional/parallel/AddonsCmd
=== CONT  TestFunctional/parallel/AddonsCmd
functional_test.go:1690: (dbg) Run:  out/minikube-linux-amd64 -p functional-229304 addons list
functional_test.go:1702: (dbg) Run:  out/minikube-linux-amd64 -p functional-229304 addons list -o json
--- PASS: TestFunctional/parallel/AddonsCmd (0.13s)

TestFunctional/parallel/PersistentVolumeClaim (31.48s)

=== RUN   TestFunctional/parallel/PersistentVolumeClaim
=== PAUSE TestFunctional/parallel/PersistentVolumeClaim
=== CONT  TestFunctional/parallel/PersistentVolumeClaim
functional_test_pvc_test.go:44: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 4m0s for pods matching "integration-test=storage-provisioner" in namespace "kube-system" ...
helpers_test.go:344: "storage-provisioner" [deca4306-17f5-43ec-b134-5c95cfbf4c1a] Running
functional_test_pvc_test.go:44: (dbg) TestFunctional/parallel/PersistentVolumeClaim: integration-test=storage-provisioner healthy within 5.011556168s
functional_test_pvc_test.go:49: (dbg) Run:  kubectl --context functional-229304 get storageclass -o=json
functional_test_pvc_test.go:69: (dbg) Run:  kubectl --context functional-229304 apply -f testdata/storage-provisioner/pvc.yaml
functional_test_pvc_test.go:76: (dbg) Run:  kubectl --context functional-229304 get pvc myclaim -o=json
functional_test_pvc_test.go:125: (dbg) Run:  kubectl --context functional-229304 apply -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 3m0s for pods matching "test=storage-provisioner" in namespace "default" ...
helpers_test.go:344: "sp-pod" [5397e29c-2ae5-4bbd-8273-8ce7f9bbbf51] Pending
helpers_test.go:344: "sp-pod" [5397e29c-2ae5-4bbd-8273-8ce7f9bbbf51] Pending / Ready:ContainersNotReady (containers with unready status: [myfrontend]) / ContainersReady:ContainersNotReady (containers with unready status: [myfrontend])
helpers_test.go:344: "sp-pod" [5397e29c-2ae5-4bbd-8273-8ce7f9bbbf51] Running
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: test=storage-provisioner healthy within 19.003877412s
functional_test_pvc_test.go:100: (dbg) Run:  kubectl --context functional-229304 exec sp-pod -- touch /tmp/mount/foo
functional_test_pvc_test.go:106: (dbg) Run:  kubectl --context functional-229304 delete -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:125: (dbg) Run:  kubectl --context functional-229304 apply -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 3m0s for pods matching "test=storage-provisioner" in namespace "default" ...
helpers_test.go:344: "sp-pod" [fa6259c6-04a9-49c5-9494-ba494de5f56c] Pending
helpers_test.go:344: "sp-pod" [fa6259c6-04a9-49c5-9494-ba494de5f56c] Running
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: test=storage-provisioner healthy within 6.003370851s
functional_test_pvc_test.go:114: (dbg) Run:  kubectl --context functional-229304 exec sp-pod -- ls /tmp/mount
--- PASS: TestFunctional/parallel/PersistentVolumeClaim (31.48s)
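
The interesting part of this test is the persistence check: write a file through the first pod, delete the pod, recreate it against the same claim, and confirm the file survived the pod's lifetime. Sketched below with the kubectl calls copied from the log; the context name is hypothetical, and waiting for the recreated pod to reach Running is elided:

	// pvc_sketch.go: data written to a PVC must outlive the pod that wrote it.
	package main

	import (
		"log"
		"os/exec"
	)

	func kubectl(args ...string) {
		all := append([]string{"--context", "functional-demo"}, args...)
		if out, err := exec.Command("kubectl", all...).CombinedOutput(); err != nil {
			log.Fatalf("kubectl %v: %v\n%s", args, err, out)
		}
	}

	func main() {
		kubectl("exec", "sp-pod", "--", "touch", "/tmp/mount/foo")
		kubectl("delete", "-f", "testdata/storage-provisioner/pod.yaml")
		kubectl("apply", "-f", "testdata/storage-provisioner/pod.yaml")
		// (wait for the new sp-pod to reach Running before the next step)
		kubectl("exec", "sp-pod", "--", "ls", "/tmp/mount") // "foo" must still be listed
	}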

TestFunctional/parallel/SSHCmd (0.5s)

=== RUN   TestFunctional/parallel/SSHCmd
=== PAUSE TestFunctional/parallel/SSHCmd
=== CONT  TestFunctional/parallel/SSHCmd
functional_test.go:1725: (dbg) Run:  out/minikube-linux-amd64 -p functional-229304 ssh "echo hello"
functional_test.go:1742: (dbg) Run:  out/minikube-linux-amd64 -p functional-229304 ssh "cat /etc/hostname"
--- PASS: TestFunctional/parallel/SSHCmd (0.50s)

TestFunctional/parallel/CpCmd (1.78s)

=== RUN   TestFunctional/parallel/CpCmd
=== PAUSE TestFunctional/parallel/CpCmd
=== CONT  TestFunctional/parallel/CpCmd
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p functional-229304 cp testdata/cp-test.txt /home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p functional-229304 ssh -n functional-229304 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p functional-229304 cp functional-229304:/home/docker/cp-test.txt /tmp/TestFunctionalparallelCpCmd2529999018/001/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p functional-229304 ssh -n functional-229304 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p functional-229304 cp testdata/cp-test.txt /tmp/does/not/exist/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p functional-229304 ssh -n functional-229304 "sudo cat /tmp/does/not/exist/cp-test.txt"
--- PASS: TestFunctional/parallel/CpCmd (1.78s)
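
The cp checks form a round trip: copy a local file into the node with "minikube cp", then cat it back over ssh and compare. A sketch of that round trip (paths and flags from the log; the profile name is hypothetical):

	// cpcmd_sketch.go: verify a file copied into the node reads back identically.
	package main

	import (
		"bytes"
		"log"
		"os"
		"os/exec"
	)

	func main() {
		p := "functional-demo" // hypothetical profile name
		local := "testdata/cp-test.txt"
		if out, err := exec.Command("minikube", "-p", p, "cp",
			local, "/home/docker/cp-test.txt").CombinedOutput(); err != nil {
			log.Fatalf("cp failed: %v\n%s", err, out)
		}
		remote, err := exec.Command("minikube", "-p", p, "ssh", "-n", p,
			"sudo cat /home/docker/cp-test.txt").Output()
		if err != nil {
			log.Fatal(err)
		}
		want, err := os.ReadFile(local)
		if err != nil {
			log.Fatal(err)
		}
		if !bytes.Equal(bytes.TrimSpace(remote), bytes.TrimSpace(want)) {
			log.Fatal("copied file does not match the local source")
		}
		log.Println("round-trip copy verified")
	}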

TestFunctional/parallel/MySQL (25.29s)

=== RUN   TestFunctional/parallel/MySQL
=== PAUSE TestFunctional/parallel/MySQL
=== CONT  TestFunctional/parallel/MySQL
functional_test.go:1793: (dbg) Run:  kubectl --context functional-229304 replace --force -f testdata/mysql.yaml
functional_test.go:1799: (dbg) TestFunctional/parallel/MySQL: waiting 10m0s for pods matching "app=mysql" in namespace "default" ...
helpers_test.go:344: "mysql-6cdb49bbb-9d8bt" [821285c2-1069-4f19-ac9c-512a6eeb54e8] Pending
helpers_test.go:344: "mysql-6cdb49bbb-9d8bt" [821285c2-1069-4f19-ac9c-512a6eeb54e8] Pending / Ready:ContainersNotReady (containers with unready status: [mysql]) / ContainersReady:ContainersNotReady (containers with unready status: [mysql])
helpers_test.go:344: "mysql-6cdb49bbb-9d8bt" [821285c2-1069-4f19-ac9c-512a6eeb54e8] Running
functional_test.go:1799: (dbg) TestFunctional/parallel/MySQL: app=mysql healthy within 19.003709566s
functional_test.go:1807: (dbg) Run:  kubectl --context functional-229304 exec mysql-6cdb49bbb-9d8bt -- mysql -ppassword -e "show databases;"
functional_test.go:1807: (dbg) Non-zero exit: kubectl --context functional-229304 exec mysql-6cdb49bbb-9d8bt -- mysql -ppassword -e "show databases;": exit status 1 (125.396479ms)
** stderr ** 
	mysql: [Warning] Using a password on the command line interface can be insecure.
	ERROR 1045 (28000): Access denied for user 'root'@'localhost' (using password: YES)
	command terminated with exit code 1
** /stderr **
functional_test.go:1807: (dbg) Run:  kubectl --context functional-229304 exec mysql-6cdb49bbb-9d8bt -- mysql -ppassword -e "show databases;"
functional_test.go:1807: (dbg) Non-zero exit: kubectl --context functional-229304 exec mysql-6cdb49bbb-9d8bt -- mysql -ppassword -e "show databases;": exit status 1 (107.335113ms)
** stderr ** 
	mysql: [Warning] Using a password on the command line interface can be insecure.
	ERROR 2002 (HY000): Can't connect to local MySQL server through socket '/var/run/mysqld/mysqld.sock' (2)
	command terminated with exit code 1
** /stderr **
functional_test.go:1807: (dbg) Run:  kubectl --context functional-229304 exec mysql-6cdb49bbb-9d8bt -- mysql -ppassword -e "show databases;"
functional_test.go:1807: (dbg) Non-zero exit: kubectl --context functional-229304 exec mysql-6cdb49bbb-9d8bt -- mysql -ppassword -e "show databases;": exit status 1 (102.255963ms)
** stderr ** 
	mysql: [Warning] Using a password on the command line interface can be insecure.
	ERROR 2002 (HY000): Can't connect to local MySQL server through socket '/var/run/mysqld/mysqld.sock' (2)
	command terminated with exit code 1
** /stderr **
functional_test.go:1807: (dbg) Run:  kubectl --context functional-229304 exec mysql-6cdb49bbb-9d8bt -- mysql -ppassword -e "show databases;"
--- PASS: TestFunctional/parallel/MySQL (25.29s)
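
The three non-zero exits above are expected noise: ERROR 1045 and ERROR 2002 are mysqld still initializing inside the pod, and the test simply retries until "show databases;" succeeds. A sketch of that retry loop (pod name copied from the log; the kubectl context name is hypothetical):

	// mysqlretry_sketch.go: poll mysql until the server accepts the query.
	package main

	import (
		"log"
		"os/exec"
		"time"
	)

	func main() {
		for attempt := 1; attempt <= 10; attempt++ {
			out, err := exec.Command("kubectl", "--context", "functional-demo",
				"exec", "mysql-6cdb49bbb-9d8bt", "--",
				"mysql", "-ppassword", "-e", "show databases;").CombinedOutput()
			if err == nil {
				log.Printf("mysql ready:\n%s", out)
				return
			}
			log.Printf("attempt %d: mysql not ready (%v); retrying", attempt, err)
			time.Sleep(3 * time.Second)
		}
		log.Fatal("mysql never became ready")
	}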

TestFunctional/parallel/FileSync (0.28s)

=== RUN   TestFunctional/parallel/FileSync
=== PAUSE TestFunctional/parallel/FileSync
=== CONT  TestFunctional/parallel/FileSync
functional_test.go:1929: Checking for existence of /etc/test/nested/copy/14840/hosts within VM
functional_test.go:1931: (dbg) Run:  out/minikube-linux-amd64 -p functional-229304 ssh "sudo cat /etc/test/nested/copy/14840/hosts"
functional_test.go:1936: file sync test content: Test file for checking file sync process
--- PASS: TestFunctional/parallel/FileSync (0.28s)

TestFunctional/parallel/CertSync (1.72s)

=== RUN   TestFunctional/parallel/CertSync
=== PAUSE TestFunctional/parallel/CertSync
=== CONT  TestFunctional/parallel/CertSync
functional_test.go:1972: Checking for existence of /etc/ssl/certs/14840.pem within VM
functional_test.go:1973: (dbg) Run:  out/minikube-linux-amd64 -p functional-229304 ssh "sudo cat /etc/ssl/certs/14840.pem"
functional_test.go:1972: Checking for existence of /usr/share/ca-certificates/14840.pem within VM
functional_test.go:1973: (dbg) Run:  out/minikube-linux-amd64 -p functional-229304 ssh "sudo cat /usr/share/ca-certificates/14840.pem"
functional_test.go:1972: Checking for existence of /etc/ssl/certs/51391683.0 within VM
functional_test.go:1973: (dbg) Run:  out/minikube-linux-amd64 -p functional-229304 ssh "sudo cat /etc/ssl/certs/51391683.0"
functional_test.go:1999: Checking for existence of /etc/ssl/certs/148402.pem within VM
functional_test.go:2000: (dbg) Run:  out/minikube-linux-amd64 -p functional-229304 ssh "sudo cat /etc/ssl/certs/148402.pem"
functional_test.go:1999: Checking for existence of /usr/share/ca-certificates/148402.pem within VM
functional_test.go:2000: (dbg) Run:  out/minikube-linux-amd64 -p functional-229304 ssh "sudo cat /usr/share/ca-certificates/148402.pem"
functional_test.go:1999: Checking for existence of /etc/ssl/certs/3ec20f2e.0 within VM
functional_test.go:2000: (dbg) Run:  out/minikube-linux-amd64 -p functional-229304 ssh "sudo cat /etc/ssl/certs/3ec20f2e.0"
--- PASS: TestFunctional/parallel/CertSync (1.72s)
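
The hashed filenames verified above (51391683.0 and 3ec20f2e.0) are OpenSSL subject-hash names, the scheme certificate directories use to index certs. A sketch of deriving such a name, assuming an openssl binary on PATH and using the cert path from this run:

    package main

    import (
        "fmt"
        "os/exec"
        "strings"
    )

    func main() {
        // `openssl x509 -hash` prints the subject hash used for <hash>.0 names.
        out, err := exec.Command("openssl", "x509", "-hash", "-noout",
            "-in", "/etc/ssl/certs/14840.pem").Output()
        if err != nil {
            fmt.Println("openssl failed:", err)
            return
        }
        fmt.Printf("cert store name: %s.0\n", strings.TrimSpace(string(out)))
    }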

TestFunctional/parallel/NodeLabels (0.07s)

=== RUN   TestFunctional/parallel/NodeLabels
=== PAUSE TestFunctional/parallel/NodeLabels

=== CONT  TestFunctional/parallel/NodeLabels
functional_test.go:219: (dbg) Run:  kubectl --context functional-229304 get nodes --output=go-template "--template='{{range $k, $v := (index .items 0).metadata.labels}}{{$k}} {{end}}'"
--- PASS: TestFunctional/parallel/NodeLabels (0.07s)
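
The --template flag above is Go text/template syntax: {{range $k, $v := ...}} walks a map in sorted key order and prints each label key. A self-contained demonstration of the same construct over a small, hypothetical labels map:

    package main

    import (
        "os"
        "text/template"
    )

    func main() {
        labels := map[string]string{
            "kubernetes.io/hostname": "functional-229304",
            "kubernetes.io/os":       "linux",
        }
        // Same construct kubectl evaluates: print each label key, space-separated.
        t := template.Must(template.New("keys").Parse(
            "{{range $k, $v := .}}{{$k}} {{end}}"))
        t.Execute(os.Stdout, labels)
    }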

TestFunctional/parallel/NonActiveRuntimeDisabled (0.35s)

=== RUN   TestFunctional/parallel/NonActiveRuntimeDisabled
=== PAUSE TestFunctional/parallel/NonActiveRuntimeDisabled

=== CONT  TestFunctional/parallel/NonActiveRuntimeDisabled
functional_test.go:2027: (dbg) Run:  out/minikube-linux-amd64 -p functional-229304 ssh "sudo systemctl is-active crio"
functional_test.go:2027: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-229304 ssh "sudo systemctl is-active crio": exit status 1 (346.337472ms)

-- stdout --
	inactive

-- /stdout --
** stderr ** 
	ssh: Process exited with status 3

** /stderr **
--- PASS: TestFunctional/parallel/NonActiveRuntimeDisabled (0.35s)
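
The non-zero exit is the passing case here: systemctl is-active exits 0 only for an active unit, and exit status 3 with "inactive" on stdout is exactly what this test expects for crio while docker is the active runtime. A sketch of distinguishing those outcomes in Go (the reporting choices are assumptions):

    package main

    import (
        "errors"
        "fmt"
        "os/exec"
        "strings"
    )

    func main() {
        out, err := exec.Command("systemctl", "is-active", "crio").Output()
        state := strings.TrimSpace(string(out))
        var exitErr *exec.ExitError
        switch {
        case err == nil:
            fmt.Println("unit is active") // would violate this test's expectation
        case errors.As(err, &exitErr):
            // Exit status 3 plus "inactive" is the expected, healthy outcome here.
            fmt.Printf("unit not active: state=%q exit=%d\n", state, exitErr.ExitCode())
        default:
            fmt.Println("could not run systemctl:", err)
        }
    }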

TestFunctional/parallel/License (0.16s)

=== RUN   TestFunctional/parallel/License
=== PAUSE TestFunctional/parallel/License

=== CONT  TestFunctional/parallel/License
functional_test.go:2288: (dbg) Run:  out/minikube-linux-amd64 license
--- PASS: TestFunctional/parallel/License (0.16s)

TestFunctional/parallel/ServiceCmd/DeployApp (10.2s)

=== RUN   TestFunctional/parallel/ServiceCmd/DeployApp
functional_test.go:1439: (dbg) Run:  kubectl --context functional-229304 create deployment hello-node --image=registry.k8s.io/echoserver:1.8
functional_test.go:1445: (dbg) Run:  kubectl --context functional-229304 expose deployment hello-node --type=NodePort --port=8080
functional_test.go:1450: (dbg) TestFunctional/parallel/ServiceCmd/DeployApp: waiting 10m0s for pods matching "app=hello-node" in namespace "default" ...
helpers_test.go:344: "hello-node-6b9f76b5c7-tk2pj" [32d70ceb-9fbc-4ffd-b415-671156c54990] Pending / Ready:ContainersNotReady (containers with unready status: [echoserver]) / ContainersReady:ContainersNotReady (containers with unready status: [echoserver])
helpers_test.go:344: "hello-node-6b9f76b5c7-tk2pj" [32d70ceb-9fbc-4ffd-b415-671156c54990] Running
functional_test.go:1450: (dbg) TestFunctional/parallel/ServiceCmd/DeployApp: app=hello-node healthy within 10.004147395s
--- PASS: TestFunctional/parallel/ServiceCmd/DeployApp (10.20s)

TestFunctional/parallel/ProfileCmd/profile_not_create (0.5s)

=== RUN   TestFunctional/parallel/ProfileCmd/profile_not_create
functional_test.go:1270: (dbg) Run:  out/minikube-linux-amd64 profile lis
functional_test.go:1275: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestFunctional/parallel/ProfileCmd/profile_not_create (0.50s)

TestFunctional/parallel/ProfileCmd/profile_list (0.36s)

=== RUN   TestFunctional/parallel/ProfileCmd/profile_list
functional_test.go:1310: (dbg) Run:  out/minikube-linux-amd64 profile list
functional_test.go:1315: Took "313.653766ms" to run "out/minikube-linux-amd64 profile list"
functional_test.go:1324: (dbg) Run:  out/minikube-linux-amd64 profile list -l
functional_test.go:1329: Took "43.949847ms" to run "out/minikube-linux-amd64 profile list -l"
--- PASS: TestFunctional/parallel/ProfileCmd/profile_list (0.36s)

TestFunctional/parallel/Version/short (0.05s)

=== RUN   TestFunctional/parallel/Version/short
=== PAUSE TestFunctional/parallel/Version/short

=== CONT  TestFunctional/parallel/Version/short
functional_test.go:2256: (dbg) Run:  out/minikube-linux-amd64 -p functional-229304 version --short
--- PASS: TestFunctional/parallel/Version/short (0.05s)

TestFunctional/parallel/Version/components (0.61s)

=== RUN   TestFunctional/parallel/Version/components
=== PAUSE TestFunctional/parallel/Version/components

=== CONT  TestFunctional/parallel/Version/components
functional_test.go:2270: (dbg) Run:  out/minikube-linux-amd64 -p functional-229304 version -o=json --components
--- PASS: TestFunctional/parallel/Version/components (0.61s)

TestFunctional/parallel/ProfileCmd/profile_json_output (0.4s)

=== RUN   TestFunctional/parallel/ProfileCmd/profile_json_output
functional_test.go:1361: (dbg) Run:  out/minikube-linux-amd64 profile list -o json
functional_test.go:1366: Took "352.909201ms" to run "out/minikube-linux-amd64 profile list -o json"
functional_test.go:1374: (dbg) Run:  out/minikube-linux-amd64 profile list -o json --light
functional_test.go:1379: Took "43.483419ms" to run "out/minikube-linux-amd64 profile list -o json --light"
--- PASS: TestFunctional/parallel/ProfileCmd/profile_json_output (0.40s)

TestFunctional/parallel/ImageCommands/ImageListShort (0.21s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageListShort
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListShort

=== CONT  TestFunctional/parallel/ImageCommands/ImageListShort
functional_test.go:261: (dbg) Run:  out/minikube-linux-amd64 -p functional-229304 image ls --format short --alsologtostderr
functional_test.go:266: (dbg) Stdout: out/minikube-linux-amd64 -p functional-229304 image ls --format short --alsologtostderr:
registry.k8s.io/pause:latest
registry.k8s.io/pause:3.3
registry.k8s.io/pause:3.10
registry.k8s.io/pause:3.1
registry.k8s.io/kube-scheduler:v1.31.1
registry.k8s.io/kube-proxy:v1.31.1
registry.k8s.io/kube-controller-manager:v1.31.1
registry.k8s.io/kube-apiserver:v1.31.1
registry.k8s.io/etcd:3.5.15-0
registry.k8s.io/echoserver:1.8
registry.k8s.io/coredns/coredns:v1.11.3
gcr.io/k8s-minikube/storage-provisioner:v5
gcr.io/k8s-minikube/busybox:1.28.4-glibc
docker.io/library/nginx:alpine
docker.io/library/minikube-local-cache-test:functional-229304
docker.io/kubernetesui/metrics-scraper:<none>
docker.io/kubernetesui/dashboard:<none>
docker.io/kicbase/echo-server:functional-229304
functional_test.go:269: (dbg) Stderr: out/minikube-linux-amd64 -p functional-229304 image ls --format short --alsologtostderr:
I0917 08:55:17.155143   72023 out.go:345] Setting OutFile to fd 1 ...
I0917 08:55:17.155236   72023 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0917 08:55:17.155243   72023 out.go:358] Setting ErrFile to fd 2...
I0917 08:55:17.155247   72023 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0917 08:55:17.155431   72023 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19648-8091/.minikube/bin
I0917 08:55:17.156003   72023 config.go:182] Loaded profile config "functional-229304": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.31.1
I0917 08:55:17.156091   72023 config.go:182] Loaded profile config "functional-229304": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.31.1
I0917 08:55:17.156453   72023 cli_runner.go:164] Run: docker container inspect functional-229304 --format={{.State.Status}}
I0917 08:55:17.172905   72023 ssh_runner.go:195] Run: systemctl --version
I0917 08:55:17.173003   72023 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-229304
I0917 08:55:17.191820   72023 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32778 SSHKeyPath:/home/jenkins/minikube-integration/19648-8091/.minikube/machines/functional-229304/id_rsa Username:docker}
I0917 08:55:17.289604   72023 ssh_runner.go:195] Run: docker images --no-trunc --format "{{json .}}"
--- PASS: TestFunctional/parallel/ImageCommands/ImageListShort (0.21s)

TestFunctional/parallel/ImageCommands/ImageListTable (0.3s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageListTable
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListTable

=== CONT  TestFunctional/parallel/ImageCommands/ImageListTable
functional_test.go:261: (dbg) Run:  out/minikube-linux-amd64 -p functional-229304 image ls --format table --alsologtostderr
functional_test.go:266: (dbg) Stdout: out/minikube-linux-amd64 -p functional-229304 image ls --format table --alsologtostderr:
|---------------------------------------------|-------------------|---------------|--------|
|                    Image                    |        Tag        |   Image ID    |  Size  |
|---------------------------------------------|-------------------|---------------|--------|
| registry.k8s.io/etcd                        | 3.5.15-0          | 2e96e5913fc06 | 148MB  |
| registry.k8s.io/pause                       | 3.10              | 873ed75102791 | 736kB  |
| docker.io/kubernetesui/metrics-scraper      | <none>            | 115053965e86b | 43.8MB |
| registry.k8s.io/pause                       | 3.3               | 0184c1613d929 | 683kB  |
| registry.k8s.io/pause                       | latest            | 350b164e7ae1d | 240kB  |
| localhost/my-image                          | functional-229304 | 1f2d76ea4fc75 | 1.24MB |
| registry.k8s.io/kube-scheduler              | v1.31.1           | 9aa1fad941575 | 67.4MB |
| docker.io/kicbase/echo-server               | functional-229304 | 9056ab77afb8e | 4.94MB |
| registry.k8s.io/kube-proxy                  | v1.31.1           | 60c005f310ff3 | 91.5MB |
| docker.io/kubernetesui/dashboard            | <none>            | 07655ddf2eebe | 246MB  |
| docker.io/library/nginx                     | alpine            | c7b4f26a7d93f | 43.2MB |
| docker.io/library/mysql                     | 5.7               | 5107333e08a87 | 501MB  |
| gcr.io/k8s-minikube/storage-provisioner     | v5                | 6e38f40d628db | 31.5MB |
| registry.k8s.io/pause                       | 3.1               | da86e6ba6ca19 | 742kB  |
| registry.k8s.io/echoserver                  | 1.8               | 82e4c8a736a4f | 95.4MB |
| registry.k8s.io/kube-apiserver              | v1.31.1           | 6bab7719df100 | 94.2MB |
| registry.k8s.io/kube-controller-manager     | v1.31.1           | 175ffd71cce3d | 88.4MB |
| gcr.io/k8s-minikube/busybox                 | 1.28.4-glibc      | 56cc512116c8f | 4.4MB  |
| docker.io/library/minikube-local-cache-test | functional-229304 | 904bf5704e24a | 30B    |
| registry.k8s.io/coredns/coredns             | v1.11.3           | c69fa2e9cbf5f | 61.8MB |
|---------------------------------------------|-------------------|---------------|--------|
functional_test.go:269: (dbg) Stderr: out/minikube-linux-amd64 -p functional-229304 image ls --format table --alsologtostderr:
I0917 08:55:20.854190   72670 out.go:345] Setting OutFile to fd 1 ...
I0917 08:55:20.854281   72670 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0917 08:55:20.854288   72670 out.go:358] Setting ErrFile to fd 2...
I0917 08:55:20.854292   72670 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0917 08:55:20.854452   72670 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19648-8091/.minikube/bin
I0917 08:55:20.855014   72670 config.go:182] Loaded profile config "functional-229304": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.31.1
I0917 08:55:20.855106   72670 config.go:182] Loaded profile config "functional-229304": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.31.1
I0917 08:55:20.855472   72670 cli_runner.go:164] Run: docker container inspect functional-229304 --format={{.State.Status}}
I0917 08:55:20.871333   72670 ssh_runner.go:195] Run: systemctl --version
I0917 08:55:20.871378   72670 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-229304
I0917 08:55:20.894568   72670 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32778 SSHKeyPath:/home/jenkins/minikube-integration/19648-8091/.minikube/machines/functional-229304/id_rsa Username:docker}
I0917 08:55:21.077633   72670 ssh_runner.go:195] Run: docker images --no-trunc --format "{{json .}}"
--- PASS: TestFunctional/parallel/ImageCommands/ImageListTable (0.30s)

TestFunctional/parallel/ImageCommands/ImageListJson (0.29s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageListJson
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListJson

=== CONT  TestFunctional/parallel/ImageCommands/ImageListJson
functional_test.go:261: (dbg) Run:  out/minikube-linux-amd64 -p functional-229304 image ls --format json --alsologtostderr
functional_test.go:266: (dbg) Stdout: out/minikube-linux-amd64 -p functional-229304 image ls --format json --alsologtostderr:
[{"id":"904bf5704e24a906c5909e362058dba04fec5ec0b49b35ab4a8b60de1da64afa","repoDigests":[],"repoTags":["docker.io/library/minikube-local-cache-test:functional-229304"],"size":"30"},{"id":"c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6","repoDigests":[],"repoTags":["registry.k8s.io/coredns/coredns:v1.11.3"],"size":"61800000"},{"id":"873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136","repoDigests":[],"repoTags":["registry.k8s.io/pause:3.10"],"size":"736000"},{"id":"9056ab77afb8e18e04303f11000a9d31b3f16b74c59475b899ae1b342d328d30","repoDigests":[],"repoTags":["docker.io/kicbase/echo-server:functional-229304"],"size":"4940000"},{"id":"82e4c8a736a4fcf22b5ef9f6a4ff6207064c7187d7694bf97bd561605a538410","repoDigests":[],"repoTags":["registry.k8s.io/echoserver:1.8"],"size":"95400000"},{"id":"175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1","repoDigests":[],"repoTags":["registry.k8s.io/kube-controller-manager:v1.31.1"],"size":"88400000"},{"id":"9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b","repoDigests":[],"repoTags":["registry.k8s.io/kube-scheduler:v1.31.1"],"size":"67400000"},{"id":"c7b4f26a7d93f4f1f276c51adb03ef0df54a82de89f254a9aec5c18bf0e45ee9","repoDigests":[],"repoTags":["docker.io/library/nginx:alpine"],"size":"43200000"},{"id":"07655ddf2eebe5d250f7a72c25f638b27126805d61779741b4e62e69ba080558","repoDigests":[],"repoTags":["docker.io/kubernetesui/dashboard:\u003cnone\u003e"],"size":"246000000"},{"id":"115053965e86b2df4d78af78d7951b8644839d20a03820c6df59a261103315f7","repoDigests":[],"repoTags":["docker.io/kubernetesui/metrics-scraper:\u003cnone\u003e"],"size":"43800000"},{"id":"6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee","repoDigests":[],"repoTags":["registry.k8s.io/kube-apiserver:v1.31.1"],"size":"94200000"},{"id":"2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4","repoDigests":[],"repoTags":["registry.k8s.io/etcd:3.5.15-0"],"size":"148000000"},{"id":"5107333e08a87b836d48ff7528b1e84b9c86781cc9f1748bbc1b8c42a870d933","repoDigests":[],"repoTags":["docker.io/library/mysql:5.7"],"size":"501000000"},{"id":"6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562","repoDigests":[],"repoTags":["gcr.io/k8s-minikube/storage-provisioner:v5"],"size":"31500000"},{"id":"0184c1613d92931126feb4c548e5da11015513b9e4c104e7305ee8b53b50a9da","repoDigests":[],"repoTags":["registry.k8s.io/pause:3.3"],"size":"683000"},{"id":"da86e6ba6ca197bf6bc5e9d900febd906b133eaa4750e6bed647b0fbe50ed43e","repoDigests":[],"repoTags":["registry.k8s.io/pause:3.1"],"size":"742000"},{"id":"350b164e7ae1dcddeffadd65c76226c9b6dc5553f5179153fb0e36b78f2a5e06","repoDigests":[],"repoTags":["registry.k8s.io/pause:latest"],"size":"240000"},{"id":"60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561","repoDigests":[],"repoTags":["registry.k8s.io/kube-proxy:v1.31.1"],"size":"91500000"},{"id":"56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c","repoDigests":[],"repoTags":["gcr.io/k8s-minikube/busybox:1.28.4-glibc"],"size":"4400000"}]
functional_test.go:269: (dbg) Stderr: out/minikube-linux-amd64 -p functional-229304 image ls --format json --alsologtostderr:
I0917 08:55:20.559208   72551 out.go:345] Setting OutFile to fd 1 ...
I0917 08:55:20.559315   72551 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0917 08:55:20.559324   72551 out.go:358] Setting ErrFile to fd 2...
I0917 08:55:20.559328   72551 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0917 08:55:20.559490   72551 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19648-8091/.minikube/bin
I0917 08:55:20.560035   72551 config.go:182] Loaded profile config "functional-229304": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.31.1
I0917 08:55:20.560154   72551 config.go:182] Loaded profile config "functional-229304": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.31.1
I0917 08:55:20.560535   72551 cli_runner.go:164] Run: docker container inspect functional-229304 --format={{.State.Status}}
I0917 08:55:20.578974   72551 ssh_runner.go:195] Run: systemctl --version
I0917 08:55:20.579032   72551 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-229304
I0917 08:55:20.598663   72551 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32778 SSHKeyPath:/home/jenkins/minikube-integration/19648-8091/.minikube/machines/functional-229304/id_rsa Username:docker}
I0917 08:55:20.778700   72551 ssh_runner.go:195] Run: docker images --no-trunc --format "{{json .}}"
--- PASS: TestFunctional/parallel/ImageCommands/ImageListJson (0.29s)

TestFunctional/parallel/ImageCommands/ImageListYaml (0.22s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageListYaml
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListYaml

=== CONT  TestFunctional/parallel/ImageCommands/ImageListYaml
functional_test.go:261: (dbg) Run:  out/minikube-linux-amd64 -p functional-229304 image ls --format yaml --alsologtostderr
functional_test.go:266: (dbg) Stdout: out/minikube-linux-amd64 -p functional-229304 image ls --format yaml --alsologtostderr:
- id: 82e4c8a736a4fcf22b5ef9f6a4ff6207064c7187d7694bf97bd561605a538410
repoDigests: []
repoTags:
- registry.k8s.io/echoserver:1.8
size: "95400000"
- id: 6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee
repoDigests: []
repoTags:
- registry.k8s.io/kube-apiserver:v1.31.1
size: "94200000"
- id: 2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4
repoDigests: []
repoTags:
- registry.k8s.io/etcd:3.5.15-0
size: "148000000"
- id: 115053965e86b2df4d78af78d7951b8644839d20a03820c6df59a261103315f7
repoDigests: []
repoTags:
- docker.io/kubernetesui/metrics-scraper:<none>
size: "43800000"
- id: c7b4f26a7d93f4f1f276c51adb03ef0df54a82de89f254a9aec5c18bf0e45ee9
repoDigests: []
repoTags:
- docker.io/library/nginx:alpine
size: "43200000"
- id: 6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562
repoDigests: []
repoTags:
- gcr.io/k8s-minikube/storage-provisioner:v5
size: "31500000"
- id: 56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c
repoDigests: []
repoTags:
- gcr.io/k8s-minikube/busybox:1.28.4-glibc
size: "4400000"
- id: 873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136
repoDigests: []
repoTags:
- registry.k8s.io/pause:3.10
size: "736000"
- id: 0184c1613d92931126feb4c548e5da11015513b9e4c104e7305ee8b53b50a9da
repoDigests: []
repoTags:
- registry.k8s.io/pause:3.3
size: "683000"
- id: da86e6ba6ca197bf6bc5e9d900febd906b133eaa4750e6bed647b0fbe50ed43e
repoDigests: []
repoTags:
- registry.k8s.io/pause:3.1
size: "742000"
- id: 350b164e7ae1dcddeffadd65c76226c9b6dc5553f5179153fb0e36b78f2a5e06
repoDigests: []
repoTags:
- registry.k8s.io/pause:latest
size: "240000"
- id: 904bf5704e24a906c5909e362058dba04fec5ec0b49b35ab4a8b60de1da64afa
repoDigests: []
repoTags:
- docker.io/library/minikube-local-cache-test:functional-229304
size: "30"
- id: 9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b
repoDigests: []
repoTags:
- registry.k8s.io/kube-scheduler:v1.31.1
size: "67400000"
- id: c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6
repoDigests: []
repoTags:
- registry.k8s.io/coredns/coredns:v1.11.3
size: "61800000"
- id: 9056ab77afb8e18e04303f11000a9d31b3f16b74c59475b899ae1b342d328d30
repoDigests: []
repoTags:
- docker.io/kicbase/echo-server:functional-229304
size: "4940000"
- id: 175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1
repoDigests: []
repoTags:
- registry.k8s.io/kube-controller-manager:v1.31.1
size: "88400000"
- id: 60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561
repoDigests: []
repoTags:
- registry.k8s.io/kube-proxy:v1.31.1
size: "91500000"
- id: 07655ddf2eebe5d250f7a72c25f638b27126805d61779741b4e62e69ba080558
repoDigests: []
repoTags:
- docker.io/kubernetesui/dashboard:<none>
size: "246000000"

functional_test.go:269: (dbg) Stderr: out/minikube-linux-amd64 -p functional-229304 image ls --format yaml --alsologtostderr:
I0917 08:55:17.367839   72073 out.go:345] Setting OutFile to fd 1 ...
I0917 08:55:17.367945   72073 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0917 08:55:17.367954   72073 out.go:358] Setting ErrFile to fd 2...
I0917 08:55:17.367958   72073 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0917 08:55:17.368147   72073 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19648-8091/.minikube/bin
I0917 08:55:17.368799   72073 config.go:182] Loaded profile config "functional-229304": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.31.1
I0917 08:55:17.368896   72073 config.go:182] Loaded profile config "functional-229304": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.31.1
I0917 08:55:17.369305   72073 cli_runner.go:164] Run: docker container inspect functional-229304 --format={{.State.Status}}
I0917 08:55:17.386445   72073 ssh_runner.go:195] Run: systemctl --version
I0917 08:55:17.386509   72073 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-229304
I0917 08:55:17.407082   72073 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32778 SSHKeyPath:/home/jenkins/minikube-integration/19648-8091/.minikube/machines/functional-229304/id_rsa Username:docker}
I0917 08:55:17.505971   72073 ssh_runner.go:195] Run: docker images --no-trunc --format "{{json .}}"
--- PASS: TestFunctional/parallel/ImageCommands/ImageListYaml (0.22s)

TestFunctional/parallel/ImageCommands/ImageBuild (3.93s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageBuild
=== PAUSE TestFunctional/parallel/ImageCommands/ImageBuild

=== CONT  TestFunctional/parallel/ImageCommands/ImageBuild
functional_test.go:308: (dbg) Run:  out/minikube-linux-amd64 -p functional-229304 ssh pgrep buildkitd
functional_test.go:308: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-229304 ssh pgrep buildkitd: exit status 1 (268.39361ms)

** stderr ** 
	ssh: Process exited with status 1

** /stderr **
functional_test.go:315: (dbg) Run:  out/minikube-linux-amd64 -p functional-229304 image build -t localhost/my-image:functional-229304 testdata/build --alsologtostderr
functional_test.go:315: (dbg) Done: out/minikube-linux-amd64 -p functional-229304 image build -t localhost/my-image:functional-229304 testdata/build --alsologtostderr: (3.377616158s)
functional_test.go:323: (dbg) Stderr: out/minikube-linux-amd64 -p functional-229304 image build -t localhost/my-image:functional-229304 testdata/build --alsologtostderr:
I0917 08:55:17.851267   72216 out.go:345] Setting OutFile to fd 1 ...
I0917 08:55:17.851415   72216 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0917 08:55:17.851425   72216 out.go:358] Setting ErrFile to fd 2...
I0917 08:55:17.851429   72216 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0917 08:55:17.851701   72216 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19648-8091/.minikube/bin
I0917 08:55:17.852407   72216 config.go:182] Loaded profile config "functional-229304": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.31.1
I0917 08:55:17.852974   72216 config.go:182] Loaded profile config "functional-229304": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.31.1
I0917 08:55:17.853440   72216 cli_runner.go:164] Run: docker container inspect functional-229304 --format={{.State.Status}}
I0917 08:55:17.875794   72216 ssh_runner.go:195] Run: systemctl --version
I0917 08:55:17.875853   72216 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-229304
I0917 08:55:17.893821   72216 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32778 SSHKeyPath:/home/jenkins/minikube-integration/19648-8091/.minikube/machines/functional-229304/id_rsa Username:docker}
I0917 08:55:17.993503   72216 build_images.go:161] Building image from path: /tmp/build.404054246.tar
I0917 08:55:17.993575   72216 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/build
I0917 08:55:18.002510   72216 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/build/build.404054246.tar
I0917 08:55:18.006289   72216 ssh_runner.go:352] existence check for /var/lib/minikube/build/build.404054246.tar: stat -c "%s %y" /var/lib/minikube/build/build.404054246.tar: Process exited with status 1
stdout:

stderr:
stat: cannot statx '/var/lib/minikube/build/build.404054246.tar': No such file or directory
I0917 08:55:18.006322   72216 ssh_runner.go:362] scp /tmp/build.404054246.tar --> /var/lib/minikube/build/build.404054246.tar (3072 bytes)
I0917 08:55:18.031575   72216 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/build/build.404054246
I0917 08:55:18.041809   72216 ssh_runner.go:195] Run: sudo tar -C /var/lib/minikube/build/build.404054246 -xf /var/lib/minikube/build/build.404054246.tar
I0917 08:55:18.073206   72216 docker.go:360] Building image: /var/lib/minikube/build/build.404054246
I0917 08:55:18.073320   72216 ssh_runner.go:195] Run: docker build -t localhost/my-image:functional-229304 /var/lib/minikube/build/build.404054246
#0 building with "default" instance using docker driver

#1 [internal] load build definition from Dockerfile
#1 transferring dockerfile: 97B done
#1 DONE 0.0s

#2 [internal] load metadata for gcr.io/k8s-minikube/busybox:latest
#2 DONE 1.2s

#3 [internal] load .dockerignore
#3 transferring context: 2B done
#3 DONE 0.2s

#4 [1/3] FROM gcr.io/k8s-minikube/busybox:latest@sha256:ca5ae90100d50772da31f3b5016209e25ad61972404e2ccd83d44f10dee7e79b
#4 resolve gcr.io/k8s-minikube/busybox:latest@sha256:ca5ae90100d50772da31f3b5016209e25ad61972404e2ccd83d44f10dee7e79b
#4 ...

#5 [internal] load build context
#5 transferring context: 62B done
#5 DONE 0.3s

#4 [1/3] FROM gcr.io/k8s-minikube/busybox:latest@sha256:ca5ae90100d50772da31f3b5016209e25ad61972404e2ccd83d44f10dee7e79b
#4 resolve gcr.io/k8s-minikube/busybox:latest@sha256:ca5ae90100d50772da31f3b5016209e25ad61972404e2ccd83d44f10dee7e79b 0.3s done
#4 sha256:ca5ae90100d50772da31f3b5016209e25ad61972404e2ccd83d44f10dee7e79b 770B / 770B done
#4 sha256:62ffc2ed7554e4c6d360bce40bbcf196573dd27c4ce080641a2c59867e732dee 527B / 527B done
#4 sha256:beae173ccac6ad749f76713cf4440fe3d21d1043fe616dfbe30775815d1d0f6a 1.46kB / 1.46kB done
#4 sha256:5cc84ad355aaa64f46ea9c7bbcc319a9d808ab15088a27209c9e70ef86e5a2aa 0B / 772.79kB 0.1s
#4 extracting sha256:5cc84ad355aaa64f46ea9c7bbcc319a9d808ab15088a27209c9e70ef86e5a2aa
#4 sha256:5cc84ad355aaa64f46ea9c7bbcc319a9d808ab15088a27209c9e70ef86e5a2aa 772.79kB / 772.79kB 0.3s done
#4 extracting sha256:5cc84ad355aaa64f46ea9c7bbcc319a9d808ab15088a27209c9e70ef86e5a2aa 0.0s done
#4 DONE 0.8s

#6 [2/3] RUN true
#6 DONE 0.4s

#7 [3/3] ADD content.txt /
#7 DONE 0.0s

#8 exporting to image
#8 exporting layers 0.1s done
#8 writing image sha256:1f2d76ea4fc7534f3f42c3655d51f6277d380f111df21d0ab86550b68812bafe done
#8 naming to localhost/my-image:functional-229304 done
#8 DONE 0.1s
I0917 08:55:21.095562   72216 ssh_runner.go:235] Completed: docker build -t localhost/my-image:functional-229304 /var/lib/minikube/build/build.404054246: (3.022206198s)
I0917 08:55:21.095626   72216 ssh_runner.go:195] Run: sudo rm -rf /var/lib/minikube/build/build.404054246
I0917 08:55:21.106349   72216 ssh_runner.go:195] Run: sudo rm -f /var/lib/minikube/build/build.404054246.tar
I0917 08:55:21.174423   72216 build_images.go:217] Built localhost/my-image:functional-229304 from /tmp/build.404054246.tar
I0917 08:55:21.174464   72216 build_images.go:133] succeeded building to: functional-229304
I0917 08:55:21.174471   72216 build_images.go:134] failed building to: 
functional_test.go:451: (dbg) Run:  out/minikube-linux-amd64 -p functional-229304 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageBuild (3.93s)

TestFunctional/parallel/ImageCommands/Setup (0.49s)

=== RUN   TestFunctional/parallel/ImageCommands/Setup
functional_test.go:342: (dbg) Run:  docker pull kicbase/echo-server:1.0
functional_test.go:347: (dbg) Run:  docker tag kicbase/echo-server:1.0 kicbase/echo-server:functional-229304
--- PASS: TestFunctional/parallel/ImageCommands/Setup (0.49s)

TestFunctional/parallel/MountCmd/any-port (11.96s)

=== RUN   TestFunctional/parallel/MountCmd/any-port
functional_test_mount_test.go:73: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-229304 /tmp/TestFunctionalparallelMountCmdany-port3756938516/001:/mount-9p --alsologtostderr -v=1]
functional_test_mount_test.go:107: wrote "test-1726563297039400379" to /tmp/TestFunctionalparallelMountCmdany-port3756938516/001/created-by-test
functional_test_mount_test.go:107: wrote "test-1726563297039400379" to /tmp/TestFunctionalparallelMountCmdany-port3756938516/001/created-by-test-removed-by-pod
functional_test_mount_test.go:107: wrote "test-1726563297039400379" to /tmp/TestFunctionalparallelMountCmdany-port3756938516/001/test-1726563297039400379
functional_test_mount_test.go:115: (dbg) Run:  out/minikube-linux-amd64 -p functional-229304 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:115: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-229304 ssh "findmnt -T /mount-9p | grep 9p": exit status 1 (364.803182ms)

** stderr ** 
	ssh: Process exited with status 1

** /stderr **
functional_test_mount_test.go:115: (dbg) Run:  out/minikube-linux-amd64 -p functional-229304 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:129: (dbg) Run:  out/minikube-linux-amd64 -p functional-229304 ssh -- ls -la /mount-9p
functional_test_mount_test.go:133: guest mount directory contents
total 2
-rw-r--r-- 1 docker docker 24 Sep 17 08:54 created-by-test
-rw-r--r-- 1 docker docker 24 Sep 17 08:54 created-by-test-removed-by-pod
-rw-r--r-- 1 docker docker 24 Sep 17 08:54 test-1726563297039400379
functional_test_mount_test.go:137: (dbg) Run:  out/minikube-linux-amd64 -p functional-229304 ssh cat /mount-9p/test-1726563297039400379
functional_test_mount_test.go:148: (dbg) Run:  kubectl --context functional-229304 replace --force -f testdata/busybox-mount-test.yaml
functional_test_mount_test.go:153: (dbg) TestFunctional/parallel/MountCmd/any-port: waiting 4m0s for pods matching "integration-test=busybox-mount" in namespace "default" ...
helpers_test.go:344: "busybox-mount" [9e6b0c3e-1851-4635-b0f3-7ce167a77cf7] Pending
helpers_test.go:344: "busybox-mount" [9e6b0c3e-1851-4635-b0f3-7ce167a77cf7] Pending / Ready:ContainersNotReady (containers with unready status: [mount-munger]) / ContainersReady:ContainersNotReady (containers with unready status: [mount-munger])
helpers_test.go:344: "busybox-mount" [9e6b0c3e-1851-4635-b0f3-7ce167a77cf7] Pending / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
helpers_test.go:344: "busybox-mount" [9e6b0c3e-1851-4635-b0f3-7ce167a77cf7] Succeeded / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
functional_test_mount_test.go:153: (dbg) TestFunctional/parallel/MountCmd/any-port: integration-test=busybox-mount healthy within 9.004243639s
functional_test_mount_test.go:169: (dbg) Run:  kubectl --context functional-229304 logs busybox-mount
functional_test_mount_test.go:181: (dbg) Run:  out/minikube-linux-amd64 -p functional-229304 ssh stat /mount-9p/created-by-test
functional_test_mount_test.go:181: (dbg) Run:  out/minikube-linux-amd64 -p functional-229304 ssh stat /mount-9p/created-by-pod
functional_test_mount_test.go:90: (dbg) Run:  out/minikube-linux-amd64 -p functional-229304 ssh "sudo umount -f /mount-9p"
functional_test_mount_test.go:94: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-229304 /tmp/TestFunctionalparallelMountCmdany-port3756938516/001:/mount-9p --alsologtostderr -v=1] ...
--- PASS: TestFunctional/parallel/MountCmd/any-port (11.96s)

TestFunctional/parallel/ImageCommands/ImageLoadDaemon (1.23s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageLoadDaemon
functional_test.go:355: (dbg) Run:  out/minikube-linux-amd64 -p functional-229304 image load --daemon kicbase/echo-server:functional-229304 --alsologtostderr
functional_test.go:451: (dbg) Run:  out/minikube-linux-amd64 -p functional-229304 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageLoadDaemon (1.23s)

TestFunctional/parallel/ImageCommands/ImageReloadDaemon (0.8s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageReloadDaemon
functional_test.go:365: (dbg) Run:  out/minikube-linux-amd64 -p functional-229304 image load --daemon kicbase/echo-server:functional-229304 --alsologtostderr
functional_test.go:451: (dbg) Run:  out/minikube-linux-amd64 -p functional-229304 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageReloadDaemon (0.80s)

TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon (1s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon
functional_test.go:235: (dbg) Run:  docker pull kicbase/echo-server:latest
functional_test.go:240: (dbg) Run:  docker tag kicbase/echo-server:latest kicbase/echo-server:functional-229304
functional_test.go:245: (dbg) Run:  out/minikube-linux-amd64 -p functional-229304 image load --daemon kicbase/echo-server:functional-229304 --alsologtostderr
functional_test.go:451: (dbg) Run:  out/minikube-linux-amd64 -p functional-229304 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon (1.00s)

TestFunctional/parallel/ImageCommands/ImageSaveToFile (0.4s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageSaveToFile
functional_test.go:380: (dbg) Run:  out/minikube-linux-amd64 -p functional-229304 image save kicbase/echo-server:functional-229304 /home/jenkins/workspace/Docker_Linux_integration/echo-server-save.tar --alsologtostderr
--- PASS: TestFunctional/parallel/ImageCommands/ImageSaveToFile (0.40s)

TestFunctional/parallel/ImageCommands/ImageRemove (0.42s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageRemove
functional_test.go:392: (dbg) Run:  out/minikube-linux-amd64 -p functional-229304 image rm kicbase/echo-server:functional-229304 --alsologtostderr
functional_test.go:451: (dbg) Run:  out/minikube-linux-amd64 -p functional-229304 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageRemove (0.42s)

TestFunctional/parallel/ImageCommands/ImageLoadFromFile (0.78s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageLoadFromFile
functional_test.go:409: (dbg) Run:  out/minikube-linux-amd64 -p functional-229304 image load /home/jenkins/workspace/Docker_Linux_integration/echo-server-save.tar --alsologtostderr
functional_test.go:451: (dbg) Run:  out/minikube-linux-amd64 -p functional-229304 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageLoadFromFile (0.78s)

TestFunctional/parallel/ImageCommands/ImageSaveDaemon (0.43s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageSaveDaemon
functional_test.go:419: (dbg) Run:  docker rmi kicbase/echo-server:functional-229304
functional_test.go:424: (dbg) Run:  out/minikube-linux-amd64 -p functional-229304 image save --daemon kicbase/echo-server:functional-229304 --alsologtostderr
functional_test.go:432: (dbg) Run:  docker image inspect kicbase/echo-server:functional-229304
--- PASS: TestFunctional/parallel/ImageCommands/ImageSaveDaemon (0.43s)

TestFunctional/parallel/DockerEnv/bash (0.89s)

=== RUN   TestFunctional/parallel/DockerEnv/bash
functional_test.go:499: (dbg) Run:  /bin/bash -c "eval $(out/minikube-linux-amd64 -p functional-229304 docker-env) && out/minikube-linux-amd64 status -p functional-229304"
functional_test.go:522: (dbg) Run:  /bin/bash -c "eval $(out/minikube-linux-amd64 -p functional-229304 docker-env) && docker images"
--- PASS: TestFunctional/parallel/DockerEnv/bash (0.89s)
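
The bash idiom exercised above, eval $(minikube docker-env), exports DOCKER_HOST and related variables so the local docker CLI talks to the cluster's daemon. A rough Go equivalent that parses the export KEY="VALUE" lines; the exact output format depends on the shell, so treat this strictly as a sketch:

    package main

    import (
        "fmt"
        "os"
        "os/exec"
        "strings"
    )

    func main() {
        out, err := exec.Command("out/minikube-linux-amd64",
            "-p", "functional-229304", "docker-env").Output()
        if err != nil {
            fmt.Println("docker-env failed:", err)
            return
        }
        for _, line := range strings.Split(string(out), "\n") {
            line = strings.TrimPrefix(line, "export ")
            key, val, ok := strings.Cut(line, "=")
            if !ok || strings.HasPrefix(key, "#") {
                continue // skip comments and non-assignment lines
            }
            os.Setenv(key, strings.Trim(val, `"`)) // e.g. DOCKER_HOST=tcp://...
        }
        fmt.Println("DOCKER_HOST =", os.Getenv("DOCKER_HOST"))
    }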

TestFunctional/parallel/ServiceCmd/List (0.5s)

=== RUN   TestFunctional/parallel/ServiceCmd/List
functional_test.go:1459: (dbg) Run:  out/minikube-linux-amd64 -p functional-229304 service list
--- PASS: TestFunctional/parallel/ServiceCmd/List (0.50s)

TestFunctional/parallel/UpdateContextCmd/no_changes (0.21s)

=== RUN   TestFunctional/parallel/UpdateContextCmd/no_changes
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_changes

=== CONT  TestFunctional/parallel/UpdateContextCmd/no_changes
functional_test.go:2119: (dbg) Run:  out/minikube-linux-amd64 -p functional-229304 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_changes (0.21s)

TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster (0.14s)

=== RUN   TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster

=== CONT  TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
functional_test.go:2119: (dbg) Run:  out/minikube-linux-amd64 -p functional-229304 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster (0.14s)

TestFunctional/parallel/UpdateContextCmd/no_clusters (0.15s)

=== RUN   TestFunctional/parallel/UpdateContextCmd/no_clusters
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_clusters

=== CONT  TestFunctional/parallel/UpdateContextCmd/no_clusters
functional_test.go:2119: (dbg) Run:  out/minikube-linux-amd64 -p functional-229304 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_clusters (0.15s)

TestFunctional/parallel/ServiceCmd/JSONOutput (0.48s)

=== RUN   TestFunctional/parallel/ServiceCmd/JSONOutput
functional_test.go:1489: (dbg) Run:  out/minikube-linux-amd64 -p functional-229304 service list -o json
functional_test.go:1494: Took "481.963999ms" to run "out/minikube-linux-amd64 -p functional-229304 service list -o json"
--- PASS: TestFunctional/parallel/ServiceCmd/JSONOutput (0.48s)

TestFunctional/parallel/ServiceCmd/HTTPS (0.31s)

=== RUN   TestFunctional/parallel/ServiceCmd/HTTPS
functional_test.go:1509: (dbg) Run:  out/minikube-linux-amd64 -p functional-229304 service --namespace=default --https --url hello-node
2024/09/17 08:55:05 [DEBUG] GET http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/
functional_test.go:1522: found endpoint: https://192.168.49.2:31336
--- PASS: TestFunctional/parallel/ServiceCmd/HTTPS (0.31s)

TestFunctional/parallel/ServiceCmd/Format (0.32s)

=== RUN   TestFunctional/parallel/ServiceCmd/Format
functional_test.go:1540: (dbg) Run:  out/minikube-linux-amd64 -p functional-229304 service hello-node --url --format={{.IP}}
--- PASS: TestFunctional/parallel/ServiceCmd/Format (0.32s)

TestFunctional/parallel/ServiceCmd/URL (0.35s)

=== RUN   TestFunctional/parallel/ServiceCmd/URL
functional_test.go:1559: (dbg) Run:  out/minikube-linux-amd64 -p functional-229304 service hello-node --url
functional_test.go:1565: found endpoint for hello-node: http://192.168.49.2:31336
--- PASS: TestFunctional/parallel/ServiceCmd/URL (0.35s)
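
The endpoint found above is just node-IP:nodePort (192.168.49.2:31336 in this run). A sketch that reassembles the same URL from kubectl queries; the jsonpath expressions are standard kubectl, while the kubectlOut helper is hypothetical:

    package main

    import (
        "fmt"
        "os/exec"
        "strings"
    )

    // kubectlOut runs kubectl against this run's context and returns trimmed stdout.
    func kubectlOut(args ...string) string {
        out, err := exec.Command("kubectl",
            append([]string{"--context", "functional-229304"}, args...)...).Output()
        if err != nil {
            return ""
        }
        return strings.TrimSpace(string(out))
    }

    func main() {
        ip := kubectlOut("get", "nodes", "-o",
            `jsonpath={.items[0].status.addresses[?(@.type=="InternalIP")].address}`)
        port := kubectlOut("get", "svc", "hello-node", "-o",
            "jsonpath={.spec.ports[0].nodePort}")
        fmt.Printf("http://%s:%s\n", ip, port) // http://192.168.49.2:31336 in this run
    }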

TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel (0.41s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel
functional_test_tunnel_test.go:154: (dbg) daemon: [out/minikube-linux-amd64 -p functional-229304 tunnel --alsologtostderr]
functional_test_tunnel_test.go:154: (dbg) daemon: [out/minikube-linux-amd64 -p functional-229304 tunnel --alsologtostderr]
functional_test_tunnel_test.go:194: (dbg) stopping [out/minikube-linux-amd64 -p functional-229304 tunnel --alsologtostderr] ...
helpers_test.go:508: unable to kill pid 69134: os: process already finished
functional_test_tunnel_test.go:194: (dbg) stopping [out/minikube-linux-amd64 -p functional-229304 tunnel --alsologtostderr] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
--- PASS: TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel (0.41s)

TestFunctional/parallel/TunnelCmd/serial/StartTunnel (0s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/StartTunnel
functional_test_tunnel_test.go:129: (dbg) daemon: [out/minikube-linux-amd64 -p functional-229304 tunnel --alsologtostderr]
--- PASS: TestFunctional/parallel/TunnelCmd/serial/StartTunnel (0.00s)

TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup (9.2s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup
functional_test_tunnel_test.go:212: (dbg) Run:  kubectl --context functional-229304 apply -f testdata/testsvc.yaml
functional_test_tunnel_test.go:216: (dbg) TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup: waiting 4m0s for pods matching "run=nginx-svc" in namespace "default" ...
helpers_test.go:344: "nginx-svc" [d55be3cb-cd97-4589-9d09-134228083a73] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
helpers_test.go:344: "nginx-svc" [d55be3cb-cd97-4589-9d09-134228083a73] Running
functional_test_tunnel_test.go:216: (dbg) TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup: run=nginx-svc healthy within 9.004022623s
--- PASS: TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup (9.20s)

TestFunctional/parallel/MountCmd/specific-port (1.89s)

=== RUN   TestFunctional/parallel/MountCmd/specific-port
functional_test_mount_test.go:213: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-229304 /tmp/TestFunctionalparallelMountCmdspecific-port1138166558/001:/mount-9p --alsologtostderr -v=1 --port 46464]
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-linux-amd64 -p functional-229304 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:243: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-229304 ssh "findmnt -T /mount-9p | grep 9p": exit status 1 (286.047515ms)

** stderr ** 
	ssh: Process exited with status 1

** /stderr **
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-linux-amd64 -p functional-229304 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:257: (dbg) Run:  out/minikube-linux-amd64 -p functional-229304 ssh -- ls -la /mount-9p
functional_test_mount_test.go:261: guest mount directory contents
total 0
functional_test_mount_test.go:263: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-229304 /tmp/TestFunctionalparallelMountCmdspecific-port1138166558/001:/mount-9p --alsologtostderr -v=1 --port 46464] ...
functional_test_mount_test.go:264: reading mount text
functional_test_mount_test.go:278: done reading mount text
functional_test_mount_test.go:230: (dbg) Run:  out/minikube-linux-amd64 -p functional-229304 ssh "sudo umount -f /mount-9p"
functional_test_mount_test.go:230: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-229304 ssh "sudo umount -f /mount-9p": exit status 1 (387.394248ms)

-- stdout --
	umount: /mount-9p: not mounted.

-- /stdout --
** stderr ** 
	ssh: Process exited with status 32

** /stderr **
functional_test_mount_test.go:232: "out/minikube-linux-amd64 -p functional-229304 ssh \"sudo umount -f /mount-9p\"": exit status 1
functional_test_mount_test.go:234: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-229304 /tmp/TestFunctionalparallelMountCmdspecific-port1138166558/001:/mount-9p --alsologtostderr -v=1 --port 46464] ...
--- PASS: TestFunctional/parallel/MountCmd/specific-port (1.89s)

TestFunctional/parallel/MountCmd/VerifyCleanup (1.86s)

=== RUN   TestFunctional/parallel/MountCmd/VerifyCleanup
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-229304 /tmp/TestFunctionalparallelMountCmdVerifyCleanup2472113973/001:/mount1 --alsologtostderr -v=1]
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-229304 /tmp/TestFunctionalparallelMountCmdVerifyCleanup2472113973/001:/mount2 --alsologtostderr -v=1]
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-229304 /tmp/TestFunctionalparallelMountCmdVerifyCleanup2472113973/001:/mount3 --alsologtostderr -v=1]
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-amd64 -p functional-229304 ssh "findmnt -T" /mount1
functional_test_mount_test.go:325: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-229304 ssh "findmnt -T" /mount1: exit status 1 (340.958079ms)
** stderr ** 
	ssh: Process exited with status 1
** /stderr **
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-amd64 -p functional-229304 ssh "findmnt -T" /mount1
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-amd64 -p functional-229304 ssh "findmnt -T" /mount2
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-amd64 -p functional-229304 ssh "findmnt -T" /mount3
functional_test_mount_test.go:370: (dbg) Run:  out/minikube-linux-amd64 mount -p functional-229304 --kill=true
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-229304 /tmp/TestFunctionalparallelMountCmdVerifyCleanup2472113973/001:/mount1 --alsologtostderr -v=1] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-229304 /tmp/TestFunctionalparallelMountCmdVerifyCleanup2472113973/001:/mount2 --alsologtostderr -v=1] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-229304 /tmp/TestFunctionalparallelMountCmdVerifyCleanup2472113973/001:/mount3 --alsologtostderr -v=1] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
--- PASS: TestFunctional/parallel/MountCmd/VerifyCleanup (1.86s)
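
VerifyCleanup starts three mount daemons and then clears them all with a single `minikube mount -p <profile> --kill=true`; the helper lines above then confirm the daemon processes are already gone. A sketch of that kill-then-verify step, assuming Linux and `minikube` on PATH:

    package main

    import (
        "fmt"
        "os/exec"
        "syscall"
    )

    // killMounts terminates every mount daemon for the profile in one shot,
    // mirroring the --kill=true invocation in the log above.
    func killMounts(profile string) error {
        return exec.Command("minikube", "mount", "-p", profile, "--kill=true").Run()
    }

    // alive probes a pid with signal 0, which checks existence without killing.
    func alive(pid int) bool {
        return syscall.Kill(pid, 0) == nil
    }

    func main() {
        if err := killMounts("functional-229304"); err != nil {
            fmt.Println("kill failed:", err)
        }
        fmt.Println("pid 1 alive (sanity check):", alive(1))
    }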

TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP (0.06s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP
functional_test_tunnel_test.go:234: (dbg) Run:  kubectl --context functional-229304 get svc nginx-svc -o jsonpath={.status.loadBalancer.ingress[0].ip}
--- PASS: TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP (0.06s)
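
The ingress IP check is a one-liner: while `minikube tunnel` is running, the LoadBalancer service's `.status.loadBalancer.ingress[0].ip` is populated, and the test reads it back with a jsonpath query. The same read from Go, reusing the context and service names from this log:

    package main

    import (
        "fmt"
        "os/exec"
    )

    func main() {
        // Same jsonpath the test uses to read the tunnel-assigned ingress IP.
        out, err := exec.Command("kubectl", "--context", "functional-229304",
            "get", "svc", "nginx-svc", "-o",
            "jsonpath={.status.loadBalancer.ingress[0].ip}").Output()
        if err != nil {
            fmt.Println(err)
            return
        }
        fmt.Printf("ingress IP: %s\n", out)
    }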

TestFunctional/parallel/TunnelCmd/serial/AccessDirect (0s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/AccessDirect
functional_test_tunnel_test.go:299: tunnel at http://10.102.152.27 is working!
--- PASS: TestFunctional/parallel/TunnelCmd/serial/AccessDirect (0.00s)

TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel (0.11s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel
functional_test_tunnel_test.go:434: (dbg) stopping [out/minikube-linux-amd64 -p functional-229304 tunnel --alsologtostderr] ...
--- PASS: TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel (0.11s)

TestFunctional/delete_echo-server_images (0.03s)

=== RUN   TestFunctional/delete_echo-server_images
functional_test.go:190: (dbg) Run:  docker rmi -f kicbase/echo-server:1.0
functional_test.go:190: (dbg) Run:  docker rmi -f kicbase/echo-server:functional-229304
--- PASS: TestFunctional/delete_echo-server_images (0.03s)

TestFunctional/delete_my-image_image (0.01s)

=== RUN   TestFunctional/delete_my-image_image
functional_test.go:198: (dbg) Run:  docker rmi -f localhost/my-image:functional-229304
--- PASS: TestFunctional/delete_my-image_image (0.01s)

TestFunctional/delete_minikube_cached_images (0.01s)

=== RUN   TestFunctional/delete_minikube_cached_images
functional_test.go:206: (dbg) Run:  docker rmi -f minikube-local-cache-test:functional-229304
--- PASS: TestFunctional/delete_minikube_cached_images (0.01s)

TestMultiControlPlane/serial/StartCluster (93.25s)

=== RUN   TestMultiControlPlane/serial/StartCluster
ha_test.go:101: (dbg) Run:  out/minikube-linux-amd64 start -p ha-064913 --wait=true --memory=2200 --ha -v=7 --alsologtostderr --driver=docker  --container-runtime=docker
E0917 08:56:44.571088   14840 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19648-8091/.minikube/profiles/addons-118348/client.crt: no such file or directory" logger="UnhandledError"
E0917 08:56:44.578173   14840 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19648-8091/.minikube/profiles/addons-118348/client.crt: no such file or directory" logger="UnhandledError"
E0917 08:56:44.589635   14840 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19648-8091/.minikube/profiles/addons-118348/client.crt: no such file or directory" logger="UnhandledError"
E0917 08:56:44.610956   14840 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19648-8091/.minikube/profiles/addons-118348/client.crt: no such file or directory" logger="UnhandledError"
E0917 08:56:44.652978   14840 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19648-8091/.minikube/profiles/addons-118348/client.crt: no such file or directory" logger="UnhandledError"
E0917 08:56:44.734458   14840 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19648-8091/.minikube/profiles/addons-118348/client.crt: no such file or directory" logger="UnhandledError"
E0917 08:56:44.896292   14840 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19648-8091/.minikube/profiles/addons-118348/client.crt: no such file or directory" logger="UnhandledError"
E0917 08:56:45.217831   14840 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19648-8091/.minikube/profiles/addons-118348/client.crt: no such file or directory" logger="UnhandledError"
E0917 08:56:45.859344   14840 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19648-8091/.minikube/profiles/addons-118348/client.crt: no such file or directory" logger="UnhandledError"
E0917 08:56:47.141626   14840 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19648-8091/.minikube/profiles/addons-118348/client.crt: no such file or directory" logger="UnhandledError"
E0917 08:56:49.704493   14840 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19648-8091/.minikube/profiles/addons-118348/client.crt: no such file or directory" logger="UnhandledError"
E0917 08:56:54.826768   14840 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19648-8091/.minikube/profiles/addons-118348/client.crt: no such file or directory" logger="UnhandledError"
E0917 08:57:05.068897   14840 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19648-8091/.minikube/profiles/addons-118348/client.crt: no such file or directory" logger="UnhandledError"
ha_test.go:101: (dbg) Done: out/minikube-linux-amd64 start -p ha-064913 --wait=true --memory=2200 --ha -v=7 --alsologtostderr --driver=docker  --container-runtime=docker: (1m32.60140785s)
ha_test.go:107: (dbg) Run:  out/minikube-linux-amd64 -p ha-064913 status -v=7 --alsologtostderr
--- PASS: TestMultiControlPlane/serial/StartCluster (93.25s)
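
The repeated cert_rotation errors during this start appear to be noise from a client-go watcher still referencing the client certificate of the addons-118348 profile torn down earlier in the run; the HA start itself succeeds. The `--ha` flag provisions multiple control-plane nodes behind a single apiserver endpoint (192.168.49.254:8443 in the status output further down). A sketch of driving the same start from Go, assuming `minikube` is on PATH:

    package main

    import (
        "os"
        "os/exec"
    )

    // run shells out to minikube with output streamed through, the way the
    // test harness invokes the built binary.
    func run(args ...string) error {
        cmd := exec.Command("minikube", args...)
        cmd.Stdout, cmd.Stderr = os.Stdout, os.Stderr
        return cmd.Run()
    }

    func main() {
        // Same flags as the test: --ha starts extra control-plane nodes.
        if err := run("start", "-p", "ha-064913", "--wait=true", "--memory=2200",
            "--ha", "--driver=docker", "--container-runtime=docker"); err != nil {
            os.Exit(1)
        }
        _ = run("-p", "ha-064913", "status")
    }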

TestMultiControlPlane/serial/DeployApp (4.42s)

=== RUN   TestMultiControlPlane/serial/DeployApp
ha_test.go:128: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-064913 -- apply -f ./testdata/ha/ha-pod-dns-test.yaml
ha_test.go:133: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-064913 -- rollout status deployment/busybox
ha_test.go:133: (dbg) Done: out/minikube-linux-amd64 kubectl -p ha-064913 -- rollout status deployment/busybox: (2.66538464s)
ha_test.go:140: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-064913 -- get pods -o jsonpath='{.items[*].status.podIP}'
ha_test.go:163: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-064913 -- get pods -o jsonpath='{.items[*].metadata.name}'
ha_test.go:171: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-064913 -- exec busybox-7dff88458-4x4t4 -- nslookup kubernetes.io
ha_test.go:171: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-064913 -- exec busybox-7dff88458-5qnwl -- nslookup kubernetes.io
ha_test.go:171: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-064913 -- exec busybox-7dff88458-lfhwg -- nslookup kubernetes.io
ha_test.go:181: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-064913 -- exec busybox-7dff88458-4x4t4 -- nslookup kubernetes.default
ha_test.go:181: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-064913 -- exec busybox-7dff88458-5qnwl -- nslookup kubernetes.default
ha_test.go:181: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-064913 -- exec busybox-7dff88458-lfhwg -- nslookup kubernetes.default
ha_test.go:189: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-064913 -- exec busybox-7dff88458-4x4t4 -- nslookup kubernetes.default.svc.cluster.local
ha_test.go:189: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-064913 -- exec busybox-7dff88458-5qnwl -- nslookup kubernetes.default.svc.cluster.local
ha_test.go:189: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-064913 -- exec busybox-7dff88458-lfhwg -- nslookup kubernetes.default.svc.cluster.local
--- PASS: TestMultiControlPlane/serial/DeployApp (4.42s)
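
DeployApp fans out the same DNS checks across every busybox replica: each pod must resolve an external name, the cluster service, and its FQDN. A compact Go sketch of that pod-by-name matrix, with pod names taken from this run (they change per deployment):

    package main

    import (
        "fmt"
        "os/exec"
    )

    func main() {
        pods := []string{"busybox-7dff88458-4x4t4", "busybox-7dff88458-5qnwl", "busybox-7dff88458-lfhwg"}
        names := []string{"kubernetes.io", "kubernetes.default", "kubernetes.default.svc.cluster.local"}
        for _, pod := range pods {
            for _, name := range names {
                // nslookup from inside the pod, exactly as the test does.
                err := exec.Command("kubectl", "--context", "ha-064913",
                    "exec", pod, "--", "nslookup", name).Run()
                fmt.Printf("%s -> %s: ok=%v\n", pod, name, err == nil)
            }
        }
    }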

TestMultiControlPlane/serial/PingHostFromPods (0.99s)

=== RUN   TestMultiControlPlane/serial/PingHostFromPods
ha_test.go:199: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-064913 -- get pods -o jsonpath='{.items[*].metadata.name}'
ha_test.go:207: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-064913 -- exec busybox-7dff88458-4x4t4 -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
ha_test.go:218: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-064913 -- exec busybox-7dff88458-4x4t4 -- sh -c "ping -c 1 192.168.49.1"
ha_test.go:207: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-064913 -- exec busybox-7dff88458-5qnwl -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
ha_test.go:218: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-064913 -- exec busybox-7dff88458-5qnwl -- sh -c "ping -c 1 192.168.49.1"
ha_test.go:207: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-064913 -- exec busybox-7dff88458-lfhwg -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
ha_test.go:218: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-064913 -- exec busybox-7dff88458-lfhwg -- sh -c "ping -c 1 192.168.49.1"
--- PASS: TestMultiControlPlane/serial/PingHostFromPods (0.99s)
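
The pipeline `nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3` takes line 5 of busybox's nslookup output and its third space-separated field, which in this run yields the host gateway IP (192.168.49.1) that each pod then pings. A Go sketch of the same parse; the sample string is only an illustration of busybox's output shape, which varies between versions:

    package main

    import (
        "fmt"
        "strings"
    )

    // hostIP mimics `... | awk 'NR==5' | cut -d' ' -f3`: line 5 of the
    // nslookup output, third space-separated field.
    func hostIP(out string) string {
        lines := strings.Split(out, "\n")
        if len(lines) < 5 {
            return ""
        }
        fields := strings.Split(lines[4], " ")
        if len(fields) < 3 {
            return ""
        }
        return fields[2]
    }

    func main() {
        sample := "Server:    10.96.0.10\nAddress 1: 10.96.0.10:53\n\nName:      host.minikube.internal\nAddress 1: 192.168.49.1"
        fmt.Println(hostIP(sample)) // 192.168.49.1
    }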

TestMultiControlPlane/serial/AddWorkerNode (20.09s)

=== RUN   TestMultiControlPlane/serial/AddWorkerNode
ha_test.go:228: (dbg) Run:  out/minikube-linux-amd64 node add -p ha-064913 -v=7 --alsologtostderr
E0917 08:57:25.550945   14840 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19648-8091/.minikube/profiles/addons-118348/client.crt: no such file or directory" logger="UnhandledError"
ha_test.go:228: (dbg) Done: out/minikube-linux-amd64 node add -p ha-064913 -v=7 --alsologtostderr: (19.279425554s)
ha_test.go:234: (dbg) Run:  out/minikube-linux-amd64 -p ha-064913 status -v=7 --alsologtostderr
--- PASS: TestMultiControlPlane/serial/AddWorkerNode (20.09s)

TestMultiControlPlane/serial/NodeLabels (0.06s)

=== RUN   TestMultiControlPlane/serial/NodeLabels
ha_test.go:255: (dbg) Run:  kubectl --context ha-064913 get nodes -o "jsonpath=[{range .items[*]}{.metadata.labels},{end}]"
--- PASS: TestMultiControlPlane/serial/NodeLabels (0.06s)

TestMultiControlPlane/serial/HAppyAfterClusterStart (0.62s)

=== RUN   TestMultiControlPlane/serial/HAppyAfterClusterStart
ha_test.go:281: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiControlPlane/serial/HAppyAfterClusterStart (0.62s)

TestMultiControlPlane/serial/CopyFile (15.26s)

=== RUN   TestMultiControlPlane/serial/CopyFile
ha_test.go:326: (dbg) Run:  out/minikube-linux-amd64 -p ha-064913 status --output json -v=7 --alsologtostderr
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-064913 cp testdata/cp-test.txt ha-064913:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-064913 ssh -n ha-064913 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-064913 cp ha-064913:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile1607626591/001/cp-test_ha-064913.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-064913 ssh -n ha-064913 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-064913 cp ha-064913:/home/docker/cp-test.txt ha-064913-m02:/home/docker/cp-test_ha-064913_ha-064913-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-064913 ssh -n ha-064913 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-064913 ssh -n ha-064913-m02 "sudo cat /home/docker/cp-test_ha-064913_ha-064913-m02.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-064913 cp ha-064913:/home/docker/cp-test.txt ha-064913-m03:/home/docker/cp-test_ha-064913_ha-064913-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-064913 ssh -n ha-064913 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-064913 ssh -n ha-064913-m03 "sudo cat /home/docker/cp-test_ha-064913_ha-064913-m03.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-064913 cp ha-064913:/home/docker/cp-test.txt ha-064913-m04:/home/docker/cp-test_ha-064913_ha-064913-m04.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-064913 ssh -n ha-064913 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-064913 ssh -n ha-064913-m04 "sudo cat /home/docker/cp-test_ha-064913_ha-064913-m04.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-064913 cp testdata/cp-test.txt ha-064913-m02:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-064913 ssh -n ha-064913-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-064913 cp ha-064913-m02:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile1607626591/001/cp-test_ha-064913-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-064913 ssh -n ha-064913-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-064913 cp ha-064913-m02:/home/docker/cp-test.txt ha-064913:/home/docker/cp-test_ha-064913-m02_ha-064913.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-064913 ssh -n ha-064913-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-064913 ssh -n ha-064913 "sudo cat /home/docker/cp-test_ha-064913-m02_ha-064913.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-064913 cp ha-064913-m02:/home/docker/cp-test.txt ha-064913-m03:/home/docker/cp-test_ha-064913-m02_ha-064913-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-064913 ssh -n ha-064913-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-064913 ssh -n ha-064913-m03 "sudo cat /home/docker/cp-test_ha-064913-m02_ha-064913-m03.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-064913 cp ha-064913-m02:/home/docker/cp-test.txt ha-064913-m04:/home/docker/cp-test_ha-064913-m02_ha-064913-m04.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-064913 ssh -n ha-064913-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-064913 ssh -n ha-064913-m04 "sudo cat /home/docker/cp-test_ha-064913-m02_ha-064913-m04.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-064913 cp testdata/cp-test.txt ha-064913-m03:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-064913 ssh -n ha-064913-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-064913 cp ha-064913-m03:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile1607626591/001/cp-test_ha-064913-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-064913 ssh -n ha-064913-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-064913 cp ha-064913-m03:/home/docker/cp-test.txt ha-064913:/home/docker/cp-test_ha-064913-m03_ha-064913.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-064913 ssh -n ha-064913-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-064913 ssh -n ha-064913 "sudo cat /home/docker/cp-test_ha-064913-m03_ha-064913.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-064913 cp ha-064913-m03:/home/docker/cp-test.txt ha-064913-m02:/home/docker/cp-test_ha-064913-m03_ha-064913-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-064913 ssh -n ha-064913-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-064913 ssh -n ha-064913-m02 "sudo cat /home/docker/cp-test_ha-064913-m03_ha-064913-m02.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-064913 cp ha-064913-m03:/home/docker/cp-test.txt ha-064913-m04:/home/docker/cp-test_ha-064913-m03_ha-064913-m04.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-064913 ssh -n ha-064913-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-064913 ssh -n ha-064913-m04 "sudo cat /home/docker/cp-test_ha-064913-m03_ha-064913-m04.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-064913 cp testdata/cp-test.txt ha-064913-m04:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-064913 ssh -n ha-064913-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-064913 cp ha-064913-m04:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile1607626591/001/cp-test_ha-064913-m04.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-064913 ssh -n ha-064913-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-064913 cp ha-064913-m04:/home/docker/cp-test.txt ha-064913:/home/docker/cp-test_ha-064913-m04_ha-064913.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-064913 ssh -n ha-064913-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-064913 ssh -n ha-064913 "sudo cat /home/docker/cp-test_ha-064913-m04_ha-064913.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-064913 cp ha-064913-m04:/home/docker/cp-test.txt ha-064913-m02:/home/docker/cp-test_ha-064913-m04_ha-064913-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-064913 ssh -n ha-064913-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-064913 ssh -n ha-064913-m02 "sudo cat /home/docker/cp-test_ha-064913-m04_ha-064913-m02.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-064913 cp ha-064913-m04:/home/docker/cp-test.txt ha-064913-m03:/home/docker/cp-test_ha-064913-m04_ha-064913-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-064913 ssh -n ha-064913-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-064913 ssh -n ha-064913-m03 "sudo cat /home/docker/cp-test_ha-064913-m04_ha-064913-m03.txt"
--- PASS: TestMultiControlPlane/serial/CopyFile (15.26s)
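
CopyFile exercises every source/destination pair: copy a file onto a node, `ssh -n <node> sudo cat` to read it back, then repeat node-to-node. The inner round trip, sketched in Go with the node names from this cluster:

    package main

    import (
        "fmt"
        "os/exec"
    )

    // roundTrip copies a file onto a node and reads it back over ssh,
    // the same cp-then-cat pattern the helpers repeat for every node pair.
    func roundTrip(profile, node, path string) error {
        if err := exec.Command("minikube", "-p", profile, "cp",
            "testdata/cp-test.txt", node+":"+path).Run(); err != nil {
            return err
        }
        out, err := exec.Command("minikube", "-p", profile, "ssh", "-n", node,
            "sudo cat "+path).Output()
        if err != nil {
            return err
        }
        fmt.Printf("%s: %d bytes read back\n", node, len(out))
        return nil
    }

    func main() {
        for _, node := range []string{"ha-064913", "ha-064913-m02", "ha-064913-m03", "ha-064913-m04"} {
            if err := roundTrip("ha-064913", node, "/home/docker/cp-test.txt"); err != nil {
                fmt.Println(node, err)
            }
        }
    }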

TestMultiControlPlane/serial/StopSecondaryNode (11.26s)

=== RUN   TestMultiControlPlane/serial/StopSecondaryNode
ha_test.go:363: (dbg) Run:  out/minikube-linux-amd64 -p ha-064913 node stop m02 -v=7 --alsologtostderr
ha_test.go:363: (dbg) Done: out/minikube-linux-amd64 -p ha-064913 node stop m02 -v=7 --alsologtostderr: (10.63302883s)
ha_test.go:369: (dbg) Run:  out/minikube-linux-amd64 -p ha-064913 status -v=7 --alsologtostderr
ha_test.go:369: (dbg) Non-zero exit: out/minikube-linux-amd64 -p ha-064913 status -v=7 --alsologtostderr: exit status 7 (626.303191ms)
-- stdout --
	ha-064913
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-064913-m02
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	ha-064913-m03
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-064913-m04
	type: Worker
	host: Running
	kubelet: Running
	
-- /stdout --
** stderr ** 
	I0917 08:58:04.767758  100576 out.go:345] Setting OutFile to fd 1 ...
	I0917 08:58:04.767868  100576 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0917 08:58:04.767877  100576 out.go:358] Setting ErrFile to fd 2...
	I0917 08:58:04.767881  100576 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0917 08:58:04.768058  100576 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19648-8091/.minikube/bin
	I0917 08:58:04.768242  100576 out.go:352] Setting JSON to false
	I0917 08:58:04.768278  100576 mustload.go:65] Loading cluster: ha-064913
	I0917 08:58:04.768320  100576 notify.go:220] Checking for updates...
	I0917 08:58:04.768842  100576 config.go:182] Loaded profile config "ha-064913": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I0917 08:58:04.768862  100576 status.go:255] checking status of ha-064913 ...
	I0917 08:58:04.769324  100576 cli_runner.go:164] Run: docker container inspect ha-064913 --format={{.State.Status}}
	I0917 08:58:04.787784  100576 status.go:330] ha-064913 host status = "Running" (err=<nil>)
	I0917 08:58:04.787817  100576 host.go:66] Checking if "ha-064913" exists ...
	I0917 08:58:04.788050  100576 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-064913
	I0917 08:58:04.805089  100576 host.go:66] Checking if "ha-064913" exists ...
	I0917 08:58:04.805399  100576 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0917 08:58:04.805454  100576 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-064913
	I0917 08:58:04.823526  100576 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32783 SSHKeyPath:/home/jenkins/minikube-integration/19648-8091/.minikube/machines/ha-064913/id_rsa Username:docker}
	I0917 08:58:04.913478  100576 ssh_runner.go:195] Run: systemctl --version
	I0917 08:58:04.917082  100576 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0917 08:58:04.927025  100576 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0917 08:58:04.973004  100576 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:4 ContainersRunning:3 ContainersPaused:0 ContainersStopped:1 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:49 OomKillDisable:true NGoroutines:72 SystemTime:2024-09-17 08:58:04.963667866 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1069-gcp OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:x86
_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33647939584 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-14 Labels:[] ExperimentalBuild:false ServerVersion:27.2.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:7f7fdf5fed64eb6a7caf99b3e12efcf9d60e311c Expected:7f7fdf5fed64eb6a7caf99b3e12efcf9d60e311c} RuncCommit:{ID:v1.1.14-0-g2c9f560 Expected:v1.1.14-0-g2c9f560} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerEr
rors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.16.2] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.29.2] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I0917 08:58:04.973580  100576 kubeconfig.go:125] found "ha-064913" server: "https://192.168.49.254:8443"
	I0917 08:58:04.973610  100576 api_server.go:166] Checking apiserver status ...
	I0917 08:58:04.973656  100576 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0917 08:58:04.984327  100576 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/2427/cgroup
	I0917 08:58:04.994384  100576 api_server.go:182] apiserver freezer: "7:freezer:/docker/7811e7b780d72f5a01dc7faacf129e0cdd569df4d72c6e1f47dda960b335b226/kubepods/burstable/pod5b902d0f9d185538ff65b7b8655578e2/a254ffec2af4fb4693458092d3d320bfa970ed80b95bec613c04b78e30ac7959"
	I0917 08:58:04.994442  100576 ssh_runner.go:195] Run: sudo cat /sys/fs/cgroup/freezer/docker/7811e7b780d72f5a01dc7faacf129e0cdd569df4d72c6e1f47dda960b335b226/kubepods/burstable/pod5b902d0f9d185538ff65b7b8655578e2/a254ffec2af4fb4693458092d3d320bfa970ed80b95bec613c04b78e30ac7959/freezer.state
	I0917 08:58:05.002028  100576 api_server.go:204] freezer state: "THAWED"
	I0917 08:58:05.002058  100576 api_server.go:253] Checking apiserver healthz at https://192.168.49.254:8443/healthz ...
	I0917 08:58:05.005483  100576 api_server.go:279] https://192.168.49.254:8443/healthz returned 200:
	ok
	I0917 08:58:05.005506  100576 status.go:422] ha-064913 apiserver status = Running (err=<nil>)
	I0917 08:58:05.005516  100576 status.go:257] ha-064913 status: &{Name:ha-064913 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0917 08:58:05.005531  100576 status.go:255] checking status of ha-064913-m02 ...
	I0917 08:58:05.005759  100576 cli_runner.go:164] Run: docker container inspect ha-064913-m02 --format={{.State.Status}}
	I0917 08:58:05.022638  100576 status.go:330] ha-064913-m02 host status = "Stopped" (err=<nil>)
	I0917 08:58:05.022657  100576 status.go:343] host is not running, skipping remaining checks
	I0917 08:58:05.022663  100576 status.go:257] ha-064913-m02 status: &{Name:ha-064913-m02 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0917 08:58:05.022681  100576 status.go:255] checking status of ha-064913-m03 ...
	I0917 08:58:05.022920  100576 cli_runner.go:164] Run: docker container inspect ha-064913-m03 --format={{.State.Status}}
	I0917 08:58:05.039198  100576 status.go:330] ha-064913-m03 host status = "Running" (err=<nil>)
	I0917 08:58:05.039219  100576 host.go:66] Checking if "ha-064913-m03" exists ...
	I0917 08:58:05.039454  100576 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-064913-m03
	I0917 08:58:05.055635  100576 host.go:66] Checking if "ha-064913-m03" exists ...
	I0917 08:58:05.055908  100576 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0917 08:58:05.055949  100576 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-064913-m03
	I0917 08:58:05.072242  100576 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32793 SSHKeyPath:/home/jenkins/minikube-integration/19648-8091/.minikube/machines/ha-064913-m03/id_rsa Username:docker}
	I0917 08:58:05.161459  100576 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0917 08:58:05.171604  100576 kubeconfig.go:125] found "ha-064913" server: "https://192.168.49.254:8443"
	I0917 08:58:05.171635  100576 api_server.go:166] Checking apiserver status ...
	I0917 08:58:05.171666  100576 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0917 08:58:05.181680  100576 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/2256/cgroup
	I0917 08:58:05.189530  100576 api_server.go:182] apiserver freezer: "7:freezer:/docker/b29d19184c56782aaa89048fb87c55c9765138f96749c8646a86cd187961f41f/kubepods/burstable/pod40868de5e1acbc15f1813f8015cd5fb3/98105b319d20943e52a32003ffbd18ee6e86c7f118ae207d9e5ff126302f9b90"
	I0917 08:58:05.189574  100576 ssh_runner.go:195] Run: sudo cat /sys/fs/cgroup/freezer/docker/b29d19184c56782aaa89048fb87c55c9765138f96749c8646a86cd187961f41f/kubepods/burstable/pod40868de5e1acbc15f1813f8015cd5fb3/98105b319d20943e52a32003ffbd18ee6e86c7f118ae207d9e5ff126302f9b90/freezer.state
	I0917 08:58:05.196650  100576 api_server.go:204] freezer state: "THAWED"
	I0917 08:58:05.196678  100576 api_server.go:253] Checking apiserver healthz at https://192.168.49.254:8443/healthz ...
	I0917 08:58:05.200354  100576 api_server.go:279] https://192.168.49.254:8443/healthz returned 200:
	ok
	I0917 08:58:05.200380  100576 status.go:422] ha-064913-m03 apiserver status = Running (err=<nil>)
	I0917 08:58:05.200391  100576 status.go:257] ha-064913-m03 status: &{Name:ha-064913-m03 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0917 08:58:05.200413  100576 status.go:255] checking status of ha-064913-m04 ...
	I0917 08:58:05.200718  100576 cli_runner.go:164] Run: docker container inspect ha-064913-m04 --format={{.State.Status}}
	I0917 08:58:05.217356  100576 status.go:330] ha-064913-m04 host status = "Running" (err=<nil>)
	I0917 08:58:05.217379  100576 host.go:66] Checking if "ha-064913-m04" exists ...
	I0917 08:58:05.217596  100576 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-064913-m04
	I0917 08:58:05.233723  100576 host.go:66] Checking if "ha-064913-m04" exists ...
	I0917 08:58:05.233949  100576 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0917 08:58:05.233979  100576 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-064913-m04
	I0917 08:58:05.252538  100576 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32798 SSHKeyPath:/home/jenkins/minikube-integration/19648-8091/.minikube/machines/ha-064913-m04/id_rsa Username:docker}
	I0917 08:58:05.341281  100576 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0917 08:58:05.351200  100576 status.go:257] ha-064913-m04 status: &{Name:ha-064913-m04 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}
** /stderr **
--- PASS: TestMultiControlPlane/serial/StopSecondaryNode (11.26s)
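
Note that the exit status 7 above is expected: `minikube status` exits non-zero when any node is not running, and the test asserts that the stopped m02 shows up that way rather than treating the command as failed. Reading the code from Go:

    package main

    import (
        "errors"
        "fmt"
        "os/exec"
    )

    func main() {
        // `minikube status` exits non-zero when any node is stopped; the log
        // above shows exit status 7 while m02 is down.
        cmd := exec.Command("minikube", "-p", "ha-064913", "status")
        err := cmd.Run()
        var ee *exec.ExitError
        if errors.As(err, &ee) {
            fmt.Println("status exit code:", ee.ExitCode())
        } else if err == nil {
            fmt.Println("all nodes running")
        }
    }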

TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop (0.47s)

=== RUN   TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop
ha_test.go:390: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop (0.47s)

TestMultiControlPlane/serial/RestartSecondaryNode (22.11s)

=== RUN   TestMultiControlPlane/serial/RestartSecondaryNode
ha_test.go:420: (dbg) Run:  out/minikube-linux-amd64 -p ha-064913 node start m02 -v=7 --alsologtostderr
E0917 08:58:06.512867   14840 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19648-8091/.minikube/profiles/addons-118348/client.crt: no such file or directory" logger="UnhandledError"
ha_test.go:420: (dbg) Done: out/minikube-linux-amd64 -p ha-064913 node start m02 -v=7 --alsologtostderr: (20.889356242s)
ha_test.go:428: (dbg) Run:  out/minikube-linux-amd64 -p ha-064913 status -v=7 --alsologtostderr
ha_test.go:428: (dbg) Done: out/minikube-linux-amd64 -p ha-064913 status -v=7 --alsologtostderr: (1.150751955s)
ha_test.go:448: (dbg) Run:  kubectl get nodes
--- PASS: TestMultiControlPlane/serial/RestartSecondaryNode (22.11s)

TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart (2.88s)

=== RUN   TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart
ha_test.go:281: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
ha_test.go:281: (dbg) Done: out/minikube-linux-amd64 profile list --output json: (2.881537024s)
--- PASS: TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart (2.88s)

TestMultiControlPlane/serial/RestartClusterKeepsNodes (225.7s)

=== RUN   TestMultiControlPlane/serial/RestartClusterKeepsNodes
ha_test.go:456: (dbg) Run:  out/minikube-linux-amd64 node list -p ha-064913 -v=7 --alsologtostderr
ha_test.go:462: (dbg) Run:  out/minikube-linux-amd64 stop -p ha-064913 -v=7 --alsologtostderr
ha_test.go:462: (dbg) Done: out/minikube-linux-amd64 stop -p ha-064913 -v=7 --alsologtostderr: (33.652920116s)
ha_test.go:467: (dbg) Run:  out/minikube-linux-amd64 start -p ha-064913 --wait=true -v=7 --alsologtostderr
E0917 08:59:28.434453   14840 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19648-8091/.minikube/profiles/addons-118348/client.crt: no such file or directory" logger="UnhandledError"
E0917 08:59:54.806414   14840 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19648-8091/.minikube/profiles/functional-229304/client.crt: no such file or directory" logger="UnhandledError"
E0917 08:59:54.812744   14840 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19648-8091/.minikube/profiles/functional-229304/client.crt: no such file or directory" logger="UnhandledError"
E0917 08:59:54.824121   14840 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19648-8091/.minikube/profiles/functional-229304/client.crt: no such file or directory" logger="UnhandledError"
E0917 08:59:54.845434   14840 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19648-8091/.minikube/profiles/functional-229304/client.crt: no such file or directory" logger="UnhandledError"
E0917 08:59:54.886787   14840 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19648-8091/.minikube/profiles/functional-229304/client.crt: no such file or directory" logger="UnhandledError"
E0917 08:59:54.968646   14840 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19648-8091/.minikube/profiles/functional-229304/client.crt: no such file or directory" logger="UnhandledError"
E0917 08:59:55.130179   14840 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19648-8091/.minikube/profiles/functional-229304/client.crt: no such file or directory" logger="UnhandledError"
E0917 08:59:55.451627   14840 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19648-8091/.minikube/profiles/functional-229304/client.crt: no such file or directory" logger="UnhandledError"
E0917 08:59:56.093904   14840 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19648-8091/.minikube/profiles/functional-229304/client.crt: no such file or directory" logger="UnhandledError"
E0917 08:59:57.376112   14840 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19648-8091/.minikube/profiles/functional-229304/client.crt: no such file or directory" logger="UnhandledError"
E0917 08:59:59.938244   14840 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19648-8091/.minikube/profiles/functional-229304/client.crt: no such file or directory" logger="UnhandledError"
E0917 09:00:05.060220   14840 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19648-8091/.minikube/profiles/functional-229304/client.crt: no such file or directory" logger="UnhandledError"
E0917 09:00:15.301911   14840 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19648-8091/.minikube/profiles/functional-229304/client.crt: no such file or directory" logger="UnhandledError"
E0917 09:00:35.783298   14840 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19648-8091/.minikube/profiles/functional-229304/client.crt: no such file or directory" logger="UnhandledError"
E0917 09:01:16.745492   14840 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19648-8091/.minikube/profiles/functional-229304/client.crt: no such file or directory" logger="UnhandledError"
E0917 09:01:44.571080   14840 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19648-8091/.minikube/profiles/addons-118348/client.crt: no such file or directory" logger="UnhandledError"
E0917 09:02:12.276664   14840 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19648-8091/.minikube/profiles/addons-118348/client.crt: no such file or directory" logger="UnhandledError"
ha_test.go:467: (dbg) Done: out/minikube-linux-amd64 start -p ha-064913 --wait=true -v=7 --alsologtostderr: (3m11.964648984s)
ha_test.go:472: (dbg) Run:  out/minikube-linux-amd64 node list -p ha-064913
--- PASS: TestMultiControlPlane/serial/RestartClusterKeepsNodes (225.70s)

TestMultiControlPlane/serial/DeleteSecondaryNode (9.09s)

=== RUN   TestMultiControlPlane/serial/DeleteSecondaryNode
ha_test.go:487: (dbg) Run:  out/minikube-linux-amd64 -p ha-064913 node delete m03 -v=7 --alsologtostderr
ha_test.go:487: (dbg) Done: out/minikube-linux-amd64 -p ha-064913 node delete m03 -v=7 --alsologtostderr: (8.376618544s)
ha_test.go:493: (dbg) Run:  out/minikube-linux-amd64 -p ha-064913 status -v=7 --alsologtostderr
ha_test.go:511: (dbg) Run:  kubectl get nodes
ha_test.go:519: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiControlPlane/serial/DeleteSecondaryNode (9.09s)

TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete (0.44s)

=== RUN   TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete
ha_test.go:390: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete (0.44s)

TestMultiControlPlane/serial/StopCluster (32.34s)

=== RUN   TestMultiControlPlane/serial/StopCluster
ha_test.go:531: (dbg) Run:  out/minikube-linux-amd64 -p ha-064913 stop -v=7 --alsologtostderr
E0917 09:02:38.667174   14840 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19648-8091/.minikube/profiles/functional-229304/client.crt: no such file or directory" logger="UnhandledError"
ha_test.go:531: (dbg) Done: out/minikube-linux-amd64 -p ha-064913 stop -v=7 --alsologtostderr: (32.246774623s)
ha_test.go:537: (dbg) Run:  out/minikube-linux-amd64 -p ha-064913 status -v=7 --alsologtostderr
ha_test.go:537: (dbg) Non-zero exit: out/minikube-linux-amd64 -p ha-064913 status -v=7 --alsologtostderr: exit status 7 (90.645001ms)
-- stdout --
	ha-064913
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	ha-064913-m02
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	ha-064913-m04
	type: Worker
	host: Stopped
	kubelet: Stopped
	
-- /stdout --
** stderr ** 
	I0917 09:02:58.335767  132104 out.go:345] Setting OutFile to fd 1 ...
	I0917 09:02:58.336001  132104 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0917 09:02:58.336008  132104 out.go:358] Setting ErrFile to fd 2...
	I0917 09:02:58.336012  132104 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0917 09:02:58.336157  132104 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19648-8091/.minikube/bin
	I0917 09:02:58.336316  132104 out.go:352] Setting JSON to false
	I0917 09:02:58.336340  132104 mustload.go:65] Loading cluster: ha-064913
	I0917 09:02:58.336374  132104 notify.go:220] Checking for updates...
	I0917 09:02:58.336909  132104 config.go:182] Loaded profile config "ha-064913": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I0917 09:02:58.336925  132104 status.go:255] checking status of ha-064913 ...
	I0917 09:02:58.337308  132104 cli_runner.go:164] Run: docker container inspect ha-064913 --format={{.State.Status}}
	I0917 09:02:58.354323  132104 status.go:330] ha-064913 host status = "Stopped" (err=<nil>)
	I0917 09:02:58.354366  132104 status.go:343] host is not running, skipping remaining checks
	I0917 09:02:58.354381  132104 status.go:257] ha-064913 status: &{Name:ha-064913 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0917 09:02:58.354425  132104 status.go:255] checking status of ha-064913-m02 ...
	I0917 09:02:58.354787  132104 cli_runner.go:164] Run: docker container inspect ha-064913-m02 --format={{.State.Status}}
	I0917 09:02:58.370161  132104 status.go:330] ha-064913-m02 host status = "Stopped" (err=<nil>)
	I0917 09:02:58.370178  132104 status.go:343] host is not running, skipping remaining checks
	I0917 09:02:58.370184  132104 status.go:257] ha-064913-m02 status: &{Name:ha-064913-m02 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0917 09:02:58.370199  132104 status.go:255] checking status of ha-064913-m04 ...
	I0917 09:02:58.370409  132104 cli_runner.go:164] Run: docker container inspect ha-064913-m04 --format={{.State.Status}}
	I0917 09:02:58.385811  132104 status.go:330] ha-064913-m04 host status = "Stopped" (err=<nil>)
	I0917 09:02:58.385829  132104 status.go:343] host is not running, skipping remaining checks
	I0917 09:02:58.385835  132104 status.go:257] ha-064913-m04 status: &{Name:ha-064913-m04 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}
** /stderr **
--- PASS: TestMultiControlPlane/serial/StopCluster (32.34s)

TestMultiControlPlane/serial/RestartCluster (80.09s)

=== RUN   TestMultiControlPlane/serial/RestartCluster
ha_test.go:560: (dbg) Run:  out/minikube-linux-amd64 start -p ha-064913 --wait=true -v=7 --alsologtostderr --driver=docker  --container-runtime=docker
ha_test.go:560: (dbg) Done: out/minikube-linux-amd64 start -p ha-064913 --wait=true -v=7 --alsologtostderr --driver=docker  --container-runtime=docker: (1m19.360812705s)
ha_test.go:566: (dbg) Run:  out/minikube-linux-amd64 -p ha-064913 status -v=7 --alsologtostderr
ha_test.go:584: (dbg) Run:  kubectl get nodes
ha_test.go:592: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiControlPlane/serial/RestartCluster (80.09s)

TestMultiControlPlane/serial/DegradedAfterClusterRestart (0.44s)

=== RUN   TestMultiControlPlane/serial/DegradedAfterClusterRestart
ha_test.go:390: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiControlPlane/serial/DegradedAfterClusterRestart (0.44s)

TestMultiControlPlane/serial/AddSecondaryNode (31.49s)

=== RUN   TestMultiControlPlane/serial/AddSecondaryNode
ha_test.go:605: (dbg) Run:  out/minikube-linux-amd64 node add -p ha-064913 --control-plane -v=7 --alsologtostderr
ha_test.go:605: (dbg) Done: out/minikube-linux-amd64 node add -p ha-064913 --control-plane -v=7 --alsologtostderr: (30.706813653s)
ha_test.go:611: (dbg) Run:  out/minikube-linux-amd64 -p ha-064913 status -v=7 --alsologtostderr
--- PASS: TestMultiControlPlane/serial/AddSecondaryNode (31.49s)

TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd (0.6s)

=== RUN   TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd
ha_test.go:281: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd (0.60s)

TestImageBuild/serial/Setup (20.48s)

=== RUN   TestImageBuild/serial/Setup
image_test.go:69: (dbg) Run:  out/minikube-linux-amd64 start -p image-521261 --driver=docker  --container-runtime=docker
image_test.go:69: (dbg) Done: out/minikube-linux-amd64 start -p image-521261 --driver=docker  --container-runtime=docker: (20.475522604s)
--- PASS: TestImageBuild/serial/Setup (20.48s)

TestImageBuild/serial/NormalBuild (1.11s)

=== RUN   TestImageBuild/serial/NormalBuild
image_test.go:78: (dbg) Run:  out/minikube-linux-amd64 image build -t aaa:latest ./testdata/image-build/test-normal -p image-521261
image_test.go:78: (dbg) Done: out/minikube-linux-amd64 image build -t aaa:latest ./testdata/image-build/test-normal -p image-521261: (1.11403739s)
--- PASS: TestImageBuild/serial/NormalBuild (1.11s)

TestImageBuild/serial/BuildWithBuildArg (0.71s)

=== RUN   TestImageBuild/serial/BuildWithBuildArg
image_test.go:99: (dbg) Run:  out/minikube-linux-amd64 image build -t aaa:latest --build-opt=build-arg=ENV_A=test_env_str --build-opt=no-cache ./testdata/image-build/test-arg -p image-521261
--- PASS: TestImageBuild/serial/BuildWithBuildArg (0.71s)
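
Each `--build-opt` is forwarded to the docker build running inside the cluster, so `--build-opt=build-arg=ENV_A=test_env_str --build-opt=no-cache` behaves like docker build's `--build-arg`/`--no-cache`. The same invocation from Go, assuming `minikube` on PATH:

    package main

    import (
        "os"
        "os/exec"
    )

    func main() {
        // Forward docker build flags through minikube: each --build-opt maps
        // to a docker build option inside the cluster's daemon.
        cmd := exec.Command("minikube", "-p", "image-521261", "image", "build",
            "-t", "aaa:latest",
            "--build-opt=build-arg=ENV_A=test_env_str",
            "--build-opt=no-cache",
            "./testdata/image-build/test-arg")
        cmd.Stdout, cmd.Stderr = os.Stdout, os.Stderr
        _ = cmd.Run()
    }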

TestImageBuild/serial/BuildWithDockerIgnore (0.52s)

=== RUN   TestImageBuild/serial/BuildWithDockerIgnore
image_test.go:133: (dbg) Run:  out/minikube-linux-amd64 image build -t aaa:latest ./testdata/image-build/test-normal --build-opt=no-cache -p image-521261
--- PASS: TestImageBuild/serial/BuildWithDockerIgnore (0.52s)

TestImageBuild/serial/BuildWithSpecifiedDockerfile (0.55s)

=== RUN   TestImageBuild/serial/BuildWithSpecifiedDockerfile
image_test.go:88: (dbg) Run:  out/minikube-linux-amd64 image build -t aaa:latest -f inner/Dockerfile ./testdata/image-build/test-f -p image-521261
--- PASS: TestImageBuild/serial/BuildWithSpecifiedDockerfile (0.55s)

TestJSONOutput/start/Command (33.18s)

=== RUN   TestJSONOutput/start/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-amd64 start -p json-output-080617 --output=json --user=testUser --memory=2200 --wait=true --driver=docker  --container-runtime=docker
json_output_test.go:63: (dbg) Done: out/minikube-linux-amd64 start -p json-output-080617 --output=json --user=testUser --memory=2200 --wait=true --driver=docker  --container-runtime=docker: (33.180659782s)
--- PASS: TestJSONOutput/start/Command (33.18s)
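
With `--output=json`, minikube emits newline-delimited JSON events instead of human-readable progress, which is what the Audit and parallel step-ordering subtests below verify. A minimal consumer sketch; the envelope fields follow the CloudEvents-like shape minikube uses, but treat the exact field names as an assumption rather than a contract:

    package main

    import (
        "bufio"
        "encoding/json"
        "fmt"
        "os"
    )

    // event models the envelope of minikube's --output=json lines.
    type event struct {
        Type string          `json:"type"`
        Data json.RawMessage `json:"data"`
    }

    func main() {
        sc := bufio.NewScanner(os.Stdin) // e.g. pipe `minikube start --output=json` in
        for sc.Scan() {
            var e event
            if err := json.Unmarshal(sc.Bytes(), &e); err != nil {
                continue // skip non-JSON noise
            }
            fmt.Println("event:", e.Type)
        }
    }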

TestJSONOutput/start/Audit (0s)

=== RUN   TestJSONOutput/start/Audit
--- PASS: TestJSONOutput/start/Audit (0.00s)

TestJSONOutput/start/parallel/DistinctCurrentSteps (0s)

=== RUN   TestJSONOutput/start/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/start/parallel/DistinctCurrentSteps
=== CONT  TestJSONOutput/start/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/start/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/start/parallel/IncreasingCurrentSteps (0s)

=== RUN   TestJSONOutput/start/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/start/parallel/IncreasingCurrentSteps
=== CONT  TestJSONOutput/start/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/start/parallel/IncreasingCurrentSteps (0.00s)

TestJSONOutput/pause/Command (0.52s)

=== RUN   TestJSONOutput/pause/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-amd64 pause -p json-output-080617 --output=json --user=testUser
--- PASS: TestJSONOutput/pause/Command (0.52s)

TestJSONOutput/pause/Audit (0s)

=== RUN   TestJSONOutput/pause/Audit
--- PASS: TestJSONOutput/pause/Audit (0.00s)

TestJSONOutput/pause/parallel/DistinctCurrentSteps (0s)

=== RUN   TestJSONOutput/pause/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/pause/parallel/DistinctCurrentSteps
=== CONT  TestJSONOutput/pause/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/pause/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/pause/parallel/IncreasingCurrentSteps (0s)

=== RUN   TestJSONOutput/pause/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/pause/parallel/IncreasingCurrentSteps
=== CONT  TestJSONOutput/pause/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/pause/parallel/IncreasingCurrentSteps (0.00s)

TestJSONOutput/unpause/Command (0.41s)

=== RUN   TestJSONOutput/unpause/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-amd64 unpause -p json-output-080617 --output=json --user=testUser
--- PASS: TestJSONOutput/unpause/Command (0.41s)

TestJSONOutput/unpause/Audit (0s)

=== RUN   TestJSONOutput/unpause/Audit
--- PASS: TestJSONOutput/unpause/Audit (0.00s)

TestJSONOutput/unpause/parallel/DistinctCurrentSteps (0s)

=== RUN   TestJSONOutput/unpause/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/unpause/parallel/DistinctCurrentSteps
=== CONT  TestJSONOutput/unpause/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/unpause/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/unpause/parallel/IncreasingCurrentSteps (0s)

=== RUN   TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
=== CONT  TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/unpause/parallel/IncreasingCurrentSteps (0.00s)

TestJSONOutput/stop/Command (10.71s)

=== RUN   TestJSONOutput/stop/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-amd64 stop -p json-output-080617 --output=json --user=testUser
json_output_test.go:63: (dbg) Done: out/minikube-linux-amd64 stop -p json-output-080617 --output=json --user=testUser: (10.711679194s)
--- PASS: TestJSONOutput/stop/Command (10.71s)

TestJSONOutput/stop/Audit (0s)

=== RUN   TestJSONOutput/stop/Audit
--- PASS: TestJSONOutput/stop/Audit (0.00s)

TestJSONOutput/stop/parallel/DistinctCurrentSteps (0s)

=== RUN   TestJSONOutput/stop/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/stop/parallel/DistinctCurrentSteps
=== CONT  TestJSONOutput/stop/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/stop/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/stop/parallel/IncreasingCurrentSteps (0s)

=== RUN   TestJSONOutput/stop/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/stop/parallel/IncreasingCurrentSteps
=== CONT  TestJSONOutput/stop/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/stop/parallel/IncreasingCurrentSteps (0.00s)

TestErrorJSONOutput (0.18s)

=== RUN   TestErrorJSONOutput
json_output_test.go:160: (dbg) Run:  out/minikube-linux-amd64 start -p json-output-error-320852 --memory=2200 --output=json --wait=true --driver=fail
json_output_test.go:160: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p json-output-error-320852 --memory=2200 --output=json --wait=true --driver=fail: exit status 56 (59.150643ms)
-- stdout --
	{"specversion":"1.0","id":"2aa40f25-e49b-4f6d-a40a-1e806706a8df","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"0","message":"[json-output-error-320852] minikube v1.34.0 on Ubuntu 20.04 (kvm/amd64)","name":"Initial Minikube Setup","totalsteps":"19"}}
	{"specversion":"1.0","id":"0f814246-ba15-4f1d-b410-0980cb3fd331","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_LOCATION=19648"}}
	{"specversion":"1.0","id":"99f30610-161d-4393-90da-d681d3f7b2e9","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true"}}
	{"specversion":"1.0","id":"829630ef-4ba4-435d-b8de-4d79ebae5424","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"KUBECONFIG=/home/jenkins/minikube-integration/19648-8091/kubeconfig"}}
	{"specversion":"1.0","id":"2ea81cb7-6758-445e-9ae4-d92698334f9e","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_HOME=/home/jenkins/minikube-integration/19648-8091/.minikube"}}
	{"specversion":"1.0","id":"583418c2-35c6-4b7f-a957-a512e788d819","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_BIN=out/minikube-linux-amd64"}}
	{"specversion":"1.0","id":"81a013f8-6720-42f8-ae47-a71b04e035d6","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_FORCE_SYSTEMD="}}
	{"specversion":"1.0","id":"8a806b88-4cbb-4862-a9da-7d0172232d6a","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"advice":"","exitcode":"56","issues":"","message":"The driver 'fail' is not supported on linux/amd64","name":"DRV_UNSUPPORTED_OS","url":""}}
-- /stdout --
helpers_test.go:175: Cleaning up "json-output-error-320852" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p json-output-error-320852
--- PASS: TestErrorJSONOutput (0.18s)
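Aside (illustrative, not part of this test run): the --output=json lines captured above are newline-delimited CloudEvents-style objects, so they can be consumed with a line scanner plus a JSON decoder. A minimal Go sketch follows; the cloudEvent struct and the error-event filter are assumptions based only on the "specversion"/"type"/"data" keys visible in this log, not an API published by minikube or this test suite.

// Hypothetical consumer of the JSON event stream shown above.
package main

import (
	"bufio"
	"encoding/json"
	"fmt"
	"os"
)

// cloudEvent mirrors the keys visible in the log output; this shape is an
// assumption for illustration, not minikube's own type.
type cloudEvent struct {
	SpecVersion string            `json:"specversion"`
	Type        string            `json:"type"`
	Data        map[string]string `json:"data"`
}

func main() {
	sc := bufio.NewScanner(os.Stdin)
	for sc.Scan() {
		var ev cloudEvent
		if err := json.Unmarshal(sc.Bytes(), &ev); err != nil {
			continue // tolerate any non-JSON lines mixed into the stream
		}
		// Error events (type io.k8s.sigs.minikube.error) carry the exit
		// code and message, e.g. the DRV_UNSUPPORTED_OS event above.
		if ev.Type == "io.k8s.sigs.minikube.error" {
			fmt.Printf("%s (exit code %s): %s\n",
				ev.Data["name"], ev.Data["exitcode"], ev.Data["message"])
		}
	}
}

Under those assumptions, piping the captured stdout through such a filter would surface the failure reason without scraping free-form text.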

TestKicCustomNetwork/create_custom_network (25.54s)

=== RUN   TestKicCustomNetwork/create_custom_network
kic_custom_network_test.go:57: (dbg) Run:  out/minikube-linux-amd64 start -p docker-network-523684 --network=
kic_custom_network_test.go:57: (dbg) Done: out/minikube-linux-amd64 start -p docker-network-523684 --network=: (23.555062354s)
kic_custom_network_test.go:150: (dbg) Run:  docker network ls --format {{.Name}}
helpers_test.go:175: Cleaning up "docker-network-523684" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p docker-network-523684
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p docker-network-523684: (1.963941766s)
--- PASS: TestKicCustomNetwork/create_custom_network (25.54s)

TestKicCustomNetwork/use_default_bridge_network (25.28s)

=== RUN   TestKicCustomNetwork/use_default_bridge_network
kic_custom_network_test.go:57: (dbg) Run:  out/minikube-linux-amd64 start -p docker-network-883787 --network=bridge
E0917 09:06:44.572850   14840 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19648-8091/.minikube/profiles/addons-118348/client.crt: no such file or directory" logger="UnhandledError"
kic_custom_network_test.go:57: (dbg) Done: out/minikube-linux-amd64 start -p docker-network-883787 --network=bridge: (23.408259062s)
kic_custom_network_test.go:150: (dbg) Run:  docker network ls --format {{.Name}}
helpers_test.go:175: Cleaning up "docker-network-883787" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p docker-network-883787
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p docker-network-883787: (1.854536055s)
--- PASS: TestKicCustomNetwork/use_default_bridge_network (25.28s)

TestKicExistingNetwork (21.9s)

=== RUN   TestKicExistingNetwork
kic_custom_network_test.go:150: (dbg) Run:  docker network ls --format {{.Name}}
kic_custom_network_test.go:93: (dbg) Run:  out/minikube-linux-amd64 start -p existing-network-246211 --network=existing-network
kic_custom_network_test.go:93: (dbg) Done: out/minikube-linux-amd64 start -p existing-network-246211 --network=existing-network: (19.982804885s)
helpers_test.go:175: Cleaning up "existing-network-246211" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p existing-network-246211
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p existing-network-246211: (1.781815409s)
--- PASS: TestKicExistingNetwork (21.90s)

TestKicCustomSubnet (22.56s)

=== RUN   TestKicCustomSubnet
kic_custom_network_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p custom-subnet-429848 --subnet=192.168.60.0/24
kic_custom_network_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p custom-subnet-429848 --subnet=192.168.60.0/24: (20.629564637s)
kic_custom_network_test.go:161: (dbg) Run:  docker network inspect custom-subnet-429848 --format "{{(index .IPAM.Config 0).Subnet}}"
helpers_test.go:175: Cleaning up "custom-subnet-429848" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p custom-subnet-429848
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p custom-subnet-429848: (1.914368983s)
--- PASS: TestKicCustomSubnet (22.56s)

TestKicStaticIP (22.38s)

=== RUN   TestKicStaticIP
kic_custom_network_test.go:132: (dbg) Run:  out/minikube-linux-amd64 start -p static-ip-424264 --static-ip=192.168.200.200
kic_custom_network_test.go:132: (dbg) Done: out/minikube-linux-amd64 start -p static-ip-424264 --static-ip=192.168.200.200: (20.27192571s)
kic_custom_network_test.go:138: (dbg) Run:  out/minikube-linux-amd64 -p static-ip-424264 ip
helpers_test.go:175: Cleaning up "static-ip-424264" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p static-ip-424264
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p static-ip-424264: (1.992308499s)
--- PASS: TestKicStaticIP (22.38s)

TestMainNoArgs (0.04s)

=== RUN   TestMainNoArgs
main_test.go:68: (dbg) Run:  out/minikube-linux-amd64
--- PASS: TestMainNoArgs (0.04s)

TestMinikubeProfile (47s)

=== RUN   TestMinikubeProfile
minikube_profile_test.go:44: (dbg) Run:  out/minikube-linux-amd64 start -p first-990392 --driver=docker  --container-runtime=docker
minikube_profile_test.go:44: (dbg) Done: out/minikube-linux-amd64 start -p first-990392 --driver=docker  --container-runtime=docker: (21.449699417s)
minikube_profile_test.go:44: (dbg) Run:  out/minikube-linux-amd64 start -p second-007816 --driver=docker  --container-runtime=docker
minikube_profile_test.go:44: (dbg) Done: out/minikube-linux-amd64 start -p second-007816 --driver=docker  --container-runtime=docker: (20.60562596s)
minikube_profile_test.go:51: (dbg) Run:  out/minikube-linux-amd64 profile first-990392
minikube_profile_test.go:55: (dbg) Run:  out/minikube-linux-amd64 profile list -ojson
minikube_profile_test.go:51: (dbg) Run:  out/minikube-linux-amd64 profile second-007816
minikube_profile_test.go:55: (dbg) Run:  out/minikube-linux-amd64 profile list -ojson
helpers_test.go:175: Cleaning up "second-007816" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p second-007816
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p second-007816: (1.938858456s)
helpers_test.go:175: Cleaning up "first-990392" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p first-990392
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p first-990392: (1.98913428s)
--- PASS: TestMinikubeProfile (47.00s)

TestMountStart/serial/StartWithMountFirst (9.21s)

=== RUN   TestMountStart/serial/StartWithMountFirst
mount_start_test.go:98: (dbg) Run:  out/minikube-linux-amd64 start -p mount-start-1-864599 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=docker  --container-runtime=docker
mount_start_test.go:98: (dbg) Done: out/minikube-linux-amd64 start -p mount-start-1-864599 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=docker  --container-runtime=docker: (8.213027976s)
--- PASS: TestMountStart/serial/StartWithMountFirst (9.21s)

TestMountStart/serial/VerifyMountFirst (0.24s)

=== RUN   TestMountStart/serial/VerifyMountFirst
mount_start_test.go:114: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-1-864599 ssh -- ls /minikube-host
--- PASS: TestMountStart/serial/VerifyMountFirst (0.24s)

TestMountStart/serial/StartWithMountSecond (9.17s)

=== RUN   TestMountStart/serial/StartWithMountSecond
mount_start_test.go:98: (dbg) Run:  out/minikube-linux-amd64 start -p mount-start-2-877596 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46465 --mount-uid 0 --no-kubernetes --driver=docker  --container-runtime=docker
mount_start_test.go:98: (dbg) Done: out/minikube-linux-amd64 start -p mount-start-2-877596 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46465 --mount-uid 0 --no-kubernetes --driver=docker  --container-runtime=docker: (8.165875679s)
--- PASS: TestMountStart/serial/StartWithMountSecond (9.17s)

TestMountStart/serial/VerifyMountSecond (0.23s)

=== RUN   TestMountStart/serial/VerifyMountSecond
mount_start_test.go:114: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-877596 ssh -- ls /minikube-host
--- PASS: TestMountStart/serial/VerifyMountSecond (0.23s)

TestMountStart/serial/DeleteFirst (1.43s)

=== RUN   TestMountStart/serial/DeleteFirst
pause_test.go:132: (dbg) Run:  out/minikube-linux-amd64 delete -p mount-start-1-864599 --alsologtostderr -v=5
pause_test.go:132: (dbg) Done: out/minikube-linux-amd64 delete -p mount-start-1-864599 --alsologtostderr -v=5: (1.43065656s)
--- PASS: TestMountStart/serial/DeleteFirst (1.43s)

TestMountStart/serial/VerifyMountPostDelete (0.22s)

=== RUN   TestMountStart/serial/VerifyMountPostDelete
mount_start_test.go:114: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-877596 ssh -- ls /minikube-host
--- PASS: TestMountStart/serial/VerifyMountPostDelete (0.22s)

TestMountStart/serial/Stop (1.16s)

=== RUN   TestMountStart/serial/Stop
mount_start_test.go:155: (dbg) Run:  out/minikube-linux-amd64 stop -p mount-start-2-877596
mount_start_test.go:155: (dbg) Done: out/minikube-linux-amd64 stop -p mount-start-2-877596: (1.158704589s)
--- PASS: TestMountStart/serial/Stop (1.16s)

TestMountStart/serial/RestartStopped (7.71s)

=== RUN   TestMountStart/serial/RestartStopped
mount_start_test.go:166: (dbg) Run:  out/minikube-linux-amd64 start -p mount-start-2-877596
mount_start_test.go:166: (dbg) Done: out/minikube-linux-amd64 start -p mount-start-2-877596: (6.70920056s)
--- PASS: TestMountStart/serial/RestartStopped (7.71s)

TestMountStart/serial/VerifyMountPostStop (0.23s)

=== RUN   TestMountStart/serial/VerifyMountPostStop
mount_start_test.go:114: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-877596 ssh -- ls /minikube-host
--- PASS: TestMountStart/serial/VerifyMountPostStop (0.23s)

TestMultiNode/serial/FreshStart2Nodes (58.29s)

=== RUN   TestMultiNode/serial/FreshStart2Nodes
multinode_test.go:96: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-701903 --wait=true --memory=2200 --nodes=2 -v=8 --alsologtostderr --driver=docker  --container-runtime=docker
E0917 09:09:54.806772   14840 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19648-8091/.minikube/profiles/functional-229304/client.crt: no such file or directory" logger="UnhandledError"
multinode_test.go:96: (dbg) Done: out/minikube-linux-amd64 start -p multinode-701903 --wait=true --memory=2200 --nodes=2 -v=8 --alsologtostderr --driver=docker  --container-runtime=docker: (57.835642769s)
multinode_test.go:102: (dbg) Run:  out/minikube-linux-amd64 -p multinode-701903 status --alsologtostderr
--- PASS: TestMultiNode/serial/FreshStart2Nodes (58.29s)

TestMultiNode/serial/DeployApp2Nodes (51.74s)

=== RUN   TestMultiNode/serial/DeployApp2Nodes
multinode_test.go:493: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-701903 -- apply -f ./testdata/multinodes/multinode-pod-dns-test.yaml
multinode_test.go:498: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-701903 -- rollout status deployment/busybox
multinode_test.go:498: (dbg) Done: out/minikube-linux-amd64 kubectl -p multinode-701903 -- rollout status deployment/busybox: (2.53732784s)
multinode_test.go:505: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-701903 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:514: expected 2 Pod IPs but got 1 (may be temporary), output: "\n-- stdout --\n\t'10.244.0.3'\n\n-- /stdout --"
multinode_test.go:505: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-701903 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:514: expected 2 Pod IPs but got 1 (may be temporary), output: "\n-- stdout --\n\t'10.244.0.3'\n\n-- /stdout --"
multinode_test.go:505: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-701903 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:514: expected 2 Pod IPs but got 1 (may be temporary), output: "\n-- stdout --\n\t'10.244.0.3'\n\n-- /stdout --"
multinode_test.go:505: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-701903 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:514: expected 2 Pod IPs but got 1 (may be temporary), output: "\n-- stdout --\n\t'10.244.0.3'\n\n-- /stdout --"
multinode_test.go:505: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-701903 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:514: expected 2 Pod IPs but got 1 (may be temporary), output: "\n-- stdout --\n\t'10.244.0.3'\n\n-- /stdout --"
multinode_test.go:505: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-701903 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:514: expected 2 Pod IPs but got 1 (may be temporary), output: "\n-- stdout --\n\t'10.244.0.3'\n\n-- /stdout --"
multinode_test.go:505: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-701903 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:514: expected 2 Pod IPs but got 1 (may be temporary), output: "\n-- stdout --\n\t'10.244.0.3'\n\n-- /stdout --"
multinode_test.go:505: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-701903 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:514: expected 2 Pod IPs but got 1 (may be temporary), output: "\n-- stdout --\n\t'10.244.0.3'\n\n-- /stdout --"
multinode_test.go:505: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-701903 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:528: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-701903 -- get pods -o jsonpath='{.items[*].metadata.name}'
multinode_test.go:536: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-701903 -- exec busybox-7dff88458-6npw2 -- nslookup kubernetes.io
multinode_test.go:536: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-701903 -- exec busybox-7dff88458-dggl4 -- nslookup kubernetes.io
multinode_test.go:546: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-701903 -- exec busybox-7dff88458-6npw2 -- nslookup kubernetes.default
multinode_test.go:546: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-701903 -- exec busybox-7dff88458-dggl4 -- nslookup kubernetes.default
multinode_test.go:554: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-701903 -- exec busybox-7dff88458-6npw2 -- nslookup kubernetes.default.svc.cluster.local
multinode_test.go:554: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-701903 -- exec busybox-7dff88458-dggl4 -- nslookup kubernetes.default.svc.cluster.local
--- PASS: TestMultiNode/serial/DeployApp2Nodes (51.74s)

TestMultiNode/serial/PingHostFrom2Pods (0.66s)

=== RUN   TestMultiNode/serial/PingHostFrom2Pods
multinode_test.go:564: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-701903 -- get pods -o jsonpath='{.items[*].metadata.name}'
multinode_test.go:572: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-701903 -- exec busybox-7dff88458-6npw2 -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
multinode_test.go:583: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-701903 -- exec busybox-7dff88458-6npw2 -- sh -c "ping -c 1 192.168.67.1"
multinode_test.go:572: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-701903 -- exec busybox-7dff88458-dggl4 -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
multinode_test.go:583: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-701903 -- exec busybox-7dff88458-dggl4 -- sh -c "ping -c 1 192.168.67.1"
--- PASS: TestMultiNode/serial/PingHostFrom2Pods (0.66s)

TestMultiNode/serial/AddNode (16.05s)

=== RUN   TestMultiNode/serial/AddNode
multinode_test.go:121: (dbg) Run:  out/minikube-linux-amd64 node add -p multinode-701903 -v 3 --alsologtostderr
multinode_test.go:121: (dbg) Done: out/minikube-linux-amd64 node add -p multinode-701903 -v 3 --alsologtostderr: (15.47717405s)
multinode_test.go:127: (dbg) Run:  out/minikube-linux-amd64 -p multinode-701903 status --alsologtostderr
--- PASS: TestMultiNode/serial/AddNode (16.05s)

TestMultiNode/serial/MultiNodeLabels (0.06s)

=== RUN   TestMultiNode/serial/MultiNodeLabels
multinode_test.go:221: (dbg) Run:  kubectl --context multinode-701903 get nodes -o "jsonpath=[{range .items[*]}{.metadata.labels},{end}]"
--- PASS: TestMultiNode/serial/MultiNodeLabels (0.06s)

TestMultiNode/serial/ProfileList (0.27s)

=== RUN   TestMultiNode/serial/ProfileList
multinode_test.go:143: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiNode/serial/ProfileList (0.27s)

TestMultiNode/serial/CopyFile (8.55s)

=== RUN   TestMultiNode/serial/CopyFile
multinode_test.go:184: (dbg) Run:  out/minikube-linux-amd64 -p multinode-701903 status --output json --alsologtostderr
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-701903 cp testdata/cp-test.txt multinode-701903:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-701903 ssh -n multinode-701903 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-701903 cp multinode-701903:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile1034334160/001/cp-test_multinode-701903.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-701903 ssh -n multinode-701903 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-701903 cp multinode-701903:/home/docker/cp-test.txt multinode-701903-m02:/home/docker/cp-test_multinode-701903_multinode-701903-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-701903 ssh -n multinode-701903 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-701903 ssh -n multinode-701903-m02 "sudo cat /home/docker/cp-test_multinode-701903_multinode-701903-m02.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-701903 cp multinode-701903:/home/docker/cp-test.txt multinode-701903-m03:/home/docker/cp-test_multinode-701903_multinode-701903-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-701903 ssh -n multinode-701903 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-701903 ssh -n multinode-701903-m03 "sudo cat /home/docker/cp-test_multinode-701903_multinode-701903-m03.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-701903 cp testdata/cp-test.txt multinode-701903-m02:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-701903 ssh -n multinode-701903-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-701903 cp multinode-701903-m02:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile1034334160/001/cp-test_multinode-701903-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-701903 ssh -n multinode-701903-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-701903 cp multinode-701903-m02:/home/docker/cp-test.txt multinode-701903:/home/docker/cp-test_multinode-701903-m02_multinode-701903.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-701903 ssh -n multinode-701903-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-701903 ssh -n multinode-701903 "sudo cat /home/docker/cp-test_multinode-701903-m02_multinode-701903.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-701903 cp multinode-701903-m02:/home/docker/cp-test.txt multinode-701903-m03:/home/docker/cp-test_multinode-701903-m02_multinode-701903-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-701903 ssh -n multinode-701903-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-701903 ssh -n multinode-701903-m03 "sudo cat /home/docker/cp-test_multinode-701903-m02_multinode-701903-m03.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-701903 cp testdata/cp-test.txt multinode-701903-m03:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-701903 ssh -n multinode-701903-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-701903 cp multinode-701903-m03:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile1034334160/001/cp-test_multinode-701903-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-701903 ssh -n multinode-701903-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-701903 cp multinode-701903-m03:/home/docker/cp-test.txt multinode-701903:/home/docker/cp-test_multinode-701903-m03_multinode-701903.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-701903 ssh -n multinode-701903-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-701903 ssh -n multinode-701903 "sudo cat /home/docker/cp-test_multinode-701903-m03_multinode-701903.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-701903 cp multinode-701903-m03:/home/docker/cp-test.txt multinode-701903-m02:/home/docker/cp-test_multinode-701903-m03_multinode-701903-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-701903 ssh -n multinode-701903-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-701903 ssh -n multinode-701903-m02 "sudo cat /home/docker/cp-test_multinode-701903-m03_multinode-701903-m02.txt"
--- PASS: TestMultiNode/serial/CopyFile (8.55s)

TestMultiNode/serial/StopNode (2.03s)

=== RUN   TestMultiNode/serial/StopNode
multinode_test.go:248: (dbg) Run:  out/minikube-linux-amd64 -p multinode-701903 node stop m03
multinode_test.go:248: (dbg) Done: out/minikube-linux-amd64 -p multinode-701903 node stop m03: (1.160530214s)
multinode_test.go:254: (dbg) Run:  out/minikube-linux-amd64 -p multinode-701903 status
multinode_test.go:254: (dbg) Non-zero exit: out/minikube-linux-amd64 -p multinode-701903 status: exit status 7 (432.486796ms)
-- stdout --
	multinode-701903
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	multinode-701903-m02
	type: Worker
	host: Running
	kubelet: Running
	
	multinode-701903-m03
	type: Worker
	host: Stopped
	kubelet: Stopped
	
-- /stdout --
multinode_test.go:261: (dbg) Run:  out/minikube-linux-amd64 -p multinode-701903 status --alsologtostderr
multinode_test.go:261: (dbg) Non-zero exit: out/minikube-linux-amd64 -p multinode-701903 status --alsologtostderr: exit status 7 (437.545505ms)
-- stdout --
	multinode-701903
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	multinode-701903-m02
	type: Worker
	host: Running
	kubelet: Running
	
	multinode-701903-m03
	type: Worker
	host: Stopped
	kubelet: Stopped
	
-- /stdout --
** stderr ** 
	I0917 09:11:43.403583  217650 out.go:345] Setting OutFile to fd 1 ...
	I0917 09:11:43.403842  217650 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0917 09:11:43.403852  217650 out.go:358] Setting ErrFile to fd 2...
	I0917 09:11:43.403856  217650 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0917 09:11:43.404026  217650 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19648-8091/.minikube/bin
	I0917 09:11:43.404190  217650 out.go:352] Setting JSON to false
	I0917 09:11:43.404217  217650 mustload.go:65] Loading cluster: multinode-701903
	I0917 09:11:43.404337  217650 notify.go:220] Checking for updates...
	I0917 09:11:43.404590  217650 config.go:182] Loaded profile config "multinode-701903": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I0917 09:11:43.404603  217650 status.go:255] checking status of multinode-701903 ...
	I0917 09:11:43.405002  217650 cli_runner.go:164] Run: docker container inspect multinode-701903 --format={{.State.Status}}
	I0917 09:11:43.422012  217650 status.go:330] multinode-701903 host status = "Running" (err=<nil>)
	I0917 09:11:43.422034  217650 host.go:66] Checking if "multinode-701903" exists ...
	I0917 09:11:43.422281  217650 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-701903
	I0917 09:11:43.438015  217650 host.go:66] Checking if "multinode-701903" exists ...
	I0917 09:11:43.438253  217650 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0917 09:11:43.438298  217650 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-701903
	I0917 09:11:43.454632  217650 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32908 SSHKeyPath:/home/jenkins/minikube-integration/19648-8091/.minikube/machines/multinode-701903/id_rsa Username:docker}
	I0917 09:11:43.545124  217650 ssh_runner.go:195] Run: systemctl --version
	I0917 09:11:43.548873  217650 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0917 09:11:43.558374  217650 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0917 09:11:43.605612  217650 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:3 ContainersRunning:2 ContainersPaused:0 ContainersStopped:1 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:42 OomKillDisable:true NGoroutines:62 SystemTime:2024-09-17 09:11:43.596865165 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1069-gcp OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33647939584 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-14 Labels:[] ExperimentalBuild:false ServerVersion:27.2.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:7f7fdf5fed64eb6a7caf99b3e12efcf9d60e311c Expected:7f7fdf5fed64eb6a7caf99b3e12efcf9d60e311c} RuncCommit:{ID:v1.1.14-0-g2c9f560 Expected:v1.1.14-0-g2c9f560} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.16.2] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.29.2] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I0917 09:11:43.606114  217650 kubeconfig.go:125] found "multinode-701903" server: "https://192.168.67.2:8443"
	I0917 09:11:43.606140  217650 api_server.go:166] Checking apiserver status ...
	I0917 09:11:43.606170  217650 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0917 09:11:43.616350  217650 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/2378/cgroup
	I0917 09:11:43.624176  217650 api_server.go:182] apiserver freezer: "7:freezer:/docker/22698040e81f0ba96cf0417cf234878781ce13f0291dc8fe63fd84c48fa75b4f/kubepods/burstable/pod1c22a28208fee161f82e349761bc9756/361db87260ea87a74091d0cf8234337b299d229f6f47e6044e6b95f3bb807b0e"
	I0917 09:11:43.624231  217650 ssh_runner.go:195] Run: sudo cat /sys/fs/cgroup/freezer/docker/22698040e81f0ba96cf0417cf234878781ce13f0291dc8fe63fd84c48fa75b4f/kubepods/burstable/pod1c22a28208fee161f82e349761bc9756/361db87260ea87a74091d0cf8234337b299d229f6f47e6044e6b95f3bb807b0e/freezer.state
	I0917 09:11:43.631216  217650 api_server.go:204] freezer state: "THAWED"
	I0917 09:11:43.631238  217650 api_server.go:253] Checking apiserver healthz at https://192.168.67.2:8443/healthz ...
	I0917 09:11:43.634782  217650 api_server.go:279] https://192.168.67.2:8443/healthz returned 200:
	ok
	I0917 09:11:43.634799  217650 status.go:422] multinode-701903 apiserver status = Running (err=<nil>)
	I0917 09:11:43.634811  217650 status.go:257] multinode-701903 status: &{Name:multinode-701903 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0917 09:11:43.634840  217650 status.go:255] checking status of multinode-701903-m02 ...
	I0917 09:11:43.635087  217650 cli_runner.go:164] Run: docker container inspect multinode-701903-m02 --format={{.State.Status}}
	I0917 09:11:43.651704  217650 status.go:330] multinode-701903-m02 host status = "Running" (err=<nil>)
	I0917 09:11:43.651731  217650 host.go:66] Checking if "multinode-701903-m02" exists ...
	I0917 09:11:43.652026  217650 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-701903-m02
	I0917 09:11:43.667985  217650 host.go:66] Checking if "multinode-701903-m02" exists ...
	I0917 09:11:43.668267  217650 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0917 09:11:43.668299  217650 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-701903-m02
	I0917 09:11:43.683704  217650 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32913 SSHKeyPath:/home/jenkins/minikube-integration/19648-8091/.minikube/machines/multinode-701903-m02/id_rsa Username:docker}
	I0917 09:11:43.773151  217650 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0917 09:11:43.783529  217650 status.go:257] multinode-701903-m02 status: &{Name:multinode-701903-m02 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}
	I0917 09:11:43.783562  217650 status.go:255] checking status of multinode-701903-m03 ...
	I0917 09:11:43.783870  217650 cli_runner.go:164] Run: docker container inspect multinode-701903-m03 --format={{.State.Status}}
	I0917 09:11:43.800395  217650 status.go:330] multinode-701903-m03 host status = "Stopped" (err=<nil>)
	I0917 09:11:43.800411  217650 status.go:343] host is not running, skipping remaining checks
	I0917 09:11:43.800417  217650 status.go:257] multinode-701903-m03 status: &{Name:multinode-701903-m03 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}
** /stderr **
--- PASS: TestMultiNode/serial/StopNode (2.03s)

TestMultiNode/serial/StartAfterStop (9.77s)

=== RUN   TestMultiNode/serial/StartAfterStop
multinode_test.go:282: (dbg) Run:  out/minikube-linux-amd64 -p multinode-701903 node start m03 -v=7 --alsologtostderr
E0917 09:11:44.570327   14840 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19648-8091/.minikube/profiles/addons-118348/client.crt: no such file or directory" logger="UnhandledError"
multinode_test.go:282: (dbg) Done: out/minikube-linux-amd64 -p multinode-701903 node start m03 -v=7 --alsologtostderr: (9.131878991s)
multinode_test.go:290: (dbg) Run:  out/minikube-linux-amd64 -p multinode-701903 status -v=7 --alsologtostderr
multinode_test.go:306: (dbg) Run:  kubectl get nodes
--- PASS: TestMultiNode/serial/StartAfterStop (9.77s)

TestMultiNode/serial/RestartKeepsNodes (101.05s)

=== RUN   TestMultiNode/serial/RestartKeepsNodes
multinode_test.go:314: (dbg) Run:  out/minikube-linux-amd64 node list -p multinode-701903
multinode_test.go:321: (dbg) Run:  out/minikube-linux-amd64 stop -p multinode-701903
multinode_test.go:321: (dbg) Done: out/minikube-linux-amd64 stop -p multinode-701903: (22.370384503s)
multinode_test.go:326: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-701903 --wait=true -v=8 --alsologtostderr
E0917 09:13:07.638451   14840 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19648-8091/.minikube/profiles/addons-118348/client.crt: no such file or directory" logger="UnhandledError"
multinode_test.go:326: (dbg) Done: out/minikube-linux-amd64 start -p multinode-701903 --wait=true -v=8 --alsologtostderr: (1m18.590153492s)
multinode_test.go:331: (dbg) Run:  out/minikube-linux-amd64 node list -p multinode-701903
--- PASS: TestMultiNode/serial/RestartKeepsNodes (101.05s)

TestMultiNode/serial/DeleteNode (5.06s)

=== RUN   TestMultiNode/serial/DeleteNode
multinode_test.go:416: (dbg) Run:  out/minikube-linux-amd64 -p multinode-701903 node delete m03
multinode_test.go:416: (dbg) Done: out/minikube-linux-amd64 -p multinode-701903 node delete m03: (4.530292762s)
multinode_test.go:422: (dbg) Run:  out/minikube-linux-amd64 -p multinode-701903 status --alsologtostderr
multinode_test.go:436: (dbg) Run:  kubectl get nodes
multinode_test.go:444: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiNode/serial/DeleteNode (5.06s)

TestMultiNode/serial/StopMultiNode (21.29s)

=== RUN   TestMultiNode/serial/StopMultiNode
multinode_test.go:345: (dbg) Run:  out/minikube-linux-amd64 -p multinode-701903 stop
multinode_test.go:345: (dbg) Done: out/minikube-linux-amd64 -p multinode-701903 stop: (21.140138029s)
multinode_test.go:351: (dbg) Run:  out/minikube-linux-amd64 -p multinode-701903 status
multinode_test.go:351: (dbg) Non-zero exit: out/minikube-linux-amd64 -p multinode-701903 status: exit status 7 (74.866388ms)
-- stdout --
	multinode-701903
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	multinode-701903-m02
	type: Worker
	host: Stopped
	kubelet: Stopped
	
-- /stdout --
multinode_test.go:358: (dbg) Run:  out/minikube-linux-amd64 -p multinode-701903 status --alsologtostderr
multinode_test.go:358: (dbg) Non-zero exit: out/minikube-linux-amd64 -p multinode-701903 status --alsologtostderr: exit status 7 (79.069567ms)
-- stdout --
	multinode-701903
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	multinode-701903-m02
	type: Worker
	host: Stopped
	kubelet: Stopped
	
-- /stdout --
** stderr ** 
	I0917 09:14:00.930350  233136 out.go:345] Setting OutFile to fd 1 ...
	I0917 09:14:00.930458  233136 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0917 09:14:00.930466  233136 out.go:358] Setting ErrFile to fd 2...
	I0917 09:14:00.930470  233136 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0917 09:14:00.930631  233136 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19648-8091/.minikube/bin
	I0917 09:14:00.930779  233136 out.go:352] Setting JSON to false
	I0917 09:14:00.930807  233136 mustload.go:65] Loading cluster: multinode-701903
	I0917 09:14:00.930840  233136 notify.go:220] Checking for updates...
	I0917 09:14:00.931190  233136 config.go:182] Loaded profile config "multinode-701903": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I0917 09:14:00.931206  233136 status.go:255] checking status of multinode-701903 ...
	I0917 09:14:00.931590  233136 cli_runner.go:164] Run: docker container inspect multinode-701903 --format={{.State.Status}}
	I0917 09:14:00.950745  233136 status.go:330] multinode-701903 host status = "Stopped" (err=<nil>)
	I0917 09:14:00.950767  233136 status.go:343] host is not running, skipping remaining checks
	I0917 09:14:00.950776  233136 status.go:257] multinode-701903 status: &{Name:multinode-701903 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0917 09:14:00.950861  233136 status.go:255] checking status of multinode-701903-m02 ...
	I0917 09:14:00.951203  233136 cli_runner.go:164] Run: docker container inspect multinode-701903-m02 --format={{.State.Status}}
	I0917 09:14:00.967023  233136 status.go:330] multinode-701903-m02 host status = "Stopped" (err=<nil>)
	I0917 09:14:00.967041  233136 status.go:343] host is not running, skipping remaining checks
	I0917 09:14:00.967047  233136 status.go:257] multinode-701903-m02 status: &{Name:multinode-701903-m02 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}
** /stderr **
--- PASS: TestMultiNode/serial/StopMultiNode (21.29s)

TestMultiNode/serial/RestartMultiNode (50.99s)

=== RUN   TestMultiNode/serial/RestartMultiNode
multinode_test.go:376: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-701903 --wait=true -v=8 --alsologtostderr --driver=docker  --container-runtime=docker
multinode_test.go:376: (dbg) Done: out/minikube-linux-amd64 start -p multinode-701903 --wait=true -v=8 --alsologtostderr --driver=docker  --container-runtime=docker: (50.454033427s)
multinode_test.go:382: (dbg) Run:  out/minikube-linux-amd64 -p multinode-701903 status --alsologtostderr
multinode_test.go:396: (dbg) Run:  kubectl get nodes
multinode_test.go:404: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiNode/serial/RestartMultiNode (50.99s)

TestMultiNode/serial/ValidateNameConflict (26.07s)

=== RUN   TestMultiNode/serial/ValidateNameConflict
multinode_test.go:455: (dbg) Run:  out/minikube-linux-amd64 node list -p multinode-701903
multinode_test.go:464: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-701903-m02 --driver=docker  --container-runtime=docker
multinode_test.go:464: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p multinode-701903-m02 --driver=docker  --container-runtime=docker: exit status 14 (56.622204ms)
-- stdout --
	* [multinode-701903-m02] minikube v1.34.0 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=19648
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/19648-8091/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/19648-8091/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	
	
-- /stdout --
** stderr ** 
	! Profile name 'multinode-701903-m02' is duplicated with machine name 'multinode-701903-m02' in profile 'multinode-701903'
	X Exiting due to MK_USAGE: Profile name should be unique
** /stderr **
multinode_test.go:472: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-701903-m03 --driver=docker  --container-runtime=docker
E0917 09:14:54.806336   14840 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19648-8091/.minikube/profiles/functional-229304/client.crt: no such file or directory" logger="UnhandledError"
multinode_test.go:472: (dbg) Done: out/minikube-linux-amd64 start -p multinode-701903-m03 --driver=docker  --container-runtime=docker: (23.807388015s)
multinode_test.go:479: (dbg) Run:  out/minikube-linux-amd64 node add -p multinode-701903
multinode_test.go:479: (dbg) Non-zero exit: out/minikube-linux-amd64 node add -p multinode-701903: exit status 80 (258.97809ms)
-- stdout --
	* Adding node m03 to cluster multinode-701903 as [worker]
	
	
-- /stdout --
** stderr ** 
	X Exiting due to GUEST_NODE_ADD: failed to add node: Node multinode-701903-m03 already exists in multinode-701903-m03 profile
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_node_040ea7097fd6ed71e65be9a474587f81f0ccd21d_0.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
** /stderr **
multinode_test.go:484: (dbg) Run:  out/minikube-linux-amd64 delete -p multinode-701903-m03
multinode_test.go:484: (dbg) Done: out/minikube-linux-amd64 delete -p multinode-701903-m03: (1.902926477s)
--- PASS: TestMultiNode/serial/ValidateNameConflict (26.07s)
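
Note: the exit-14 path above is minikube refusing a profile name that collides with a machine name inside an existing multi-node profile. A minimal sketch of that uniqueness rule, in Go (not minikube's actual implementation; names are taken from the log above):

// name_conflict_sketch.go — sketch of the profile-name uniqueness rule.
package main

import "fmt"

// validateProfileName reports whether name is free, given existing profiles
// mapped to their machine (node) names.
func validateProfileName(name string, profiles map[string][]string) error {
	for profile, machines := range profiles {
		if name == profile {
			return fmt.Errorf("profile name %q is duplicated with profile %q", name, profile)
		}
		for _, m := range machines {
			if name == m {
				return fmt.Errorf("profile name %q is duplicated with machine name %q in profile %q", name, m, profile)
			}
		}
	}
	return nil
}

func main() {
	existing := map[string][]string{
		"multinode-701903": {"multinode-701903", "multinode-701903-m02"},
	}
	fmt.Println(validateProfileName("multinode-701903-m02", existing)) // conflict -> error, as in the log
	fmt.Println(validateProfileName("multinode-701903-m03", existing)) // free -> <nil>
}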

TestPreload (119.34s)

=== RUN   TestPreload
preload_test.go:44: (dbg) Run:  out/minikube-linux-amd64 start -p test-preload-344056 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=docker --kubernetes-version=v1.24.4
E0917 09:16:17.870108   14840 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19648-8091/.minikube/profiles/functional-229304/client.crt: no such file or directory" logger="UnhandledError"
E0917 09:16:44.570756   14840 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19648-8091/.minikube/profiles/addons-118348/client.crt: no such file or directory" logger="UnhandledError"
preload_test.go:44: (dbg) Done: out/minikube-linux-amd64 start -p test-preload-344056 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=docker --kubernetes-version=v1.24.4: (1m26.580734514s)
preload_test.go:52: (dbg) Run:  out/minikube-linux-amd64 -p test-preload-344056 image pull gcr.io/k8s-minikube/busybox
preload_test.go:52: (dbg) Done: out/minikube-linux-amd64 -p test-preload-344056 image pull gcr.io/k8s-minikube/busybox: (1.370731955s)
preload_test.go:58: (dbg) Run:  out/minikube-linux-amd64 stop -p test-preload-344056
preload_test.go:58: (dbg) Done: out/minikube-linux-amd64 stop -p test-preload-344056: (10.769934336s)
preload_test.go:66: (dbg) Run:  out/minikube-linux-amd64 start -p test-preload-344056 --memory=2200 --alsologtostderr -v=1 --wait=true --driver=docker  --container-runtime=docker
preload_test.go:66: (dbg) Done: out/minikube-linux-amd64 start -p test-preload-344056 --memory=2200 --alsologtostderr -v=1 --wait=true --driver=docker  --container-runtime=docker: (18.400716469s)
preload_test.go:71: (dbg) Run:  out/minikube-linux-amd64 -p test-preload-344056 image list
helpers_test.go:175: Cleaning up "test-preload-344056" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p test-preload-344056
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p test-preload-344056: (2.022942062s)
--- PASS: TestPreload (119.34s)
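
Note: the preload test drives a start/pull/stop/restart cycle and then asserts that the pulled image survived the restart. A sketch of the same sequence via os/exec; every flag, path, and profile name below is taken from the log above:

// preload_flow_sketch.go — sketch of the preload test's command sequence.
package main

import (
	"fmt"
	"os/exec"
)

func run(name string, args ...string) {
	out, err := exec.Command(name, args...).CombinedOutput()
	if err != nil {
		panic(fmt.Sprintf("%s %v failed: %v\n%s", name, args, err, out))
	}
	fmt.Printf("ok: %s %v\n", name, args)
}

func main() {
	const profile = "test-preload-344056"
	mk := "out/minikube-linux-amd64"

	// 1. Start without a preloaded image tarball, pinned to an older Kubernetes.
	run(mk, "start", "-p", profile, "--memory=2200", "--wait=true", "--preload=false",
		"--driver=docker", "--container-runtime=docker", "--kubernetes-version=v1.24.4")
	// 2. Pull an extra image so the node's image store has state to preserve.
	run(mk, "-p", profile, "image", "pull", "gcr.io/k8s-minikube/busybox")
	// 3. Stop, then restart; the restart must not wipe the pulled image.
	run(mk, "stop", "-p", profile)
	run(mk, "start", "-p", profile, "--memory=2200", "--wait=true",
		"--driver=docker", "--container-runtime=docker")
	// 4. List images; the test asserts busybox is still present.
	run(mk, "-p", profile, "image", "list")
}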

TestScheduledStopUnix (93.91s)

=== RUN   TestScheduledStopUnix
scheduled_stop_test.go:128: (dbg) Run:  out/minikube-linux-amd64 start -p scheduled-stop-077639 --memory=2048 --driver=docker  --container-runtime=docker
scheduled_stop_test.go:128: (dbg) Done: out/minikube-linux-amd64 start -p scheduled-stop-077639 --memory=2048 --driver=docker  --container-runtime=docker: (21.155156262s)
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-amd64 stop -p scheduled-stop-077639 --schedule 5m
scheduled_stop_test.go:191: (dbg) Run:  out/minikube-linux-amd64 status --format={{.TimeToStop}} -p scheduled-stop-077639 -n scheduled-stop-077639
scheduled_stop_test.go:169: signal error was:  <nil>
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-amd64 stop -p scheduled-stop-077639 --schedule 15s
scheduled_stop_test.go:169: signal error was:  os: process already finished
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-amd64 stop -p scheduled-stop-077639 --cancel-scheduled
scheduled_stop_test.go:176: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p scheduled-stop-077639 -n scheduled-stop-077639
scheduled_stop_test.go:205: (dbg) Run:  out/minikube-linux-amd64 status -p scheduled-stop-077639
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-amd64 stop -p scheduled-stop-077639 --schedule 15s
scheduled_stop_test.go:169: signal error was:  os: process already finished
scheduled_stop_test.go:205: (dbg) Run:  out/minikube-linux-amd64 status -p scheduled-stop-077639
scheduled_stop_test.go:205: (dbg) Non-zero exit: out/minikube-linux-amd64 status -p scheduled-stop-077639: exit status 7 (59.696196ms)
-- stdout --
	scheduled-stop-077639
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
-- /stdout --
scheduled_stop_test.go:176: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p scheduled-stop-077639 -n scheduled-stop-077639
scheduled_stop_test.go:176: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p scheduled-stop-077639 -n scheduled-stop-077639: exit status 7 (57.321965ms)
-- stdout --
	Stopped
-- /stdout --
scheduled_stop_test.go:176: status error: exit status 7 (may be ok)
helpers_test.go:175: Cleaning up "scheduled-stop-077639" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p scheduled-stop-077639
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p scheduled-stop-077639: (1.55427975s)
--- PASS: TestScheduledStopUnix (93.91s)
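
Note: --schedule arms a stop in the future, --cancel-scheduled disarms it, and `minikube status` exits 7 once the host is stopped (hence "status error: exit status 7 (may be ok)" above). A sketch of scheduling a stop and polling for completion; the 30x5s poll budget is an assumption, the commands and exit code come from the log:

// scheduled_stop_sketch.go — sketch of scheduling a stop and polling for it.
package main

import (
	"fmt"
	"os/exec"
	"time"
)

// exitCode extracts the process exit status from an exec error.
func exitCode(err error) int {
	if err == nil {
		return 0
	}
	if ee, ok := err.(*exec.ExitError); ok {
		return ee.ExitCode()
	}
	return -1
}

func main() {
	const profile = "scheduled-stop-077639"
	mk := "out/minikube-linux-amd64"

	// Ask the running cluster to stop itself 15 seconds from now.
	if err := exec.Command(mk, "stop", "-p", profile, "--schedule", "15s").Run(); err != nil {
		panic(err)
	}
	// Poll status; exit code 7 means the control-plane host reached "Stopped".
	for i := 0; i < 30; i++ {
		err := exec.Command(mk, "status", "-p", profile, "--format={{.Host}}").Run()
		if exitCode(err) == 7 {
			fmt.Println("host stopped as scheduled")
			return
		}
		time.Sleep(5 * time.Second)
	}
	panic("timed out waiting for the scheduled stop")
}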

TestSkaffold (94.23s)

=== RUN   TestSkaffold
skaffold_test.go:59: (dbg) Run:  /tmp/skaffold.exe3629325031 version
skaffold_test.go:63: skaffold version: v2.13.2
skaffold_test.go:66: (dbg) Run:  out/minikube-linux-amd64 start -p skaffold-836835 --memory=2600 --driver=docker  --container-runtime=docker
skaffold_test.go:66: (dbg) Done: out/minikube-linux-amd64 start -p skaffold-836835 --memory=2600 --driver=docker  --container-runtime=docker: (20.666664171s)
skaffold_test.go:86: copying out/minikube-linux-amd64 to /home/jenkins/workspace/Docker_Linux_integration/out/minikube
skaffold_test.go:105: (dbg) Run:  /tmp/skaffold.exe3629325031 run --minikube-profile skaffold-836835 --kube-context skaffold-836835 --status-check=true --port-forward=false --interactive=false
E0917 09:19:54.806978   14840 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19648-8091/.minikube/profiles/functional-229304/client.crt: no such file or directory" logger="UnhandledError"
skaffold_test.go:105: (dbg) Done: /tmp/skaffold.exe3629325031 run --minikube-profile skaffold-836835 --kube-context skaffold-836835 --status-check=true --port-forward=false --interactive=false: (59.307215606s)
skaffold_test.go:111: (dbg) TestSkaffold: waiting 1m0s for pods matching "app=leeroy-app" in namespace "default" ...
helpers_test.go:344: "leeroy-app-7b8d9f8d45-728nl" [6290ec1f-68c3-4d0a-ac36-ef752500fdd2] Running
skaffold_test.go:111: (dbg) TestSkaffold: app=leeroy-app healthy within 6.003885993s
skaffold_test.go:114: (dbg) TestSkaffold: waiting 1m0s for pods matching "app=leeroy-web" in namespace "default" ...
helpers_test.go:344: "leeroy-web-64b7f5f54-j8gk8" [4e1a8ace-239b-4e77-a523-842183113c15] Running
skaffold_test.go:114: (dbg) TestSkaffold: app=leeroy-web healthy within 5.004011088s
helpers_test.go:175: Cleaning up "skaffold-836835" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p skaffold-836835
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p skaffold-836835: (2.611522615s)
--- PASS: TestSkaffold (94.23s)

TestInsufficientStorage (9.49s)

=== RUN   TestInsufficientStorage
status_test.go:50: (dbg) Run:  out/minikube-linux-amd64 start -p insufficient-storage-527404 --memory=2048 --output=json --wait=true --driver=docker  --container-runtime=docker
status_test.go:50: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p insufficient-storage-527404 --memory=2048 --output=json --wait=true --driver=docker  --container-runtime=docker: exit status 26 (7.407818772s)
-- stdout --
	{"specversion":"1.0","id":"45221489-aca9-4891-a6ff-f7287e3ab4fc","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"0","message":"[insufficient-storage-527404] minikube v1.34.0 on Ubuntu 20.04 (kvm/amd64)","name":"Initial Minikube Setup","totalsteps":"19"}}
	{"specversion":"1.0","id":"5d3c61f8-8e54-4039-8e40-82ed63075022","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_LOCATION=19648"}}
	{"specversion":"1.0","id":"69946582-1729-47f3-8f1b-67670333135e","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true"}}
	{"specversion":"1.0","id":"79814aae-584d-4afc-9725-a2fa8e2e1616","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"KUBECONFIG=/home/jenkins/minikube-integration/19648-8091/kubeconfig"}}
	{"specversion":"1.0","id":"26832df2-5e25-4fae-9605-48d9f89c3d06","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_HOME=/home/jenkins/minikube-integration/19648-8091/.minikube"}}
	{"specversion":"1.0","id":"d827590c-7e85-4353-b4d6-2d54534b38fc","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_BIN=out/minikube-linux-amd64"}}
	{"specversion":"1.0","id":"1451327d-f19f-4089-bee5-b2ebed10cf56","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_FORCE_SYSTEMD="}}
	{"specversion":"1.0","id":"3f2b55ad-f768-4606-af9e-4e3a806f6122","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_TEST_STORAGE_CAPACITY=100"}}
	{"specversion":"1.0","id":"b47e966e-dc56-40dc-861a-834bc2feaff1","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_TEST_AVAILABLE_STORAGE=19"}}
	{"specversion":"1.0","id":"5570713d-a6eb-4992-8510-b936566b21b8","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"1","message":"Using the docker driver based on user configuration","name":"Selecting Driver","totalsteps":"19"}}
	{"specversion":"1.0","id":"71758673-a7d0-407d-9aaa-3481c407b034","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"Using Docker driver with root privileges"}}
	{"specversion":"1.0","id":"b776cc68-e637-4133-8d9f-e975f7b06234","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"3","message":"Starting \"insufficient-storage-527404\" primary control-plane node in \"insufficient-storage-527404\" cluster","name":"Starting Node","totalsteps":"19"}}
	{"specversion":"1.0","id":"42bbc42d-ddee-4fef-9429-174003845894","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"5","message":"Pulling base image v0.0.45-1726358845-19644 ...","name":"Pulling Base Image","totalsteps":"19"}}
	{"specversion":"1.0","id":"9f96c723-a02a-4d21-8328-d2817329c7ba","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"8","message":"Creating docker container (CPUs=2, Memory=2048MB) ...","name":"Creating Container","totalsteps":"19"}}
	{"specversion":"1.0","id":"03b36fdf-2b58-45cf-80b0-e1429f36b6fa","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"advice":"Try one or more of the following to free up space on the device:\n\t\n\t\t\t1. Run \"docker system prune\" to remove unused Docker data (optionally with \"-a\")\n\t\t\t2. Increase the storage allocated to Docker for Desktop by clicking on:\n\t\t\t\tDocker icon \u003e Preferences \u003e Resources \u003e Disk Image Size\n\t\t\t3. Run \"minikube ssh -- docker system prune\" if using the Docker container runtime","exitcode":"26","issues":"https://github.com/kubernetes/minikube/issues/9024","message":"Docker is out of disk space! (/var is at 100% of capacity). You can pass '--force' to skip this check.","name":"RSRC_DOCKER_STORAGE","url":""}}
-- /stdout --
status_test.go:76: (dbg) Run:  out/minikube-linux-amd64 status -p insufficient-storage-527404 --output=json --layout=cluster
status_test.go:76: (dbg) Non-zero exit: out/minikube-linux-amd64 status -p insufficient-storage-527404 --output=json --layout=cluster: exit status 7 (245.24653ms)
-- stdout --
	{"Name":"insufficient-storage-527404","StatusCode":507,"StatusName":"InsufficientStorage","StatusDetail":"/var is almost out of disk space","Step":"Creating Container","StepDetail":"Creating docker container (CPUs=2, Memory=2048MB) ...","BinaryVersion":"v1.34.0","Components":{"kubeconfig":{"Name":"kubeconfig","StatusCode":500,"StatusName":"Error"}},"Nodes":[{"Name":"insufficient-storage-527404","StatusCode":507,"StatusName":"InsufficientStorage","Components":{"apiserver":{"Name":"apiserver","StatusCode":405,"StatusName":"Stopped"},"kubelet":{"Name":"kubelet","StatusCode":405,"StatusName":"Stopped"}}}]}
-- /stdout --
** stderr ** 
	E0917 09:20:36.924802  273085 status.go:417] kubeconfig endpoint: get endpoint: "insufficient-storage-527404" does not appear in /home/jenkins/minikube-integration/19648-8091/kubeconfig
** /stderr **
status_test.go:76: (dbg) Run:  out/minikube-linux-amd64 status -p insufficient-storage-527404 --output=json --layout=cluster
status_test.go:76: (dbg) Non-zero exit: out/minikube-linux-amd64 status -p insufficient-storage-527404 --output=json --layout=cluster: exit status 7 (244.136595ms)
-- stdout --
	{"Name":"insufficient-storage-527404","StatusCode":507,"StatusName":"InsufficientStorage","StatusDetail":"/var is almost out of disk space","BinaryVersion":"v1.34.0","Components":{"kubeconfig":{"Name":"kubeconfig","StatusCode":500,"StatusName":"Error"}},"Nodes":[{"Name":"insufficient-storage-527404","StatusCode":507,"StatusName":"InsufficientStorage","Components":{"apiserver":{"Name":"apiserver","StatusCode":405,"StatusName":"Stopped"},"kubelet":{"Name":"kubelet","StatusCode":405,"StatusName":"Stopped"}}}]}
-- /stdout --
** stderr ** 
	E0917 09:20:37.169845  273183 status.go:417] kubeconfig endpoint: get endpoint: "insufficient-storage-527404" does not appear in /home/jenkins/minikube-integration/19648-8091/kubeconfig
	E0917 09:20:37.178888  273183 status.go:560] unable to read event log: stat: stat /home/jenkins/minikube-integration/19648-8091/.minikube/profiles/insufficient-storage-527404/events.json: no such file or directory
** /stderr **
helpers_test.go:175: Cleaning up "insufficient-storage-527404" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p insufficient-storage-527404
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p insufficient-storage-527404: (1.594093607s)
--- PASS: TestInsufficientStorage (9.49s)
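
Note: with --output=json, minikube start emits one CloudEvent per line; the test above keys off the io.k8s.sigs.minikube.error event named RSRC_DOCKER_STORAGE (exit code 26). A sketch of consuming that stream, using two lines trimmed from the log; a real caller would scan the stdout pipe of the start command the same way:

// storage_event_sketch.go — sketch of parsing minikube's JSON event stream.
package main

import (
	"bufio"
	"encoding/json"
	"fmt"
	"strings"
)

// cloudEvent models only the fields this sketch needs.
type cloudEvent struct {
	Type string `json:"type"`
	Data struct {
		Name     string `json:"name"`
		Message  string `json:"message"`
		ExitCode string `json:"exitcode"`
	} `json:"data"`
}

func main() {
	stream := `{"specversion":"1.0","type":"io.k8s.sigs.minikube.step","data":{"name":"Creating Container","message":"Creating docker container (CPUs=2, Memory=2048MB) ..."}}
{"specversion":"1.0","type":"io.k8s.sigs.minikube.error","data":{"name":"RSRC_DOCKER_STORAGE","exitcode":"26","message":"Docker is out of disk space! (/var is at 100% of capacity). You can pass '--force' to skip this check."}}`

	sc := bufio.NewScanner(strings.NewReader(stream))
	for sc.Scan() {
		var ev cloudEvent
		if err := json.Unmarshal(sc.Bytes(), &ev); err != nil {
			continue // tolerate any non-JSON noise on the stream
		}
		if strings.HasSuffix(ev.Type, ".error") {
			fmt.Printf("fatal %s (exit %s): %s\n", ev.Data.Name, ev.Data.ExitCode, ev.Data.Message)
		}
	}
}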

TestRunningBinaryUpgrade (65.04s)

=== RUN   TestRunningBinaryUpgrade
=== PAUSE TestRunningBinaryUpgrade
=== CONT  TestRunningBinaryUpgrade
version_upgrade_test.go:120: (dbg) Run:  /tmp/minikube-v1.26.0.4063695876 start -p running-upgrade-239190 --memory=2200 --vm-driver=docker  --container-runtime=docker
version_upgrade_test.go:120: (dbg) Done: /tmp/minikube-v1.26.0.4063695876 start -p running-upgrade-239190 --memory=2200 --vm-driver=docker  --container-runtime=docker: (30.524082673s)
version_upgrade_test.go:130: (dbg) Run:  out/minikube-linux-amd64 start -p running-upgrade-239190 --memory=2200 --alsologtostderr -v=1 --driver=docker  --container-runtime=docker
version_upgrade_test.go:130: (dbg) Done: out/minikube-linux-amd64 start -p running-upgrade-239190 --memory=2200 --alsologtostderr -v=1 --driver=docker  --container-runtime=docker: (31.93568893s)
helpers_test.go:175: Cleaning up "running-upgrade-239190" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p running-upgrade-239190
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p running-upgrade-239190: (2.171128163s)
--- PASS: TestRunningBinaryUpgrade (65.04s)

TestKubernetesUpgrade (344.5s)

=== RUN   TestKubernetesUpgrade
=== PAUSE TestKubernetesUpgrade
=== CONT  TestKubernetesUpgrade
version_upgrade_test.go:222: (dbg) Run:  out/minikube-linux-amd64 start -p kubernetes-upgrade-119864 --memory=2200 --kubernetes-version=v1.20.0 --alsologtostderr -v=1 --driver=docker  --container-runtime=docker
version_upgrade_test.go:222: (dbg) Done: out/minikube-linux-amd64 start -p kubernetes-upgrade-119864 --memory=2200 --kubernetes-version=v1.20.0 --alsologtostderr -v=1 --driver=docker  --container-runtime=docker: (45.307722403s)
version_upgrade_test.go:227: (dbg) Run:  out/minikube-linux-amd64 stop -p kubernetes-upgrade-119864
version_upgrade_test.go:227: (dbg) Done: out/minikube-linux-amd64 stop -p kubernetes-upgrade-119864: (10.677698371s)
version_upgrade_test.go:232: (dbg) Run:  out/minikube-linux-amd64 -p kubernetes-upgrade-119864 status --format={{.Host}}
version_upgrade_test.go:232: (dbg) Non-zero exit: out/minikube-linux-amd64 -p kubernetes-upgrade-119864 status --format={{.Host}}: exit status 7 (58.4888ms)
-- stdout --
	Stopped
-- /stdout --
version_upgrade_test.go:234: status error: exit status 7 (may be ok)
version_upgrade_test.go:243: (dbg) Run:  out/minikube-linux-amd64 start -p kubernetes-upgrade-119864 --memory=2200 --kubernetes-version=v1.31.1 --alsologtostderr -v=1 --driver=docker  --container-runtime=docker
E0917 09:21:44.570934   14840 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19648-8091/.minikube/profiles/addons-118348/client.crt: no such file or directory" logger="UnhandledError"
version_upgrade_test.go:243: (dbg) Done: out/minikube-linux-amd64 start -p kubernetes-upgrade-119864 --memory=2200 --kubernetes-version=v1.31.1 --alsologtostderr -v=1 --driver=docker  --container-runtime=docker: (4m27.042824878s)
version_upgrade_test.go:248: (dbg) Run:  kubectl --context kubernetes-upgrade-119864 version --output=json
version_upgrade_test.go:267: Attempting to downgrade Kubernetes (should fail)
version_upgrade_test.go:269: (dbg) Run:  out/minikube-linux-amd64 start -p kubernetes-upgrade-119864 --memory=2200 --kubernetes-version=v1.20.0 --driver=docker  --container-runtime=docker
version_upgrade_test.go:269: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p kubernetes-upgrade-119864 --memory=2200 --kubernetes-version=v1.20.0 --driver=docker  --container-runtime=docker: exit status 106 (58.23583ms)
-- stdout --
	* [kubernetes-upgrade-119864] minikube v1.34.0 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=19648
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/19648-8091/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/19648-8091/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
-- /stdout --
** stderr ** 
	X Exiting due to K8S_DOWNGRADE_UNSUPPORTED: Unable to safely downgrade existing Kubernetes v1.31.1 cluster to v1.20.0
	* Suggestion: 
	
	    1) Recreate the cluster with Kubernetes 1.20.0, by running:
	    
	    minikube delete -p kubernetes-upgrade-119864
	    minikube start -p kubernetes-upgrade-119864 --kubernetes-version=v1.20.0
	    
	    2) Create a second cluster with Kubernetes 1.20.0, by running:
	    
	    minikube start -p kubernetes-upgrade-1198642 --kubernetes-version=v1.20.0
	    
	    3) Use the existing cluster at version Kubernetes 1.31.1, by running:
	    
	    minikube start -p kubernetes-upgrade-119864 --kubernetes-version=v1.31.1
** /stderr **
version_upgrade_test.go:273: Attempting restart after unsuccessful downgrade
version_upgrade_test.go:275: (dbg) Run:  out/minikube-linux-amd64 start -p kubernetes-upgrade-119864 --memory=2200 --kubernetes-version=v1.31.1 --alsologtostderr -v=1 --driver=docker  --container-runtime=docker
version_upgrade_test.go:275: (dbg) Done: out/minikube-linux-amd64 start -p kubernetes-upgrade-119864 --memory=2200 --kubernetes-version=v1.31.1 --alsologtostderr -v=1 --driver=docker  --container-runtime=docker: (19.086226059s)
helpers_test.go:175: Cleaning up "kubernetes-upgrade-119864" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p kubernetes-upgrade-119864
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p kubernetes-upgrade-119864: (2.213489477s)
--- PASS: TestKubernetesUpgrade (344.50s)
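
Note: exit status 106 (K8S_DOWNGRADE_UNSUPPORTED) above comes from a version gate: restarting an existing cluster at the same or a newer Kubernetes version is allowed, while an older one is refused. A dependency-free sketch of such a gate (minikube itself uses proper semver handling; this is not its code and assumes well-formed "vMAJOR.MINOR.PATCH" input):

// downgrade_gate_sketch.go — sketch of the downgrade-refusal rule.
package main

import "fmt"

// older reports whether a < b for plain "vMAJOR.MINOR.PATCH" strings.
func older(a, b string) bool {
	var am, an, ap, bm, bn, bp int
	fmt.Sscanf(a, "v%d.%d.%d", &am, &an, &ap)
	fmt.Sscanf(b, "v%d.%d.%d", &bm, &bn, &bp)
	if am != bm {
		return am < bm
	}
	if an != bn {
		return an < bn
	}
	return ap < bp
}

func main() {
	existing, requested := "v1.31.1", "v1.20.0"
	if older(requested, existing) {
		fmt.Printf("K8S_DOWNGRADE_UNSUPPORTED: cannot downgrade %s cluster to %s; delete and recreate instead\n",
			existing, requested)
		return
	}
	fmt.Println("upgrade or same-version restart is allowed")
}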

TestMissingContainerUpgrade (136.93s)

=== RUN   TestMissingContainerUpgrade
=== PAUSE TestMissingContainerUpgrade
=== CONT  TestMissingContainerUpgrade
version_upgrade_test.go:309: (dbg) Run:  /tmp/minikube-v1.26.0.477329995 start -p missing-upgrade-938416 --memory=2200 --driver=docker  --container-runtime=docker
version_upgrade_test.go:309: (dbg) Done: /tmp/minikube-v1.26.0.477329995 start -p missing-upgrade-938416 --memory=2200 --driver=docker  --container-runtime=docker: (1m11.858558666s)
version_upgrade_test.go:318: (dbg) Run:  docker stop missing-upgrade-938416
version_upgrade_test.go:318: (dbg) Done: docker stop missing-upgrade-938416: (12.804684843s)
version_upgrade_test.go:323: (dbg) Run:  docker rm missing-upgrade-938416
version_upgrade_test.go:329: (dbg) Run:  out/minikube-linux-amd64 start -p missing-upgrade-938416 --memory=2200 --alsologtostderr -v=1 --driver=docker  --container-runtime=docker
version_upgrade_test.go:329: (dbg) Done: out/minikube-linux-amd64 start -p missing-upgrade-938416 --memory=2200 --alsologtostderr -v=1 --driver=docker  --container-runtime=docker: (49.690508717s)
helpers_test.go:175: Cleaning up "missing-upgrade-938416" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p missing-upgrade-938416
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p missing-upgrade-938416: (2.091070407s)
--- PASS: TestMissingContainerUpgrade (136.93s)

TestStoppedBinaryUpgrade/Setup (0.44s)

=== RUN   TestStoppedBinaryUpgrade/Setup
--- PASS: TestStoppedBinaryUpgrade/Setup (0.44s)

TestStoppedBinaryUpgrade/Upgrade (120.68s)

=== RUN   TestStoppedBinaryUpgrade/Upgrade
version_upgrade_test.go:183: (dbg) Run:  /tmp/minikube-v1.26.0.2687238869 start -p stopped-upgrade-177242 --memory=2200 --vm-driver=docker  --container-runtime=docker
version_upgrade_test.go:183: (dbg) Done: /tmp/minikube-v1.26.0.2687238869 start -p stopped-upgrade-177242 --memory=2200 --vm-driver=docker  --container-runtime=docker: (1m17.473851717s)
version_upgrade_test.go:192: (dbg) Run:  /tmp/minikube-v1.26.0.2687238869 -p stopped-upgrade-177242 stop
version_upgrade_test.go:192: (dbg) Done: /tmp/minikube-v1.26.0.2687238869 -p stopped-upgrade-177242 stop: (10.862487702s)
version_upgrade_test.go:198: (dbg) Run:  out/minikube-linux-amd64 start -p stopped-upgrade-177242 --memory=2200 --alsologtostderr -v=1 --driver=docker  --container-runtime=docker
version_upgrade_test.go:198: (dbg) Done: out/minikube-linux-amd64 start -p stopped-upgrade-177242 --memory=2200 --alsologtostderr -v=1 --driver=docker  --container-runtime=docker: (32.346202823s)
--- PASS: TestStoppedBinaryUpgrade/Upgrade (120.68s)
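
Note: the stopped-binary upgrade path is: an old release binary creates the cluster, the old binary stops it, and the binary under test restarts the same profile, adopting the on-disk state. A sketch of that sequence; the binary paths, flags, and profile name are taken from the log above (the old binary still uses the deprecated --vm-driver spelling):

// binary_upgrade_sketch.go — sketch of the stopped-binary upgrade sequence.
package main

import (
	"fmt"
	"os/exec"
)

func must(cmd *exec.Cmd) {
	if out, err := cmd.CombinedOutput(); err != nil {
		panic(fmt.Sprintf("%v: %v\n%s", cmd.Args, err, out))
	}
}

func main() {
	const profile = "stopped-upgrade-177242"
	oldBin := "/tmp/minikube-v1.26.0.2687238869" // released binary
	newBin := "out/minikube-linux-amd64"         // binary under test

	must(exec.Command(oldBin, "start", "-p", profile, "--memory=2200",
		"--vm-driver=docker", "--container-runtime=docker"))
	must(exec.Command(oldBin, "-p", profile, "stop"))
	// The new binary must adopt the stopped cluster's state cleanly.
	must(exec.Command(newBin, "start", "-p", profile, "--memory=2200",
		"--driver=docker", "--container-runtime=docker"))
}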

TestStoppedBinaryUpgrade/MinikubeLogs (1.21s)

=== RUN   TestStoppedBinaryUpgrade/MinikubeLogs
version_upgrade_test.go:206: (dbg) Run:  out/minikube-linux-amd64 logs -p stopped-upgrade-177242
version_upgrade_test.go:206: (dbg) Done: out/minikube-linux-amd64 logs -p stopped-upgrade-177242: (1.212657763s)
--- PASS: TestStoppedBinaryUpgrade/MinikubeLogs (1.21s)

TestPause/serial/Start (40.36s)

=== RUN   TestPause/serial/Start
pause_test.go:80: (dbg) Run:  out/minikube-linux-amd64 start -p pause-753491 --memory=2048 --install-addons=false --wait=all --driver=docker  --container-runtime=docker
pause_test.go:80: (dbg) Done: out/minikube-linux-amd64 start -p pause-753491 --memory=2048 --install-addons=false --wait=all --driver=docker  --container-runtime=docker: (40.364077028s)
--- PASS: TestPause/serial/Start (40.36s)

TestNoKubernetes/serial/StartNoK8sWithVersion (0.07s)

=== RUN   TestNoKubernetes/serial/StartNoK8sWithVersion
no_kubernetes_test.go:83: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-875940 --no-kubernetes --kubernetes-version=1.20 --driver=docker  --container-runtime=docker
no_kubernetes_test.go:83: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p NoKubernetes-875940 --no-kubernetes --kubernetes-version=1.20 --driver=docker  --container-runtime=docker: exit status 14 (66.35398ms)
-- stdout --
	* [NoKubernetes-875940] minikube v1.34.0 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=19648
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/19648-8091/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/19648-8091/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
-- /stdout --
** stderr ** 
	X Exiting due to MK_USAGE: cannot specify --kubernetes-version with --no-kubernetes,
	to unset a global config run:
	
	$ minikube config unset kubernetes-version
** /stderr **
--- PASS: TestNoKubernetes/serial/StartNoK8sWithVersion (0.07s)
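
Note: exit status 14 (MK_USAGE) above is plain flag validation: --no-kubernetes and an explicit --kubernetes-version are contradictory, so start refuses before doing any cluster work. A sketch of the check (not minikube's actual code):

// flag_conflict_sketch.go — sketch of the --no-kubernetes flag validation.
package main

import (
	"errors"
	"fmt"
)

// validateStartFlags rejects the contradictory flag combination.
func validateStartFlags(noKubernetes bool, kubernetesVersion string) error {
	if noKubernetes && kubernetesVersion != "" {
		return errors.New("cannot specify --kubernetes-version with --no-kubernetes")
	}
	return nil
}

func main() {
	fmt.Println(validateStartFlags(true, "1.20")) // rejected, as in the log
	fmt.Println(validateStartFlags(true, ""))     // fine: run without Kubernetes
}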

TestNoKubernetes/serial/StartWithK8s (26.23s)

=== RUN   TestNoKubernetes/serial/StartWithK8s
no_kubernetes_test.go:95: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-875940 --driver=docker  --container-runtime=docker
no_kubernetes_test.go:95: (dbg) Done: out/minikube-linux-amd64 start -p NoKubernetes-875940 --driver=docker  --container-runtime=docker: (25.905923465s)
no_kubernetes_test.go:200: (dbg) Run:  out/minikube-linux-amd64 -p NoKubernetes-875940 status -o json
--- PASS: TestNoKubernetes/serial/StartWithK8s (26.23s)

TestNoKubernetes/serial/StartWithStopK8s (17.24s)

=== RUN   TestNoKubernetes/serial/StartWithStopK8s
no_kubernetes_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-875940 --no-kubernetes --driver=docker  --container-runtime=docker
no_kubernetes_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p NoKubernetes-875940 --no-kubernetes --driver=docker  --container-runtime=docker: (15.35176826s)
no_kubernetes_test.go:200: (dbg) Run:  out/minikube-linux-amd64 -p NoKubernetes-875940 status -o json
no_kubernetes_test.go:200: (dbg) Non-zero exit: out/minikube-linux-amd64 -p NoKubernetes-875940 status -o json: exit status 2 (251.67894ms)
-- stdout --
	{"Name":"NoKubernetes-875940","Host":"Running","Kubelet":"Stopped","APIServer":"Stopped","Kubeconfig":"Configured","Worker":false}
-- /stdout --
no_kubernetes_test.go:124: (dbg) Run:  out/minikube-linux-amd64 delete -p NoKubernetes-875940
no_kubernetes_test.go:124: (dbg) Done: out/minikube-linux-amd64 delete -p NoKubernetes-875940: (1.63223892s)
--- PASS: TestNoKubernetes/serial/StartWithStopK8s (17.24s)

TestPause/serial/SecondStartNoReconfiguration (35.63s)

=== RUN   TestPause/serial/SecondStartNoReconfiguration
pause_test.go:92: (dbg) Run:  out/minikube-linux-amd64 start -p pause-753491 --alsologtostderr -v=1 --driver=docker  --container-runtime=docker
pause_test.go:92: (dbg) Done: out/minikube-linux-amd64 start -p pause-753491 --alsologtostderr -v=1 --driver=docker  --container-runtime=docker: (35.618707666s)
--- PASS: TestPause/serial/SecondStartNoReconfiguration (35.63s)

TestNoKubernetes/serial/Start (8.71s)

=== RUN   TestNoKubernetes/serial/Start
no_kubernetes_test.go:136: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-875940 --no-kubernetes --driver=docker  --container-runtime=docker
no_kubernetes_test.go:136: (dbg) Done: out/minikube-linux-amd64 start -p NoKubernetes-875940 --no-kubernetes --driver=docker  --container-runtime=docker: (8.707689523s)
--- PASS: TestNoKubernetes/serial/Start (8.71s)

TestNoKubernetes/serial/VerifyK8sNotRunning (0.28s)

=== RUN   TestNoKubernetes/serial/VerifyK8sNotRunning
no_kubernetes_test.go:147: (dbg) Run:  out/minikube-linux-amd64 ssh -p NoKubernetes-875940 "sudo systemctl is-active --quiet service kubelet"
no_kubernetes_test.go:147: (dbg) Non-zero exit: out/minikube-linux-amd64 ssh -p NoKubernetes-875940 "sudo systemctl is-active --quiet service kubelet": exit status 1 (283.930094ms)
** stderr ** 
	ssh: Process exited with status 3
** /stderr **
--- PASS: TestNoKubernetes/serial/VerifyK8sNotRunning (0.28s)
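
Note: the inner `systemctl is-active --quiet` probe exits 3 for an inactive unit ("ssh: Process exited with status 3" above), and the test treats the non-zero exit as success because kubelet must not be running in a --no-kubernetes profile. A sketch of the same probe, meant to run inside the node and written without the log's extra "service" token:

// unit_active_sketch.go — sketch of the kubelet-is-down check.
package main

import (
	"fmt"
	"os/exec"
)

// unitActive reports whether a systemd unit is active; --quiet suppresses
// output, so only the exit status carries the answer (0 active, non-zero not).
func unitActive(unit string) bool {
	return exec.Command("systemctl", "is-active", "--quiet", unit).Run() == nil
}

func main() {
	if unitActive("kubelet") {
		fmt.Println("kubelet is running — unexpected for a --no-kubernetes profile")
	} else {
		fmt.Println("kubelet is not running, as this test expects")
	}
}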

TestNoKubernetes/serial/ProfileList (16s)

=== RUN   TestNoKubernetes/serial/ProfileList
no_kubernetes_test.go:169: (dbg) Run:  out/minikube-linux-amd64 profile list
no_kubernetes_test.go:169: (dbg) Done: out/minikube-linux-amd64 profile list: (15.225491741s)
no_kubernetes_test.go:179: (dbg) Run:  out/minikube-linux-amd64 profile list --output=json
--- PASS: TestNoKubernetes/serial/ProfileList (16.00s)

TestPause/serial/Pause (0.48s)

=== RUN   TestPause/serial/Pause
pause_test.go:110: (dbg) Run:  out/minikube-linux-amd64 pause -p pause-753491 --alsologtostderr -v=5
--- PASS: TestPause/serial/Pause (0.48s)

TestPause/serial/VerifyStatus (0.3s)

=== RUN   TestPause/serial/VerifyStatus
status_test.go:76: (dbg) Run:  out/minikube-linux-amd64 status -p pause-753491 --output=json --layout=cluster
status_test.go:76: (dbg) Non-zero exit: out/minikube-linux-amd64 status -p pause-753491 --output=json --layout=cluster: exit status 2 (295.485591ms)
-- stdout --
	{"Name":"pause-753491","StatusCode":418,"StatusName":"Paused","Step":"Done","StepDetail":"* Paused 12 containers in: kube-system, kubernetes-dashboard, storage-gluster, istio-operator","BinaryVersion":"v1.34.0","Components":{"kubeconfig":{"Name":"kubeconfig","StatusCode":200,"StatusName":"OK"}},"Nodes":[{"Name":"pause-753491","StatusCode":200,"StatusName":"OK","Components":{"apiserver":{"Name":"apiserver","StatusCode":418,"StatusName":"Paused"},"kubelet":{"Name":"kubelet","StatusCode":405,"StatusName":"Stopped"}}}]}
-- /stdout --
--- PASS: TestPause/serial/VerifyStatus (0.30s)
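
Note: --layout=cluster reports HTTP-flavored status codes, all visible in the outputs above: 200 OK, 405 Stopped, 418 Paused, 500 Error, 507 InsufficientStorage. A sketch of decoding that JSON; the payload below is abbreviated from the pause-753491 output:

// cluster_status_sketch.go — sketch of decoding the --layout=cluster JSON.
package main

import (
	"encoding/json"
	"fmt"
)

type component struct {
	Name       string `json:"Name"`
	StatusCode int    `json:"StatusCode"`
	StatusName string `json:"StatusName"`
}

type clusterStatus struct {
	Name       string `json:"Name"`
	StatusCode int    `json:"StatusCode"`
	StatusName string `json:"StatusName"`
	Nodes      []struct {
		Name       string               `json:"Name"`
		Components map[string]component `json:"Components"`
	} `json:"Nodes"`
}

func main() {
	raw := `{"Name":"pause-753491","StatusCode":418,"StatusName":"Paused",
	  "Nodes":[{"Name":"pause-753491","Components":{
	    "apiserver":{"Name":"apiserver","StatusCode":418,"StatusName":"Paused"},
	    "kubelet":{"Name":"kubelet","StatusCode":405,"StatusName":"Stopped"}}}]}`

	var st clusterStatus
	if err := json.Unmarshal([]byte(raw), &st); err != nil {
		panic(err)
	}
	fmt.Printf("cluster %s: %s (%d)\n", st.Name, st.StatusName, st.StatusCode)
	for _, n := range st.Nodes {
		for _, c := range n.Components {
			fmt.Printf("  %s/%s: %s\n", n.Name, c.Name, c.StatusName)
		}
	}
}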

TestPause/serial/Unpause (0.44s)

=== RUN   TestPause/serial/Unpause
pause_test.go:121: (dbg) Run:  out/minikube-linux-amd64 unpause -p pause-753491 --alsologtostderr -v=5
--- PASS: TestPause/serial/Unpause (0.44s)

TestPause/serial/PauseAgain (0.69s)

=== RUN   TestPause/serial/PauseAgain
pause_test.go:110: (dbg) Run:  out/minikube-linux-amd64 pause -p pause-753491 --alsologtostderr -v=5
--- PASS: TestPause/serial/PauseAgain (0.69s)

TestNoKubernetes/serial/Stop (1.26s)

=== RUN   TestNoKubernetes/serial/Stop
no_kubernetes_test.go:158: (dbg) Run:  out/minikube-linux-amd64 stop -p NoKubernetes-875940
no_kubernetes_test.go:158: (dbg) Done: out/minikube-linux-amd64 stop -p NoKubernetes-875940: (1.263399949s)
--- PASS: TestNoKubernetes/serial/Stop (1.26s)

TestPause/serial/DeletePaused (2.12s)

=== RUN   TestPause/serial/DeletePaused
pause_test.go:132: (dbg) Run:  out/minikube-linux-amd64 delete -p pause-753491 --alsologtostderr -v=5
pause_test.go:132: (dbg) Done: out/minikube-linux-amd64 delete -p pause-753491 --alsologtostderr -v=5: (2.121927351s)
--- PASS: TestPause/serial/DeletePaused (2.12s)

TestNoKubernetes/serial/StartNoArgs (7.67s)

=== RUN   TestNoKubernetes/serial/StartNoArgs
no_kubernetes_test.go:191: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-875940 --driver=docker  --container-runtime=docker
no_kubernetes_test.go:191: (dbg) Done: out/minikube-linux-amd64 start -p NoKubernetes-875940 --driver=docker  --container-runtime=docker: (7.667420838s)
--- PASS: TestNoKubernetes/serial/StartNoArgs (7.67s)

TestPause/serial/VerifyDeletedResources (0.61s)

=== RUN   TestPause/serial/VerifyDeletedResources
pause_test.go:142: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
pause_test.go:168: (dbg) Run:  docker ps -a
pause_test.go:173: (dbg) Run:  docker volume inspect pause-753491
pause_test.go:173: (dbg) Non-zero exit: docker volume inspect pause-753491: exit status 1 (15.036839ms)
-- stdout --
	[]
-- /stdout --
** stderr ** 
	Error response from daemon: get pause-753491: no such volume
** /stderr **
pause_test.go:178: (dbg) Run:  docker network ls
--- PASS: TestPause/serial/VerifyDeletedResources (0.61s)
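
Note: the delete verification above relies on `docker volume inspect` failing ("no such volume", exit 1) once the profile's volume is gone. A sketch of the same check:

// deleted_volume_sketch.go — sketch of verifying a profile's volume is gone.
package main

import (
	"fmt"
	"os/exec"
)

// volumeGone reports whether the named docker volume no longer exists;
// inspect succeeds (exit 0) only if the volume is still present.
func volumeGone(name string) bool {
	return exec.Command("docker", "volume", "inspect", name).Run() != nil
}

func main() {
	const profile = "pause-753491" // from the log above
	if volumeGone(profile) {
		fmt.Println("volume removed as expected")
	} else {
		fmt.Println("volume still present — delete leaked resources")
	}
}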

TestNoKubernetes/serial/VerifyK8sNotRunningSecond (0.29s)

=== RUN   TestNoKubernetes/serial/VerifyK8sNotRunningSecond
no_kubernetes_test.go:147: (dbg) Run:  out/minikube-linux-amd64 ssh -p NoKubernetes-875940 "sudo systemctl is-active --quiet service kubelet"
no_kubernetes_test.go:147: (dbg) Non-zero exit: out/minikube-linux-amd64 ssh -p NoKubernetes-875940 "sudo systemctl is-active --quiet service kubelet": exit status 1 (290.49809ms)
** stderr ** 
	ssh: Process exited with status 3
** /stderr **
--- PASS: TestNoKubernetes/serial/VerifyK8sNotRunningSecond (0.29s)

TestStartStop/group/old-k8s-version/serial/FirstStart (128.86s)

=== RUN   TestStartStop/group/old-k8s-version/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-linux-amd64 start -p old-k8s-version-563392 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=docker --kubernetes-version=v1.20.0
E0917 09:24:54.806243   14840 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19648-8091/.minikube/profiles/functional-229304/client.crt: no such file or directory" logger="UnhandledError"
start_stop_delete_test.go:186: (dbg) Done: out/minikube-linux-amd64 start -p old-k8s-version-563392 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=docker --kubernetes-version=v1.20.0: (2m8.863195551s)
--- PASS: TestStartStop/group/old-k8s-version/serial/FirstStart (128.86s)

TestStartStop/group/no-preload/serial/FirstStart (69.32s)

=== RUN   TestStartStop/group/no-preload/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-linux-amd64 start -p no-preload-289048 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=docker --kubernetes-version=v1.31.1
E0917 09:25:15.653348   14840 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19648-8091/.minikube/profiles/skaffold-836835/client.crt: no such file or directory" logger="UnhandledError"
E0917 09:25:15.659812   14840 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19648-8091/.minikube/profiles/skaffold-836835/client.crt: no such file or directory" logger="UnhandledError"
E0917 09:25:15.671266   14840 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19648-8091/.minikube/profiles/skaffold-836835/client.crt: no such file or directory" logger="UnhandledError"
E0917 09:25:15.692640   14840 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19648-8091/.minikube/profiles/skaffold-836835/client.crt: no such file or directory" logger="UnhandledError"
E0917 09:25:15.734059   14840 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19648-8091/.minikube/profiles/skaffold-836835/client.crt: no such file or directory" logger="UnhandledError"
E0917 09:25:15.815585   14840 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19648-8091/.minikube/profiles/skaffold-836835/client.crt: no such file or directory" logger="UnhandledError"
E0917 09:25:15.977749   14840 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19648-8091/.minikube/profiles/skaffold-836835/client.crt: no such file or directory" logger="UnhandledError"
E0917 09:25:16.299478   14840 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19648-8091/.minikube/profiles/skaffold-836835/client.crt: no such file or directory" logger="UnhandledError"
E0917 09:25:16.941046   14840 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19648-8091/.minikube/profiles/skaffold-836835/client.crt: no such file or directory" logger="UnhandledError"
E0917 09:25:18.222719   14840 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19648-8091/.minikube/profiles/skaffold-836835/client.crt: no such file or directory" logger="UnhandledError"
E0917 09:25:20.784999   14840 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19648-8091/.minikube/profiles/skaffold-836835/client.crt: no such file or directory" logger="UnhandledError"
E0917 09:25:25.906694   14840 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19648-8091/.minikube/profiles/skaffold-836835/client.crt: no such file or directory" logger="UnhandledError"
E0917 09:25:36.148129   14840 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19648-8091/.minikube/profiles/skaffold-836835/client.crt: no such file or directory" logger="UnhandledError"
E0917 09:25:56.629644   14840 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19648-8091/.minikube/profiles/skaffold-836835/client.crt: no such file or directory" logger="UnhandledError"
start_stop_delete_test.go:186: (dbg) Done: out/minikube-linux-amd64 start -p no-preload-289048 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=docker --kubernetes-version=v1.31.1: (1m9.31879162s)
--- PASS: TestStartStop/group/no-preload/serial/FirstStart (69.32s)

TestStartStop/group/no-preload/serial/DeployApp (10.27s)

=== RUN   TestStartStop/group/no-preload/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context no-preload-289048 create -f testdata/busybox.yaml
start_stop_delete_test.go:196: (dbg) TestStartStop/group/no-preload/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:344: "busybox" [5543fb18-a064-443a-894d-bf725e969458] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:344: "busybox" [5543fb18-a064-443a-894d-bf725e969458] Running
start_stop_delete_test.go:196: (dbg) TestStartStop/group/no-preload/serial/DeployApp: integration-test=busybox healthy within 10.004083311s
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context no-preload-289048 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/no-preload/serial/DeployApp (10.27s)

TestStartStop/group/embed-certs/serial/FirstStart (64.32s)

=== RUN   TestStartStop/group/embed-certs/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-linux-amd64 start -p embed-certs-546017 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=docker --kubernetes-version=v1.31.1
start_stop_delete_test.go:186: (dbg) Done: out/minikube-linux-amd64 start -p embed-certs-546017 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=docker --kubernetes-version=v1.31.1: (1m4.315213761s)
--- PASS: TestStartStop/group/embed-certs/serial/FirstStart (64.32s)

TestStartStop/group/no-preload/serial/EnableAddonWhileActive (0.82s)

=== RUN   TestStartStop/group/no-preload/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p no-preload-289048 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context no-preload-289048 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/no-preload/serial/EnableAddonWhileActive (0.82s)

TestStartStop/group/no-preload/serial/Stop (10.77s)

=== RUN   TestStartStop/group/no-preload/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-linux-amd64 stop -p no-preload-289048 --alsologtostderr -v=3
start_stop_delete_test.go:228: (dbg) Done: out/minikube-linux-amd64 stop -p no-preload-289048 --alsologtostderr -v=3: (10.765618429s)
--- PASS: TestStartStop/group/no-preload/serial/Stop (10.77s)

TestStartStop/group/no-preload/serial/EnableAddonAfterStop (0.21s)

=== RUN   TestStartStop/group/no-preload/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-289048 -n no-preload-289048
E0917 09:26:37.591510   14840 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19648-8091/.minikube/profiles/skaffold-836835/client.crt: no such file or directory" logger="UnhandledError"
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-289048 -n no-preload-289048: exit status 7 (110.009762ms)
-- stdout --
	Stopped
-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p no-preload-289048 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/no-preload/serial/EnableAddonAfterStop (0.21s)

TestStartStop/group/no-preload/serial/SecondStart (262.74s)

=== RUN   TestStartStop/group/no-preload/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-linux-amd64 start -p no-preload-289048 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=docker --kubernetes-version=v1.31.1
E0917 09:26:44.570052   14840 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19648-8091/.minikube/profiles/addons-118348/client.crt: no such file or directory" logger="UnhandledError"
start_stop_delete_test.go:256: (dbg) Done: out/minikube-linux-amd64 start -p no-preload-289048 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=docker --kubernetes-version=v1.31.1: (4m22.413683523s)
start_stop_delete_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-289048 -n no-preload-289048
--- PASS: TestStartStop/group/no-preload/serial/SecondStart (262.74s)

TestStartStop/group/old-k8s-version/serial/DeployApp (9.4s)

=== RUN   TestStartStop/group/old-k8s-version/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context old-k8s-version-563392 create -f testdata/busybox.yaml
start_stop_delete_test.go:196: (dbg) TestStartStop/group/old-k8s-version/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:344: "busybox" [cea6a4aa-a547-4017-878a-b4f34d27fe53] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:344: "busybox" [cea6a4aa-a547-4017-878a-b4f34d27fe53] Running
start_stop_delete_test.go:196: (dbg) TestStartStop/group/old-k8s-version/serial/DeployApp: integration-test=busybox healthy within 9.003190038s
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context old-k8s-version-563392 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/old-k8s-version/serial/DeployApp (9.40s)

TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive (0.74s)

=== RUN   TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p old-k8s-version-563392 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context old-k8s-version-563392 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive (0.74s)

TestStartStop/group/old-k8s-version/serial/Stop (10.79s)

=== RUN   TestStartStop/group/old-k8s-version/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-linux-amd64 stop -p old-k8s-version-563392 --alsologtostderr -v=3
start_stop_delete_test.go:228: (dbg) Done: out/minikube-linux-amd64 stop -p old-k8s-version-563392 --alsologtostderr -v=3: (10.794422549s)
--- PASS: TestStartStop/group/old-k8s-version/serial/Stop (10.79s)

TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop (0.16s)

=== RUN   TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-563392 -n old-k8s-version-563392
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-563392 -n old-k8s-version-563392: exit status 7 (60.772753ms)
-- stdout --
	Stopped
-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p old-k8s-version-563392 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop (0.16s)

TestStartStop/group/old-k8s-version/serial/SecondStart (23.3s)
=== RUN   TestStartStop/group/old-k8s-version/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-linux-amd64 start -p old-k8s-version-563392 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=docker --kubernetes-version=v1.20.0
start_stop_delete_test.go:256: (dbg) Done: out/minikube-linux-amd64 start -p old-k8s-version-563392 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=docker --kubernetes-version=v1.20.0: (23.016203351s)
start_stop_delete_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-563392 -n old-k8s-version-563392
--- PASS: TestStartStop/group/old-k8s-version/serial/SecondStart (23.30s)

TestStartStop/group/default-k8s-diff-port/serial/FirstStart (36.96s)
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-linux-amd64 start -p default-k8s-diff-port-191579 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=docker --kubernetes-version=v1.31.1
start_stop_delete_test.go:186: (dbg) Done: out/minikube-linux-amd64 start -p default-k8s-diff-port-191579 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=docker --kubernetes-version=v1.31.1: (36.955558318s)
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/FirstStart (36.96s)

TestStartStop/group/embed-certs/serial/DeployApp (8.38s)
=== RUN   TestStartStop/group/embed-certs/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context embed-certs-546017 create -f testdata/busybox.yaml
start_stop_delete_test.go:196: (dbg) TestStartStop/group/embed-certs/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:344: "busybox" [ca066469-6b0d-4539-b0c8-157024243862] Pending
helpers_test.go:344: "busybox" [ca066469-6b0d-4539-b0c8-157024243862] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:344: "busybox" [ca066469-6b0d-4539-b0c8-157024243862] Running
start_stop_delete_test.go:196: (dbg) TestStartStop/group/embed-certs/serial/DeployApp: integration-test=busybox healthy within 8.004493827s
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context embed-certs-546017 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/embed-certs/serial/DeployApp (8.38s)

TestStartStop/group/embed-certs/serial/EnableAddonWhileActive (0.87s)
=== RUN   TestStartStop/group/embed-certs/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p embed-certs-546017 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context embed-certs-546017 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/embed-certs/serial/EnableAddonWhileActive (0.87s)

TestStartStop/group/embed-certs/serial/Stop (10.79s)
=== RUN   TestStartStop/group/embed-certs/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-linux-amd64 stop -p embed-certs-546017 --alsologtostderr -v=3
start_stop_delete_test.go:228: (dbg) Done: out/minikube-linux-amd64 stop -p embed-certs-546017 --alsologtostderr -v=3: (10.791458504s)
--- PASS: TestStartStop/group/embed-certs/serial/Stop (10.79s)

TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop (29s)
=== RUN   TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop
start_stop_delete_test.go:274: (dbg) TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-cd95d586-dmqrz" [374e40d4-95c3-49f8-a9ce-72aace6d2b83] Pending / Ready:ContainersNotReady (containers with unready status: [kubernetes-dashboard]) / ContainersReady:ContainersNotReady (containers with unready status: [kubernetes-dashboard])
helpers_test.go:344: "kubernetes-dashboard-cd95d586-dmqrz" [374e40d4-95c3-49f8-a9ce-72aace6d2b83] Running
start_stop_delete_test.go:274: (dbg) TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 29.003200121s
--- PASS: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop (29.00s)

TestStartStop/group/embed-certs/serial/EnableAddonAfterStop (0.22s)
=== RUN   TestStartStop/group/embed-certs/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-546017 -n embed-certs-546017
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-546017 -n embed-certs-546017: exit status 7 (107.93561ms)

-- stdout --
	Stopped
-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p embed-certs-546017 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/embed-certs/serial/EnableAddonAfterStop (0.22s)

TestStartStop/group/embed-certs/serial/SecondStart (271.11s)
=== RUN   TestStartStop/group/embed-certs/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-linux-amd64 start -p embed-certs-546017 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=docker --kubernetes-version=v1.31.1
start_stop_delete_test.go:256: (dbg) Done: out/minikube-linux-amd64 start -p embed-certs-546017 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=docker --kubernetes-version=v1.31.1: (4m30.780947034s)
start_stop_delete_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-546017 -n embed-certs-546017
--- PASS: TestStartStop/group/embed-certs/serial/SecondStart (271.11s)

TestStartStop/group/default-k8s-diff-port/serial/DeployApp (9.31s)
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context default-k8s-diff-port-191579 create -f testdata/busybox.yaml
start_stop_delete_test.go:196: (dbg) TestStartStop/group/default-k8s-diff-port/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:344: "busybox" [6f37d7a6-ade6-4759-9fda-93a71b42feae] Pending
helpers_test.go:344: "busybox" [6f37d7a6-ade6-4759-9fda-93a71b42feae] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:344: "busybox" [6f37d7a6-ade6-4759-9fda-93a71b42feae] Running
E0917 09:27:59.512869   14840 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19648-8091/.minikube/profiles/skaffold-836835/client.crt: no such file or directory" logger="UnhandledError"
start_stop_delete_test.go:196: (dbg) TestStartStop/group/default-k8s-diff-port/serial/DeployApp: integration-test=busybox healthy within 9.003669247s
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context default-k8s-diff-port-191579 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/DeployApp (9.31s)

TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive (0.91s)
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p default-k8s-diff-port-191579 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context default-k8s-diff-port-191579 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive (0.91s)

TestStartStop/group/default-k8s-diff-port/serial/Stop (10.81s)
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-linux-amd64 stop -p default-k8s-diff-port-191579 --alsologtostderr -v=3
start_stop_delete_test.go:228: (dbg) Done: out/minikube-linux-amd64 stop -p default-k8s-diff-port-191579 --alsologtostderr -v=3: (10.805089457s)
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/Stop (10.81s)

TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop (5.07s)
=== RUN   TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop
start_stop_delete_test.go:287: (dbg) TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-cd95d586-dmqrz" [374e40d4-95c3-49f8-a9ce-72aace6d2b83] Running
start_stop_delete_test.go:287: (dbg) TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.004192482s
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context old-k8s-version-563392 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop (5.07s)

TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages (0.2s)
=== RUN   TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-linux-amd64 -p old-k8s-version-563392 image list --format=json
start_stop_delete_test.go:304: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
--- PASS: TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages (0.20s)

TestStartStop/group/old-k8s-version/serial/Pause (2.33s)
=== RUN   TestStartStop/group/old-k8s-version/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 pause -p old-k8s-version-563392 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-563392 -n old-k8s-version-563392
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-563392 -n old-k8s-version-563392: exit status 2 (276.085277ms)

-- stdout --
	Paused
-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p old-k8s-version-563392 -n old-k8s-version-563392
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Kubelet}} -p old-k8s-version-563392 -n old-k8s-version-563392: exit status 2 (293.954887ms)

-- stdout --
	Stopped
-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 unpause -p old-k8s-version-563392 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-563392 -n old-k8s-version-563392
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p old-k8s-version-563392 -n old-k8s-version-563392
--- PASS: TestStartStop/group/old-k8s-version/serial/Pause (2.33s)

TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop (0.24s)
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-191579 -n default-k8s-diff-port-191579
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-191579 -n default-k8s-diff-port-191579: exit status 7 (110.563733ms)

-- stdout --
	Stopped
-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p default-k8s-diff-port-191579 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop (0.24s)

TestStartStop/group/default-k8s-diff-port/serial/SecondStart (264.78s)
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-linux-amd64 start -p default-k8s-diff-port-191579 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=docker --kubernetes-version=v1.31.1
start_stop_delete_test.go:256: (dbg) Done: out/minikube-linux-amd64 start -p default-k8s-diff-port-191579 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=docker --kubernetes-version=v1.31.1: (4m24.394745986s)
start_stop_delete_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-191579 -n default-k8s-diff-port-191579
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/SecondStart (264.78s)

TestStartStop/group/newest-cni/serial/FirstStart (31.01s)
=== RUN   TestStartStop/group/newest-cni/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-linux-amd64 start -p newest-cni-524847 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=docker --kubernetes-version=v1.31.1
start_stop_delete_test.go:186: (dbg) Done: out/minikube-linux-amd64 start -p newest-cni-524847 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=docker --kubernetes-version=v1.31.1: (31.008999806s)
--- PASS: TestStartStop/group/newest-cni/serial/FirstStart (31.01s)

TestStartStop/group/newest-cni/serial/DeployApp (0s)
=== RUN   TestStartStop/group/newest-cni/serial/DeployApp
--- PASS: TestStartStop/group/newest-cni/serial/DeployApp (0.00s)

TestStartStop/group/newest-cni/serial/EnableAddonWhileActive (0.87s)
=== RUN   TestStartStop/group/newest-cni/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p newest-cni-524847 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:211: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/EnableAddonWhileActive (0.87s)

TestStartStop/group/newest-cni/serial/Stop (10.09s)
=== RUN   TestStartStop/group/newest-cni/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-linux-amd64 stop -p newest-cni-524847 --alsologtostderr -v=3
start_stop_delete_test.go:228: (dbg) Done: out/minikube-linux-amd64 stop -p newest-cni-524847 --alsologtostderr -v=3: (10.093070679s)
--- PASS: TestStartStop/group/newest-cni/serial/Stop (10.09s)

TestStartStop/group/newest-cni/serial/EnableAddonAfterStop (0.21s)
=== RUN   TestStartStop/group/newest-cni/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p newest-cni-524847 -n newest-cni-524847
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p newest-cni-524847 -n newest-cni-524847: exit status 7 (115.249206ms)

-- stdout --
	Stopped
-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p newest-cni-524847 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/newest-cni/serial/EnableAddonAfterStop (0.21s)

TestStartStop/group/newest-cni/serial/SecondStart (13.99s)
=== RUN   TestStartStop/group/newest-cni/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-linux-amd64 start -p newest-cni-524847 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=docker --kubernetes-version=v1.31.1
start_stop_delete_test.go:256: (dbg) Done: out/minikube-linux-amd64 start -p newest-cni-524847 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=docker --kubernetes-version=v1.31.1: (13.539872922s)
start_stop_delete_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p newest-cni-524847 -n newest-cni-524847
--- PASS: TestStartStop/group/newest-cni/serial/SecondStart (13.99s)

TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop (0s)
=== RUN   TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop
start_stop_delete_test.go:273: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop (0.00s)

TestStartStop/group/newest-cni/serial/AddonExistsAfterStop (0s)
=== RUN   TestStartStop/group/newest-cni/serial/AddonExistsAfterStop
start_stop_delete_test.go:284: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/AddonExistsAfterStop (0.00s)

TestStartStop/group/newest-cni/serial/VerifyKubernetesImages (0.22s)
=== RUN   TestStartStop/group/newest-cni/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-linux-amd64 -p newest-cni-524847 image list --format=json
--- PASS: TestStartStop/group/newest-cni/serial/VerifyKubernetesImages (0.22s)

TestStartStop/group/newest-cni/serial/Pause (2.48s)
=== RUN   TestStartStop/group/newest-cni/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 pause -p newest-cni-524847 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p newest-cni-524847 -n newest-cni-524847
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p newest-cni-524847 -n newest-cni-524847: exit status 2 (274.81978ms)

-- stdout --
	Paused
-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p newest-cni-524847 -n newest-cni-524847
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Kubelet}} -p newest-cni-524847 -n newest-cni-524847: exit status 2 (270.847177ms)

-- stdout --
	Stopped
-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 unpause -p newest-cni-524847 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p newest-cni-524847 -n newest-cni-524847
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p newest-cni-524847 -n newest-cni-524847
--- PASS: TestStartStop/group/newest-cni/serial/Pause (2.48s)

TestNetworkPlugins/group/auto/Start (64.92s)
=== RUN   TestNetworkPlugins/group/auto/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p auto-108038 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --driver=docker  --container-runtime=docker
E0917 09:29:47.640447   14840 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19648-8091/.minikube/profiles/addons-118348/client.crt: no such file or directory" logger="UnhandledError"
E0917 09:29:54.807176   14840 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19648-8091/.minikube/profiles/functional-229304/client.crt: no such file or directory" logger="UnhandledError"
E0917 09:30:15.652626   14840 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19648-8091/.minikube/profiles/skaffold-836835/client.crt: no such file or directory" logger="UnhandledError"
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p auto-108038 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --driver=docker  --container-runtime=docker: (1m4.916804695s)
--- PASS: TestNetworkPlugins/group/auto/Start (64.92s)

TestNetworkPlugins/group/auto/KubeletFlags (0.26s)
=== RUN   TestNetworkPlugins/group/auto/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p auto-108038 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/auto/KubeletFlags (0.26s)

TestNetworkPlugins/group/auto/NetCatPod (10.2s)
=== RUN   TestNetworkPlugins/group/auto/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context auto-108038 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/auto/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-6fc964789b-dtkrd" [1b1d420f-fe14-444c-8f14-097e87f44a55] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-6fc964789b-dtkrd" [1b1d420f-fe14-444c-8f14-097e87f44a55] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/auto/NetCatPod: app=netcat healthy within 10.003133769s
--- PASS: TestNetworkPlugins/group/auto/NetCatPod (10.20s)

TestNetworkPlugins/group/auto/DNS (0.13s)
=== RUN   TestNetworkPlugins/group/auto/DNS
net_test.go:175: (dbg) Run:  kubectl --context auto-108038 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/auto/DNS (0.13s)

TestNetworkPlugins/group/auto/Localhost (0.11s)
=== RUN   TestNetworkPlugins/group/auto/Localhost
net_test.go:194: (dbg) Run:  kubectl --context auto-108038 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/auto/Localhost (0.11s)

TestNetworkPlugins/group/auto/HairPin (0.11s)
=== RUN   TestNetworkPlugins/group/auto/HairPin
net_test.go:264: (dbg) Run:  kubectl --context auto-108038 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/auto/HairPin (0.11s)

TestNetworkPlugins/group/kindnet/Start (57.8s)
=== RUN   TestNetworkPlugins/group/kindnet/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p kindnet-108038 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=kindnet --driver=docker  --container-runtime=docker
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p kindnet-108038 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=kindnet --driver=docker  --container-runtime=docker: (57.801918814s)
--- PASS: TestNetworkPlugins/group/kindnet/Start (57.80s)

TestStartStop/group/no-preload/serial/UserAppExistsAfterStop (6.01s)
=== RUN   TestStartStop/group/no-preload/serial/UserAppExistsAfterStop
start_stop_delete_test.go:274: (dbg) TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-695b96c756-dlgxx" [ec675869-b73b-498e-93b6-1b5315ff29d4] Running
start_stop_delete_test.go:274: (dbg) TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 6.004112725s
--- PASS: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop (6.01s)

TestStartStop/group/no-preload/serial/AddonExistsAfterStop (5.08s)
=== RUN   TestStartStop/group/no-preload/serial/AddonExistsAfterStop
start_stop_delete_test.go:287: (dbg) TestStartStop/group/no-preload/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-695b96c756-dlgxx" [ec675869-b73b-498e-93b6-1b5315ff29d4] Running
start_stop_delete_test.go:287: (dbg) TestStartStop/group/no-preload/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.005165032s
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context no-preload-289048 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/no-preload/serial/AddonExistsAfterStop (5.08s)

TestStartStop/group/no-preload/serial/VerifyKubernetesImages (0.21s)
=== RUN   TestStartStop/group/no-preload/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-linux-amd64 -p no-preload-289048 image list --format=json
start_stop_delete_test.go:304: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
--- PASS: TestStartStop/group/no-preload/serial/VerifyKubernetesImages (0.21s)

TestStartStop/group/no-preload/serial/Pause (2.36s)
=== RUN   TestStartStop/group/no-preload/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 pause -p no-preload-289048 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p no-preload-289048 -n no-preload-289048
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p no-preload-289048 -n no-preload-289048: exit status 2 (287.576134ms)

-- stdout --
	Paused
-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p no-preload-289048 -n no-preload-289048
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Kubelet}} -p no-preload-289048 -n no-preload-289048: exit status 2 (323.956574ms)

-- stdout --
	Stopped
-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 unpause -p no-preload-289048 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p no-preload-289048 -n no-preload-289048
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p no-preload-289048 -n no-preload-289048
--- PASS: TestStartStop/group/no-preload/serial/Pause (2.36s)

TestNetworkPlugins/group/calico/Start (58.62s)
=== RUN   TestNetworkPlugins/group/calico/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p calico-108038 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=calico --driver=docker  --container-runtime=docker
E0917 09:31:44.570975   14840 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19648-8091/.minikube/profiles/addons-118348/client.crt: no such file or directory" logger="UnhandledError"
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p calico-108038 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=calico --driver=docker  --container-runtime=docker: (58.616742048s)
--- PASS: TestNetworkPlugins/group/calico/Start (58.62s)

TestNetworkPlugins/group/kindnet/ControllerPod (6.01s)
=== RUN   TestNetworkPlugins/group/kindnet/ControllerPod
net_test.go:120: (dbg) TestNetworkPlugins/group/kindnet/ControllerPod: waiting 10m0s for pods matching "app=kindnet" in namespace "kube-system" ...
helpers_test.go:344: "kindnet-dd8kp" [6ea112a0-a21a-42f8-90d7-5817d20d03ff] Running
E0917 09:31:56.015981   14840 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19648-8091/.minikube/profiles/old-k8s-version-563392/client.crt: no such file or directory" logger="UnhandledError"
E0917 09:31:56.022381   14840 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19648-8091/.minikube/profiles/old-k8s-version-563392/client.crt: no such file or directory" logger="UnhandledError"
E0917 09:31:56.033720   14840 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19648-8091/.minikube/profiles/old-k8s-version-563392/client.crt: no such file or directory" logger="UnhandledError"
E0917 09:31:56.055082   14840 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19648-8091/.minikube/profiles/old-k8s-version-563392/client.crt: no such file or directory" logger="UnhandledError"
E0917 09:31:56.096519   14840 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19648-8091/.minikube/profiles/old-k8s-version-563392/client.crt: no such file or directory" logger="UnhandledError"
E0917 09:31:56.178434   14840 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19648-8091/.minikube/profiles/old-k8s-version-563392/client.crt: no such file or directory" logger="UnhandledError"
E0917 09:31:56.340159   14840 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19648-8091/.minikube/profiles/old-k8s-version-563392/client.crt: no such file or directory" logger="UnhandledError"
E0917 09:31:56.661856   14840 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19648-8091/.minikube/profiles/old-k8s-version-563392/client.crt: no such file or directory" logger="UnhandledError"
net_test.go:120: (dbg) TestNetworkPlugins/group/kindnet/ControllerPod: app=kindnet healthy within 6.004355433s
--- PASS: TestNetworkPlugins/group/kindnet/ControllerPod (6.01s)

TestNetworkPlugins/group/kindnet/KubeletFlags (0.29s)
=== RUN   TestNetworkPlugins/group/kindnet/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p kindnet-108038 "pgrep -a kubelet"
E0917 09:31:57.303729   14840 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19648-8091/.minikube/profiles/old-k8s-version-563392/client.crt: no such file or directory" logger="UnhandledError"
--- PASS: TestNetworkPlugins/group/kindnet/KubeletFlags (0.29s)

TestNetworkPlugins/group/kindnet/NetCatPod (10.2s)
=== RUN   TestNetworkPlugins/group/kindnet/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context kindnet-108038 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/kindnet/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-6fc964789b-tq7l2" [b31c822e-1186-4bdd-9f7f-2126fcefc328] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
E0917 09:31:58.585957   14840 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19648-8091/.minikube/profiles/old-k8s-version-563392/client.crt: no such file or directory" logger="UnhandledError"
E0917 09:32:01.148224   14840 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19648-8091/.minikube/profiles/old-k8s-version-563392/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:344: "netcat-6fc964789b-tq7l2" [b31c822e-1186-4bdd-9f7f-2126fcefc328] Running
E0917 09:32:06.269894   14840 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19648-8091/.minikube/profiles/old-k8s-version-563392/client.crt: no such file or directory" logger="UnhandledError"
net_test.go:163: (dbg) TestNetworkPlugins/group/kindnet/NetCatPod: app=netcat healthy within 10.003402708s
--- PASS: TestNetworkPlugins/group/kindnet/NetCatPod (10.20s)

TestNetworkPlugins/group/kindnet/DNS (0.16s)
=== RUN   TestNetworkPlugins/group/kindnet/DNS
net_test.go:175: (dbg) Run:  kubectl --context kindnet-108038 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/kindnet/DNS (0.16s)

TestNetworkPlugins/group/kindnet/Localhost (0.16s)
=== RUN   TestNetworkPlugins/group/kindnet/Localhost
net_test.go:194: (dbg) Run:  kubectl --context kindnet-108038 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/kindnet/Localhost (0.16s)

TestNetworkPlugins/group/kindnet/HairPin (0.14s)
=== RUN   TestNetworkPlugins/group/kindnet/HairPin
net_test.go:264: (dbg) Run:  kubectl --context kindnet-108038 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/kindnet/HairPin (0.14s)

TestNetworkPlugins/group/calico/ControllerPod (6.01s)
=== RUN   TestNetworkPlugins/group/calico/ControllerPod
net_test.go:120: (dbg) TestNetworkPlugins/group/calico/ControllerPod: waiting 10m0s for pods matching "k8s-app=calico-node" in namespace "kube-system" ...
helpers_test.go:344: "calico-node-kncdh" [7f784ce8-999d-4380-8924-8d908b3da4e0] Running
E0917 09:32:16.512178   14840 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19648-8091/.minikube/profiles/old-k8s-version-563392/client.crt: no such file or directory" logger="UnhandledError"
net_test.go:120: (dbg) TestNetworkPlugins/group/calico/ControllerPod: k8s-app=calico-node healthy within 6.004797543s
--- PASS: TestNetworkPlugins/group/calico/ControllerPod (6.01s)

TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop (6.01s)
=== RUN   TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop
start_stop_delete_test.go:274: (dbg) TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-695b96c756-4rrfx" [d36ab88a-32d4-4966-b4d9-b0ebe042fe03] Running
start_stop_delete_test.go:274: (dbg) TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 6.004081166s
--- PASS: TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop (6.01s)

TestNetworkPlugins/group/calico/KubeletFlags (0.26s)
=== RUN   TestNetworkPlugins/group/calico/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p calico-108038 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/calico/KubeletFlags (0.26s)

TestNetworkPlugins/group/calico/NetCatPod (8.2s)
=== RUN   TestNetworkPlugins/group/calico/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context calico-108038 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/calico/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-6fc964789b-cfd2q" [26e58a86-6be4-4cd8-ab51-2e47f3d37201] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-6fc964789b-cfd2q" [26e58a86-6be4-4cd8-ab51-2e47f3d37201] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/calico/NetCatPod: app=netcat healthy within 8.003661448s
--- PASS: TestNetworkPlugins/group/calico/NetCatPod (8.20s)

TestStartStop/group/embed-certs/serial/AddonExistsAfterStop (5.09s)
=== RUN   TestStartStop/group/embed-certs/serial/AddonExistsAfterStop
start_stop_delete_test.go:287: (dbg) TestStartStop/group/embed-certs/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-695b96c756-4rrfx" [d36ab88a-32d4-4966-b4d9-b0ebe042fe03] Running
start_stop_delete_test.go:287: (dbg) TestStartStop/group/embed-certs/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.004185084s
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context embed-certs-546017 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/embed-certs/serial/AddonExistsAfterStop (5.09s)

TestNetworkPlugins/group/custom-flannel/Start (51.51s)
=== RUN   TestNetworkPlugins/group/custom-flannel/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p custom-flannel-108038 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=testdata/kube-flannel.yaml --driver=docker  --container-runtime=docker
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p custom-flannel-108038 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=testdata/kube-flannel.yaml --driver=docker  --container-runtime=docker: (51.509905076s)
--- PASS: TestNetworkPlugins/group/custom-flannel/Start (51.51s)

TestNetworkPlugins/group/calico/DNS (0.14s)
=== RUN   TestNetworkPlugins/group/calico/DNS
net_test.go:175: (dbg) Run:  kubectl --context calico-108038 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/calico/DNS (0.14s)

TestNetworkPlugins/group/calico/Localhost (0.12s)
=== RUN   TestNetworkPlugins/group/calico/Localhost
net_test.go:194: (dbg) Run:  kubectl --context calico-108038 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/calico/Localhost (0.12s)

TestNetworkPlugins/group/calico/HairPin (0.13s)
=== RUN   TestNetworkPlugins/group/calico/HairPin
net_test.go:264: (dbg) Run:  kubectl --context calico-108038 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/calico/HairPin (0.13s)

TestStartStop/group/embed-certs/serial/VerifyKubernetesImages (0.23s)
=== RUN   TestStartStop/group/embed-certs/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-linux-amd64 -p embed-certs-546017 image list --format=json
start_stop_delete_test.go:304: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
--- PASS: TestStartStop/group/embed-certs/serial/VerifyKubernetesImages (0.23s)

TestStartStop/group/embed-certs/serial/Pause (3.14s)
=== RUN   TestStartStop/group/embed-certs/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 pause -p embed-certs-546017 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p embed-certs-546017 -n embed-certs-546017
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p embed-certs-546017 -n embed-certs-546017: exit status 2 (295.275765ms)

-- stdout --
	Paused
-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p embed-certs-546017 -n embed-certs-546017
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Kubelet}} -p embed-certs-546017 -n embed-certs-546017: exit status 2 (283.529848ms)

-- stdout --
	Stopped
-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 unpause -p embed-certs-546017 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p embed-certs-546017 -n embed-certs-546017
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p embed-certs-546017 -n embed-certs-546017
--- PASS: TestStartStop/group/embed-certs/serial/Pause (3.14s)

TestNetworkPlugins/group/false/Start (63.85s)
=== RUN   TestNetworkPlugins/group/false/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p false-108038 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=false --driver=docker  --container-runtime=docker
E0917 09:32:36.993644   14840 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19648-8091/.minikube/profiles/old-k8s-version-563392/client.crt: no such file or directory" logger="UnhandledError"
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p false-108038 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=false --driver=docker  --container-runtime=docker: (1m3.852243496s)
--- PASS: TestNetworkPlugins/group/false/Start (63.85s)

TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop (6.01s)
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop
start_stop_delete_test.go:274: (dbg) TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-695b96c756-b2prz" [26d4f93c-7ab5-4e32-ae4c-557efbccfa5f] Running
start_stop_delete_test.go:274: (dbg) TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 6.004149605s
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop (6.01s)

TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop (5.07s)
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop
start_stop_delete_test.go:287: (dbg) TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-695b96c756-b2prz" [26d4f93c-7ab5-4e32-ae4c-557efbccfa5f] Running
start_stop_delete_test.go:287: (dbg) TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.00373444s
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context default-k8s-diff-port-191579 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop (5.07s)

TestNetworkPlugins/group/enable-default-cni/Start (65.39s)
=== RUN   TestNetworkPlugins/group/enable-default-cni/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p enable-default-cni-108038 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --enable-default-cni=true --driver=docker  --container-runtime=docker
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p enable-default-cni-108038 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --enable-default-cni=true --driver=docker  --container-runtime=docker: (1m5.391466017s)
--- PASS: TestNetworkPlugins/group/enable-default-cni/Start (65.39s)

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages (0.26s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-linux-amd64 -p default-k8s-diff-port-191579 image list --format=json
start_stop_delete_test.go:304: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages (0.26s)

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/Pause (3.31s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 pause -p default-k8s-diff-port-191579 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p default-k8s-diff-port-191579 -n default-k8s-diff-port-191579
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p default-k8s-diff-port-191579 -n default-k8s-diff-port-191579: exit status 2 (349.580497ms)

                                                
                                                
-- stdout --
	Paused

                                                
                                                
-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p default-k8s-diff-port-191579 -n default-k8s-diff-port-191579
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Kubelet}} -p default-k8s-diff-port-191579 -n default-k8s-diff-port-191579: exit status 2 (354.231625ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 unpause -p default-k8s-diff-port-191579 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p default-k8s-diff-port-191579 -n default-k8s-diff-port-191579
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p default-k8s-diff-port-191579 -n default-k8s-diff-port-191579
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/Pause (3.31s)

                                                
                                    
TestNetworkPlugins/group/flannel/Start (43.36s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p flannel-108038 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=flannel --driver=docker  --container-runtime=docker
E0917 09:33:17.955878   14840 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19648-8091/.minikube/profiles/old-k8s-version-563392/client.crt: no such file or directory" logger="UnhandledError"
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p flannel-108038 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=flannel --driver=docker  --container-runtime=docker: (43.357927703s)
--- PASS: TestNetworkPlugins/group/flannel/Start (43.36s)

                                                
                                    
TestNetworkPlugins/group/custom-flannel/KubeletFlags (0.29s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/custom-flannel/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p custom-flannel-108038 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/custom-flannel/KubeletFlags (0.29s)

                                                
                                    
TestNetworkPlugins/group/custom-flannel/NetCatPod (9.21s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/custom-flannel/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context custom-flannel-108038 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/custom-flannel/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-6fc964789b-v7chj" [86e60b12-1cb3-41e7-a63c-90390a37e390] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-6fc964789b-v7chj" [86e60b12-1cb3-41e7-a63c-90390a37e390] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/custom-flannel/NetCatPod: app=netcat healthy within 9.003792427s
--- PASS: TestNetworkPlugins/group/custom-flannel/NetCatPod (9.21s)

                                                
                                    
TestNetworkPlugins/group/custom-flannel/DNS (0.14s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/custom-flannel/DNS
net_test.go:175: (dbg) Run:  kubectl --context custom-flannel-108038 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/custom-flannel/DNS (0.14s)

                                                
                                    
TestNetworkPlugins/group/custom-flannel/Localhost (0.12s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/custom-flannel/Localhost
net_test.go:194: (dbg) Run:  kubectl --context custom-flannel-108038 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/custom-flannel/Localhost (0.12s)

                                                
                                    
TestNetworkPlugins/group/custom-flannel/HairPin (0.12s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/custom-flannel/HairPin
net_test.go:264: (dbg) Run:  kubectl --context custom-flannel-108038 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/custom-flannel/HairPin (0.12s)

                                                
                                    
TestNetworkPlugins/group/false/KubeletFlags (0.3s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/false/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p false-108038 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/false/KubeletFlags (0.30s)

                                                
                                    
TestNetworkPlugins/group/false/NetCatPod (10.19s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/false/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context false-108038 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/false/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-6fc964789b-lq5b4" [7f843b00-11d9-4151-aec7-cf1dc15c6128] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-6fc964789b-lq5b4" [7f843b00-11d9-4151-aec7-cf1dc15c6128] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/false/NetCatPod: app=netcat healthy within 10.004392913s
--- PASS: TestNetworkPlugins/group/false/NetCatPod (10.19s)

                                                
                                    
TestNetworkPlugins/group/flannel/ControllerPod (6.01s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/ControllerPod
net_test.go:120: (dbg) TestNetworkPlugins/group/flannel/ControllerPod: waiting 10m0s for pods matching "app=flannel" in namespace "kube-flannel" ...
helpers_test.go:344: "kube-flannel-ds-gv5rm" [1e59a853-48bd-4b4a-9661-ed1e6aff8887] Running
net_test.go:120: (dbg) TestNetworkPlugins/group/flannel/ControllerPod: app=flannel healthy within 6.005720177s
--- PASS: TestNetworkPlugins/group/flannel/ControllerPod (6.01s)

                                                
                                    
TestNetworkPlugins/group/bridge/Start (69.23s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/bridge/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p bridge-108038 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=bridge --driver=docker  --container-runtime=docker
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p bridge-108038 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=bridge --driver=docker  --container-runtime=docker: (1m9.230999589s)
--- PASS: TestNetworkPlugins/group/bridge/Start (69.23s)

                                                
                                    
TestNetworkPlugins/group/flannel/KubeletFlags (0.3s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p flannel-108038 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/flannel/KubeletFlags (0.30s)

                                                
                                    
TestNetworkPlugins/group/flannel/NetCatPod (10.2s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context flannel-108038 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/flannel/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-6fc964789b-689pb" [ddd0b759-f683-45cb-866a-cbf2936b2ca6] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-6fc964789b-689pb" [ddd0b759-f683-45cb-866a-cbf2936b2ca6] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/flannel/NetCatPod: app=netcat healthy within 10.006381252s
--- PASS: TestNetworkPlugins/group/flannel/NetCatPod (10.20s)

                                                
                                    
TestNetworkPlugins/group/false/DNS (0.15s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/false/DNS
net_test.go:175: (dbg) Run:  kubectl --context false-108038 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/false/DNS (0.15s)

                                                
                                    
TestNetworkPlugins/group/false/Localhost (0.13s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/false/Localhost
net_test.go:194: (dbg) Run:  kubectl --context false-108038 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/false/Localhost (0.13s)

                                                
                                    
TestNetworkPlugins/group/false/HairPin (0.13s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/false/HairPin
net_test.go:264: (dbg) Run:  kubectl --context false-108038 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/false/HairPin (0.13s)

                                                
                                    
TestNetworkPlugins/group/enable-default-cni/KubeletFlags (0.27s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/enable-default-cni/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p enable-default-cni-108038 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/enable-default-cni/KubeletFlags (0.27s)

                                                
                                    
TestNetworkPlugins/group/enable-default-cni/NetCatPod (8.21s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/enable-default-cni/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context enable-default-cni-108038 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/enable-default-cni/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-6fc964789b-pwtwn" [acd02978-e386-4a19-bf76-a06e96f9bc4c] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-6fc964789b-pwtwn" [acd02978-e386-4a19-bf76-a06e96f9bc4c] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/enable-default-cni/NetCatPod: app=netcat healthy within 8.005301079s
--- PASS: TestNetworkPlugins/group/enable-default-cni/NetCatPod (8.21s)

                                                
                                    
TestNetworkPlugins/group/flannel/DNS (0.25s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/DNS
net_test.go:175: (dbg) Run:  kubectl --context flannel-108038 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/flannel/DNS (0.25s)

                                                
                                    
TestNetworkPlugins/group/flannel/Localhost (0.13s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/Localhost
net_test.go:194: (dbg) Run:  kubectl --context flannel-108038 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/flannel/Localhost (0.13s)

                                                
                                    
TestNetworkPlugins/group/flannel/HairPin (0.11s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/HairPin
net_test.go:264: (dbg) Run:  kubectl --context flannel-108038 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/flannel/HairPin (0.11s)

                                                
                                    
TestNetworkPlugins/group/enable-default-cni/DNS (0.16s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/enable-default-cni/DNS
net_test.go:175: (dbg) Run:  kubectl --context enable-default-cni-108038 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/enable-default-cni/DNS (0.16s)

                                                
                                    
TestNetworkPlugins/group/enable-default-cni/Localhost (0.17s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/enable-default-cni/Localhost
net_test.go:194: (dbg) Run:  kubectl --context enable-default-cni-108038 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/enable-default-cni/Localhost (0.17s)

                                                
                                    
TestNetworkPlugins/group/enable-default-cni/HairPin (0.16s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/enable-default-cni/HairPin
net_test.go:264: (dbg) Run:  kubectl --context enable-default-cni-108038 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/enable-default-cni/HairPin (0.16s)

                                                
                                    
TestNetworkPlugins/group/kubenet/Start (66.31s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kubenet/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p kubenet-108038 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --network-plugin=kubenet --driver=docker  --container-runtime=docker
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p kubenet-108038 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --network-plugin=kubenet --driver=docker  --container-runtime=docker: (1m6.308019396s)
--- PASS: TestNetworkPlugins/group/kubenet/Start (66.31s)

                                                
                                    
TestNetworkPlugins/group/bridge/KubeletFlags (0.25s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/bridge/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p bridge-108038 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/bridge/KubeletFlags (0.25s)

                                                
                                    
TestNetworkPlugins/group/bridge/NetCatPod (9.16s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/bridge/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context bridge-108038 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/bridge/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-6fc964789b-b6j2f" [866effc4-cacb-4fba-9517-23aeccbcf01a] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-6fc964789b-b6j2f" [866effc4-cacb-4fba-9517-23aeccbcf01a] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/bridge/NetCatPod: app=netcat healthy within 9.004407269s
--- PASS: TestNetworkPlugins/group/bridge/NetCatPod (9.16s)

                                                
                                    
TestNetworkPlugins/group/bridge/DNS (0.12s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/bridge/DNS
net_test.go:175: (dbg) Run:  kubectl --context bridge-108038 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/bridge/DNS (0.12s)

                                                
                                    
TestNetworkPlugins/group/bridge/Localhost (0.1s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/bridge/Localhost
net_test.go:194: (dbg) Run:  kubectl --context bridge-108038 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/bridge/Localhost (0.10s)

                                                
                                    
TestNetworkPlugins/group/bridge/HairPin (0.1s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/bridge/HairPin
net_test.go:264: (dbg) Run:  kubectl --context bridge-108038 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/bridge/HairPin (0.10s)

                                                
                                    
TestNetworkPlugins/group/kubenet/KubeletFlags (0.26s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kubenet/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p kubenet-108038 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/kubenet/KubeletFlags (0.26s)

                                                
                                    
TestNetworkPlugins/group/kubenet/NetCatPod (10.18s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kubenet/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context kubenet-108038 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/kubenet/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-6fc964789b-h8z8c" [77b7a24c-d082-49cf-9502-26d525c091cd] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-6fc964789b-h8z8c" [77b7a24c-d082-49cf-9502-26d525c091cd] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/kubenet/NetCatPod: app=netcat healthy within 10.004491211s
--- PASS: TestNetworkPlugins/group/kubenet/NetCatPod (10.18s)

                                                
                                    
TestNetworkPlugins/group/kubenet/DNS (0.12s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kubenet/DNS
net_test.go:175: (dbg) Run:  kubectl --context kubenet-108038 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/kubenet/DNS (0.12s)

                                                
                                    
TestNetworkPlugins/group/kubenet/Localhost (0.1s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kubenet/Localhost
net_test.go:194: (dbg) Run:  kubectl --context kubenet-108038 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/kubenet/Localhost (0.10s)

                                                
                                    
TestNetworkPlugins/group/kubenet/HairPin (0.1s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kubenet/HairPin
net_test.go:264: (dbg) Run:  kubectl --context kubenet-108038 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
E0917 09:35:26.924090   14840 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19648-8091/.minikube/profiles/auto-108038/client.crt: no such file or directory" logger="UnhandledError"
--- PASS: TestNetworkPlugins/group/kubenet/HairPin (0.10s)

                                                
                                    

Test skip (20/343)

TestDownloadOnly/v1.20.0/cached-images (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.20.0/cached-images
aaa_download_only_test.go:129: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.20.0/cached-images (0.00s)

                                                
                                    
TestDownloadOnly/v1.20.0/binaries (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.20.0/binaries
aaa_download_only_test.go:151: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.20.0/binaries (0.00s)

                                                
                                    
TestDownloadOnly/v1.20.0/kubectl (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.20.0/kubectl
aaa_download_only_test.go:167: Test for darwin and windows
--- SKIP: TestDownloadOnly/v1.20.0/kubectl (0.00s)

                                                
                                    
TestDownloadOnly/v1.31.1/cached-images (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.31.1/cached-images
aaa_download_only_test.go:129: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.31.1/cached-images (0.00s)

                                                
                                    
TestDownloadOnly/v1.31.1/binaries (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.31.1/binaries
aaa_download_only_test.go:151: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.31.1/binaries (0.00s)

                                                
                                    
TestDownloadOnly/v1.31.1/kubectl (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.31.1/kubectl
aaa_download_only_test.go:167: Test for darwin and windows
--- SKIP: TestDownloadOnly/v1.31.1/kubectl (0.00s)

                                                
                                    
TestAddons/parallel/Olm (0s)

                                                
                                                
=== RUN   TestAddons/parallel/Olm
=== PAUSE TestAddons/parallel/Olm

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/Olm
addons_test.go:500: Skipping OLM addon test until https://github.com/operator-framework/operator-lifecycle-manager/issues/2534 is resolved
--- SKIP: TestAddons/parallel/Olm (0.00s)

                                                
                                    
TestDockerEnvContainerd (0s)

                                                
                                                
=== RUN   TestDockerEnvContainerd
docker_test.go:170: running with docker true linux amd64
docker_test.go:172: skipping: TestDockerEnvContainerd can only be run with the containerd runtime on Docker driver
--- SKIP: TestDockerEnvContainerd (0.00s)

                                                
                                    
TestHyperKitDriverInstallOrUpdate (0s)

                                                
                                                
=== RUN   TestHyperKitDriverInstallOrUpdate
driver_install_or_update_test.go:105: Skip if not darwin.
--- SKIP: TestHyperKitDriverInstallOrUpdate (0.00s)

                                                
                                    
TestHyperkitDriverSkipUpgrade (0s)

                                                
                                                
=== RUN   TestHyperkitDriverSkipUpgrade
driver_install_or_update_test.go:169: Skip if not darwin.
--- SKIP: TestHyperkitDriverSkipUpgrade (0.00s)

                                                
                                    
TestFunctional/parallel/PodmanEnv (0s)

                                                
                                                
=== RUN   TestFunctional/parallel/PodmanEnv
=== PAUSE TestFunctional/parallel/PodmanEnv

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/PodmanEnv
functional_test.go:550: only validate podman env with docker container runtime, currently testing docker
--- SKIP: TestFunctional/parallel/PodmanEnv (0.00s)

                                                
                                    
TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig (0s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig
functional_test_tunnel_test.go:99: DNS forwarding is only supported for Hyperkit on Darwin, skipping test DNS forwarding
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig (0.00s)

                                                
                                    
TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil
functional_test_tunnel_test.go:99: DNS forwarding is only supported for Hyperkit on Darwin, skipping test DNS forwarding
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0.00s)

                                                
                                    
TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS (0s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS
functional_test_tunnel_test.go:99: DNS forwarding is only supported for Hyperkit on Darwin, skipping test DNS forwarding
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS (0.00s)

                                                
                                    
TestGvisorAddon (0s)

                                                
                                                
=== RUN   TestGvisorAddon
gvisor_addon_test.go:34: skipping test because --gvisor=false
--- SKIP: TestGvisorAddon (0.00s)

                                                
                                    
TestImageBuild/serial/validateImageBuildWithBuildEnv (0s)

                                                
                                                
=== RUN   TestImageBuild/serial/validateImageBuildWithBuildEnv
image_test.go:114: skipping due to https://github.com/kubernetes/minikube/issues/12431
--- SKIP: TestImageBuild/serial/validateImageBuildWithBuildEnv (0.00s)

                                                
                                    
TestChangeNoneUser (0s)

                                                
                                                
=== RUN   TestChangeNoneUser
none_test.go:38: Test requires none driver and SUDO_USER env to not be empty
--- SKIP: TestChangeNoneUser (0.00s)

                                                
                                    
TestScheduledStopWindows (0s)

                                                
                                                
=== RUN   TestScheduledStopWindows
scheduled_stop_test.go:42: test only runs on windows
--- SKIP: TestScheduledStopWindows (0.00s)

                                                
                                    
TestStartStop/group/disable-driver-mounts (0.14s)

                                                
                                                
=== RUN   TestStartStop/group/disable-driver-mounts
=== PAUSE TestStartStop/group/disable-driver-mounts

                                                
                                                

                                                
                                                
=== CONT  TestStartStop/group/disable-driver-mounts
start_stop_delete_test.go:103: skipping TestStartStop/group/disable-driver-mounts - only runs on virtualbox
helpers_test.go:175: Cleaning up "disable-driver-mounts-888589" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p disable-driver-mounts-888589
--- SKIP: TestStartStop/group/disable-driver-mounts (0.14s)

                                                
                                    
TestNetworkPlugins/group/cilium (3.22s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/cilium
net_test.go:102: Skipping the test as it's interfering with other tests and is outdated
panic.go:629: 
----------------------- debugLogs start: cilium-108038 [pass: true] --------------------------------
>>> netcat: nslookup kubernetes.default:
Error in configuration: context was not found for specified context: cilium-108038

>>> netcat: nslookup debug kubernetes.default a-records:
Error in configuration: context was not found for specified context: cilium-108038

>>> netcat: dig search kubernetes.default:
Error in configuration: context was not found for specified context: cilium-108038

>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local udp/53:
Error in configuration: context was not found for specified context: cilium-108038

>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local tcp/53:
Error in configuration: context was not found for specified context: cilium-108038

>>> netcat: nc 10.96.0.10 udp/53:
Error in configuration: context was not found for specified context: cilium-108038

>>> netcat: nc 10.96.0.10 tcp/53:
Error in configuration: context was not found for specified context: cilium-108038

>>> netcat: /etc/nsswitch.conf:
Error in configuration: context was not found for specified context: cilium-108038

>>> netcat: /etc/hosts:
Error in configuration: context was not found for specified context: cilium-108038

>>> netcat: /etc/resolv.conf:
Error in configuration: context was not found for specified context: cilium-108038

>>> host: /etc/nsswitch.conf:
* Profile "cilium-108038" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-108038"

>>> host: /etc/hosts:
* Profile "cilium-108038" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-108038"

>>> host: /etc/resolv.conf:
* Profile "cilium-108038" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-108038"

>>> k8s: nodes, services, endpoints, daemon sets, deployments and pods, :
Error in configuration: context was not found for specified context: cilium-108038

>>> host: crictl pods:
* Profile "cilium-108038" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-108038"

>>> host: crictl containers:
* Profile "cilium-108038" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-108038"

>>> k8s: describe netcat deployment:
error: context "cilium-108038" does not exist

>>> k8s: describe netcat pod(s):
error: context "cilium-108038" does not exist

>>> k8s: netcat logs:
error: context "cilium-108038" does not exist

>>> k8s: describe coredns deployment:
error: context "cilium-108038" does not exist

>>> k8s: describe coredns pods:
error: context "cilium-108038" does not exist

>>> k8s: coredns logs:
error: context "cilium-108038" does not exist

>>> k8s: describe api server pod(s):
error: context "cilium-108038" does not exist

>>> k8s: api server logs:
error: context "cilium-108038" does not exist

>>> host: /etc/cni:
* Profile "cilium-108038" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-108038"

>>> host: ip a s:
* Profile "cilium-108038" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-108038"

>>> host: ip r s:
* Profile "cilium-108038" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-108038"

>>> host: iptables-save:
* Profile "cilium-108038" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-108038"

>>> host: iptables table nat:
* Profile "cilium-108038" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-108038"

>>> k8s: describe cilium daemon set:
Error in configuration: context was not found for specified context: cilium-108038

>>> k8s: describe cilium daemon set pod(s):
Error in configuration: context was not found for specified context: cilium-108038

>>> k8s: cilium daemon set container(s) logs (current):
error: context "cilium-108038" does not exist

>>> k8s: cilium daemon set container(s) logs (previous):
error: context "cilium-108038" does not exist

>>> k8s: describe cilium deployment:
Error in configuration: context was not found for specified context: cilium-108038

>>> k8s: describe cilium deployment pod(s):
Error in configuration: context was not found for specified context: cilium-108038

>>> k8s: cilium deployment container(s) logs (current):
error: context "cilium-108038" does not exist

>>> k8s: cilium deployment container(s) logs (previous):
error: context "cilium-108038" does not exist

>>> k8s: describe kube-proxy daemon set:
error: context "cilium-108038" does not exist

>>> k8s: describe kube-proxy pod(s):
error: context "cilium-108038" does not exist

>>> k8s: kube-proxy logs:
error: context "cilium-108038" does not exist

>>> host: kubelet daemon status:
* Profile "cilium-108038" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-108038"

>>> host: kubelet daemon config:
* Profile "cilium-108038" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-108038"

>>> k8s: kubelet logs:
* Profile "cilium-108038" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-108038"

>>> host: /etc/kubernetes/kubelet.conf:
* Profile "cilium-108038" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-108038"

>>> host: /var/lib/kubelet/config.yaml:
* Profile "cilium-108038" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-108038"
>>> k8s: kubectl config:
apiVersion: v1
clusters:
- cluster:
    certificate-authority: /home/jenkins/minikube-integration/19648-8091/.minikube/ca.crt
    extensions:
    - extension:
        last-update: Tue, 17 Sep 2024 09:23:55 UTC
        provider: minikube.sigs.k8s.io
        version: v1.34.0
      name: cluster_info
    server: https://192.168.76.2:8443
  name: cert-expiration-657463
- cluster:
    certificate-authority: /home/jenkins/minikube-integration/19648-8091/.minikube/ca.crt
    extensions:
    - extension:
        last-update: Tue, 17 Sep 2024 09:21:47 UTC
        provider: minikube.sigs.k8s.io
        version: v1.34.0
      name: cluster_info
    server: https://192.168.85.2:8443
  name: kubernetes-upgrade-119864
contexts:
- context:
    cluster: cert-expiration-657463
    extensions:
    - extension:
        last-update: Tue, 17 Sep 2024 09:23:55 UTC
        provider: minikube.sigs.k8s.io
        version: v1.34.0
      name: context_info
    namespace: default
    user: cert-expiration-657463
  name: cert-expiration-657463
- context:
    cluster: kubernetes-upgrade-119864
    user: kubernetes-upgrade-119864
  name: kubernetes-upgrade-119864
current-context: cert-expiration-657463
kind: Config
preferences: {}
users:
- name: cert-expiration-657463
  user:
    client-certificate: /home/jenkins/minikube-integration/19648-8091/.minikube/profiles/cert-expiration-657463/client.crt
    client-key: /home/jenkins/minikube-integration/19648-8091/.minikube/profiles/cert-expiration-657463/client.key
- name: kubernetes-upgrade-119864
  user:
    client-certificate: /home/jenkins/minikube-integration/19648-8091/.minikube/profiles/kubernetes-upgrade-119864/client.crt
    client-key: /home/jenkins/minikube-integration/19648-8091/.minikube/profiles/kubernetes-upgrade-119864/client.key
>>> k8s: cms:
Error in configuration: context was not found for specified context: cilium-108038

>>> host: docker daemon status:
* Profile "cilium-108038" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-108038"

>>> host: docker daemon config:
* Profile "cilium-108038" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-108038"

>>> host: /etc/docker/daemon.json:
* Profile "cilium-108038" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-108038"

>>> host: docker system info:
* Profile "cilium-108038" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-108038"

>>> host: cri-docker daemon status:
* Profile "cilium-108038" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-108038"

>>> host: cri-docker daemon config:
* Profile "cilium-108038" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-108038"

>>> host: /etc/systemd/system/cri-docker.service.d/10-cni.conf:
* Profile "cilium-108038" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-108038"

>>> host: /usr/lib/systemd/system/cri-docker.service:
* Profile "cilium-108038" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-108038"

>>> host: cri-dockerd version:
* Profile "cilium-108038" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-108038"

>>> host: containerd daemon status:
* Profile "cilium-108038" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-108038"

>>> host: containerd daemon config:
* Profile "cilium-108038" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-108038"

>>> host: /lib/systemd/system/containerd.service:
* Profile "cilium-108038" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-108038"

>>> host: /etc/containerd/config.toml:
* Profile "cilium-108038" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-108038"

>>> host: containerd config dump:
* Profile "cilium-108038" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-108038"

>>> host: crio daemon status:
* Profile "cilium-108038" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-108038"

>>> host: crio daemon config:
* Profile "cilium-108038" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-108038"

>>> host: /etc/crio:
* Profile "cilium-108038" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-108038"

>>> host: crio config:
* Profile "cilium-108038" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-108038"
----------------------- debugLogs end: cilium-108038 [took: 3.054848946s] --------------------------------
helpers_test.go:175: Cleaning up "cilium-108038" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p cilium-108038
--- SKIP: TestNetworkPlugins/group/cilium (3.22s)