Test Report: Docker_Linux 19649

32fce3c1cb58db02ee1cd4b36165a584c8a30f83:2024-09-16:36244

Failed tests (1/343)

| Order | Failed test                  | Duration (s) |
|-------|------------------------------|--------------|
| 33    | TestAddons/parallel/Registry | 72.43        |
TestAddons/parallel/Registry (72.43s)

=== RUN   TestAddons/parallel/Registry
=== PAUSE TestAddons/parallel/Registry

=== CONT  TestAddons/parallel/Registry
addons_test.go:332: registry stabilized in 3.084123ms
addons_test.go:334: (dbg) TestAddons/parallel/Registry: waiting 6m0s for pods matching "actual-registry=true" in namespace "kube-system" ...
helpers_test.go:344: "registry-66c9cd494c-6df5h" [4849ea19-88f6-4fbc-ba0f-e290ee2d0d80] Running
addons_test.go:334: (dbg) TestAddons/parallel/Registry: actual-registry=true healthy within 5.003268843s
addons_test.go:337: (dbg) TestAddons/parallel/Registry: waiting 10m0s for pods matching "registry-proxy=true" in namespace "kube-system" ...
helpers_test.go:344: "registry-proxy-9tc94" [556af332-2257-4db0-adcb-aca469cf992d] Running
addons_test.go:337: (dbg) TestAddons/parallel/Registry: registry-proxy=true healthy within 5.0033912s
addons_test.go:342: (dbg) Run:  kubectl --context addons-539053 delete po -l run=registry-test --now
addons_test.go:347: (dbg) Run:  kubectl --context addons-539053 run --rm registry-test --restart=Never --image=gcr.io/k8s-minikube/busybox -it -- sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local"
addons_test.go:347: (dbg) Non-zero exit: kubectl --context addons-539053 run --rm registry-test --restart=Never --image=gcr.io/k8s-minikube/busybox -it -- sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local": exit status 1 (1m0.075801388s)

-- stdout --
	pod "registry-test" deleted

-- /stdout --
** stderr ** 
	error: timed out waiting for the condition

** /stderr **
addons_test.go:349: failed to hit registry.kube-system.svc.cluster.local. args "kubectl --context addons-539053 run --rm registry-test --restart=Never --image=gcr.io/k8s-minikube/busybox -it -- sh -c \"wget --spider -S http://registry.kube-system.svc.cluster.local\"" failed: exit status 1
addons_test.go:353: expected curl response be "HTTP/1.1 200", but got *pod "registry-test" deleted
*
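The failing probe above can in principle be re-run by hand against the same profile to narrow down whether the problem is the registry pod, its Service endpoints, or in-cluster DNS. The commands below are a sketch only: they assume the addons-539053 profile from this log is still running, and the `dns-test` pod name is illustrative rather than taken from the log.

```shell
# Sketch only: assumes the addons-539053 minikube profile from this log is
# still running; the dns-test pod name is illustrative, not from the log.

# 1. Re-run the exact in-cluster probe the test performs:
kubectl --context addons-539053 run --rm registry-test --restart=Never \
  --image=gcr.io/k8s-minikube/busybox -it -- \
  sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local"

# 2. If it times out again, check that the registry Service has endpoints:
kubectl --context addons-539053 -n kube-system get svc,endpoints registry

# 3. Check whether in-cluster DNS resolves the service name at all:
kubectl --context addons-539053 run --rm dns-test --restart=Never \
  --image=gcr.io/k8s-minikube/busybox -it -- \
  nslookup registry.kube-system.svc.cluster.local

# 4. The registry is also published on the node at port 5000 (the test's own
#    fallback, per its "GET http://192.168.49.2:5000" debug line), so a
#    host-side check against the registry's V2 API endpoint is:
curl -sS "http://$(out/minikube-linux-amd64 -p addons-539053 ip):5000/v2/"
```

If step 1 fails while step 4 succeeds, the registry itself is healthy and the problem is in-cluster reachability (Service endpoints or DNS), which matches the symptom recorded here: the node-IP GET at 17:28:39 went through after the in-cluster wget had timed out.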
addons_test.go:361: (dbg) Run:  out/minikube-linux-amd64 -p addons-539053 ip
2024/09/16 17:28:39 [DEBUG] GET http://192.168.49.2:5000
addons_test.go:390: (dbg) Run:  out/minikube-linux-amd64 -p addons-539053 addons disable registry --alsologtostderr -v=1
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestAddons/parallel/Registry]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect addons-539053
helpers_test.go:235: (dbg) docker inspect addons-539053:
-- stdout --
	[
	    {
	        "Id": "889620d9c22cb0f9876805e85238f1130b32e01550a250dffc5408f9e9ce0aa2",
	        "Created": "2024-09-16T17:15:38.700663122Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 115009,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2024-09-16T17:15:38.824683494Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:42cce955f9eac9d57cd22fac71bb25240691d58509ec274149a0acd1eaaf86ec",
	        "ResolvConfPath": "/var/lib/docker/containers/889620d9c22cb0f9876805e85238f1130b32e01550a250dffc5408f9e9ce0aa2/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/889620d9c22cb0f9876805e85238f1130b32e01550a250dffc5408f9e9ce0aa2/hostname",
	        "HostsPath": "/var/lib/docker/containers/889620d9c22cb0f9876805e85238f1130b32e01550a250dffc5408f9e9ce0aa2/hosts",
	        "LogPath": "/var/lib/docker/containers/889620d9c22cb0f9876805e85238f1130b32e01550a250dffc5408f9e9ce0aa2/889620d9c22cb0f9876805e85238f1130b32e01550a250dffc5408f9e9ce0aa2-json.log",
	        "Name": "/addons-539053",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "addons-539053:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {
	                    "max-size": "100m"
	                }
	            },
	            "NetworkMode": "addons-539053",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 4194304000,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 8388608000,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "LowerDir": "/var/lib/docker/overlay2/7231bb84623acc78a861812a4b4c8152502c9e4d7dce91310005a19e64f4db52-init/diff:/var/lib/docker/overlay2/2bde1a12356e80260f13e7e04ea75070375a33ab4f42d4cfd7ba26956be5ad81/diff",
	                "MergedDir": "/var/lib/docker/overlay2/7231bb84623acc78a861812a4b4c8152502c9e4d7dce91310005a19e64f4db52/merged",
	                "UpperDir": "/var/lib/docker/overlay2/7231bb84623acc78a861812a4b4c8152502c9e4d7dce91310005a19e64f4db52/diff",
	                "WorkDir": "/var/lib/docker/overlay2/7231bb84623acc78a861812a4b4c8152502c9e4d7dce91310005a19e64f4db52/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "addons-539053",
	                "Source": "/var/lib/docker/volumes/addons-539053/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "addons-539053",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726481311-19649@sha256:b5dfdcf7ad8b49233db09f1c58aaf52f6522fde64cf16c939b3fc45365d24cdc",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "addons-539053",
	                "name.minikube.sigs.k8s.io": "addons-539053",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "5a3a40c5aa697f6cc4a86dbd6597cd54826a35038523171bb5932c1f033a7d1b",
	            "SandboxKey": "/var/run/docker/netns/5a3a40c5aa69",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32768"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32769"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32772"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32770"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32771"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "addons-539053": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.49.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "02:42:c0:a8:31:02",
	                    "DriverOpts": null,
	                    "NetworkID": "c1f2d25a0fa41d9a7ff6664261042de1aa064ad0e68b1c0e696a283eb2fe3d1a",
	                    "EndpointID": "8338c295eda910bb9c27a3e697e58ecbf98822a66eebdefbea4b53970f1b808b",
	                    "Gateway": "192.168.49.1",
	                    "IPAddress": "192.168.49.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "addons-539053",
	                        "889620d9c22c"
	                    ]
	                }
	            }
	        }
	    }
	]

-- /stdout --
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p addons-539053 -n addons-539053
helpers_test.go:244: <<< TestAddons/parallel/Registry FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestAddons/parallel/Registry]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p addons-539053 logs -n 25
helpers_test.go:252: TestAddons/parallel/Registry logs: 
-- stdout --
	
	==> Audit <==
	|---------|---------------------------------------------------------------------------------------------|------------------------|---------|---------|---------------------|---------------------|
	| Command |                                            Args                                             |        Profile         |  User   | Version |     Start Time      |      End Time       |
	|---------|---------------------------------------------------------------------------------------------|------------------------|---------|---------|---------------------|---------------------|
	| start   | --download-only -p                                                                          | download-docker-294705 | jenkins | v1.34.0 | 16 Sep 24 17:15 UTC |                     |
	|         | download-docker-294705                                                                      |                        |         |         |                     |                     |
	|         | --alsologtostderr                                                                           |                        |         |         |                     |                     |
	|         | --driver=docker                                                                             |                        |         |         |                     |                     |
	|         | --container-runtime=docker                                                                  |                        |         |         |                     |                     |
	| delete  | -p download-docker-294705                                                                   | download-docker-294705 | jenkins | v1.34.0 | 16 Sep 24 17:15 UTC | 16 Sep 24 17:15 UTC |
	| start   | --download-only -p                                                                          | binary-mirror-149473   | jenkins | v1.34.0 | 16 Sep 24 17:15 UTC |                     |
	|         | binary-mirror-149473                                                                        |                        |         |         |                     |                     |
	|         | --alsologtostderr                                                                           |                        |         |         |                     |                     |
	|         | --binary-mirror                                                                             |                        |         |         |                     |                     |
	|         | http://127.0.0.1:35485                                                                      |                        |         |         |                     |                     |
	|         | --driver=docker                                                                             |                        |         |         |                     |                     |
	|         | --container-runtime=docker                                                                  |                        |         |         |                     |                     |
	| delete  | -p binary-mirror-149473                                                                     | binary-mirror-149473   | jenkins | v1.34.0 | 16 Sep 24 17:15 UTC | 16 Sep 24 17:15 UTC |
	| addons  | disable dashboard -p                                                                        | addons-539053          | jenkins | v1.34.0 | 16 Sep 24 17:15 UTC |                     |
	|         | addons-539053                                                                               |                        |         |         |                     |                     |
	| addons  | enable dashboard -p                                                                         | addons-539053          | jenkins | v1.34.0 | 16 Sep 24 17:15 UTC |                     |
	|         | addons-539053                                                                               |                        |         |         |                     |                     |
	| start   | -p addons-539053 --wait=true                                                                | addons-539053          | jenkins | v1.34.0 | 16 Sep 24 17:15 UTC | 16 Sep 24 17:18 UTC |
	|         | --memory=4000 --alsologtostderr                                                             |                        |         |         |                     |                     |
	|         | --addons=registry                                                                           |                        |         |         |                     |                     |
	|         | --addons=metrics-server                                                                     |                        |         |         |                     |                     |
	|         | --addons=volumesnapshots                                                                    |                        |         |         |                     |                     |
	|         | --addons=csi-hostpath-driver                                                                |                        |         |         |                     |                     |
	|         | --addons=gcp-auth                                                                           |                        |         |         |                     |                     |
	|         | --addons=cloud-spanner                                                                      |                        |         |         |                     |                     |
	|         | --addons=inspektor-gadget                                                                   |                        |         |         |                     |                     |
	|         | --addons=storage-provisioner-rancher                                                        |                        |         |         |                     |                     |
	|         | --addons=nvidia-device-plugin                                                               |                        |         |         |                     |                     |
	|         | --addons=yakd --addons=volcano                                                              |                        |         |         |                     |                     |
	|         | --driver=docker                                                                             |                        |         |         |                     |                     |
	|         | --container-runtime=docker                                                                  |                        |         |         |                     |                     |
	|         | --addons=ingress                                                                            |                        |         |         |                     |                     |
	|         | --addons=ingress-dns                                                                        |                        |         |         |                     |                     |
	|         | --addons=helm-tiller                                                                        |                        |         |         |                     |                     |
	| addons  | addons-539053 addons disable                                                                | addons-539053          | jenkins | v1.34.0 | 16 Sep 24 17:19 UTC | 16 Sep 24 17:19 UTC |
	|         | volcano --alsologtostderr -v=1                                                              |                        |         |         |                     |                     |
	| addons  | addons-539053 addons                                                                        | addons-539053          | jenkins | v1.34.0 | 16 Sep 24 17:27 UTC | 16 Sep 24 17:27 UTC |
	|         | disable metrics-server                                                                      |                        |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                        |         |         |                     |                     |
	| ssh     | addons-539053 ssh cat                                                                       | addons-539053          | jenkins | v1.34.0 | 16 Sep 24 17:27 UTC | 16 Sep 24 17:27 UTC |
	|         | /opt/local-path-provisioner/pvc-1389ca84-3e21-4c35-b54d-991231b2f504_default_test-pvc/file1 |                        |         |         |                     |                     |
	| addons  | addons-539053 addons disable                                                                | addons-539053          | jenkins | v1.34.0 | 16 Sep 24 17:27 UTC | 16 Sep 24 17:28 UTC |
	|         | storage-provisioner-rancher                                                                 |                        |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                        |         |         |                     |                     |
	| addons  | addons-539053 addons disable                                                                | addons-539053          | jenkins | v1.34.0 | 16 Sep 24 17:27 UTC | 16 Sep 24 17:27 UTC |
	|         | helm-tiller --alsologtostderr                                                               |                        |         |         |                     |                     |
	|         | -v=1                                                                                        |                        |         |         |                     |                     |
	| addons  | disable cloud-spanner -p                                                                    | addons-539053          | jenkins | v1.34.0 | 16 Sep 24 17:27 UTC | 16 Sep 24 17:27 UTC |
	|         | addons-539053                                                                               |                        |         |         |                     |                     |
	| addons  | addons-539053 addons disable                                                                | addons-539053          | jenkins | v1.34.0 | 16 Sep 24 17:27 UTC | 16 Sep 24 17:28 UTC |
	|         | yakd --alsologtostderr -v=1                                                                 |                        |         |         |                     |                     |
	| addons  | disable nvidia-device-plugin                                                                | addons-539053          | jenkins | v1.34.0 | 16 Sep 24 17:28 UTC | 16 Sep 24 17:28 UTC |
	|         | -p addons-539053                                                                            |                        |         |         |                     |                     |
	| addons  | disable inspektor-gadget -p                                                                 | addons-539053          | jenkins | v1.34.0 | 16 Sep 24 17:28 UTC | 16 Sep 24 17:28 UTC |
	|         | addons-539053                                                                               |                        |         |         |                     |                     |
	| addons  | addons-539053 addons                                                                        | addons-539053          | jenkins | v1.34.0 | 16 Sep 24 17:28 UTC | 16 Sep 24 17:28 UTC |
	|         | disable csi-hostpath-driver                                                                 |                        |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                        |         |         |                     |                     |
	| addons  | enable headlamp                                                                             | addons-539053          | jenkins | v1.34.0 | 16 Sep 24 17:28 UTC | 16 Sep 24 17:28 UTC |
	|         | -p addons-539053                                                                            |                        |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                        |         |         |                     |                     |
	| addons  | addons-539053 addons                                                                        | addons-539053          | jenkins | v1.34.0 | 16 Sep 24 17:28 UTC | 16 Sep 24 17:28 UTC |
	|         | disable volumesnapshots                                                                     |                        |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                        |         |         |                     |                     |
	| ssh     | addons-539053 ssh curl -s                                                                   | addons-539053          | jenkins | v1.34.0 | 16 Sep 24 17:28 UTC | 16 Sep 24 17:28 UTC |
	|         | http://127.0.0.1/ -H 'Host:                                                                 |                        |         |         |                     |                     |
	|         | nginx.example.com'                                                                          |                        |         |         |                     |                     |
	| ip      | addons-539053 ip                                                                            | addons-539053          | jenkins | v1.34.0 | 16 Sep 24 17:28 UTC | 16 Sep 24 17:28 UTC |
	| addons  | addons-539053 addons disable                                                                | addons-539053          | jenkins | v1.34.0 | 16 Sep 24 17:28 UTC | 16 Sep 24 17:28 UTC |
	|         | ingress-dns --alsologtostderr                                                               |                        |         |         |                     |                     |
	|         | -v=1                                                                                        |                        |         |         |                     |                     |
	| addons  | addons-539053 addons disable                                                                | addons-539053          | jenkins | v1.34.0 | 16 Sep 24 17:28 UTC |                     |
	|         | ingress --alsologtostderr -v=1                                                              |                        |         |         |                     |                     |
	| ip      | addons-539053 ip                                                                            | addons-539053          | jenkins | v1.34.0 | 16 Sep 24 17:28 UTC | 16 Sep 24 17:28 UTC |
	| addons  | addons-539053 addons disable                                                                | addons-539053          | jenkins | v1.34.0 | 16 Sep 24 17:28 UTC | 16 Sep 24 17:28 UTC |
	|         | registry --alsologtostderr                                                                  |                        |         |         |                     |                     |
	|         | -v=1                                                                                        |                        |         |         |                     |                     |
	|---------|---------------------------------------------------------------------------------------------|------------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/09/16 17:15:17
	Running on machine: ubuntu-20-agent-5
	Binary: Built with gc go1.23.0 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0916 17:15:17.090766  114275 out.go:345] Setting OutFile to fd 1 ...
	I0916 17:15:17.090996  114275 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0916 17:15:17.091004  114275 out.go:358] Setting ErrFile to fd 2...
	I0916 17:15:17.091009  114275 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0916 17:15:17.091164  114275 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19649-105988/.minikube/bin
	I0916 17:15:17.091733  114275 out.go:352] Setting JSON to false
	I0916 17:15:17.092573  114275 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-5","uptime":3457,"bootTime":1726503460,"procs":172,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1068-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0916 17:15:17.092667  114275 start.go:139] virtualization: kvm guest
	I0916 17:15:17.094545  114275 out.go:177] * [addons-539053] minikube v1.34.0 on Ubuntu 20.04 (kvm/amd64)
	I0916 17:15:17.095682  114275 out.go:177]   - MINIKUBE_LOCATION=19649
	I0916 17:15:17.095695  114275 notify.go:220] Checking for updates...
	I0916 17:15:17.098006  114275 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0916 17:15:17.099226  114275 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19649-105988/kubeconfig
	I0916 17:15:17.100384  114275 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19649-105988/.minikube
	I0916 17:15:17.101644  114275 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0916 17:15:17.102862  114275 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0916 17:15:17.104133  114275 driver.go:394] Setting default libvirt URI to qemu:///system
	I0916 17:15:17.124455  114275 docker.go:123] docker version: linux-27.2.1:Docker Engine - Community
	I0916 17:15:17.124588  114275 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0916 17:15:17.170896  114275 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:28 OomKillDisable:true NGoroutines:46 SystemTime:2024-09-16 17:15:17.162568553 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1068-gcp OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33647943680 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-5 Labels:[] ExperimentalBuild:false ServerVersion:27.2.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:7f7fdf5fed64eb6a7caf99b3e12efcf9d60e311c Expected:7f7fdf5fed64eb6a7caf99b3e12efcf9d60e311c} RuncCommit:{ID:v1.1.14-0-g2c9f560 Expected:v1.1.14-0-g2c9f560} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.16.2] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.29.2] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I0916 17:15:17.171009  114275 docker.go:318] overlay module found
	I0916 17:15:17.172908  114275 out.go:177] * Using the docker driver based on user configuration
	I0916 17:15:17.174290  114275 start.go:297] selected driver: docker
	I0916 17:15:17.174305  114275 start.go:901] validating driver "docker" against <nil>
	I0916 17:15:17.174316  114275 start.go:912] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0916 17:15:17.175054  114275 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0916 17:15:17.219084  114275 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:28 OomKillDisable:true NGoroutines:46 SystemTime:2024-09-16 17:15:17.210650968 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1068-gcp OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33647943680 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-5 Labels:[] ExperimentalBuild:false ServerVersion:27.2.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:7f7fdf5fed64eb6a7caf99b3e12efcf9d60e311c Expected:7f7fdf5fed64eb6a7caf99b3e12efcf9d60e311c} RuncCommit:{ID:v1.1.14-0-g2c9f560 Expected:v1.1.14-0-g2c9f560} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.16.2] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.29.2] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I0916 17:15:17.219289  114275 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0916 17:15:17.219517  114275 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0916 17:15:17.221316  114275 out.go:177] * Using Docker driver with root privileges
	I0916 17:15:17.222628  114275 cni.go:84] Creating CNI manager for ""
	I0916 17:15:17.222687  114275 cni.go:158] "docker" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0916 17:15:17.222700  114275 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0916 17:15:17.222766  114275 start.go:340] cluster config:
	{Name:addons-539053 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726481311-19649@sha256:b5dfdcf7ad8b49233db09f1c58aaf52f6522fde64cf16c939b3fc45365d24cdc Memory:4000 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:addons-539053 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0916 17:15:17.224103  114275 out.go:177] * Starting "addons-539053" primary control-plane node in "addons-539053" cluster
	I0916 17:15:17.225181  114275 cache.go:121] Beginning downloading kic base image for docker with docker
	I0916 17:15:17.226408  114275 out.go:177] * Pulling base image v0.0.45-1726481311-19649 ...
	I0916 17:15:17.227644  114275 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime docker
	I0916 17:15:17.227668  114275 image.go:79] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726481311-19649@sha256:b5dfdcf7ad8b49233db09f1c58aaf52f6522fde64cf16c939b3fc45365d24cdc in local docker daemon
	I0916 17:15:17.227678  114275 preload.go:146] Found local preload: /home/jenkins/minikube-integration/19649-105988/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-docker-overlay2-amd64.tar.lz4
	I0916 17:15:17.227685  114275 cache.go:56] Caching tarball of preloaded images
	I0916 17:15:17.227772  114275 preload.go:172] Found /home/jenkins/minikube-integration/19649-105988/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I0916 17:15:17.227784  114275 cache.go:59] Finished verifying existence of preloaded tar for v1.31.1 on docker
	I0916 17:15:17.228171  114275 profile.go:143] Saving config to /home/jenkins/minikube-integration/19649-105988/.minikube/profiles/addons-539053/config.json ...
	I0916 17:15:17.228198  114275 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19649-105988/.minikube/profiles/addons-539053/config.json: {Name:mk2bb66488164ef9dcd50e32bedebb588655529e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0916 17:15:17.242358  114275 cache.go:149] Downloading gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726481311-19649@sha256:b5dfdcf7ad8b49233db09f1c58aaf52f6522fde64cf16c939b3fc45365d24cdc to local cache
	I0916 17:15:17.242460  114275 image.go:63] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726481311-19649@sha256:b5dfdcf7ad8b49233db09f1c58aaf52f6522fde64cf16c939b3fc45365d24cdc in local cache directory
	I0916 17:15:17.242473  114275 image.go:66] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726481311-19649@sha256:b5dfdcf7ad8b49233db09f1c58aaf52f6522fde64cf16c939b3fc45365d24cdc in local cache directory, skipping pull
	I0916 17:15:17.242477  114275 image.go:135] gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726481311-19649@sha256:b5dfdcf7ad8b49233db09f1c58aaf52f6522fde64cf16c939b3fc45365d24cdc exists in cache, skipping pull
	I0916 17:15:17.242484  114275 cache.go:152] successfully saved gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726481311-19649@sha256:b5dfdcf7ad8b49233db09f1c58aaf52f6522fde64cf16c939b3fc45365d24cdc as a tarball
	I0916 17:15:17.242491  114275 cache.go:162] Loading gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726481311-19649@sha256:b5dfdcf7ad8b49233db09f1c58aaf52f6522fde64cf16c939b3fc45365d24cdc from local cache
	I0916 17:15:29.445316  114275 cache.go:164] successfully loaded and using gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726481311-19649@sha256:b5dfdcf7ad8b49233db09f1c58aaf52f6522fde64cf16c939b3fc45365d24cdc from cached tarball
	I0916 17:15:29.445357  114275 cache.go:194] Successfully downloaded all kic artifacts
	I0916 17:15:29.445402  114275 start.go:360] acquireMachinesLock for addons-539053: {Name:mk0043d3a6bacbded59cc72569f5719de0510390 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0916 17:15:29.445511  114275 start.go:364] duration metric: took 87.72µs to acquireMachinesLock for "addons-539053"
	I0916 17:15:29.445569  114275 start.go:93] Provisioning new machine with config: &{Name:addons-539053 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726481311-19649@sha256:b5dfdcf7ad8b49233db09f1c58aaf52f6522fde64cf16c939b3fc45365d24cdc Memory:4000 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:addons-539053 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0916 17:15:29.445675  114275 start.go:125] createHost starting for "" (driver="docker")
	I0916 17:15:29.447439  114275 out.go:235] * Creating docker container (CPUs=2, Memory=4000MB) ...
	I0916 17:15:29.447714  114275 start.go:159] libmachine.API.Create for "addons-539053" (driver="docker")
	I0916 17:15:29.447752  114275 client.go:168] LocalClient.Create starting
	I0916 17:15:29.447844  114275 main.go:141] libmachine: Creating CA: /home/jenkins/minikube-integration/19649-105988/.minikube/certs/ca.pem
	I0916 17:15:29.655776  114275 main.go:141] libmachine: Creating client certificate: /home/jenkins/minikube-integration/19649-105988/.minikube/certs/cert.pem
	I0916 17:15:29.877092  114275 cli_runner.go:164] Run: docker network inspect addons-539053 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W0916 17:15:29.892920  114275 cli_runner.go:211] docker network inspect addons-539053 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I0916 17:15:29.893012  114275 network_create.go:284] running [docker network inspect addons-539053] to gather additional debugging logs...
	I0916 17:15:29.893037  114275 cli_runner.go:164] Run: docker network inspect addons-539053
	W0916 17:15:29.907959  114275 cli_runner.go:211] docker network inspect addons-539053 returned with exit code 1
	I0916 17:15:29.907995  114275 network_create.go:287] error running [docker network inspect addons-539053]: docker network inspect addons-539053: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network addons-539053 not found
	I0916 17:15:29.908013  114275 network_create.go:289] output of [docker network inspect addons-539053]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network addons-539053 not found
	
	** /stderr **
	I0916 17:15:29.908114  114275 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0916 17:15:29.924362  114275 network.go:206] using free private subnet 192.168.49.0/24: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc00050aa40}
	I0916 17:15:29.924427  114275 network_create.go:124] attempt to create docker network addons-539053 192.168.49.0/24 with gateway 192.168.49.1 and MTU of 1500 ...
	I0916 17:15:29.924489  114275 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.49.0/24 --gateway=192.168.49.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=addons-539053 addons-539053
	I0916 17:15:29.984627  114275 network_create.go:108] docker network addons-539053 192.168.49.0/24 created
	I0916 17:15:29.984668  114275 kic.go:121] calculated static IP "192.168.49.2" for the "addons-539053" container
	I0916 17:15:29.984738  114275 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I0916 17:15:29.999220  114275 cli_runner.go:164] Run: docker volume create addons-539053 --label name.minikube.sigs.k8s.io=addons-539053 --label created_by.minikube.sigs.k8s.io=true
	I0916 17:15:30.015617  114275 oci.go:103] Successfully created a docker volume addons-539053
	I0916 17:15:30.015684  114275 cli_runner.go:164] Run: docker run --rm --name addons-539053-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=addons-539053 --entrypoint /usr/bin/test -v addons-539053:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726481311-19649@sha256:b5dfdcf7ad8b49233db09f1c58aaf52f6522fde64cf16c939b3fc45365d24cdc -d /var/lib
	I0916 17:15:34.708232  114275 cli_runner.go:217] Completed: docker run --rm --name addons-539053-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=addons-539053 --entrypoint /usr/bin/test -v addons-539053:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726481311-19649@sha256:b5dfdcf7ad8b49233db09f1c58aaf52f6522fde64cf16c939b3fc45365d24cdc -d /var/lib: (4.692499232s)
	I0916 17:15:34.708261  114275 oci.go:107] Successfully prepared a docker volume addons-539053
	I0916 17:15:34.708287  114275 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime docker
	I0916 17:15:34.708316  114275 kic.go:194] Starting extracting preloaded images to volume ...
	I0916 17:15:34.708382  114275 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/19649-105988/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-docker-overlay2-amd64.tar.lz4:/preloaded.tar:ro -v addons-539053:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726481311-19649@sha256:b5dfdcf7ad8b49233db09f1c58aaf52f6522fde64cf16c939b3fc45365d24cdc -I lz4 -xf /preloaded.tar -C /extractDir
	I0916 17:15:38.643736  114275 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/19649-105988/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-docker-overlay2-amd64.tar.lz4:/preloaded.tar:ro -v addons-539053:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726481311-19649@sha256:b5dfdcf7ad8b49233db09f1c58aaf52f6522fde64cf16c939b3fc45365d24cdc -I lz4 -xf /preloaded.tar -C /extractDir: (3.935300228s)
	I0916 17:15:38.643778  114275 kic.go:203] duration metric: took 3.935457007s to extract preloaded images to volume ...
	W0916 17:15:38.643930  114275 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	I0916 17:15:38.644070  114275 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I0916 17:15:38.686917  114275 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname addons-539053 --name addons-539053 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=addons-539053 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=addons-539053 --network addons-539053 --ip 192.168.49.2 --volume addons-539053:/var --security-opt apparmor=unconfined --memory=4000mb --cpus=2 -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726481311-19649@sha256:b5dfdcf7ad8b49233db09f1c58aaf52f6522fde64cf16c939b3fc45365d24cdc
	I0916 17:15:38.991613  114275 cli_runner.go:164] Run: docker container inspect addons-539053 --format={{.State.Running}}
	I0916 17:15:39.008107  114275 cli_runner.go:164] Run: docker container inspect addons-539053 --format={{.State.Status}}
	I0916 17:15:39.024963  114275 cli_runner.go:164] Run: docker exec addons-539053 stat /var/lib/dpkg/alternatives/iptables
	I0916 17:15:39.065071  114275 oci.go:144] the created container "addons-539053" has a running status.
	I0916 17:15:39.065110  114275 kic.go:225] Creating ssh key for kic: /home/jenkins/minikube-integration/19649-105988/.minikube/machines/addons-539053/id_rsa...
	I0916 17:15:39.249516  114275 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/19649-105988/.minikube/machines/addons-539053/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I0916 17:15:39.271435  114275 cli_runner.go:164] Run: docker container inspect addons-539053 --format={{.State.Status}}
	I0916 17:15:39.289444  114275 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I0916 17:15:39.289467  114275 kic_runner.go:114] Args: [docker exec --privileged addons-539053 chown docker:docker /home/docker/.ssh/authorized_keys]
	I0916 17:15:39.362118  114275 cli_runner.go:164] Run: docker container inspect addons-539053 --format={{.State.Status}}
	I0916 17:15:39.380718  114275 machine.go:93] provisionDockerMachine start ...
	I0916 17:15:39.380814  114275 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-539053
	I0916 17:15:39.398104  114275 main.go:141] libmachine: Using SSH client type: native
	I0916 17:15:39.398318  114275 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x86c560] 0x86f240 <nil>  [] 0s} 127.0.0.1 32768 <nil> <nil>}
	I0916 17:15:39.398332  114275 main.go:141] libmachine: About to run SSH command:
	hostname
	I0916 17:15:39.593218  114275 main.go:141] libmachine: SSH cmd err, output: <nil>: addons-539053
	
	I0916 17:15:39.593253  114275 ubuntu.go:169] provisioning hostname "addons-539053"
	I0916 17:15:39.593328  114275 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-539053
	I0916 17:15:39.610327  114275 main.go:141] libmachine: Using SSH client type: native
	I0916 17:15:39.610544  114275 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x86c560] 0x86f240 <nil>  [] 0s} 127.0.0.1 32768 <nil> <nil>}
	I0916 17:15:39.610561  114275 main.go:141] libmachine: About to run SSH command:
	sudo hostname addons-539053 && echo "addons-539053" | sudo tee /etc/hostname
	I0916 17:15:39.739666  114275 main.go:141] libmachine: SSH cmd err, output: <nil>: addons-539053
	
	I0916 17:15:39.739736  114275 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-539053
	I0916 17:15:39.755442  114275 main.go:141] libmachine: Using SSH client type: native
	I0916 17:15:39.755614  114275 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x86c560] 0x86f240 <nil>  [] 0s} 127.0.0.1 32768 <nil> <nil>}
	I0916 17:15:39.755631  114275 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\saddons-539053' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 addons-539053/g' /etc/hosts;
				else 
					echo '127.0.1.1 addons-539053' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0916 17:15:39.873921  114275 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0916 17:15:39.873951  114275 ubuntu.go:175] set auth options {CertDir:/home/jenkins/minikube-integration/19649-105988/.minikube CaCertPath:/home/jenkins/minikube-integration/19649-105988/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19649-105988/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19649-105988/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19649-105988/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19649-105988/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19649-105988/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19649-105988/.minikube}
	I0916 17:15:39.874010  114275 ubuntu.go:177] setting up certificates
	I0916 17:15:39.874022  114275 provision.go:84] configureAuth start
	I0916 17:15:39.874109  114275 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" addons-539053
	I0916 17:15:39.889215  114275 provision.go:143] copyHostCerts
	I0916 17:15:39.889291  114275 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19649-105988/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19649-105988/.minikube/cert.pem (1123 bytes)
	I0916 17:15:39.889407  114275 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19649-105988/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19649-105988/.minikube/key.pem (1675 bytes)
	I0916 17:15:39.889476  114275 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19649-105988/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19649-105988/.minikube/ca.pem (1078 bytes)
	I0916 17:15:39.889538  114275 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19649-105988/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19649-105988/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19649-105988/.minikube/certs/ca-key.pem org=jenkins.addons-539053 san=[127.0.0.1 192.168.49.2 addons-539053 localhost minikube]
	I0916 17:15:40.095398  114275 provision.go:177] copyRemoteCerts
	I0916 17:15:40.095478  114275 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0916 17:15:40.095523  114275 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-539053
	I0916 17:15:40.111120  114275 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/19649-105988/.minikube/machines/addons-539053/id_rsa Username:docker}
	I0916 17:15:40.198037  114275 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19649-105988/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I0916 17:15:40.218857  114275 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19649-105988/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0916 17:15:40.238955  114275 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19649-105988/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0916 17:15:40.259024  114275 provision.go:87] duration metric: took 384.985147ms to configureAuth
	I0916 17:15:40.259053  114275 ubuntu.go:193] setting minikube options for container-runtime
	I0916 17:15:40.259222  114275 config.go:182] Loaded profile config "addons-539053": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I0916 17:15:40.259317  114275 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-539053
	I0916 17:15:40.275037  114275 main.go:141] libmachine: Using SSH client type: native
	I0916 17:15:40.275201  114275 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x86c560] 0x86f240 <nil>  [] 0s} 127.0.0.1 32768 <nil> <nil>}
	I0916 17:15:40.275213  114275 main.go:141] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I0916 17:15:40.394040  114275 main.go:141] libmachine: SSH cmd err, output: <nil>: overlay
	
	I0916 17:15:40.394079  114275 ubuntu.go:71] root file system type: overlay
	I0916 17:15:40.394197  114275 provision.go:314] Updating docker unit: /lib/systemd/system/docker.service ...
	I0916 17:15:40.394252  114275 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-539053
	I0916 17:15:40.409896  114275 main.go:141] libmachine: Using SSH client type: native
	I0916 17:15:40.410088  114275 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x86c560] 0x86f240 <nil>  [] 0s} 127.0.0.1 32768 <nil> <nil>}
	I0916 17:15:40.410149  114275 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %s "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	BindsTo=containerd.service
	After=network-online.target firewalld.service containerd.service
	Wants=network-online.target
	Requires=docker.socket
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I0916 17:15:40.540066  114275 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	BindsTo=containerd.service
	After=network-online.target firewalld.service containerd.service
	Wants=network-online.target
	Requires=docker.socket
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I0916 17:15:40.540164  114275 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-539053
	I0916 17:15:40.556522  114275 main.go:141] libmachine: Using SSH client type: native
	I0916 17:15:40.556696  114275 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x86c560] 0x86f240 <nil>  [] 0s} 127.0.0.1 32768 <nil> <nil>}
	I0916 17:15:40.556717  114275 main.go:141] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I0916 17:15:41.234799  114275 main.go:141] libmachine: SSH cmd err, output: <nil>: --- /lib/systemd/system/docker.service	2024-09-06 12:06:41.000000000 +0000
	+++ /lib/systemd/system/docker.service.new	2024-09-16 17:15:40.536278613 +0000
	@@ -1,46 +1,49 @@
	 [Unit]
	 Description=Docker Application Container Engine
	 Documentation=https://docs.docker.com
	-After=network-online.target docker.socket firewalld.service containerd.service time-set.target
	-Wants=network-online.target containerd.service
	+BindsTo=containerd.service
	+After=network-online.target firewalld.service containerd.service
	+Wants=network-online.target
	 Requires=docker.socket
	+StartLimitBurst=3
	+StartLimitIntervalSec=60
	 
	 [Service]
	 Type=notify
	-# the default is not to use systemd for cgroups because the delegate issues still
	-# exists and systemd currently does not support the cgroup feature set required
	-# for containers run by docker
	-ExecStart=/usr/bin/dockerd -H fd:// --containerd=/run/containerd/containerd.sock
	-ExecReload=/bin/kill -s HUP $MAINPID
	-TimeoutStartSec=0
	-RestartSec=2
	-Restart=always
	+Restart=on-failure
	 
	-# Note that StartLimit* options were moved from "Service" to "Unit" in systemd 229.
	-# Both the old, and new location are accepted by systemd 229 and up, so using the old location
	-# to make them work for either version of systemd.
	-StartLimitBurst=3
	 
	-# Note that StartLimitInterval was renamed to StartLimitIntervalSec in systemd 230.
	-# Both the old, and new name are accepted by systemd 230 and up, so using the old name to make
	-# this option work for either version of systemd.
	-StartLimitInterval=60s
	+
	+# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	+# The base configuration already specifies an 'ExecStart=...' command. The first directive
	+# here is to clear out that command inherited from the base configuration. Without this,
	+# the command from the base configuration and the command specified here are treated as
	+# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	+# will catch this invalid input and refuse to start the service with an error like:
	+#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	+
	+# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	+# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	+ExecStart=
	+ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	+ExecReload=/bin/kill -s HUP $MAINPID
	 
	 # Having non-zero Limit*s causes performance problems due to accounting overhead
	 # in the kernel. We recommend using cgroups to do container-local accounting.
	+LimitNOFILE=infinity
	 LimitNPROC=infinity
	 LimitCORE=infinity
	 
	-# Comment TasksMax if your systemd version does not support it.
	-# Only systemd 226 and above support this option.
	+# Uncomment TasksMax if your systemd version supports it.
	+# Only systemd 226 and above support this version.
	 TasksMax=infinity
	+TimeoutStartSec=0
	 
	 # set delegate yes so that systemd does not reset the cgroups of docker containers
	 Delegate=yes
	 
	 # kill only the docker process, not all processes in the cgroup
	 KillMode=process
	-OOMScoreAdjust=-500
	 
	 [Install]
	 WantedBy=multi-user.target
	Synchronizing state of docker.service with SysV service script with /lib/systemd/systemd-sysv-install.
	Executing: /lib/systemd/systemd-sysv-install enable docker
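	The SSH command above uses an idempotent "compare, then swap" pattern: `diff -u current new || { mv …; restart; }` only installs the rendered unit file (and restarts Docker) when it actually differs. A minimal sketch of that pattern, using placeholder temp files rather than the real `/lib/systemd/system` paths:

```shell
# Install a candidate config file only if it differs from the current one.
# The service restart is elided; a real provisioner would follow the mv with
# `systemctl daemon-reload && systemctl restart docker` as in the log.
update_if_changed() {
  current="$1"; candidate="$2"
  if diff -u "$current" "$candidate"; then
    rm -f "$candidate"           # identical: discard the candidate, no restart
    return 1
  else
    mv "$candidate" "$current"   # changed: install the new version
    return 0
  fi
}

workdir="$(mktemp -d)"
printf 'Restart=always\n' > "$workdir/docker.service"
printf 'Restart=on-failure\n' > "$workdir/docker.service.new"
update_if_changed "$workdir/docker.service" "$workdir/docker.service.new"
cat "$workdir/docker.service"
```

	Because `diff` exits non-zero on any difference, the replacement branch runs exactly when the rendered unit changed, which is why repeated provisioning runs do not restart Docker needlessly.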
	
	I0916 17:15:41.234831  114275 machine.go:96] duration metric: took 1.854093128s to provisionDockerMachine
	I0916 17:15:41.234843  114275 client.go:171] duration metric: took 11.787082498s to LocalClient.Create
	I0916 17:15:41.234858  114275 start.go:167] duration metric: took 11.787148132s to libmachine.API.Create "addons-539053"
	I0916 17:15:41.234866  114275 start.go:293] postStartSetup for "addons-539053" (driver="docker")
	I0916 17:15:41.234879  114275 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0916 17:15:41.234948  114275 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0916 17:15:41.235003  114275 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-539053
	I0916 17:15:41.250233  114275 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/19649-105988/.minikube/machines/addons-539053/id_rsa Username:docker}
	I0916 17:15:41.338422  114275 ssh_runner.go:195] Run: cat /etc/os-release
	I0916 17:15:41.341234  114275 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I0916 17:15:41.341265  114275 main.go:141] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
	I0916 17:15:41.341272  114275 main.go:141] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
	I0916 17:15:41.341279  114275 info.go:137] Remote host: Ubuntu 22.04.4 LTS
	I0916 17:15:41.341289  114275 filesync.go:126] Scanning /home/jenkins/minikube-integration/19649-105988/.minikube/addons for local assets ...
	I0916 17:15:41.341344  114275 filesync.go:126] Scanning /home/jenkins/minikube-integration/19649-105988/.minikube/files for local assets ...
	I0916 17:15:41.341366  114275 start.go:296] duration metric: took 106.493761ms for postStartSetup
	I0916 17:15:41.341620  114275 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" addons-539053
	I0916 17:15:41.357158  114275 profile.go:143] Saving config to /home/jenkins/minikube-integration/19649-105988/.minikube/profiles/addons-539053/config.json ...
	I0916 17:15:41.357388  114275 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0916 17:15:41.357426  114275 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-539053
	I0916 17:15:41.373021  114275 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/19649-105988/.minikube/machines/addons-539053/id_rsa Username:docker}
	I0916 17:15:41.458463  114275 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0916 17:15:41.462248  114275 start.go:128] duration metric: took 12.016554424s to createHost
	I0916 17:15:41.462276  114275 start.go:83] releasing machines lock for "addons-539053", held for 12.016750633s
	I0916 17:15:41.462345  114275 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" addons-539053
	I0916 17:15:41.478248  114275 ssh_runner.go:195] Run: cat /version.json
	I0916 17:15:41.478265  114275 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0916 17:15:41.478311  114275 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-539053
	I0916 17:15:41.478338  114275 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-539053
	I0916 17:15:41.494965  114275 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/19649-105988/.minikube/machines/addons-539053/id_rsa Username:docker}
	I0916 17:15:41.496160  114275 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/19649-105988/.minikube/machines/addons-539053/id_rsa Username:docker}
	I0916 17:15:41.581676  114275 ssh_runner.go:195] Run: systemctl --version
	I0916 17:15:41.654494  114275 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I0916 17:15:41.658667  114275 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *loopback.conf* -not -name *.mk_disabled -exec sh -c "grep -q loopback {} && ( grep -q name {} || sudo sed -i '/"type": "loopback"/i \ \ \ \ "name": "loopback",' {} ) && sudo sed -i 's|"cniVersion": ".*"|"cniVersion": "1.0.0"|g' {}" ;
	I0916 17:15:41.680057  114275 cni.go:230] loopback cni configuration patched: "/etc/cni/net.d/*loopback.conf*" found
	I0916 17:15:41.680126  114275 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0916 17:15:41.703644  114275 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist, /etc/cni/net.d/100-crio-bridge.conf] bridge cni config(s)
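	The loopback CNI patch logged above inserts a missing `"name"` field and pins `cniVersion` to 1.0.0. A simplified sketch of the same two `sed` edits, run against a temp file instead of `/etc/cni/net.d` (the insert syntax with escaped spaces is copied from the command in the log):

```shell
# Patch a loopback CNI conf: add a "name" field if absent, pin cniVersion.
conf="$(mktemp)"
cat > "$conf" <<'EOF'
{
    "cniVersion": "0.3.1",
    "type": "loopback"
}
EOF

# Insert `"name": "loopback",` before the "type" line unless one exists already.
grep -q '"name"' "$conf" || \
  sed -i '/"type": "loopback"/i \ \ \ \ "name": "loopback",' "$conf"
# Pin the CNI spec version.
sed -i 's|"cniVersion": ".*"|"cniVersion": "1.0.0"|' "$conf"
cat "$conf"
```

	The `grep -q … ||` guard makes the patch idempotent: re-running it on an already-patched conf leaves the file unchanged.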
	I0916 17:15:41.703674  114275 start.go:495] detecting cgroup driver to use...
	I0916 17:15:41.703709  114275 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I0916 17:15:41.703849  114275 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0916 17:15:41.717531  114275 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.10"|' /etc/containerd/config.toml"
	I0916 17:15:41.725781  114275 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0916 17:15:41.734006  114275 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I0916 17:15:41.734060  114275 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I0916 17:15:41.742226  114275 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0916 17:15:41.750301  114275 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0916 17:15:41.758277  114275 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0916 17:15:41.766987  114275 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0916 17:15:41.774622  114275 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I0916 17:15:41.782684  114275 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I0916 17:15:41.791199  114275 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
	I0916 17:15:41.799345  114275 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0916 17:15:41.806310  114275 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0916 17:15:41.813584  114275 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0916 17:15:41.885354  114275 ssh_runner.go:195] Run: sudo systemctl restart containerd
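	The containerd reconfiguration above is a series of in-place `sed` rewrites of `/etc/containerd/config.toml`. The key one for the detected "cgroupfs" driver is flipping `SystemdCgroup`; a sketch of that single edit against a sample file (not the real config):

```shell
# Flip SystemdCgroup in a containerd config fragment to match a host that
# uses the cgroupfs driver, preserving the line's indentation via \1.
cfg="$(mktemp)"
cat > "$cfg" <<'EOF'
[plugins."io.containerd.grpc.v1.cri".containerd.runtimes.runc.options]
  SystemdCgroup = true
EOF

sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' "$cfg"
cat "$cfg"
```

	The captured leading whitespace (`\1`) keeps the TOML indentation intact, which matters because the key lives inside a nested table.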
	I0916 17:15:41.960023  114275 start.go:495] detecting cgroup driver to use...
	I0916 17:15:41.960067  114275 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I0916 17:15:41.960117  114275 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I0916 17:15:41.971072  114275 cruntime.go:279] skipping containerd shutdown because we are bound to it
	I0916 17:15:41.971146  114275 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0916 17:15:41.981404  114275 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0916 17:15:41.995814  114275 ssh_runner.go:195] Run: which cri-dockerd
	I0916 17:15:41.999199  114275 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I0916 17:15:42.007331  114275 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (190 bytes)
	I0916 17:15:42.023272  114275 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I0916 17:15:42.112084  114275 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I0916 17:15:42.198006  114275 docker.go:574] configuring docker to use "cgroupfs" as cgroup driver...
	I0916 17:15:42.198168  114275 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (130 bytes)
	I0916 17:15:42.214915  114275 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0916 17:15:42.291451  114275 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0916 17:15:42.531066  114275 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.socket
	I0916 17:15:42.541254  114275 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0916 17:15:42.551521  114275 ssh_runner.go:195] Run: sudo systemctl unmask cri-docker.socket
	I0916 17:15:42.625587  114275 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I0916 17:15:42.701692  114275 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0916 17:15:42.782106  114275 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.socket
	I0916 17:15:42.793767  114275 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0916 17:15:42.803012  114275 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0916 17:15:42.882179  114275 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.service
	I0916 17:15:42.938157  114275 start.go:542] Will wait 60s for socket path /var/run/cri-dockerd.sock
	I0916 17:15:42.938255  114275 ssh_runner.go:195] Run: stat /var/run/cri-dockerd.sock
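	The "Will wait 60s for socket path" step above amounts to polling `stat` on `/var/run/cri-dockerd.sock` until it appears or a deadline passes. A hedged shell sketch of that loop, demonstrated on a temp file created by a background job rather than the real socket:

```shell
# Poll for a path with stat, once per second, up to a deadline in seconds.
wait_for_path() {
  path="$1"; deadline="${2:-60}"
  i=0
  while [ "$i" -lt "$deadline" ]; do
    stat "$path" >/dev/null 2>&1 && return 0
    sleep 1; i=$((i + 1))
  done
  return 1   # deadline exceeded
}

target="$(mktemp -u)"              # a path that does not exist yet
( sleep 1; touch "$target" ) &     # simulate the socket appearing later
wait_for_path "$target" 5 && echo "ready: $target"
```

	minikube's actual implementation is in Go, but the contract is the same: succeed as soon as the path is stat-able, fail after the timeout.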
	I0916 17:15:42.941880  114275 start.go:563] Will wait 60s for crictl version
	I0916 17:15:42.941924  114275 ssh_runner.go:195] Run: which crictl
	I0916 17:15:42.945244  114275 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0916 17:15:42.976942  114275 start.go:579] Version:  0.1.0
	RuntimeName:  docker
	RuntimeVersion:  27.2.1
	RuntimeApiVersion:  v1
	I0916 17:15:42.977011  114275 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0916 17:15:42.998198  114275 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0916 17:15:43.021935  114275 out.go:235] * Preparing Kubernetes v1.31.1 on Docker 27.2.1 ...
	I0916 17:15:43.022007  114275 cli_runner.go:164] Run: docker network inspect addons-539053 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0916 17:15:43.036769  114275 ssh_runner.go:195] Run: grep 192.168.49.1	host.minikube.internal$ /etc/hosts
	I0916 17:15:43.040009  114275 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.49.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
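	The `/etc/hosts` update above is another idempotent rewrite: strip any stale line for the name, then append the fresh mapping, so repeated runs never duplicate the entry. A sketch operating on a temp copy instead of the real `/etc/hosts`:

```shell
# Idempotently pin host.minikube.internal in a hosts file: drop any existing
# entry for the name, then append the current mapping (tab-separated, as in
# the log's $'\t' pattern).
hosts="$(mktemp)"
printf '127.0.0.1\tlocalhost\n192.168.49.1\thost.minikube.internal\n' > "$hosts"

{ grep -v $'\thost.minikube.internal$' "$hosts"
  printf '192.168.49.1\thost.minikube.internal\n'; } > "$hosts.new"
mv "$hosts.new" "$hosts"
cat "$hosts"
```

	Writing to a temp file and then replacing the original (the log uses `/tmp/h.$$` plus `sudo cp`) avoids truncating the hosts file mid-read.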
	I0916 17:15:43.049524  114275 kubeadm.go:883] updating cluster {Name:addons-539053 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726481311-19649@sha256:b5dfdcf7ad8b49233db09f1c58aaf52f6522fde64cf16c939b3fc45365d24cdc Memory:4000 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:addons-539053 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0916 17:15:43.049633  114275 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime docker
	I0916 17:15:43.049679  114275 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0916 17:15:43.067154  114275 docker.go:685] Got preloaded images: -- stdout --
	registry.k8s.io/kube-apiserver:v1.31.1
	registry.k8s.io/kube-controller-manager:v1.31.1
	registry.k8s.io/kube-scheduler:v1.31.1
	registry.k8s.io/kube-proxy:v1.31.1
	registry.k8s.io/coredns/coredns:v1.11.3
	registry.k8s.io/etcd:3.5.15-0
	registry.k8s.io/pause:3.10
	gcr.io/k8s-minikube/storage-provisioner:v5
	
	-- /stdout --
	I0916 17:15:43.067174  114275 docker.go:615] Images already preloaded, skipping extraction
	I0916 17:15:43.067253  114275 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0916 17:15:43.085531  114275 docker.go:685] Got preloaded images: -- stdout --
	registry.k8s.io/kube-apiserver:v1.31.1
	registry.k8s.io/kube-controller-manager:v1.31.1
	registry.k8s.io/kube-scheduler:v1.31.1
	registry.k8s.io/kube-proxy:v1.31.1
	registry.k8s.io/coredns/coredns:v1.11.3
	registry.k8s.io/etcd:3.5.15-0
	registry.k8s.io/pause:3.10
	gcr.io/k8s-minikube/storage-provisioner:v5
	
	-- /stdout --
	I0916 17:15:43.085560  114275 cache_images.go:84] Images are preloaded, skipping loading
	I0916 17:15:43.085573  114275 kubeadm.go:934] updating node { 192.168.49.2 8443 v1.31.1 docker true true} ...
	I0916 17:15:43.085675  114275 kubeadm.go:946] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=addons-539053 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.49.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.1 ClusterName:addons-539053 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0916 17:15:43.085731  114275 ssh_runner.go:195] Run: docker info --format {{.CgroupDriver}}
	I0916 17:15:43.128103  114275 cni.go:84] Creating CNI manager for ""
	I0916 17:15:43.128131  114275 cni.go:158] "docker" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0916 17:15:43.128144  114275 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0916 17:15:43.128161  114275 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.49.2 APIServerPort:8443 KubernetesVersion:v1.31.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:addons-539053 NodeName:addons-539053 DNSDomain:cluster.local CRISocket:/var/run/cri-dockerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.49.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.49.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/cri-dockerd.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0916 17:15:43.128286  114275 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.49.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/cri-dockerd.sock
	  name: "addons-539053"
	  kubeletExtraArgs:
	    node-ip: 192.168.49.2
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.49.2"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.31.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/cri-dockerd.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0916 17:15:43.128340  114275 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.1
	I0916 17:15:43.136325  114275 binaries.go:44] Found k8s binaries, skipping transfer
	I0916 17:15:43.136385  114275 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0916 17:15:43.144805  114275 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (312 bytes)
	I0916 17:15:43.160973  114275 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0916 17:15:43.176775  114275 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2155 bytes)
	I0916 17:15:43.191892  114275 ssh_runner.go:195] Run: grep 192.168.49.2	control-plane.minikube.internal$ /etc/hosts
	I0916 17:15:43.194808  114275 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.49.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0916 17:15:43.204082  114275 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0916 17:15:43.279024  114275 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0916 17:15:43.290750  114275 certs.go:68] Setting up /home/jenkins/minikube-integration/19649-105988/.minikube/profiles/addons-539053 for IP: 192.168.49.2
	I0916 17:15:43.290784  114275 certs.go:194] generating shared ca certs ...
	I0916 17:15:43.290808  114275 certs.go:226] acquiring lock for ca certs: {Name:mk8d7403e6a7d2260afa4bf6d78cd24d9849ff20 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0916 17:15:43.290932  114275 certs.go:240] generating "minikubeCA" ca cert: /home/jenkins/minikube-integration/19649-105988/.minikube/ca.key
	I0916 17:15:43.455574  114275 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19649-105988/.minikube/ca.crt ...
	I0916 17:15:43.455603  114275 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19649-105988/.minikube/ca.crt: {Name:mk5421114da3e7f83dc89907491e68c2f01dfa63 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0916 17:15:43.455767  114275 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19649-105988/.minikube/ca.key ...
	I0916 17:15:43.455779  114275 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19649-105988/.minikube/ca.key: {Name:mkdae9af3ae3b70f7f6d4ebd123324c2137abdc2 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0916 17:15:43.455845  114275 certs.go:240] generating "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19649-105988/.minikube/proxy-client-ca.key
	I0916 17:15:43.533984  114275 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19649-105988/.minikube/proxy-client-ca.crt ...
	I0916 17:15:43.534013  114275 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19649-105988/.minikube/proxy-client-ca.crt: {Name:mk5c12e1021b71df2d793aadf8552022c180ce5a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0916 17:15:43.534204  114275 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19649-105988/.minikube/proxy-client-ca.key ...
	I0916 17:15:43.534222  114275 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19649-105988/.minikube/proxy-client-ca.key: {Name:mk56ead1198144698658ff79b33e0e7c5d5c340d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0916 17:15:43.534290  114275 certs.go:256] generating profile certs ...
	I0916 17:15:43.534344  114275 certs.go:363] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/19649-105988/.minikube/profiles/addons-539053/client.key
	I0916 17:15:43.534365  114275 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19649-105988/.minikube/profiles/addons-539053/client.crt with IP's: []
	I0916 17:15:43.667870  114275 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19649-105988/.minikube/profiles/addons-539053/client.crt ...
	I0916 17:15:43.667903  114275 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19649-105988/.minikube/profiles/addons-539053/client.crt: {Name:mkef3e8be6dbea07877b7cbc06795313c4b44cb4 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0916 17:15:43.668067  114275 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19649-105988/.minikube/profiles/addons-539053/client.key ...
	I0916 17:15:43.668079  114275 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19649-105988/.minikube/profiles/addons-539053/client.key: {Name:mkd134fec679436d855d1dbaa0ff9a6b3557b4d7 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0916 17:15:43.668147  114275 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/19649-105988/.minikube/profiles/addons-539053/apiserver.key.54337ae5
	I0916 17:15:43.668165  114275 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19649-105988/.minikube/profiles/addons-539053/apiserver.crt.54337ae5 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.49.2]
	I0916 17:15:43.750084  114275 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19649-105988/.minikube/profiles/addons-539053/apiserver.crt.54337ae5 ...
	I0916 17:15:43.750114  114275 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19649-105988/.minikube/profiles/addons-539053/apiserver.crt.54337ae5: {Name:mka65a38a6c0b04201361159c9235ae7d0d926fd Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0916 17:15:43.750269  114275 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19649-105988/.minikube/profiles/addons-539053/apiserver.key.54337ae5 ...
	I0916 17:15:43.750284  114275 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19649-105988/.minikube/profiles/addons-539053/apiserver.key.54337ae5: {Name:mk8f6e3b9f7146e29b844a2b2482fd0196539177 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0916 17:15:43.750351  114275 certs.go:381] copying /home/jenkins/minikube-integration/19649-105988/.minikube/profiles/addons-539053/apiserver.crt.54337ae5 -> /home/jenkins/minikube-integration/19649-105988/.minikube/profiles/addons-539053/apiserver.crt
	I0916 17:15:43.750428  114275 certs.go:385] copying /home/jenkins/minikube-integration/19649-105988/.minikube/profiles/addons-539053/apiserver.key.54337ae5 -> /home/jenkins/minikube-integration/19649-105988/.minikube/profiles/addons-539053/apiserver.key
	I0916 17:15:43.750472  114275 certs.go:363] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/19649-105988/.minikube/profiles/addons-539053/proxy-client.key
	I0916 17:15:43.750492  114275 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19649-105988/.minikube/profiles/addons-539053/proxy-client.crt with IP's: []
	I0916 17:15:43.930096  114275 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19649-105988/.minikube/profiles/addons-539053/proxy-client.crt ...
	I0916 17:15:43.930129  114275 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19649-105988/.minikube/profiles/addons-539053/proxy-client.crt: {Name:mk656ff76055d13f8e411a076941f52e4cc3cdba Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0916 17:15:43.930318  114275 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19649-105988/.minikube/profiles/addons-539053/proxy-client.key ...
	I0916 17:15:43.930336  114275 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19649-105988/.minikube/profiles/addons-539053/proxy-client.key: {Name:mka75d9bfe13d475c775e973b891986cf6d1c9a8 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0916 17:15:43.930532  114275 certs.go:484] found cert: /home/jenkins/minikube-integration/19649-105988/.minikube/certs/ca-key.pem (1679 bytes)
	I0916 17:15:43.930572  114275 certs.go:484] found cert: /home/jenkins/minikube-integration/19649-105988/.minikube/certs/ca.pem (1078 bytes)
	I0916 17:15:43.930593  114275 certs.go:484] found cert: /home/jenkins/minikube-integration/19649-105988/.minikube/certs/cert.pem (1123 bytes)
	I0916 17:15:43.930613  114275 certs.go:484] found cert: /home/jenkins/minikube-integration/19649-105988/.minikube/certs/key.pem (1675 bytes)
	I0916 17:15:43.931250  114275 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19649-105988/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0916 17:15:43.952700  114275 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19649-105988/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0916 17:15:43.972419  114275 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19649-105988/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0916 17:15:43.992075  114275 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19649-105988/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0916 17:15:44.012068  114275 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19649-105988/.minikube/profiles/addons-539053/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1419 bytes)
	I0916 17:15:44.031879  114275 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19649-105988/.minikube/profiles/addons-539053/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0916 17:15:44.051587  114275 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19649-105988/.minikube/profiles/addons-539053/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0916 17:15:44.071246  114275 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19649-105988/.minikube/profiles/addons-539053/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0916 17:15:44.090512  114275 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19649-105988/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0916 17:15:44.113504  114275 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0916 17:15:44.130398  114275 ssh_runner.go:195] Run: openssl version
	I0916 17:15:44.135421  114275 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0916 17:15:44.143645  114275 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0916 17:15:44.147073  114275 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Sep 16 17:15 /usr/share/ca-certificates/minikubeCA.pem
	I0916 17:15:44.147126  114275 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0916 17:15:44.153040  114275 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0916 17:15:44.160928  114275 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0916 17:15:44.163747  114275 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0916 17:15:44.163796  114275 kubeadm.go:392] StartCluster: {Name:addons-539053 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726481311-19649@sha256:b5dfdcf7ad8b49233db09f1c58aaf52f6522fde64cf16c939b3fc45365d24cdc Memory:4000 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:addons-539053 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0916 17:15:44.163888  114275 ssh_runner.go:195] Run: docker ps --filter status=paused --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I0916 17:15:44.180442  114275 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0916 17:15:44.187988  114275 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0916 17:15:44.195441  114275 kubeadm.go:214] ignoring SystemVerification for kubeadm because of docker driver
	I0916 17:15:44.195486  114275 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0916 17:15:44.202724  114275 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0916 17:15:44.202744  114275 kubeadm.go:157] found existing configuration files:
	
	I0916 17:15:44.202777  114275 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0916 17:15:44.209868  114275 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0916 17:15:44.209925  114275 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0916 17:15:44.217008  114275 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0916 17:15:44.224198  114275 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0916 17:15:44.224244  114275 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0916 17:15:44.231146  114275 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0916 17:15:44.238037  114275 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0916 17:15:44.238103  114275 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0916 17:15:44.244957  114275 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0916 17:15:44.252070  114275 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0916 17:15:44.252105  114275 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0916 17:15:44.258968  114275 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I0916 17:15:44.292365  114275 kubeadm.go:310] [init] Using Kubernetes version: v1.31.1
	I0916 17:15:44.292450  114275 kubeadm.go:310] [preflight] Running pre-flight checks
	I0916 17:15:44.312218  114275 kubeadm.go:310] [preflight] The system verification failed. Printing the output from the verification:
	I0916 17:15:44.312295  114275 kubeadm.go:310] KERNEL_VERSION: 5.15.0-1068-gcp
	I0916 17:15:44.312379  114275 kubeadm.go:310] OS: Linux
	I0916 17:15:44.312463  114275 kubeadm.go:310] CGROUPS_CPU: enabled
	I0916 17:15:44.312613  114275 kubeadm.go:310] CGROUPS_CPUACCT: enabled
	I0916 17:15:44.312686  114275 kubeadm.go:310] CGROUPS_CPUSET: enabled
	I0916 17:15:44.312741  114275 kubeadm.go:310] CGROUPS_DEVICES: enabled
	I0916 17:15:44.312781  114275 kubeadm.go:310] CGROUPS_FREEZER: enabled
	I0916 17:15:44.312831  114275 kubeadm.go:310] CGROUPS_MEMORY: enabled
	I0916 17:15:44.312875  114275 kubeadm.go:310] CGROUPS_PIDS: enabled
	I0916 17:15:44.312913  114275 kubeadm.go:310] CGROUPS_HUGETLB: enabled
	I0916 17:15:44.312952  114275 kubeadm.go:310] CGROUPS_BLKIO: enabled
	I0916 17:15:44.359117  114275 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0916 17:15:44.359257  114275 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0916 17:15:44.359402  114275 kubeadm.go:310] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I0916 17:15:44.369095  114275 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0916 17:15:44.372519  114275 out.go:235]   - Generating certificates and keys ...
	I0916 17:15:44.372618  114275 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0916 17:15:44.372683  114275 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0916 17:15:44.634719  114275 kubeadm.go:310] [certs] Generating "apiserver-kubelet-client" certificate and key
	I0916 17:15:44.772291  114275 kubeadm.go:310] [certs] Generating "front-proxy-ca" certificate and key
	I0916 17:15:44.878962  114275 kubeadm.go:310] [certs] Generating "front-proxy-client" certificate and key
	I0916 17:15:45.081025  114275 kubeadm.go:310] [certs] Generating "etcd/ca" certificate and key
	I0916 17:15:45.326768  114275 kubeadm.go:310] [certs] Generating "etcd/server" certificate and key
	I0916 17:15:45.326916  114275 kubeadm.go:310] [certs] etcd/server serving cert is signed for DNS names [addons-539053 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
	I0916 17:15:45.398207  114275 kubeadm.go:310] [certs] Generating "etcd/peer" certificate and key
	I0916 17:15:45.398361  114275 kubeadm.go:310] [certs] etcd/peer serving cert is signed for DNS names [addons-539053 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
	I0916 17:15:45.554448  114275 kubeadm.go:310] [certs] Generating "etcd/healthcheck-client" certificate and key
	I0916 17:15:45.677749  114275 kubeadm.go:310] [certs] Generating "apiserver-etcd-client" certificate and key
	I0916 17:15:46.159478  114275 kubeadm.go:310] [certs] Generating "sa" key and public key
	I0916 17:15:46.159566  114275 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0916 17:15:46.270056  114275 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0916 17:15:46.443872  114275 kubeadm.go:310] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I0916 17:15:46.609834  114275 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0916 17:15:46.940547  114275 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0916 17:15:47.045487  114275 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0916 17:15:47.045918  114275 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0916 17:15:47.048354  114275 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0916 17:15:47.050458  114275 out.go:235]   - Booting up control plane ...
	I0916 17:15:47.050587  114275 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0916 17:15:47.050713  114275 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0916 17:15:47.050821  114275 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0916 17:15:47.063109  114275 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0916 17:15:47.067948  114275 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0916 17:15:47.067999  114275 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0916 17:15:47.148566  114275 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I0916 17:15:47.148695  114275 kubeadm.go:310] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I0916 17:15:47.650026  114275 kubeadm.go:310] [kubelet-check] The kubelet is healthy after 501.491673ms
	I0916 17:15:47.650155  114275 kubeadm.go:310] [api-check] Waiting for a healthy API server. This can take up to 4m0s
	I0916 17:15:52.651336  114275 kubeadm.go:310] [api-check] The API server is healthy after 5.001297149s
	I0916 17:15:52.661564  114275 kubeadm.go:310] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0916 17:15:52.670975  114275 kubeadm.go:310] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0916 17:15:52.685748  114275 kubeadm.go:310] [upload-certs] Skipping phase. Please see --upload-certs
	I0916 17:15:52.685954  114275 kubeadm.go:310] [mark-control-plane] Marking the node addons-539053 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0916 17:15:52.693597  114275 kubeadm.go:310] [bootstrap-token] Using token: wkr91p.76u2qy72zpjh3bdw
	I0916 17:15:52.694945  114275 out.go:235]   - Configuring RBAC rules ...
	I0916 17:15:52.695086  114275 kubeadm.go:310] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0916 17:15:52.697509  114275 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0916 17:15:52.703277  114275 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0916 17:15:52.705339  114275 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0916 17:15:52.707417  114275 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0916 17:15:52.709383  114275 kubeadm.go:310] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0916 17:15:53.056713  114275 kubeadm.go:310] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0916 17:15:53.473315  114275 kubeadm.go:310] [addons] Applied essential addon: CoreDNS
	I0916 17:15:54.056470  114275 kubeadm.go:310] [addons] Applied essential addon: kube-proxy
	I0916 17:15:54.057228  114275 kubeadm.go:310] 
	I0916 17:15:54.057327  114275 kubeadm.go:310] Your Kubernetes control-plane has initialized successfully!
	I0916 17:15:54.057338  114275 kubeadm.go:310] 
	I0916 17:15:54.057449  114275 kubeadm.go:310] To start using your cluster, you need to run the following as a regular user:
	I0916 17:15:54.057459  114275 kubeadm.go:310] 
	I0916 17:15:54.057492  114275 kubeadm.go:310]   mkdir -p $HOME/.kube
	I0916 17:15:54.057572  114275 kubeadm.go:310]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0916 17:15:54.057650  114275 kubeadm.go:310]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0916 17:15:54.057658  114275 kubeadm.go:310] 
	I0916 17:15:54.057733  114275 kubeadm.go:310] Alternatively, if you are the root user, you can run:
	I0916 17:15:54.057742  114275 kubeadm.go:310] 
	I0916 17:15:54.057827  114275 kubeadm.go:310]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0916 17:15:54.057846  114275 kubeadm.go:310] 
	I0916 17:15:54.057915  114275 kubeadm.go:310] You should now deploy a pod network to the cluster.
	I0916 17:15:54.058023  114275 kubeadm.go:310] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0916 17:15:54.058153  114275 kubeadm.go:310]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0916 17:15:54.058165  114275 kubeadm.go:310] 
	I0916 17:15:54.058262  114275 kubeadm.go:310] You can now join any number of control-plane nodes by copying certificate authorities
	I0916 17:15:54.058376  114275 kubeadm.go:310] and service account keys on each node and then running the following as root:
	I0916 17:15:54.058388  114275 kubeadm.go:310] 
	I0916 17:15:54.058527  114275 kubeadm.go:310]   kubeadm join control-plane.minikube.internal:8443 --token wkr91p.76u2qy72zpjh3bdw \
	I0916 17:15:54.058657  114275 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:589aeae3f954a4cf51a4ace60c19de975422082f6bd32e26d54f799babcca0a2 \
	I0916 17:15:54.058688  114275 kubeadm.go:310] 	--control-plane 
	I0916 17:15:54.058699  114275 kubeadm.go:310] 
	I0916 17:15:54.058794  114275 kubeadm.go:310] Then you can join any number of worker nodes by running the following on each as root:
	I0916 17:15:54.058800  114275 kubeadm.go:310] 
	I0916 17:15:54.058918  114275 kubeadm.go:310] kubeadm join control-plane.minikube.internal:8443 --token wkr91p.76u2qy72zpjh3bdw \
	I0916 17:15:54.059069  114275 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:589aeae3f954a4cf51a4ace60c19de975422082f6bd32e26d54f799babcca0a2 
	I0916 17:15:54.061013  114275 kubeadm.go:310] W0916 17:15:44.289887    1917 common.go:101] your configuration file uses a deprecated API spec: "kubeadm.k8s.io/v1beta3" (kind: "ClusterConfiguration"). Please use 'kubeadm config migrate --old-config old.yaml --new-config new.yaml', which will write the new, similar spec using a newer API version.
	I0916 17:15:54.061316  114275 kubeadm.go:310] W0916 17:15:44.290519    1917 common.go:101] your configuration file uses a deprecated API spec: "kubeadm.k8s.io/v1beta3" (kind: "InitConfiguration"). Please use 'kubeadm config migrate --old-config old.yaml --new-config new.yaml', which will write the new, similar spec using a newer API version.
	I0916 17:15:54.061554  114275 kubeadm.go:310] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1068-gcp\n", err: exit status 1
	I0916 17:15:54.061656  114275 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0916 17:15:54.061668  114275 cni.go:84] Creating CNI manager for ""
	I0916 17:15:54.061681  114275 cni.go:158] "docker" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0916 17:15:54.063277  114275 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0916 17:15:54.064301  114275 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0916 17:15:54.072623  114275 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
	I0916 17:15:54.088071  114275 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0916 17:15:54.088182  114275 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0916 17:15:54.088215  114275 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes addons-539053 minikube.k8s.io/updated_at=2024_09_16T17_15_54_0700 minikube.k8s.io/version=v1.34.0 minikube.k8s.io/commit=91d692c919753635ac118b7ed7ae5503b67c63c8 minikube.k8s.io/name=addons-539053 minikube.k8s.io/primary=true
	I0916 17:15:54.094865  114275 ops.go:34] apiserver oom_adj: -16
	I0916 17:15:54.166162  114275 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0916 17:15:54.666977  114275 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0916 17:15:55.166212  114275 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0916 17:15:55.666245  114275 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0916 17:15:56.166224  114275 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0916 17:15:56.666834  114275 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0916 17:15:57.167095  114275 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0916 17:15:57.667002  114275 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0916 17:15:58.166443  114275 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0916 17:15:58.228536  114275 kubeadm.go:1113] duration metric: took 4.140400385s to wait for elevateKubeSystemPrivileges
	I0916 17:15:58.228574  114275 kubeadm.go:394] duration metric: took 14.064783803s to StartCluster
	I0916 17:15:58.228598  114275 settings.go:142] acquiring lock: {Name:mkb300bd78b7ad126a3cee4c0691e462d6a68687 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0916 17:15:58.228737  114275 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/19649-105988/kubeconfig
	I0916 17:15:58.229162  114275 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19649-105988/kubeconfig: {Name:mkc274b48c835a365a47726fab379af89963f2b8 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0916 17:15:58.229356  114275 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0916 17:15:58.229389  114275 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0916 17:15:58.229446  114275 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:true csi-hostpath-driver:true dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:true gvisor:false headlamp:false helm-tiller:true inaccel:false ingress:true ingress-dns:true inspektor-gadget:true istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:true nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:true registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:true volcano:true volumesnapshots:true yakd:true]
	I0916 17:15:58.229584  114275 addons.go:69] Setting yakd=true in profile "addons-539053"
	I0916 17:15:58.229606  114275 addons.go:234] Setting addon yakd=true in "addons-539053"
	I0916 17:15:58.229606  114275 addons.go:69] Setting inspektor-gadget=true in profile "addons-539053"
	I0916 17:15:58.229621  114275 addons.go:69] Setting storage-provisioner=true in profile "addons-539053"
	I0916 17:15:58.229652  114275 addons.go:69] Setting storage-provisioner-rancher=true in profile "addons-539053"
	I0916 17:15:58.229660  114275 addons.go:69] Setting default-storageclass=true in profile "addons-539053"
	I0916 17:15:58.229665  114275 addons.go:234] Setting addon storage-provisioner=true in "addons-539053"
	I0916 17:15:58.229671  114275 addons.go:69] Setting volcano=true in profile "addons-539053"
	I0916 17:15:58.229675  114275 addons_storage_classes.go:33] enableOrDisableStorageClasses storage-provisioner-rancher=true on "addons-539053"
	I0916 17:15:58.229678  114275 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "addons-539053"
	I0916 17:15:58.229683  114275 addons.go:234] Setting addon volcano=true in "addons-539053"
	I0916 17:15:58.229686  114275 addons.go:69] Setting volumesnapshots=true in profile "addons-539053"
	I0916 17:15:58.229690  114275 addons.go:69] Setting nvidia-device-plugin=true in profile "addons-539053"
	I0916 17:15:58.229699  114275 addons.go:234] Setting addon volumesnapshots=true in "addons-539053"
	I0916 17:15:58.229702  114275 host.go:66] Checking if "addons-539053" exists ...
	I0916 17:15:58.229704  114275 host.go:66] Checking if "addons-539053" exists ...
	I0916 17:15:58.229708  114275 addons.go:234] Setting addon nvidia-device-plugin=true in "addons-539053"
	I0916 17:15:58.229711  114275 addons.go:69] Setting registry=true in profile "addons-539053"
	I0916 17:15:58.229724  114275 addons.go:234] Setting addon registry=true in "addons-539053"
	I0916 17:15:58.229730  114275 host.go:66] Checking if "addons-539053" exists ...
	I0916 17:15:58.229730  114275 host.go:66] Checking if "addons-539053" exists ...
	I0916 17:15:58.229743  114275 host.go:66] Checking if "addons-539053" exists ...
	I0916 17:15:58.229621  114275 addons.go:69] Setting gcp-auth=true in profile "addons-539053"
	I0916 17:15:58.229849  114275 mustload.go:65] Loading cluster: addons-539053
	I0916 17:15:58.230004  114275 config.go:182] Loaded profile config "addons-539053": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I0916 17:15:58.230060  114275 cli_runner.go:164] Run: docker container inspect addons-539053 --format={{.State.Status}}
	I0916 17:15:58.230060  114275 cli_runner.go:164] Run: docker container inspect addons-539053 --format={{.State.Status}}
	I0916 17:15:58.230242  114275 cli_runner.go:164] Run: docker container inspect addons-539053 --format={{.State.Status}}
	I0916 17:15:58.230246  114275 cli_runner.go:164] Run: docker container inspect addons-539053 --format={{.State.Status}}
	I0916 17:15:58.230256  114275 addons.go:69] Setting metrics-server=true in profile "addons-539053"
	I0916 17:15:58.230260  114275 cli_runner.go:164] Run: docker container inspect addons-539053 --format={{.State.Status}}
	I0916 17:15:58.230271  114275 cli_runner.go:164] Run: docker container inspect addons-539053 --format={{.State.Status}}
	I0916 17:15:58.230272  114275 addons.go:234] Setting addon metrics-server=true in "addons-539053"
	I0916 17:15:58.230293  114275 cli_runner.go:164] Run: docker container inspect addons-539053 --format={{.State.Status}}
	I0916 17:15:58.230303  114275 host.go:66] Checking if "addons-539053" exists ...
	I0916 17:15:58.230763  114275 cli_runner.go:164] Run: docker container inspect addons-539053 --format={{.State.Status}}
	I0916 17:15:58.229643  114275 host.go:66] Checking if "addons-539053" exists ...
	I0916 17:15:58.229638  114275 addons.go:234] Setting addon inspektor-gadget=true in "addons-539053"
	I0916 17:15:58.230861  114275 host.go:66] Checking if "addons-539053" exists ...
	I0916 17:15:58.229643  114275 addons.go:69] Setting cloud-spanner=true in profile "addons-539053"
	I0916 17:15:58.231250  114275 addons.go:234] Setting addon cloud-spanner=true in "addons-539053"
	I0916 17:15:58.231280  114275 cli_runner.go:164] Run: docker container inspect addons-539053 --format={{.State.Status}}
	I0916 17:15:58.229655  114275 addons.go:69] Setting ingress=true in profile "addons-539053"
	I0916 17:15:58.231313  114275 addons.go:234] Setting addon ingress=true in "addons-539053"
	I0916 17:15:58.231349  114275 host.go:66] Checking if "addons-539053" exists ...
	I0916 17:15:58.231359  114275 host.go:66] Checking if "addons-539053" exists ...
	I0916 17:15:58.229658  114275 addons.go:69] Setting ingress-dns=true in profile "addons-539053"
	I0916 17:15:58.231669  114275 addons.go:234] Setting addon ingress-dns=true in "addons-539053"
	I0916 17:15:58.231721  114275 host.go:66] Checking if "addons-539053" exists ...
	I0916 17:15:58.232204  114275 cli_runner.go:164] Run: docker container inspect addons-539053 --format={{.State.Status}}
	I0916 17:15:58.232562  114275 out.go:177] * Verifying Kubernetes components...
	I0916 17:15:58.233076  114275 cli_runner.go:164] Run: docker container inspect addons-539053 --format={{.State.Status}}
	I0916 17:15:58.229652  114275 addons.go:69] Setting csi-hostpath-driver=true in profile "addons-539053"
	I0916 17:15:58.234062  114275 addons.go:234] Setting addon csi-hostpath-driver=true in "addons-539053"
	I0916 17:15:58.234148  114275 host.go:66] Checking if "addons-539053" exists ...
	I0916 17:15:58.234764  114275 cli_runner.go:164] Run: docker container inspect addons-539053 --format={{.State.Status}}
	I0916 17:15:58.229649  114275 addons.go:69] Setting helm-tiller=true in profile "addons-539053"
	I0916 17:15:58.235871  114275 addons.go:234] Setting addon helm-tiller=true in "addons-539053"
	I0916 17:15:58.235916  114275 host.go:66] Checking if "addons-539053" exists ...
	I0916 17:15:58.236055  114275 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0916 17:15:58.229642  114275 config.go:182] Loaded profile config "addons-539053": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I0916 17:15:58.236396  114275 cli_runner.go:164] Run: docker container inspect addons-539053 --format={{.State.Status}}
	I0916 17:15:58.231295  114275 cli_runner.go:164] Run: docker container inspect addons-539053 --format={{.State.Status}}
	I0916 17:15:58.255569  114275 host.go:66] Checking if "addons-539053" exists ...
	I0916 17:15:58.256442  114275 addons.go:234] Setting addon storage-provisioner-rancher=true in "addons-539053"
	I0916 17:15:58.256481  114275 host.go:66] Checking if "addons-539053" exists ...
	I0916 17:15:58.256932  114275 cli_runner.go:164] Run: docker container inspect addons-539053 --format={{.State.Status}}
	I0916 17:15:58.263129  114275 out.go:177]   - Using image registry.k8s.io/metrics-server/metrics-server:v0.7.2
	I0916 17:15:58.264549  114275 addons.go:431] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0916 17:15:58.264573  114275 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0916 17:15:58.264637  114275 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-539053
	I0916 17:15:58.274847  114275 cli_runner.go:164] Run: docker container inspect addons-539053 --format={{.State.Status}}
	I0916 17:15:58.275411  114275 cli_runner.go:164] Run: docker container inspect addons-539053 --format={{.State.Status}}
	I0916 17:15:58.277321  114275 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-attacher:v4.0.0
	I0916 17:15:58.282003  114275 out.go:177]   - Using image gcr.io/k8s-minikube/minikube-ingress-dns:0.0.3
	I0916 17:15:58.283351  114275 addons.go:431] installing /etc/kubernetes/addons/ingress-dns-pod.yaml
	I0916 17:15:58.283377  114275 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-dns-pod.yaml (2442 bytes)
	I0916 17:15:58.283431  114275 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-539053
	I0916 17:15:58.283836  114275 addons.go:234] Setting addon default-storageclass=true in "addons-539053"
	I0916 17:15:58.283888  114275 host.go:66] Checking if "addons-539053" exists ...
	I0916 17:15:58.284325  114275 cli_runner.go:164] Run: docker container inspect addons-539053 --format={{.State.Status}}
	I0916 17:15:58.284770  114275 out.go:177]   - Using image docker.io/volcanosh/vc-webhook-manager:v1.9.0
	I0916 17:15:58.287192  114275 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-external-health-monitor-controller:v0.7.0
	I0916 17:15:58.287321  114275 out.go:177]   - Using image docker.io/volcanosh/vc-controller-manager:v1.9.0
	I0916 17:15:58.288598  114275 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-node-driver-registrar:v2.6.0
	I0916 17:15:58.288730  114275 out.go:177]   - Using image docker.io/volcanosh/vc-scheduler:v1.9.0
	I0916 17:15:58.291234  114275 addons.go:431] installing /etc/kubernetes/addons/volcano-deployment.yaml
	I0916 17:15:58.291260  114275 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/volcano-deployment.yaml (434001 bytes)
	I0916 17:15:58.291325  114275 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-539053
	I0916 17:15:58.291569  114275 out.go:177]   - Using image registry.k8s.io/sig-storage/hostpathplugin:v1.9.0
	I0916 17:15:58.292873  114275 out.go:177]   - Using image registry.k8s.io/sig-storage/livenessprobe:v2.8.0
	I0916 17:15:58.294231  114275 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-resizer:v1.6.0
	I0916 17:15:58.295384  114275 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-snapshotter:v6.1.0
	I0916 17:15:58.296507  114275 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-provisioner:v3.3.0
	I0916 17:15:58.297664  114275 addons.go:431] installing /etc/kubernetes/addons/rbac-external-attacher.yaml
	I0916 17:15:58.297682  114275 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-attacher.yaml --> /etc/kubernetes/addons/rbac-external-attacher.yaml (3073 bytes)
	I0916 17:15:58.297740  114275 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-539053
	I0916 17:15:58.322820  114275 out.go:177]   - Using image gcr.io/cloud-spanner-emulator/emulator:1.5.23
	I0916 17:15:58.322984  114275 out.go:177]   - Using image ghcr.io/helm/tiller:v2.17.0
	I0916 17:15:58.324497  114275 addons.go:431] installing /etc/kubernetes/addons/deployment.yaml
	I0916 17:15:58.324543  114275 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/deployment.yaml (1004 bytes)
	I0916 17:15:58.324606  114275 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-539053
	I0916 17:15:58.325002  114275 addons.go:431] installing /etc/kubernetes/addons/helm-tiller-dp.yaml
	I0916 17:15:58.325021  114275 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/helm-tiller-dp.yaml (2422 bytes)
	I0916 17:15:58.325070  114275 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-539053
	I0916 17:15:58.331717  114275 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/19649-105988/.minikube/machines/addons-539053/id_rsa Username:docker}
	I0916 17:15:58.332838  114275 out.go:177]   - Using image gcr.io/k8s-minikube/kube-registry-proxy:0.0.6
	I0916 17:15:58.333804  114275 out.go:177]   - Using image ghcr.io/inspektor-gadget/inspektor-gadget:v0.32.0
	I0916 17:15:58.333823  114275 out.go:177]   - Using image docker.io/rancher/local-path-provisioner:v0.0.22
	I0916 17:15:58.333875  114275 out.go:177]   - Using image registry.k8s.io/sig-storage/snapshot-controller:v6.1.0
	I0916 17:15:58.335634  114275 addons.go:431] installing /etc/kubernetes/addons/ig-namespace.yaml
	I0916 17:15:58.335655  114275 ssh_runner.go:362] scp inspektor-gadget/ig-namespace.yaml --> /etc/kubernetes/addons/ig-namespace.yaml (55 bytes)
	I0916 17:15:58.335701  114275 out.go:177]   - Using image registry.k8s.io/ingress-nginx/controller:v1.11.2
	I0916 17:15:58.335721  114275 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-539053
	I0916 17:15:58.335733  114275 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml
	I0916 17:15:58.335758  114275 ssh_runner.go:362] scp volumesnapshots/csi-hostpath-snapshotclass.yaml --> /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml (934 bytes)
	I0916 17:15:58.335822  114275 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-539053
	I0916 17:15:58.335983  114275 out.go:177]   - Using image docker.io/registry:2.8.3
	I0916 17:15:58.336970  114275 out.go:177]   - Using image docker.io/busybox:stable
	I0916 17:15:58.338251  114275 out.go:177]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.4.3
	I0916 17:15:58.338351  114275 addons.go:431] installing /etc/kubernetes/addons/registry-rc.yaml
	I0916 17:15:58.338366  114275 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-rc.yaml (860 bytes)
	I0916 17:15:58.338387  114275 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner-rancher.yaml
	I0916 17:15:58.338404  114275 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner-rancher.yaml (3113 bytes)
	I0916 17:15:58.338419  114275 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-539053
	I0916 17:15:58.338462  114275 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-539053
	I0916 17:15:58.340666  114275 out.go:177]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.4.3
	I0916 17:15:58.342274  114275 addons.go:431] installing /etc/kubernetes/addons/ingress-deploy.yaml
	I0916 17:15:58.342300  114275 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-deploy.yaml (16078 bytes)
	I0916 17:15:58.342352  114275 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-539053
	I0916 17:15:58.344325  114275 out.go:177]   - Using image docker.io/marcnuri/yakd:0.0.5
	I0916 17:15:58.345656  114275 addons.go:431] installing /etc/kubernetes/addons/yakd-ns.yaml
	I0916 17:15:58.345672  114275 ssh_runner.go:362] scp yakd/yakd-ns.yaml --> /etc/kubernetes/addons/yakd-ns.yaml (171 bytes)
	I0916 17:15:58.345724  114275 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-539053
	I0916 17:15:58.345644  114275 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/19649-105988/.minikube/machines/addons-539053/id_rsa Username:docker}
	I0916 17:15:58.354231  114275 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I0916 17:15:58.354253  114275 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0916 17:15:58.354306  114275 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-539053
	I0916 17:15:58.354646  114275 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/19649-105988/.minikube/machines/addons-539053/id_rsa Username:docker}
	I0916 17:15:58.366057  114275 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0916 17:15:58.367486  114275 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0916 17:15:58.367506  114275 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0916 17:15:58.367560  114275 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-539053
	I0916 17:15:58.371128  114275 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/19649-105988/.minikube/machines/addons-539053/id_rsa Username:docker}
	I0916 17:15:58.386908  114275 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/19649-105988/.minikube/machines/addons-539053/id_rsa Username:docker}
	I0916 17:15:58.396288  114275 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/19649-105988/.minikube/machines/addons-539053/id_rsa Username:docker}
	I0916 17:15:58.400986  114275 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/19649-105988/.minikube/machines/addons-539053/id_rsa Username:docker}
	I0916 17:15:58.401388  114275 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/19649-105988/.minikube/machines/addons-539053/id_rsa Username:docker}
	I0916 17:15:58.401743  114275 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/19649-105988/.minikube/machines/addons-539053/id_rsa Username:docker}
	I0916 17:15:58.402394  114275 out.go:177]   - Using image nvcr.io/nvidia/k8s-device-plugin:v0.16.2
	I0916 17:15:58.403659  114275 addons.go:431] installing /etc/kubernetes/addons/nvidia-device-plugin.yaml
	I0916 17:15:58.403676  114275 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/nvidia-device-plugin.yaml (1966 bytes)
	I0916 17:15:58.403734  114275 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-539053
	I0916 17:15:58.408380  114275 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/19649-105988/.minikube/machines/addons-539053/id_rsa Username:docker}
	I0916 17:15:58.410854  114275 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/19649-105988/.minikube/machines/addons-539053/id_rsa Username:docker}
	I0916 17:15:58.411213  114275 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/19649-105988/.minikube/machines/addons-539053/id_rsa Username:docker}
	I0916 17:15:58.412442  114275 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/19649-105988/.minikube/machines/addons-539053/id_rsa Username:docker}
	I0916 17:15:58.422554  114275 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/19649-105988/.minikube/machines/addons-539053/id_rsa Username:docker}
	I0916 17:15:58.423283  114275 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/19649-105988/.minikube/machines/addons-539053/id_rsa Username:docker}
	W0916 17:15:58.450239  114275 sshutil.go:64] dial failure (will retry): ssh: handshake failed: EOF
	I0916 17:15:58.450335  114275 retry.go:31] will retry after 161.166086ms: ssh: handshake failed: EOF
	I0916 17:15:58.553333  114275 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.49.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I0916 17:15:58.553477  114275 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0916 17:15:58.758279  114275 addons.go:431] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0916 17:15:58.758372  114275 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1907 bytes)
	I0916 17:15:58.766927  114275 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml
	I0916 17:15:58.854520  114275 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner-rancher.yaml
	I0916 17:15:58.964121  114275 addons.go:431] installing /etc/kubernetes/addons/rbac-hostpath.yaml
	I0916 17:15:58.964220  114275 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-hostpath.yaml --> /etc/kubernetes/addons/rbac-hostpath.yaml (4266 bytes)
	I0916 17:15:58.969170  114275 addons.go:431] installing /etc/kubernetes/addons/registry-svc.yaml
	I0916 17:15:58.969191  114275 ssh_runner.go:362] scp registry/registry-svc.yaml --> /etc/kubernetes/addons/registry-svc.yaml (398 bytes)
	I0916 17:15:59.060112  114275 addons.go:431] installing /etc/kubernetes/addons/yakd-sa.yaml
	I0916 17:15:59.060154  114275 ssh_runner.go:362] scp yakd/yakd-sa.yaml --> /etc/kubernetes/addons/yakd-sa.yaml (247 bytes)
	I0916 17:15:59.061575  114275 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0916 17:15:59.062635  114275 addons.go:431] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0916 17:15:59.062660  114275 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0916 17:15:59.148947  114275 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml
	I0916 17:15:59.151431  114275 addons.go:431] installing /etc/kubernetes/addons/ig-serviceaccount.yaml
	I0916 17:15:59.151459  114275 ssh_runner.go:362] scp inspektor-gadget/ig-serviceaccount.yaml --> /etc/kubernetes/addons/ig-serviceaccount.yaml (80 bytes)
	I0916 17:15:59.152354  114275 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/nvidia-device-plugin.yaml
	I0916 17:15:59.162092  114275 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/deployment.yaml
	I0916 17:15:59.168023  114275 addons.go:431] installing /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml
	I0916 17:15:59.168047  114275 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-health-monitor-controller.yaml --> /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml (3038 bytes)
	I0916 17:15:59.248152  114275 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0916 17:15:59.248284  114275 addons.go:431] installing /etc/kubernetes/addons/helm-tiller-rbac.yaml
	I0916 17:15:59.248296  114275 ssh_runner.go:362] scp helm-tiller/helm-tiller-rbac.yaml --> /etc/kubernetes/addons/helm-tiller-rbac.yaml (1188 bytes)
	I0916 17:15:59.261070  114275 addons.go:431] installing /etc/kubernetes/addons/yakd-crb.yaml
	I0916 17:15:59.261094  114275 ssh_runner.go:362] scp yakd/yakd-crb.yaml --> /etc/kubernetes/addons/yakd-crb.yaml (422 bytes)
	I0916 17:15:59.351018  114275 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/volcano-deployment.yaml
	I0916 17:15:59.356787  114275 addons.go:431] installing /etc/kubernetes/addons/ig-role.yaml
	I0916 17:15:59.356818  114275 ssh_runner.go:362] scp inspektor-gadget/ig-role.yaml --> /etc/kubernetes/addons/ig-role.yaml (210 bytes)
	I0916 17:15:59.358072  114275 addons.go:431] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml
	I0916 17:15:59.358101  114275 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshotclasses.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml (6471 bytes)
	I0916 17:15:59.364853  114275 addons.go:431] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0916 17:15:59.364877  114275 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0916 17:15:59.450119  114275 addons.go:431] installing /etc/kubernetes/addons/registry-proxy.yaml
	I0916 17:15:59.450204  114275 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-proxy.yaml (947 bytes)
	I0916 17:15:59.548311  114275 addons.go:431] installing /etc/kubernetes/addons/yakd-svc.yaml
	I0916 17:15:59.548341  114275 ssh_runner.go:362] scp yakd/yakd-svc.yaml --> /etc/kubernetes/addons/yakd-svc.yaml (412 bytes)
	I0916 17:15:59.556366  114275 addons.go:431] installing /etc/kubernetes/addons/helm-tiller-svc.yaml
	I0916 17:15:59.556397  114275 ssh_runner.go:362] scp helm-tiller/helm-tiller-svc.yaml --> /etc/kubernetes/addons/helm-tiller-svc.yaml (951 bytes)
	I0916 17:15:59.567166  114275 addons.go:431] installing /etc/kubernetes/addons/ig-rolebinding.yaml
	I0916 17:15:59.567196  114275 ssh_runner.go:362] scp inspektor-gadget/ig-rolebinding.yaml --> /etc/kubernetes/addons/ig-rolebinding.yaml (244 bytes)
	I0916 17:15:59.570082  114275 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0916 17:15:59.646733  114275 addons.go:431] installing /etc/kubernetes/addons/rbac-external-provisioner.yaml
	I0916 17:15:59.646774  114275 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-provisioner.yaml --> /etc/kubernetes/addons/rbac-external-provisioner.yaml (4442 bytes)
	I0916 17:15:59.766463  114275 addons.go:431] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml
	I0916 17:15:59.766550  114275 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshotcontents.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml (23126 bytes)
	I0916 17:15:59.948954  114275 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml
	I0916 17:16:00.156079  114275 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/helm-tiller-dp.yaml -f /etc/kubernetes/addons/helm-tiller-rbac.yaml -f /etc/kubernetes/addons/helm-tiller-svc.yaml
	I0916 17:16:00.166651  114275 ssh_runner.go:235] Completed: sudo systemctl start kubelet: (1.61313512s)
	I0916 17:16:00.167716  114275 node_ready.go:35] waiting up to 6m0s for node "addons-539053" to be "Ready" ...
	I0916 17:16:00.167980  114275 ssh_runner.go:235] Completed: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.49.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -": (1.614608844s)
	I0916 17:16:00.168009  114275 start.go:971] {"host.minikube.internal": 192.168.49.1} host record injected into CoreDNS's ConfigMap
	I0916 17:16:00.170431  114275 addons.go:431] installing /etc/kubernetes/addons/yakd-dp.yaml
	I0916 17:16:00.170455  114275 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/yakd-dp.yaml (2017 bytes)
	I0916 17:16:00.249698  114275 node_ready.go:49] node "addons-539053" has status "Ready":"True"
	I0916 17:16:00.249738  114275 node_ready.go:38] duration metric: took 81.979855ms for node "addons-539053" to be "Ready" ...
	I0916 17:16:00.249754  114275 pod_ready.go:36] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0916 17:16:00.260536  114275 pod_ready.go:79] waiting up to 6m0s for pod "coredns-7c65d6cfc9-sx2j8" in "kube-system" namespace to be "Ready" ...
	I0916 17:16:00.268953  114275 addons.go:431] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml
	I0916 17:16:00.269047  114275 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshots.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml (19582 bytes)
	I0916 17:16:00.546856  114275 addons.go:431] installing /etc/kubernetes/addons/rbac-external-resizer.yaml
	I0916 17:16:00.546891  114275 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-resizer.yaml --> /etc/kubernetes/addons/rbac-external-resizer.yaml (2943 bytes)
	I0916 17:16:00.558539  114275 addons.go:431] installing /etc/kubernetes/addons/ig-clusterrole.yaml
	I0916 17:16:00.558568  114275 ssh_runner.go:362] scp inspektor-gadget/ig-clusterrole.yaml --> /etc/kubernetes/addons/ig-clusterrole.yaml (1485 bytes)
	I0916 17:16:00.748688  114275 kapi.go:214] "coredns" deployment in "kube-system" namespace and "addons-539053" context rescaled to 1 replicas
	I0916 17:16:00.851126  114275 addons.go:431] installing /etc/kubernetes/addons/ig-clusterrolebinding.yaml
	I0916 17:16:00.851220  114275 ssh_runner.go:362] scp inspektor-gadget/ig-clusterrolebinding.yaml --> /etc/kubernetes/addons/ig-clusterrolebinding.yaml (274 bytes)
	I0916 17:16:01.064374  114275 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/yakd-ns.yaml -f /etc/kubernetes/addons/yakd-sa.yaml -f /etc/kubernetes/addons/yakd-crb.yaml -f /etc/kubernetes/addons/yakd-svc.yaml -f /etc/kubernetes/addons/yakd-dp.yaml
	I0916 17:16:01.252735  114275 addons.go:431] installing /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml
	I0916 17:16:01.252822  114275 ssh_runner.go:362] scp volumesnapshots/rbac-volume-snapshot-controller.yaml --> /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml (3545 bytes)
	I0916 17:16:01.347799  114275 addons.go:431] installing /etc/kubernetes/addons/rbac-external-snapshotter.yaml
	I0916 17:16:01.347843  114275 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-snapshotter.yaml --> /etc/kubernetes/addons/rbac-external-snapshotter.yaml (3149 bytes)
	I0916 17:16:01.567646  114275 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml: (2.800608497s)
	I0916 17:16:01.746613  114275 addons.go:431] installing /etc/kubernetes/addons/ig-crd.yaml
	I0916 17:16:01.746760  114275 ssh_runner.go:362] scp inspektor-gadget/ig-crd.yaml --> /etc/kubernetes/addons/ig-crd.yaml (5216 bytes)
	I0916 17:16:01.850264  114275 addons.go:431] installing /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I0916 17:16:01.850303  114275 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml (1475 bytes)
	I0916 17:16:01.961143  114275 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-attacher.yaml
	I0916 17:16:01.961188  114275 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-attacher.yaml (2143 bytes)
	I0916 17:16:02.155656  114275 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I0916 17:16:02.266640  114275 pod_ready.go:103] pod "coredns-7c65d6cfc9-sx2j8" in "kube-system" namespace has status "Ready":"False"
	I0916 17:16:02.356134  114275 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml
	I0916 17:16:02.356226  114275 ssh_runner.go:362] scp csi-hostpath-driver/deploy/csi-hostpath-driverinfo.yaml --> /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml (1274 bytes)
	I0916 17:16:02.667758  114275 addons.go:431] installing /etc/kubernetes/addons/ig-daemonset.yaml
	I0916 17:16:02.667800  114275 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-daemonset.yaml (7735 bytes)
	I0916 17:16:03.050023  114275 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-plugin.yaml
	I0916 17:16:03.050054  114275 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-plugin.yaml (8201 bytes)
	I0916 17:16:03.257779  114275 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/ig-namespace.yaml -f /etc/kubernetes/addons/ig-serviceaccount.yaml -f /etc/kubernetes/addons/ig-role.yaml -f /etc/kubernetes/addons/ig-rolebinding.yaml -f /etc/kubernetes/addons/ig-clusterrole.yaml -f /etc/kubernetes/addons/ig-clusterrolebinding.yaml -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-daemonset.yaml
	I0916 17:16:03.261496  114275 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-resizer.yaml
	I0916 17:16:03.261649  114275 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-resizer.yaml (2191 bytes)
	I0916 17:16:04.066385  114275 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
	I0916 17:16:04.066487  114275 ssh_runner.go:362] scp csi-hostpath-driver/deploy/csi-hostpath-storageclass.yaml --> /etc/kubernetes/addons/csi-hostpath-storageclass.yaml (846 bytes)
	I0916 17:16:04.161477  114275 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner-rancher.yaml: (5.306911431s)
	I0916 17:16:04.161963  114275 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (5.100341288s)
	I0916 17:16:04.349226  114275 pod_ready.go:103] pod "coredns-7c65d6cfc9-sx2j8" in "kube-system" namespace has status "Ready":"False"
	I0916 17:16:04.465727  114275 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
	I0916 17:16:05.263268  114275 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_application_credentials.json (162 bytes)
	I0916 17:16:05.263421  114275 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-539053
	I0916 17:16:05.283493  114275 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/19649-105988/.minikube/machines/addons-539053/id_rsa Username:docker}
	I0916 17:16:06.347763  114275 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_cloud_project (12 bytes)
	I0916 17:16:06.658673  114275 addons.go:234] Setting addon gcp-auth=true in "addons-539053"
	I0916 17:16:06.658776  114275 host.go:66] Checking if "addons-539053" exists ...
	I0916 17:16:06.659323  114275 cli_runner.go:164] Run: docker container inspect addons-539053 --format={{.State.Status}}
	I0916 17:16:06.681496  114275 ssh_runner.go:195] Run: cat /var/lib/minikube/google_application_credentials.json
	I0916 17:16:06.681547  114275 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-539053
	I0916 17:16:06.696551  114275 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/19649-105988/.minikube/machines/addons-539053/id_rsa Username:docker}
	I0916 17:16:06.768761  114275 pod_ready.go:103] pod "coredns-7c65d6cfc9-sx2j8" in "kube-system" namespace has status "Ready":"False"
	I0916 17:16:08.070980  114275 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml: (8.921987188s)
	I0916 17:16:08.071131  114275 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (8.822954297s)
	I0916 17:16:08.071150  114275 addons.go:475] Verifying addon ingress=true in "addons-539053"
	I0916 17:16:08.071048  114275 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/nvidia-device-plugin.yaml: (8.918659237s)
	I0916 17:16:08.071093  114275 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/deployment.yaml: (8.908979389s)
	I0916 17:16:08.072488  114275 out.go:177] * Verifying ingress addon...
	I0916 17:16:08.074317  114275 kapi.go:75] Waiting for pod with label "app.kubernetes.io/name=ingress-nginx" in ns "ingress-nginx" ...
	I0916 17:16:08.150348  114275 kapi.go:86] Found 3 Pods for label selector app.kubernetes.io/name=ingress-nginx
	I0916 17:16:08.150445  114275 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 17:16:08.581151  114275 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 17:16:09.153378  114275 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 17:16:09.267620  114275 pod_ready.go:103] pod "coredns-7c65d6cfc9-sx2j8" in "kube-system" namespace has status "Ready":"False"
	I0916 17:16:09.653976  114275 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 17:16:10.156408  114275 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 17:16:10.651562  114275 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 17:16:10.952109  114275 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/volcano-deployment.yaml: (11.601045056s)
	I0916 17:16:10.952402  114275 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/helm-tiller-dp.yaml -f /etc/kubernetes/addons/helm-tiller-rbac.yaml -f /etc/kubernetes/addons/helm-tiller-svc.yaml: (10.796224349s)
	I0916 17:16:10.952460  114275 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/yakd-ns.yaml -f /etc/kubernetes/addons/yakd-sa.yaml -f /etc/kubernetes/addons/yakd-crb.yaml -f /etc/kubernetes/addons/yakd-svc.yaml -f /etc/kubernetes/addons/yakd-dp.yaml: (9.887995671s)
	I0916 17:16:10.952555  114275 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (8.796793939s)
	W0916 17:16:10.953229  114275 addons.go:457] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
	stdout:
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
	serviceaccount/snapshot-controller created
	clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
	clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
	role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	deployment.apps/snapshot-controller created
	
	stderr:
	error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
	ensure CRDs are installed first
	I0916 17:16:10.953270  114275 retry.go:31] will retry after 166.830445ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
	stdout:
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
	serviceaccount/snapshot-controller created
	clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
	clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
	role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	deployment.apps/snapshot-controller created
	
	stderr:
	error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
	ensure CRDs are installed first
	I0916 17:16:10.952631  114275 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/ig-namespace.yaml -f /etc/kubernetes/addons/ig-serviceaccount.yaml -f /etc/kubernetes/addons/ig-role.yaml -f /etc/kubernetes/addons/ig-rolebinding.yaml -f /etc/kubernetes/addons/ig-clusterrole.yaml -f /etc/kubernetes/addons/ig-clusterrolebinding.yaml -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-daemonset.yaml: (7.694760678s)
	I0916 17:16:10.952785  114275 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml: (11.003279335s)
	I0916 17:16:10.953348  114275 addons.go:475] Verifying addon registry=true in "addons-539053"
	I0916 17:16:10.952916  114275 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (11.382148612s)
	I0916 17:16:10.953682  114275 addons.go:475] Verifying addon metrics-server=true in "addons-539053"
	I0916 17:16:10.955237  114275 out.go:177] * To access YAKD - Kubernetes Dashboard, wait for Pod to be ready and run the following command:
	
		minikube -p addons-539053 service yakd-dashboard -n yakd-dashboard
	
	I0916 17:16:10.956063  114275 out.go:177] * Verifying registry addon...
	I0916 17:16:10.958147  114275 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=registry" in ns "kube-system" ...
	I0916 17:16:10.962460  114275 kapi.go:86] Found 2 Pods for label selector kubernetes.io/minikube-addons=registry
	I0916 17:16:10.962479  114275 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 17:16:11.120505  114275 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I0916 17:16:11.155489  114275 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 17:16:11.463325  114275 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 17:16:11.657125  114275 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 17:16:11.768589  114275 pod_ready.go:98] pod "coredns-7c65d6cfc9-sx2j8" in "kube-system" namespace has status phase "Succeeded" (skipping!): {Phase:Succeeded Conditions:[{Type:PodReadyToStartContainers Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2024-09-16 17:16:11 +0000 UTC Reason: Message:} {Type:Initialized Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2024-09-16 17:15:58 +0000 UTC Reason:PodCompleted Message:} {Type:Ready Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2024-09-16 17:15:58 +0000 UTC Reason:PodCompleted Message:} {Type:ContainersReady Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2024-09-16 17:15:58 +0000 UTC Reason:PodCompleted Message:} {Type:PodScheduled Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2024-09-16 17:15:58 +0000 UTC Reason: Message:}] Message: Reason: NominatedNodeName: HostIP:192.168.49.2 HostIPs:[{IP:192.168.49.2}] PodIP:10.244.0.2 PodIPs:[{IP:10.244.0.2}] StartTime:2024-09-16 17:15:58 +0000 UTC InitContainerStatuses:[] ContainerStatuses:[{Name:coredns State:{Waiting:nil Running:nil Terminated:&ContainerStateTerminated{ExitCode:0,Signal:0,Reason:Completed,Message:,StartedAt:2024-09-16 17:16:02 +0000 UTC,FinishedAt:2024-09-16 17:16:09 +0000 UTC,ContainerID:docker://532fd6f586dd17aef593b85bab83ad072e43ac5788f8b0d8e0c848392d3fb04e,}} LastTerminationState:{Waiting:nil Running:nil Terminated:nil} Ready:false RestartCount:0 Image:registry.k8s.io/coredns/coredns:v1.11.3 ImageID:docker-pullable://registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e ContainerID:docker://532fd6f586dd17aef593b85bab83ad072e43ac5788f8b0d8e0c848392d3fb04e Started:0xc00227c1e0 AllocatedResources:map[] Resources:nil VolumeMounts:[{Name:config-volume MountPath:/etc/coredns ReadOnly:true RecursiveReadOnly:0xc00223eb50} {Name:kube-api-access-x7lq2 MountPath:/var/run/secrets/kubernetes.io/serviceaccount ReadOnly:true RecursiveReadOnly:0xc00223eb60}] User:nil AllocatedResourcesStatus:[]}] QOSClass:Burstable EphemeralContainerStatuses:[] Resize: ResourceClaimStatuses:[]}
	I0916 17:16:11.768632  114275 pod_ready.go:82] duration metric: took 11.507969529s for pod "coredns-7c65d6cfc9-sx2j8" in "kube-system" namespace to be "Ready" ...
	E0916 17:16:11.768647  114275 pod_ready.go:67] WaitExtra: waitPodCondition: pod "coredns-7c65d6cfc9-sx2j8" in "kube-system" namespace has status phase "Succeeded" (skipping!): {Phase:Succeeded Conditions:[{Type:PodReadyToStartContainers Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2024-09-16 17:16:11 +0000 UTC Reason: Message:} {Type:Initialized Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2024-09-16 17:15:58 +0000 UTC Reason:PodCompleted Message:} {Type:Ready Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2024-09-16 17:15:58 +0000 UTC Reason:PodCompleted Message:} {Type:ContainersReady Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2024-09-16 17:15:58 +0000 UTC Reason:PodCompleted Message:} {Type:PodScheduled Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2024-09-16 17:15:58 +0000 UTC Reason: Message:}] Message: Reason: NominatedNodeName: HostIP:192.168.49.2 HostIPs:[{IP:192.168.49.2}] PodIP:10.244.0.2 PodIPs:[{IP:10.244.0.2}] StartTime:2024-09-16 17:15:58 +0000 UTC InitContainerStatuses:[] ContainerStatuses:[{Name:coredns State:{Waiting:nil Running:nil Terminated:&ContainerStateTerminated{ExitCode:0,Signal:0,Reason:Completed,Message:,StartedAt:2024-09-16 17:16:02 +0000 UTC,FinishedAt:2024-09-16 17:16:09 +0000 UTC,ContainerID:docker://532fd6f586dd17aef593b85bab83ad072e43ac5788f8b0d8e0c848392d3fb04e,}} LastTerminationState:{Waiting:nil Running:nil Terminated:nil} Ready:false RestartCount:0 Image:registry.k8s.io/coredns/coredns:v1.11.3 ImageID:docker-pullable://registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e ContainerID:docker://532fd6f586dd17aef593b85bab83ad072e43ac5788f8b0d8e0c848392d3fb04e Started:0xc00227c1e0 AllocatedResources:map[] Resources:nil VolumeMounts:[{Name:config-volume MountPath:/etc/coredns ReadOnly:true RecursiveReadOnly:0xc00223eb50} {Name:kube-api-access-x7lq2 MountPath:/var/run/secrets/kubernetes.io/serviceaccount ReadOnly:true RecursiveReadOnly:0xc00223eb60}] User:nil AllocatedResourcesStatus:[]}] QOSClass:Burstable EphemeralContainerStatuses:[] Resize: ResourceClaimStatuses:[]}
	I0916 17:16:11.768659  114275 pod_ready.go:79] waiting up to 6m0s for pod "coredns-7c65d6cfc9-wnhjq" in "kube-system" namespace to be "Ready" ...
	I0916 17:16:11.850327  114275 pod_ready.go:93] pod "coredns-7c65d6cfc9-wnhjq" in "kube-system" namespace has status "Ready":"True"
	I0916 17:16:11.850360  114275 pod_ready.go:82] duration metric: took 81.69113ms for pod "coredns-7c65d6cfc9-wnhjq" in "kube-system" namespace to be "Ready" ...
	I0916 17:16:11.850374  114275 pod_ready.go:79] waiting up to 6m0s for pod "etcd-addons-539053" in "kube-system" namespace to be "Ready" ...
	I0916 17:16:11.856131  114275 pod_ready.go:93] pod "etcd-addons-539053" in "kube-system" namespace has status "Ready":"True"
	I0916 17:16:11.856161  114275 pod_ready.go:82] duration metric: took 5.778739ms for pod "etcd-addons-539053" in "kube-system" namespace to be "Ready" ...
	I0916 17:16:11.856189  114275 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-addons-539053" in "kube-system" namespace to be "Ready" ...
	I0916 17:16:11.864310  114275 pod_ready.go:93] pod "kube-apiserver-addons-539053" in "kube-system" namespace has status "Ready":"True"
	I0916 17:16:11.864396  114275 pod_ready.go:82] duration metric: took 8.195217ms for pod "kube-apiserver-addons-539053" in "kube-system" namespace to be "Ready" ...
	I0916 17:16:11.864427  114275 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-addons-539053" in "kube-system" namespace to be "Ready" ...
	I0916 17:16:11.949910  114275 pod_ready.go:93] pod "kube-controller-manager-addons-539053" in "kube-system" namespace has status "Ready":"True"
	I0916 17:16:11.950027  114275 pod_ready.go:82] duration metric: took 85.576621ms for pod "kube-controller-manager-addons-539053" in "kube-system" namespace to be "Ready" ...
	I0916 17:16:11.950092  114275 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-bbn89" in "kube-system" namespace to be "Ready" ...
	I0916 17:16:11.962653  114275 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 17:16:11.973866  114275 ssh_runner.go:235] Completed: cat /var/lib/minikube/google_application_credentials.json: (5.292343012s)
	I0916 17:16:11.974099  114275 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml: (7.508031302s)
	I0916 17:16:11.974144  114275 addons.go:475] Verifying addon csi-hostpath-driver=true in "addons-539053"
	I0916 17:16:11.976300  114275 out.go:177]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.4.3
	I0916 17:16:11.976310  114275 out.go:177] * Verifying csi-hostpath-driver addon...
	I0916 17:16:11.977780  114275 out.go:177]   - Using image gcr.io/k8s-minikube/gcp-auth-webhook:v0.1.2
	I0916 17:16:11.978646  114275 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=csi-hostpath-driver" in ns "kube-system" ...
	I0916 17:16:11.978885  114275 addons.go:431] installing /etc/kubernetes/addons/gcp-auth-ns.yaml
	I0916 17:16:11.978901  114275 ssh_runner.go:362] scp gcp-auth/gcp-auth-ns.yaml --> /etc/kubernetes/addons/gcp-auth-ns.yaml (700 bytes)
	I0916 17:16:12.050785  114275 kapi.go:86] Found 3 Pods for label selector kubernetes.io/minikube-addons=csi-hostpath-driver
	I0916 17:16:12.050815  114275 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 17:16:12.149845  114275 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 17:16:12.152680  114275 addons.go:431] installing /etc/kubernetes/addons/gcp-auth-service.yaml
	I0916 17:16:12.152703  114275 ssh_runner.go:362] scp gcp-auth/gcp-auth-service.yaml --> /etc/kubernetes/addons/gcp-auth-service.yaml (788 bytes)
	I0916 17:16:12.164823  114275 pod_ready.go:93] pod "kube-proxy-bbn89" in "kube-system" namespace has status "Ready":"True"
	I0916 17:16:12.164850  114275 pod_ready.go:82] duration metric: took 214.728667ms for pod "kube-proxy-bbn89" in "kube-system" namespace to be "Ready" ...
	I0916 17:16:12.164863  114275 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-addons-539053" in "kube-system" namespace to be "Ready" ...
	I0916 17:16:12.178787  114275 addons.go:431] installing /etc/kubernetes/addons/gcp-auth-webhook.yaml
	I0916 17:16:12.178812  114275 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/gcp-auth-webhook.yaml (5421 bytes)
	I0916 17:16:12.265022  114275 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/gcp-auth-ns.yaml -f /etc/kubernetes/addons/gcp-auth-service.yaml -f /etc/kubernetes/addons/gcp-auth-webhook.yaml
	I0916 17:16:12.462212  114275 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 17:16:12.564283  114275 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 17:16:12.565231  114275 pod_ready.go:93] pod "kube-scheduler-addons-539053" in "kube-system" namespace has status "Ready":"True"
	I0916 17:16:12.565256  114275 pod_ready.go:82] duration metric: took 400.384397ms for pod "kube-scheduler-addons-539053" in "kube-system" namespace to be "Ready" ...
	I0916 17:16:12.565267  114275 pod_ready.go:39] duration metric: took 12.315483377s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0916 17:16:12.565292  114275 api_server.go:52] waiting for apiserver process to appear ...
	I0916 17:16:12.565358  114275 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0916 17:16:12.579458  114275 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 17:16:12.963319  114275 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 17:16:13.050480  114275 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 17:16:13.149704  114275 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 17:16:13.462760  114275 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 17:16:13.563585  114275 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 17:16:13.656691  114275 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (2.536136379s)
	I0916 17:16:13.658384  114275 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/gcp-auth-ns.yaml -f /etc/kubernetes/addons/gcp-auth-service.yaml -f /etc/kubernetes/addons/gcp-auth-webhook.yaml: (1.393321536s)
	I0916 17:16:13.658427  114275 ssh_runner.go:235] Completed: sudo pgrep -xnf kube-apiserver.*minikube.*: (1.09304533s)
	I0916 17:16:13.658467  114275 api_server.go:72] duration metric: took 15.429047794s to wait for apiserver process to appear ...
	I0916 17:16:13.658477  114275 api_server.go:88] waiting for apiserver healthz status ...
	I0916 17:16:13.658501  114275 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I0916 17:16:13.660310  114275 addons.go:475] Verifying addon gcp-auth=true in "addons-539053"
	I0916 17:16:13.663254  114275 api_server.go:279] https://192.168.49.2:8443/healthz returned 200:
	ok
	I0916 17:16:13.663435  114275 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 17:16:13.664089  114275 out.go:177] * Verifying gcp-auth addon...
	I0916 17:16:13.664335  114275 api_server.go:141] control plane version: v1.31.1
	I0916 17:16:13.664413  114275 api_server.go:131] duration metric: took 5.926461ms to wait for apiserver health ...
	I0916 17:16:13.664427  114275 system_pods.go:43] waiting for kube-system pods to appear ...
	I0916 17:16:13.666361  114275 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=gcp-auth" in ns "gcp-auth" ...
	I0916 17:16:13.762776  114275 kapi.go:86] Found 0 Pods for label selector kubernetes.io/minikube-addons=gcp-auth
	I0916 17:16:13.769572  114275 system_pods.go:59] 18 kube-system pods found
	I0916 17:16:13.769608  114275 system_pods.go:61] "coredns-7c65d6cfc9-wnhjq" [4245cb95-20b3-46fb-aed8-179d0f82e5d7] Running
	I0916 17:16:13.769621  114275 system_pods.go:61] "csi-hostpath-attacher-0" [d988b7d5-120c-40e5-81c7-be8d1ed5f1ce] Pending / Ready:ContainersNotReady (containers with unready status: [csi-attacher]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-attacher])
	I0916 17:16:13.769632  114275 system_pods.go:61] "csi-hostpath-resizer-0" [a718b743-e8df-48b9-9179-a2d18a674210] Pending / Ready:ContainersNotReady (containers with unready status: [csi-resizer]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-resizer])
	I0916 17:16:13.769641  114275 system_pods.go:61] "csi-hostpathplugin-z7svh" [e382009f-305c-4940-934c-eefa26c102c2] Pending / Ready:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter])
	I0916 17:16:13.769651  114275 system_pods.go:61] "etcd-addons-539053" [e815e4d8-c792-4ee9-b17b-ea384470c094] Running
	I0916 17:16:13.769657  114275 system_pods.go:61] "kube-apiserver-addons-539053" [978344af-4506-4d7b-904f-45e4d181fb39] Running
	I0916 17:16:13.769667  114275 system_pods.go:61] "kube-controller-manager-addons-539053" [c4755a4d-9e2b-49f4-ad5c-4da6a55139a6] Running
	I0916 17:16:13.769679  114275 system_pods.go:61] "kube-ingress-dns-minikube" [cf1503c1-0d6b-4055-b323-9deca4c25d13] Pending / Ready:ContainersNotReady (containers with unready status: [minikube-ingress-dns]) / ContainersReady:ContainersNotReady (containers with unready status: [minikube-ingress-dns])
	I0916 17:16:13.769687  114275 system_pods.go:61] "kube-proxy-bbn89" [e069094e-be35-4245-9ad9-c15c0632aaf3] Running
	I0916 17:16:13.769694  114275 system_pods.go:61] "kube-scheduler-addons-539053" [96e3dac4-5f35-4db3-ac7e-9f02a07dd492] Running
	I0916 17:16:13.769705  114275 system_pods.go:61] "metrics-server-84c5f94fbc-26pvn" [27fac5e6-437c-408b-b822-b4e2b393d323] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0916 17:16:13.769719  114275 system_pods.go:61] "nvidia-device-plugin-daemonset-vz86s" [aff62fc3-161b-49c5-9c01-0794bb7a44ae] Pending / Ready:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr]) / ContainersReady:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr])
	I0916 17:16:13.769732  114275 system_pods.go:61] "registry-66c9cd494c-6df5h" [4849ea19-88f6-4fbc-ba0f-e290ee2d0d80] Pending / Ready:ContainersNotReady (containers with unready status: [registry]) / ContainersReady:ContainersNotReady (containers with unready status: [registry])
	I0916 17:16:13.769743  114275 system_pods.go:61] "registry-proxy-9tc94" [556af332-2257-4db0-adcb-aca469cf992d] Pending / Ready:ContainersNotReady (containers with unready status: [registry-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-proxy])
	I0916 17:16:13.769755  114275 system_pods.go:61] "snapshot-controller-56fcc65765-mj4qq" [6755c5e5-4ee2-41ef-b33b-721fa04ab9b4] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I0916 17:16:13.769768  114275 system_pods.go:61] "snapshot-controller-56fcc65765-mj66x" [88fabcb5-0d8c-41ba-8ebe-b7267c1c5381] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I0916 17:16:13.769775  114275 system_pods.go:61] "storage-provisioner" [25007d02-d9ce-4f49-b276-c7bd60bf81eb] Running
	I0916 17:16:13.769784  114275 system_pods.go:61] "tiller-deploy-b48cc5f79-42m8q" [f1e7c66d-de05-47ef-b306-073ce6ee059d] Pending / Ready:ContainersNotReady (containers with unready status: [tiller]) / ContainersReady:ContainersNotReady (containers with unready status: [tiller])
	I0916 17:16:13.769794  114275 system_pods.go:74] duration metric: took 105.359796ms to wait for pod list to return data ...
	I0916 17:16:13.769807  114275 default_sa.go:34] waiting for default service account to be created ...
	I0916 17:16:13.772253  114275 default_sa.go:45] found service account: "default"
	I0916 17:16:13.772276  114275 default_sa.go:55] duration metric: took 2.46165ms for default service account to be created ...
	I0916 17:16:13.772286  114275 system_pods.go:116] waiting for k8s-apps to be running ...
	I0916 17:16:13.781245  114275 system_pods.go:86] 18 kube-system pods found
	I0916 17:16:13.781277  114275 system_pods.go:89] "coredns-7c65d6cfc9-wnhjq" [4245cb95-20b3-46fb-aed8-179d0f82e5d7] Running
	I0916 17:16:13.781290  114275 system_pods.go:89] "csi-hostpath-attacher-0" [d988b7d5-120c-40e5-81c7-be8d1ed5f1ce] Pending / Ready:ContainersNotReady (containers with unready status: [csi-attacher]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-attacher])
	I0916 17:16:13.781300  114275 system_pods.go:89] "csi-hostpath-resizer-0" [a718b743-e8df-48b9-9179-a2d18a674210] Pending / Ready:ContainersNotReady (containers with unready status: [csi-resizer]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-resizer])
	I0916 17:16:13.781313  114275 system_pods.go:89] "csi-hostpathplugin-z7svh" [e382009f-305c-4940-934c-eefa26c102c2] Pending / Ready:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter])
	I0916 17:16:13.781327  114275 system_pods.go:89] "etcd-addons-539053" [e815e4d8-c792-4ee9-b17b-ea384470c094] Running
	I0916 17:16:13.781336  114275 system_pods.go:89] "kube-apiserver-addons-539053" [978344af-4506-4d7b-904f-45e4d181fb39] Running
	I0916 17:16:13.781347  114275 system_pods.go:89] "kube-controller-manager-addons-539053" [c4755a4d-9e2b-49f4-ad5c-4da6a55139a6] Running
	I0916 17:16:13.781357  114275 system_pods.go:89] "kube-ingress-dns-minikube" [cf1503c1-0d6b-4055-b323-9deca4c25d13] Pending / Ready:ContainersNotReady (containers with unready status: [minikube-ingress-dns]) / ContainersReady:ContainersNotReady (containers with unready status: [minikube-ingress-dns])
	I0916 17:16:13.781364  114275 system_pods.go:89] "kube-proxy-bbn89" [e069094e-be35-4245-9ad9-c15c0632aaf3] Running
	I0916 17:16:13.781374  114275 system_pods.go:89] "kube-scheduler-addons-539053" [96e3dac4-5f35-4db3-ac7e-9f02a07dd492] Running
	I0916 17:16:13.781384  114275 system_pods.go:89] "metrics-server-84c5f94fbc-26pvn" [27fac5e6-437c-408b-b822-b4e2b393d323] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0916 17:16:13.781398  114275 system_pods.go:89] "nvidia-device-plugin-daemonset-vz86s" [aff62fc3-161b-49c5-9c01-0794bb7a44ae] Pending / Ready:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr]) / ContainersReady:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr])
	I0916 17:16:13.781408  114275 system_pods.go:89] "registry-66c9cd494c-6df5h" [4849ea19-88f6-4fbc-ba0f-e290ee2d0d80] Pending / Ready:ContainersNotReady (containers with unready status: [registry]) / ContainersReady:ContainersNotReady (containers with unready status: [registry])
	I0916 17:16:13.781421  114275 system_pods.go:89] "registry-proxy-9tc94" [556af332-2257-4db0-adcb-aca469cf992d] Pending / Ready:ContainersNotReady (containers with unready status: [registry-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-proxy])
	I0916 17:16:13.781433  114275 system_pods.go:89] "snapshot-controller-56fcc65765-mj4qq" [6755c5e5-4ee2-41ef-b33b-721fa04ab9b4] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I0916 17:16:13.781448  114275 system_pods.go:89] "snapshot-controller-56fcc65765-mj66x" [88fabcb5-0d8c-41ba-8ebe-b7267c1c5381] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I0916 17:16:13.781459  114275 system_pods.go:89] "storage-provisioner" [25007d02-d9ce-4f49-b276-c7bd60bf81eb] Running
	I0916 17:16:13.781469  114275 system_pods.go:89] "tiller-deploy-b48cc5f79-42m8q" [f1e7c66d-de05-47ef-b306-073ce6ee059d] Pending / Ready:ContainersNotReady (containers with unready status: [tiller]) / ContainersReady:ContainersNotReady (containers with unready status: [tiller])
	I0916 17:16:13.781482  114275 system_pods.go:126] duration metric: took 9.188191ms to wait for k8s-apps to be running ...
	I0916 17:16:13.781496  114275 system_svc.go:44] waiting for kubelet service to be running ....
	I0916 17:16:13.781551  114275 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0916 17:16:13.794622  114275 system_svc.go:56] duration metric: took 13.119825ms WaitForService to wait for kubelet
	I0916 17:16:13.794654  114275 kubeadm.go:582] duration metric: took 15.565233811s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0916 17:16:13.794677  114275 node_conditions.go:102] verifying NodePressure condition ...
	I0916 17:16:13.797505  114275 node_conditions.go:122] node storage ephemeral capacity is 304681132Ki
	I0916 17:16:13.797533  114275 node_conditions.go:123] node cpu capacity is 8
	I0916 17:16:13.797551  114275 node_conditions.go:105] duration metric: took 2.866224ms to run NodePressure ...
	I0916 17:16:13.797565  114275 start.go:241] waiting for startup goroutines ...
	I0916 17:16:13.962231  114275 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 17:16:13.983769  114275 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 17:16:14.078184  114275 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 17:16:14.462907  114275 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 17:16:14.482993  114275 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 17:16:14.578883  114275 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 17:16:14.962454  114275 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 17:16:14.982721  114275 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 17:16:15.078613  114275 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 17:16:15.462247  114275 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 17:16:15.483254  114275 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 17:16:15.579298  114275 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 17:16:15.963102  114275 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 17:16:15.983705  114275 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 17:16:16.078527  114275 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 17:16:16.462749  114275 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 17:16:16.482791  114275 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 17:16:16.578398  114275 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 17:16:16.961893  114275 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 17:16:16.983869  114275 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 17:16:17.078542  114275 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 17:16:17.462394  114275 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 17:16:17.482341  114275 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 17:16:17.633183  114275 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 17:16:17.962438  114275 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 17:16:17.983370  114275 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 17:16:18.079311  114275 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 17:16:18.461448  114275 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 17:16:18.483104  114275 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 17:16:18.578692  114275 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 17:16:18.962473  114275 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 17:16:18.982714  114275 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 17:16:19.077998  114275 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 17:16:19.461850  114275 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 17:16:19.482540  114275 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 17:16:19.577871  114275 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 17:16:19.961691  114275 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 17:16:19.983013  114275 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 17:16:20.077967  114275 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 17:16:20.461806  114275 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 17:16:20.482839  114275 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 17:16:20.578336  114275 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 17:16:20.962540  114275 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 17:16:20.982235  114275 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 17:16:21.078036  114275 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 17:16:21.462499  114275 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 17:16:21.482741  114275 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 17:16:21.578348  114275 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 17:16:21.962481  114275 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 17:16:21.982720  114275 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 17:16:22.078546  114275 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 17:16:22.462139  114275 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 17:16:22.483569  114275 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 17:16:22.578227  114275 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 17:16:22.962577  114275 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 17:16:22.982899  114275 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 17:16:23.079097  114275 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 17:16:23.462926  114275 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 17:16:23.483727  114275 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 17:16:23.578281  114275 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 17:16:23.962954  114275 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 17:16:24.065416  114275 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 17:16:24.078467  114275 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 17:16:24.462536  114275 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 17:16:24.482359  114275 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 17:16:24.578733  114275 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 17:16:24.962481  114275 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 17:16:24.982250  114275 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 17:16:25.077537  114275 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 17:16:25.461543  114275 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 17:16:25.483276  114275 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 17:16:25.578269  114275 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 17:16:25.962034  114275 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 17:16:25.983175  114275 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 17:16:26.078490  114275 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 17:16:26.462196  114275 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 17:16:26.482697  114275 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 17:16:26.578314  114275 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 17:16:26.961872  114275 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 17:16:26.983177  114275 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 17:16:27.078475  114275 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 17:16:27.461746  114275 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 17:16:27.482528  114275 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 17:16:27.577890  114275 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 17:16:27.960966  114275 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 17:16:27.982914  114275 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 17:16:28.078060  114275 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 17:16:28.462256  114275 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 17:16:28.482875  114275 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 17:16:28.578198  114275 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 17:16:28.962781  114275 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 17:16:28.982936  114275 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 17:16:29.078505  114275 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 17:16:29.462795  114275 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 17:16:29.482915  114275 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 17:16:29.578230  114275 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 17:16:29.961807  114275 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 17:16:29.982679  114275 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 17:16:30.078050  114275 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 17:16:30.461972  114275 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 17:16:30.482603  114275 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 17:16:30.578304  114275 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 17:16:30.961797  114275 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 17:16:30.982757  114275 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 17:16:31.078272  114275 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 17:16:31.462725  114275 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 17:16:31.482708  114275 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 17:16:31.579087  114275 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 17:16:31.962334  114275 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 17:16:31.982975  114275 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 17:16:32.078703  114275 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 17:16:32.462330  114275 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 17:16:32.522551  114275 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 17:16:32.577946  114275 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 17:16:32.961933  114275 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 17:16:32.983020  114275 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 17:16:33.078451  114275 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 17:16:33.462446  114275 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 17:16:33.482265  114275 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 17:16:33.577480  114275 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 17:16:33.961946  114275 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 17:16:33.982932  114275 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 17:16:34.078181  114275 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 17:16:34.462080  114275 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 17:16:34.483334  114275 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 17:16:34.577669  114275 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 17:16:34.961639  114275 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 17:16:34.982507  114275 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 17:16:35.078017  114275 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 17:16:35.462721  114275 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 17:16:35.483140  114275 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 17:16:35.579193  114275 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 17:16:35.961303  114275 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 17:16:35.982714  114275 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 17:16:36.078023  114275 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 17:16:36.461970  114275 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 17:16:36.482637  114275 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 17:16:36.579530  114275 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 17:16:36.961899  114275 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 17:16:36.983597  114275 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 17:16:37.078895  114275 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 17:16:37.462454  114275 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 17:16:37.482726  114275 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 17:16:37.578817  114275 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 17:16:37.962696  114275 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 17:16:38.064502  114275 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 17:16:38.077685  114275 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 17:16:38.462019  114275 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 17:16:38.482872  114275 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 17:16:38.578282  114275 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 17:16:38.961734  114275 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 17:16:38.982825  114275 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 17:16:39.078754  114275 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 17:16:39.461434  114275 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 17:16:39.482191  114275 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 17:16:39.578582  114275 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 17:16:39.963044  114275 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 17:16:39.983703  114275 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 17:16:40.078868  114275 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 17:16:40.462644  114275 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 17:16:40.483584  114275 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 17:16:40.579087  114275 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 17:16:40.962620  114275 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 17:16:40.983108  114275 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 17:16:41.078442  114275 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 17:16:41.462575  114275 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 17:16:41.600507  114275 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 17:16:41.600580  114275 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 17:16:41.961866  114275 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 17:16:41.983128  114275 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 17:16:42.078733  114275 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 17:16:42.462257  114275 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 17:16:42.483467  114275 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 17:16:42.578935  114275 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 17:16:42.961920  114275 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 17:16:42.982878  114275 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 17:16:43.078249  114275 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 17:16:43.461281  114275 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 17:16:43.482534  114275 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 17:16:43.578704  114275 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 17:16:43.962133  114275 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 17:16:43.982816  114275 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 17:16:44.078952  114275 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 17:16:44.461773  114275 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 17:16:44.482928  114275 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 17:16:44.578290  114275 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 17:16:44.961576  114275 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 17:16:44.982226  114275 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 17:16:45.078709  114275 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 17:16:45.462385  114275 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 17:16:45.482569  114275 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 17:16:45.578527  114275 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 17:16:45.962565  114275 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 17:16:45.983014  114275 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 17:16:46.078220  114275 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 17:16:46.461863  114275 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 17:16:46.482907  114275 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 17:16:46.578400  114275 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 17:16:46.961016  114275 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 17:16:46.983007  114275 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 17:16:47.078003  114275 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 17:16:47.462179  114275 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 17:16:47.482384  114275 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 17:16:47.579372  114275 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 17:16:47.962438  114275 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 17:16:47.983133  114275 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 17:16:48.078864  114275 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 17:16:48.462142  114275 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 17:16:48.483698  114275 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 17:16:48.579883  114275 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 17:16:48.962432  114275 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 17:16:48.982597  114275 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 17:16:49.078722  114275 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 17:16:49.462503  114275 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 17:16:49.483067  114275 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 17:16:49.579032  114275 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 17:16:49.961929  114275 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 17:16:49.983467  114275 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 17:16:50.079161  114275 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 17:16:50.462327  114275 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 17:16:50.482527  114275 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 17:16:50.577912  114275 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 17:16:50.961721  114275 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 17:16:50.982654  114275 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 17:16:51.078046  114275 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 17:16:51.462333  114275 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 17:16:51.482273  114275 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 17:16:51.577803  114275 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 17:16:51.962011  114275 kapi.go:107] duration metric: took 41.003859416s to wait for kubernetes.io/minikube-addons=registry ...
	I0916 17:16:51.982779  114275 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 17:16:52.078054  114275 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 17:16:52.483652  114275 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 17:16:52.579398  114275 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 17:16:52.983507  114275 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 17:16:53.084011  114275 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 17:16:53.483573  114275 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 17:16:53.579076  114275 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 17:16:53.983352  114275 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 17:16:54.077879  114275 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 17:16:54.482186  114275 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 17:16:54.578566  114275 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 17:16:54.983013  114275 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 17:16:55.083118  114275 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 17:16:55.484598  114275 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 17:16:55.578997  114275 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 17:16:55.982754  114275 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 17:16:56.078302  114275 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 17:16:56.483440  114275 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 17:16:56.578810  114275 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 17:16:56.983348  114275 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 17:16:57.078155  114275 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 17:16:57.483166  114275 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 17:16:57.578713  114275 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 17:16:57.984333  114275 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 17:16:58.078378  114275 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 17:16:58.483254  114275 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 17:16:58.577735  114275 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 17:16:58.982551  114275 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 17:16:59.078195  114275 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 17:16:59.482582  114275 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 17:16:59.577927  114275 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 17:16:59.983426  114275 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 17:17:00.077763  114275 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 17:17:00.483871  114275 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 17:17:00.579028  114275 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 17:17:00.983566  114275 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 17:17:01.078431  114275 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 17:17:01.483533  114275 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 17:17:01.577992  114275 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 17:17:01.983578  114275 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 17:17:02.079330  114275 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 17:17:02.484505  114275 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 17:17:02.584228  114275 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 17:17:02.983438  114275 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 17:17:03.078542  114275 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 17:17:03.483110  114275 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 17:17:03.578667  114275 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 17:17:03.983322  114275 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 17:17:04.078748  114275 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 17:17:04.551846  114275 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 17:17:04.578478  114275 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 17:17:04.984099  114275 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 17:17:05.078967  114275 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 17:17:05.483821  114275 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 17:17:05.578442  114275 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 17:17:05.985824  114275 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 17:17:06.085274  114275 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 17:17:06.482915  114275 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 17:17:06.578286  114275 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 17:17:06.982801  114275 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 17:17:07.078413  114275 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 17:17:07.482349  114275 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 17:17:07.578108  114275 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 17:17:07.983395  114275 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 17:17:08.078523  114275 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 17:17:08.483979  114275 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 17:17:08.578727  114275 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 17:17:08.983956  114275 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 17:17:09.078635  114275 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 17:17:09.483716  114275 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 17:17:09.603657  114275 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 17:17:09.982861  114275 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 17:17:10.082925  114275 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 17:17:10.482296  114275 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 17:17:10.578543  114275 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 17:17:10.983226  114275 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 17:17:11.078143  114275 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 17:17:11.482678  114275 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 17:17:11.578699  114275 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 17:17:11.983564  114275 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 17:17:12.079092  114275 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 17:17:12.485317  114275 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 17:17:12.579231  114275 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 17:17:12.983732  114275 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 17:17:13.078565  114275 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 17:17:13.482520  114275 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 17:17:13.577976  114275 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 17:17:13.982645  114275 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 17:17:14.077861  114275 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 17:17:14.482278  114275 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 17:17:14.578004  114275 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 17:17:14.984166  114275 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 17:17:15.078857  114275 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 17:17:15.484298  114275 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 17:17:15.579403  114275 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 17:17:15.983870  114275 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 17:17:16.078667  114275 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 17:17:16.483104  114275 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 17:17:16.579286  114275 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 17:17:16.983828  114275 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 17:17:17.084323  114275 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 17:17:17.483361  114275 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 17:17:17.582010  114275 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 17:17:17.984228  114275 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 17:17:18.084269  114275 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 17:17:18.483634  114275 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 17:17:18.579191  114275 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 17:17:18.982859  114275 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 17:17:19.083388  114275 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 17:17:19.483356  114275 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 17:17:19.578825  114275 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 17:17:19.983578  114275 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 17:17:20.078244  114275 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 17:17:20.484596  114275 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 17:17:20.578834  114275 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 17:17:20.983817  114275 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 17:17:21.078032  114275 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 17:17:21.483335  114275 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 17:17:21.578994  114275 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 17:17:21.982543  114275 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 17:17:22.078223  114275 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 17:17:22.483954  114275 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 17:17:22.583983  114275 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 17:17:22.984968  114275 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 17:17:23.078718  114275 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 17:17:23.482514  114275 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 17:17:23.578451  114275 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 17:17:23.984093  114275 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 17:17:24.078647  114275 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 17:17:24.483481  114275 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 17:17:24.579178  114275 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 17:17:25.049566  114275 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 17:17:25.078409  114275 kapi.go:107] duration metric: took 1m17.004084413s to wait for app.kubernetes.io/name=ingress-nginx ...
	I0916 17:17:25.482991  114275 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 17:17:25.984150  114275 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 17:17:26.484246  114275 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 17:17:26.983927  114275 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 17:17:27.483517  114275 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 17:17:27.983590  114275 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 17:17:28.483749  114275 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 17:17:28.984014  114275 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 17:17:29.484311  114275 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 17:17:29.982884  114275 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 17:17:30.482601  114275 kapi.go:107] duration metric: took 1m18.503954795s to wait for kubernetes.io/minikube-addons=csi-hostpath-driver ...
	I0916 17:17:36.170512  114275 kapi.go:86] Found 1 Pods for label selector kubernetes.io/minikube-addons=gcp-auth
	I0916 17:17:36.170534  114275 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 17:17:36.670451  114275 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 17:17:37.169058  114275 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 17:17:37.670186  114275 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 17:17:38.170398  114275 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 17:17:38.669314  114275 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 17:17:39.169063  114275 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 17:17:39.670386  114275 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 17:17:40.170198  114275 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 17:17:40.669698  114275 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 17:17:41.169816  114275 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 17:17:41.669990  114275 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 17:17:42.169931  114275 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 17:17:42.670426  114275 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 17:17:43.170255  114275 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 17:17:43.670136  114275 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 17:17:44.169916  114275 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 17:17:44.669597  114275 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 17:17:45.169475  114275 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 17:17:45.669370  114275 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 17:17:46.170232  114275 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 17:17:46.670224  114275 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 17:17:47.169031  114275 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 17:17:47.669746  114275 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 17:17:48.169803  114275 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 17:17:48.669903  114275 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 17:17:49.169778  114275 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 17:17:49.669391  114275 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 17:17:50.170208  114275 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 17:17:50.669788  114275 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 17:17:51.169686  114275 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 17:17:51.669769  114275 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 17:17:52.170444  114275 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 17:17:52.669820  114275 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 17:17:53.169611  114275 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 17:17:53.669659  114275 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 17:17:54.169947  114275 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 17:17:54.669741  114275 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 17:17:55.169852  114275 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 17:17:55.669746  114275 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 17:17:56.169505  114275 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 17:17:56.669820  114275 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 17:17:57.169896  114275 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 17:17:57.669972  114275 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 17:17:58.170510  114275 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 17:17:58.669999  114275 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 17:17:59.169671  114275 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 17:17:59.669379  114275 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 17:18:00.170140  114275 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 17:18:00.670006  114275 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 17:18:01.169955  114275 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 17:18:01.670342  114275 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 17:18:02.170661  114275 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 17:18:02.670338  114275 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 17:18:03.169750  114275 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 17:18:03.669809  114275 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 17:18:04.169848  114275 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 17:18:04.669541  114275 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 17:18:05.169351  114275 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 17:18:05.669743  114275 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 17:18:06.170140  114275 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 17:18:06.669947  114275 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 17:18:07.169977  114275 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 17:18:07.670273  114275 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 17:18:08.170092  114275 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 17:18:08.669606  114275 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 17:18:09.169457  114275 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 17:18:09.669304  114275 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 17:18:10.170261  114275 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 17:18:10.669805  114275 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 17:18:11.170187  114275 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 17:18:11.670019  114275 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 17:18:12.170333  114275 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 17:18:12.670156  114275 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 17:18:13.169866  114275 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 17:18:13.669707  114275 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 17:18:14.170129  114275 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 17:18:14.669902  114275 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 17:18:15.169837  114275 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 17:18:15.669876  114275 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 17:18:16.170225  114275 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 17:18:16.670491  114275 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 17:18:17.169454  114275 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 17:18:17.670329  114275 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 17:18:18.170496  114275 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 17:18:18.669562  114275 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 17:18:19.169325  114275 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 17:18:19.669701  114275 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 17:18:20.170144  114275 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 17:18:20.669831  114275 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 17:18:21.169779  114275 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 17:18:21.669952  114275 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 17:18:22.170438  114275 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 17:18:22.670438  114275 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 17:18:23.170005  114275 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 17:18:23.669882  114275 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 17:18:24.170219  114275 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 17:18:24.669951  114275 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 17:18:25.169767  114275 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 17:18:25.669655  114275 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 17:18:26.169876  114275 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 17:18:26.670255  114275 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 17:18:27.169755  114275 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 17:18:27.669516  114275 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 17:18:28.169738  114275 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 17:18:28.671547  114275 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 17:18:29.169462  114275 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 17:18:29.669294  114275 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 17:18:30.170113  114275 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 17:18:30.669791  114275 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 17:18:31.169850  114275 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 17:18:31.670060  114275 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 17:18:32.170436  114275 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 17:18:32.671115  114275 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 17:18:33.169976  114275 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 17:18:33.670051  114275 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 17:18:34.170441  114275 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 17:18:34.669849  114275 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 17:18:35.169722  114275 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 17:18:35.669675  114275 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 17:18:36.169772  114275 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 17:18:36.669697  114275 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 17:18:37.170280  114275 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 17:18:37.670298  114275 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 17:18:38.170153  114275 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 17:18:38.669055  114275 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 17:18:39.169981  114275 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 17:18:39.670187  114275 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 17:18:40.170198  114275 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 17:18:40.670164  114275 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 17:18:41.170535  114275 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 17:18:41.670437  114275 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 17:18:42.170472  114275 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 17:18:42.670438  114275 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 17:18:43.170014  114275 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 17:18:43.697546  114275 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 17:18:44.169429  114275 kapi.go:107] duration metric: took 2m30.50306224s to wait for kubernetes.io/minikube-addons=gcp-auth ...
	I0916 17:18:44.171023  114275 out.go:177] * Your GCP credentials will now be mounted into every pod created in the addons-539053 cluster.
	I0916 17:18:44.172480  114275 out.go:177] * If you don't want your credentials mounted into a specific pod, add a label with the `gcp-auth-skip-secret` key to your pod configuration.
	I0916 17:18:44.173733  114275 out.go:177] * If you want existing pods to be mounted with credentials, either recreate them or rerun addons enable with --refresh.
	I0916 17:18:44.175110  114275 out.go:177] * Enabled addons: ingress-dns, storage-provisioner, storage-provisioner-rancher, nvidia-device-plugin, cloud-spanner, default-storageclass, volcano, helm-tiller, inspektor-gadget, metrics-server, yakd, volumesnapshots, registry, ingress, csi-hostpath-driver, gcp-auth
	I0916 17:18:44.176398  114275 addons.go:510] duration metric: took 2m45.946962174s for enable addons: enabled=[ingress-dns storage-provisioner storage-provisioner-rancher nvidia-device-plugin cloud-spanner default-storageclass volcano helm-tiller inspektor-gadget metrics-server yakd volumesnapshots registry ingress csi-hostpath-driver gcp-auth]
	I0916 17:18:44.176438  114275 start.go:246] waiting for cluster config update ...
	I0916 17:18:44.176461  114275 start.go:255] writing updated cluster config ...
	I0916 17:18:44.176757  114275 ssh_runner.go:195] Run: rm -f paused
	I0916 17:18:44.228658  114275 start.go:600] kubectl: 1.31.1, cluster: 1.31.1 (minor skew: 0)
	I0916 17:18:44.230455  114275 out.go:177] * Done! kubectl is now configured to use "addons-539053" cluster and "default" namespace by default
	
	
	==> Docker <==
	Sep 16 17:28:24 addons-539053 cri-dockerd[1599]: time="2024-09-16T17:28:24Z" level=error msg="Set backoffDuration to : 1m0s for container ID '47129abf88469d14af7141e84fab19c6666750b545c5b0631f83ea9c8e5b2880'"
	Sep 16 17:28:24 addons-539053 cri-dockerd[1599]: time="2024-09-16T17:28:24Z" level=error msg="error getting RW layer size for container ID 'f1e92bed97f1ae8e3f53c54b614cdf5392b2a2c708713ccca5cce82d871ab399': Error response from daemon: No such container: f1e92bed97f1ae8e3f53c54b614cdf5392b2a2c708713ccca5cce82d871ab399"
	Sep 16 17:28:24 addons-539053 cri-dockerd[1599]: time="2024-09-16T17:28:24Z" level=error msg="Set backoffDuration to : 1m0s for container ID 'f1e92bed97f1ae8e3f53c54b614cdf5392b2a2c708713ccca5cce82d871ab399'"
	Sep 16 17:28:24 addons-539053 cri-dockerd[1599]: time="2024-09-16T17:28:24Z" level=error msg="error getting RW layer size for container ID '0505e3bc339fa5be4a8f2a0ebc44e15018e7f7c97fc9cb77f62c767415a17180': Error response from daemon: No such container: 0505e3bc339fa5be4a8f2a0ebc44e15018e7f7c97fc9cb77f62c767415a17180"
	Sep 16 17:28:24 addons-539053 cri-dockerd[1599]: time="2024-09-16T17:28:24Z" level=error msg="Set backoffDuration to : 1m0s for container ID '0505e3bc339fa5be4a8f2a0ebc44e15018e7f7c97fc9cb77f62c767415a17180'"
	Sep 16 17:28:26 addons-539053 cri-dockerd[1599]: time="2024-09-16T17:28:26Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/df3cb3216e7b2532f5419de5758bfe5fde68056c58729526d4e36a7632ed9e12/resolv.conf as [nameserver 10.96.0.10 search headlamp.svc.cluster.local svc.cluster.local cluster.local southamerica-west1-a.c.k8s-minikube.internal c.k8s-minikube.internal google.internal options ndots:5]"
	Sep 16 17:28:26 addons-539053 dockerd[1335]: time="2024-09-16T17:28:26.596370094Z" level=warning msg="reference for unknown type: " digest="sha256:8825bb13459c64dcf9503d836b94b49c97dc831aff7c325a6eed68961388cf9c" remote="ghcr.io/headlamp-k8s/headlamp@sha256:8825bb13459c64dcf9503d836b94b49c97dc831aff7c325a6eed68961388cf9c"
	Sep 16 17:28:29 addons-539053 dockerd[1335]: time="2024-09-16T17:28:29.068655935Z" level=info msg="ignoring event" container=c87b579f39f03e7dd74cfc8e98f21b02379d5cfd0ce59bd821e06fef7ff09593 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Sep 16 17:28:29 addons-539053 dockerd[1335]: time="2024-09-16T17:28:29.071267133Z" level=info msg="ignoring event" container=a70c2f13c30f71ed58c17c69e4854d55498ffcf49d9c747b111cc48547531455 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Sep 16 17:28:29 addons-539053 dockerd[1335]: time="2024-09-16T17:28:29.288638061Z" level=info msg="ignoring event" container=c49ad22424f957b4df1600a3763506700b034d1a7df6d3367b20ec6b19e553c7 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Sep 16 17:28:29 addons-539053 dockerd[1335]: time="2024-09-16T17:28:29.295676627Z" level=info msg="ignoring event" container=b1940c3dced086f5673e7a3c6bf5b0497bbceaee3fda51e05be7517f4c947079 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Sep 16 17:28:32 addons-539053 cri-dockerd[1599]: time="2024-09-16T17:28:32Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/2e229c2504e4309757a439a5e4eda9ae6e9f4ba13849bff39acc1c7c23e7e0ed/resolv.conf as [nameserver 10.96.0.10 search default.svc.cluster.local svc.cluster.local cluster.local southamerica-west1-a.c.k8s-minikube.internal c.k8s-minikube.internal google.internal options ndots:5]"
	Sep 16 17:28:32 addons-539053 dockerd[1335]: time="2024-09-16T17:28:32.208660805Z" level=info msg="ignoring event" container=9843ad7bd148dabe17e3aef84e11dbe3c3fa57d24e0c80f876959e3b8c41cfcd module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Sep 16 17:28:32 addons-539053 dockerd[1335]: time="2024-09-16T17:28:32.276042145Z" level=info msg="ignoring event" container=2a1c852267cc02ee23be14874ce5fb8e623d5957c7bfcd42ba5f297bb4e2c74b module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Sep 16 17:28:34 addons-539053 cri-dockerd[1599]: time="2024-09-16T17:28:34Z" level=info msg="Stop pulling image ghcr.io/headlamp-k8s/headlamp:v0.25.1@sha256:8825bb13459c64dcf9503d836b94b49c97dc831aff7c325a6eed68961388cf9c: Status: Downloaded newer image for ghcr.io/headlamp-k8s/headlamp@sha256:8825bb13459c64dcf9503d836b94b49c97dc831aff7c325a6eed68961388cf9c"
	Sep 16 17:28:37 addons-539053 dockerd[1335]: time="2024-09-16T17:28:37.091903555Z" level=info msg="Container failed to exit within 2s of signal 15 - using the force" container=076d4be4b58e4948fa5151c3c46abe6d974fb02a9bc025801258f081bab349f3
	Sep 16 17:28:37 addons-539053 dockerd[1335]: time="2024-09-16T17:28:37.147524879Z" level=info msg="ignoring event" container=076d4be4b58e4948fa5151c3c46abe6d974fb02a9bc025801258f081bab349f3 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Sep 16 17:28:37 addons-539053 cri-dockerd[1599]: time="2024-09-16T17:28:37Z" level=info msg="Failed to read pod IP from plugin/docker: networkPlugin cni failed on the status hook for pod \"ingress-nginx-controller-bc57996ff-j5nxx_ingress-nginx\": unexpected command output Device \"eth0\" does not exist.\n with error: exit status 1"
	Sep 16 17:28:37 addons-539053 dockerd[1335]: time="2024-09-16T17:28:37.281181532Z" level=info msg="ignoring event" container=7a956fa4d115d515dde94d8f62f48e2a1e8a0c8c6efc7a9896c5b2ae0bfca5b3 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Sep 16 17:28:37 addons-539053 cri-dockerd[1599]: time="2024-09-16T17:28:37Z" level=info msg="Stop pulling image docker.io/kicbase/echo-server:1.0: Status: Downloaded newer image for kicbase/echo-server:1.0"
	Sep 16 17:28:39 addons-539053 dockerd[1335]: time="2024-09-16T17:28:39.519845873Z" level=info msg="ignoring event" container=05d9b8b1d2b9137352fd9da6e9b7a625db99434d827fe5f557e938c1e70c2e90 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Sep 16 17:28:39 addons-539053 dockerd[1335]: time="2024-09-16T17:28:39.983094618Z" level=info msg="ignoring event" container=dfd31e12ab9bfbc878163c2a80b52317b5ad35e716370d52f09f91e8aacd5aba module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Sep 16 17:28:40 addons-539053 dockerd[1335]: time="2024-09-16T17:28:40.061304860Z" level=info msg="ignoring event" container=4c73f7e2709ed5d4d891821b20b365174723c0b3873e814415e40e2ddc6f9e2f module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Sep 16 17:28:40 addons-539053 dockerd[1335]: time="2024-09-16T17:28:40.183622592Z" level=info msg="ignoring event" container=6ac17af3097bd415c9ad178449ea72399d4d8b6fe7e470e48b30da8e4f2409e6 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Sep 16 17:28:40 addons-539053 dockerd[1335]: time="2024-09-16T17:28:40.213768904Z" level=info msg="ignoring event" container=f9faf95fd60475bffcd8bf735ab1d2d30c99e5e22ae3b3c41b84fa0abc109e18 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                                        CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	ef034ee4d4d32       kicbase/echo-server@sha256:127ac38a2bb9537b7f252addff209ea6801edcac8a92c8b1104dacd66a583ed6                                  3 seconds ago       Running             hello-world-app           0                   2e229c2504e43       hello-world-app-55bf9c44b4-t96vs
	42c9065ab8bf9       ghcr.io/headlamp-k8s/headlamp@sha256:8825bb13459c64dcf9503d836b94b49c97dc831aff7c325a6eed68961388cf9c                        6 seconds ago       Running             headlamp                  0                   df3cb3216e7b2       headlamp-7b5c95b59d-fbdl8
	a5d4a38ea5d8d       nginx@sha256:a5127daff3d6f4606be3100a252419bfa84fd6ee5cd74d0feaca1a5068f97dcf                                                16 seconds ago      Running             nginx                     0                   a219ddee24cca       nginx
	1d005c612eb5e       gcr.io/k8s-minikube/gcp-auth-webhook@sha256:e6c5b3bc32072ea370d34c27836efd11b3519d25bd444c2a8efc339cff0e20fb                 9 minutes ago       Running             gcp-auth                  0                   4b8b65c562239       gcp-auth-89d5ffd79-g5vjr
	ff7e55018fc5c       ce263a8653f9c                                                                                                                11 minutes ago      Exited              patch                     1                   f00bf5e9346c1       ingress-nginx-admission-patch-rjmgh
	3978b6e47c997       registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:a320a50cc91bd15fd2d6fa6de58bd98c1bd64b9a6f926ce23a600d87043455a3   11 minutes ago      Exited              create                    0                   6ac7fa646745c       ingress-nginx-admission-create-c9s92
	81f5330bbecca       6e38f40d628db                                                                                                                12 minutes ago      Running             storage-provisioner       0                   6021c44abb111       storage-provisioner
	1364b0b4b1295       c69fa2e9cbf5f                                                                                                                12 minutes ago      Running             coredns                   0                   336f91b8e8668       coredns-7c65d6cfc9-wnhjq
	464a6082c92d2       60c005f310ff3                                                                                                                12 minutes ago      Running             kube-proxy                0                   df8f8d4ee5ab8       kube-proxy-bbn89
	4bb47611cde2c       2e96e5913fc06                                                                                                                12 minutes ago      Running             etcd                      0                   1d45d71be388f       etcd-addons-539053
	71598060e1539       9aa1fad941575                                                                                                                12 minutes ago      Running             kube-scheduler            0                   a517be7aece40       kube-scheduler-addons-539053
	adee2fab665b0       6bab7719df100                                                                                                                12 minutes ago      Running             kube-apiserver            0                   4fd4c56b95406       kube-apiserver-addons-539053
	ddf08198399f9       175ffd71cce3d                                                                                                                12 minutes ago      Running             kube-controller-manager   0                   74b7a82ac60d5       kube-controller-manager-addons-539053
	
	
	==> coredns [1364b0b4b129] <==
	[INFO] 10.244.0.22:57420 - 45886 "AAAA IN hello-world-app.default.svc.cluster.local.c.k8s-minikube.internal. udp 83 false 512" NXDOMAIN qr,rd,ra 83 0.007515328s
	[INFO] 10.244.0.22:38041 - 34930 "A IN hello-world-app.default.svc.cluster.local.google.internal. udp 75 false 512" NXDOMAIN qr,rd,ra 75 0.003137688s
	[INFO] 10.244.0.22:44061 - 46554 "A IN hello-world-app.default.svc.cluster.local.google.internal. udp 75 false 512" NXDOMAIN qr,rd,ra 75 0.007263598s
	[INFO] 10.244.0.22:47473 - 48555 "A IN hello-world-app.default.svc.cluster.local.google.internal. udp 75 false 512" NXDOMAIN qr,rd,ra 75 0.007810311s
	[INFO] 10.244.0.22:49691 - 43923 "A IN hello-world-app.default.svc.cluster.local.google.internal. udp 75 false 512" NXDOMAIN qr,rd,ra 75 0.00766984s
	[INFO] 10.244.0.22:57420 - 62481 "A IN hello-world-app.default.svc.cluster.local.google.internal. udp 75 false 512" NXDOMAIN qr,rd,ra 75 0.005043512s
	[INFO] 10.244.0.22:58098 - 18081 "A IN hello-world-app.default.svc.cluster.local.google.internal. udp 75 false 512" NXDOMAIN qr,rd,ra 75 0.007797378s
	[INFO] 10.244.0.22:35638 - 26250 "A IN hello-world-app.default.svc.cluster.local.google.internal. udp 75 false 512" NXDOMAIN qr,rd,ra 75 0.007750623s
	[INFO] 10.244.0.22:40013 - 5118 "A IN hello-world-app.default.svc.cluster.local.google.internal. udp 75 false 512" NXDOMAIN qr,rd,ra 75 0.005231309s
	[INFO] 10.244.0.22:40013 - 54757 "AAAA IN hello-world-app.default.svc.cluster.local.google.internal. udp 75 false 512" NXDOMAIN qr,rd,ra 75 0.055780546s
	[INFO] 10.244.0.22:57420 - 54355 "AAAA IN hello-world-app.default.svc.cluster.local.google.internal. udp 75 false 512" NXDOMAIN qr,rd,ra 75 0.056908987s
	[INFO] 10.244.0.22:40013 - 16750 "A IN hello-world-app.default.svc.cluster.local. udp 59 false 512" NOERROR qr,aa,rd 116 0.000075086s
	[INFO] 10.244.0.22:57420 - 9350 "A IN hello-world-app.default.svc.cluster.local. udp 59 false 512" NOERROR qr,aa,rd 116 0.000081444s
	[INFO] 10.244.0.22:49691 - 14661 "AAAA IN hello-world-app.default.svc.cluster.local.google.internal. udp 75 false 512" NXDOMAIN qr,rd,ra 75 0.003192004s
	[INFO] 10.244.0.22:44061 - 50318 "AAAA IN hello-world-app.default.svc.cluster.local.google.internal. udp 75 false 512" NXDOMAIN qr,rd,ra 75 0.005302067s
	[INFO] 10.244.0.22:35638 - 22487 "AAAA IN hello-world-app.default.svc.cluster.local.google.internal. udp 75 false 512" NXDOMAIN qr,rd,ra 75 0.005333814s
	[INFO] 10.244.0.22:38041 - 5951 "AAAA IN hello-world-app.default.svc.cluster.local.google.internal. udp 75 false 512" NXDOMAIN qr,rd,ra 75 0.060976887s
	[INFO] 10.244.0.22:47473 - 65406 "AAAA IN hello-world-app.default.svc.cluster.local.google.internal. udp 75 false 512" NXDOMAIN qr,rd,ra 75 0.00410309s
	[INFO] 10.244.0.22:38041 - 215 "A IN hello-world-app.default.svc.cluster.local. udp 59 false 512" NOERROR qr,aa,rd 116 0.000059408s
	[INFO] 10.244.0.22:35638 - 57010 "A IN hello-world-app.default.svc.cluster.local. udp 59 false 512" NOERROR qr,aa,rd 116 0.000075901s
	[INFO] 10.244.0.22:44061 - 47572 "A IN hello-world-app.default.svc.cluster.local. udp 59 false 512" NOERROR qr,aa,rd 116 0.000102401s
	[INFO] 10.244.0.22:49691 - 50644 "A IN hello-world-app.default.svc.cluster.local. udp 59 false 512" NOERROR qr,aa,rd 116 0.000088036s
	[INFO] 10.244.0.22:58098 - 43549 "AAAA IN hello-world-app.default.svc.cluster.local.google.internal. udp 75 false 512" NXDOMAIN qr,rd,ra 75 0.003215002s
	[INFO] 10.244.0.22:47473 - 9005 "A IN hello-world-app.default.svc.cluster.local. udp 59 false 512" NOERROR qr,aa,rd 116 0.000078792s
	[INFO] 10.244.0.22:58098 - 921 "A IN hello-world-app.default.svc.cluster.local. udp 59 false 512" NOERROR qr,aa,rd 116 0.000065791s
	
	
	==> describe nodes <==
	Name:               addons-539053
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=addons-539053
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=91d692c919753635ac118b7ed7ae5503b67c63c8
	                    minikube.k8s.io/name=addons-539053
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_09_16T17_15_54_0700
	                    minikube.k8s.io/version=v1.34.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	                    topology.hostpath.csi/node=addons-539053
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/cri-dockerd.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 16 Sep 2024 17:15:50 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  addons-539053
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 16 Sep 2024 17:28:37 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 16 Sep 2024 17:27:58 +0000   Mon, 16 Sep 2024 17:15:49 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 16 Sep 2024 17:27:58 +0000   Mon, 16 Sep 2024 17:15:49 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 16 Sep 2024 17:27:58 +0000   Mon, 16 Sep 2024 17:15:49 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 16 Sep 2024 17:27:58 +0000   Mon, 16 Sep 2024 17:15:51 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.49.2
	  Hostname:    addons-539053
	Capacity:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32859320Ki
	  pods:               110
	Allocatable:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32859320Ki
	  pods:               110
	System Info:
	  Machine ID:                 f1bbe9e4b66c429aa7dafe493aa619a6
	  System UUID:                064c562e-413e-4af6-ba4a-4df004b28d4d
	  Boot ID:                    606f120e-2bee-42b2-a3a5-24f53b1f28a3
	  Kernel Version:             5.15.0-1068-gcp
	  OS Image:                   Ubuntu 22.04.4 LTS
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  docker://27.2.1
	  Kubelet Version:            v1.31.1
	  Kube-Proxy Version:         v1.31.1
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (12 in total)
	  Namespace                   Name                                     CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                     ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         9m13s
	  default                     hello-world-app-55bf9c44b4-t96vs         0 (0%)        0 (0%)      0 (0%)           0 (0%)         9s
	  default                     nginx                                    0 (0%)        0 (0%)      0 (0%)           0 (0%)         20s
	  gcp-auth                    gcp-auth-89d5ffd79-g5vjr                 0 (0%)        0 (0%)      0 (0%)           0 (0%)         11m
	  headlamp                    headlamp-7b5c95b59d-fbdl8                0 (0%)        0 (0%)      0 (0%)           0 (0%)         15s
	  kube-system                 coredns-7c65d6cfc9-wnhjq                 100m (1%)     0 (0%)      70Mi (0%)        170Mi (0%)     12m
	  kube-system                 etcd-addons-539053                       100m (1%)     0 (0%)      100Mi (0%)       0 (0%)         12m
	  kube-system                 kube-apiserver-addons-539053             250m (3%)     0 (0%)      0 (0%)           0 (0%)         12m
	  kube-system                 kube-controller-manager-addons-539053    200m (2%)     0 (0%)      0 (0%)           0 (0%)         12m
	  kube-system                 kube-proxy-bbn89                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         12m
	  kube-system                 kube-scheduler-addons-539053             100m (1%)     0 (0%)      0 (0%)           0 (0%)         12m
	  kube-system                 storage-provisioner                      0 (0%)        0 (0%)      0 (0%)           0 (0%)         12m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (9%)   0 (0%)
	  memory             170Mi (0%)  170Mi (0%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type     Reason                   Age   From             Message
	  ----     ------                   ----  ----             -------
	  Normal   Starting                 12m   kube-proxy       
	  Normal   Starting                 12m   kubelet          Starting kubelet.
	  Warning  CgroupV1                 12m   kubelet          Cgroup v1 support is in maintenance mode, please migrate to Cgroup v2.
	  Normal   NodeAllocatableEnforced  12m   kubelet          Updated Node Allocatable limit across pods
	  Normal   NodeHasSufficientMemory  12m   kubelet          Node addons-539053 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    12m   kubelet          Node addons-539053 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     12m   kubelet          Node addons-539053 status is now: NodeHasSufficientPID
	  Normal   RegisteredNode           12m   node-controller  Node addons-539053 event: Registered Node addons-539053 in Controller
	
	
	==> dmesg <==
	[  +0.000005] ll header: 00000000: ff ff ff ff ff ff 7e e5 48 0a fd 02 08 06
	[  +8.598564] IPv4: martian source 10.244.0.1 from 10.244.0.22, on dev eth0
	[  +0.000005] ll header: 00000000: ff ff ff ff ff ff 06 8e ad 9e 67 2a 08 06
	[  +1.081096] IPv4: martian source 10.244.0.1 from 10.244.0.23, on dev eth0
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff 4e 31 79 46 37 3b 08 06
	[  +1.333669] IPv4: martian source 10.244.0.1 from 10.244.0.20, on dev eth0
	[  +0.000017] ll header: 00000000: ff ff ff ff ff ff ba f3 f0 ed d9 71 08 06
	[  +0.323712] IPv4: martian source 10.244.0.1 from 10.244.0.21, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff 26 19 09 ed 61 6c 08 06
	[  +0.055042] IPv4: martian source 10.244.0.1 from 10.244.0.19, on dev eth0
	[  +0.000005] ll header: 00000000: ff ff ff ff ff ff e2 ed dc cd fa 06 08 06
	[Sep16 17:18] IPv4: martian source 10.244.0.1 from 10.244.0.24, on dev eth0
	[  +0.000005] ll header: 00000000: ff ff ff ff ff ff 6e c5 fc d3 80 02 08 06
	[  +0.096493] IPv4: martian source 10.244.0.1 from 10.244.0.25, on dev eth0
	[  +0.000005] ll header: 00000000: ff ff ff ff ff ff 12 0f 00 b6 c1 f4 08 06
	[ +26.268961] IPv4: martian source 10.244.0.1 from 10.244.0.26, on dev eth0
	[  +0.000005] ll header: 00000000: ff ff ff ff ff ff 92 c5 a8 10 d2 85 08 06
	[  +0.000580] IPv4: martian source 10.244.0.26 from 10.244.0.3, on dev eth0
	[  +0.000005] ll header: 00000000: ff ff ff ff ff ff 92 61 9e bb 7c 6d 08 06
	[Sep16 17:27] IPv4: martian source 10.244.0.1 from 10.244.0.32, on dev eth0
	[  +0.000009] ll header: 00000000: ff ff ff ff ff ff 62 03 d4 4e 61 be 08 06
	[Sep16 17:28] IPv4: martian source 10.244.0.36 from 10.244.0.22, on dev eth0
	[  +0.000005] ll header: 00000000: ff ff ff ff ff ff 06 8e ad 9e 67 2a 08 06
	[  +1.949689] IPv4: martian source 10.244.0.22 from 10.244.0.3, on dev eth0
	[  +0.000008] ll header: 00000000: ff ff ff ff ff ff 92 61 9e bb 7c 6d 08 06
	
	
	==> etcd [4bb47611cde2] <==
	{"level":"info","ts":"2024-09-16T17:15:49.047337Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aec36adc501070cc is starting a new election at term 1"}
	{"level":"info","ts":"2024-09-16T17:15:49.047398Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aec36adc501070cc became pre-candidate at term 1"}
	{"level":"info","ts":"2024-09-16T17:15:49.047435Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aec36adc501070cc received MsgPreVoteResp from aec36adc501070cc at term 1"}
	{"level":"info","ts":"2024-09-16T17:15:49.047464Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aec36adc501070cc became candidate at term 2"}
	{"level":"info","ts":"2024-09-16T17:15:49.047477Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aec36adc501070cc received MsgVoteResp from aec36adc501070cc at term 2"}
	{"level":"info","ts":"2024-09-16T17:15:49.047489Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aec36adc501070cc became leader at term 2"}
	{"level":"info","ts":"2024-09-16T17:15:49.047502Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: aec36adc501070cc elected leader aec36adc501070cc at term 2"}
	{"level":"info","ts":"2024-09-16T17:15:49.048501Z","caller":"etcdserver/server.go:2118","msg":"published local member to cluster through raft","local-member-id":"aec36adc501070cc","local-member-attributes":"{Name:addons-539053 ClientURLs:[https://192.168.49.2:2379]}","request-path":"/0/members/aec36adc501070cc/attributes","cluster-id":"fa54960ea34d58be","publish-timeout":"7s"}
	{"level":"info","ts":"2024-09-16T17:15:49.048506Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-09-16T17:15:49.048530Z","caller":"etcdserver/server.go:2629","msg":"setting up initial cluster version using v2 API","cluster-version":"3.5"}
	{"level":"info","ts":"2024-09-16T17:15:49.048543Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-09-16T17:15:49.048893Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2024-09-16T17:15:49.048934Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2024-09-16T17:15:49.049709Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"fa54960ea34d58be","local-member-id":"aec36adc501070cc","cluster-version":"3.5"}
	{"level":"info","ts":"2024-09-16T17:15:49.049791Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2024-09-16T17:15:49.049817Z","caller":"etcdserver/server.go:2653","msg":"cluster version is updated","cluster-version":"3.5"}
	{"level":"info","ts":"2024-09-16T17:15:49.049789Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2024-09-16T17:15:49.049851Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2024-09-16T17:15:49.050664Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2024-09-16T17:15:49.051530Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.49.2:2379"}
	{"level":"warn","ts":"2024-09-16T17:17:09.601392Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"110.36106ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath\" ","response":"range_response_count:1 size:1113"}
	{"level":"info","ts":"2024-09-16T17:17:09.601492Z","caller":"traceutil/trace.go:171","msg":"trace[1523185871] range","detail":"{range_begin:/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath; range_end:; response_count:1; response_revision:1150; }","duration":"110.473203ms","start":"2024-09-16T17:17:09.491000Z","end":"2024-09-16T17:17:09.601474Z","steps":["trace[1523185871] 'range keys from in-memory index tree'  (duration: 110.241782ms)"],"step_count":1}
	{"level":"info","ts":"2024-09-16T17:25:49.176285Z","caller":"mvcc/index.go:214","msg":"compact tree index","revision":1867}
	{"level":"info","ts":"2024-09-16T17:25:49.202756Z","caller":"mvcc/kvstore_compaction.go:69","msg":"finished scheduled compaction","compact-revision":1867,"took":"25.896596ms","hash":2187363218,"current-db-size-bytes":8957952,"current-db-size":"9.0 MB","current-db-size-in-use-bytes":4980736,"current-db-size-in-use":"5.0 MB"}
	{"level":"info","ts":"2024-09-16T17:25:49.202798Z","caller":"mvcc/hash.go:137","msg":"storing new hash","hash":2187363218,"revision":1867,"compact-revision":-1}
	
	
	==> gcp-auth [1d005c612eb5] <==
	2024/09/16 17:19:27 Ready to write response ...
	2024/09/16 17:27:29 Ready to marshal response ...
	2024/09/16 17:27:29 Ready to write response ...
	2024/09/16 17:27:29 Ready to marshal response ...
	2024/09/16 17:27:29 Ready to write response ...
	2024/09/16 17:27:39 Ready to marshal response ...
	2024/09/16 17:27:39 Ready to write response ...
	2024/09/16 17:27:39 Ready to marshal response ...
	2024/09/16 17:27:39 Ready to write response ...
	2024/09/16 17:27:42 Ready to marshal response ...
	2024/09/16 17:27:42 Ready to write response ...
	2024/09/16 17:27:42 Ready to marshal response ...
	2024/09/16 17:27:42 Ready to write response ...
	2024/09/16 17:28:13 Ready to marshal response ...
	2024/09/16 17:28:13 Ready to write response ...
	2024/09/16 17:28:19 Ready to marshal response ...
	2024/09/16 17:28:19 Ready to write response ...
	2024/09/16 17:28:25 Ready to marshal response ...
	2024/09/16 17:28:25 Ready to write response ...
	2024/09/16 17:28:25 Ready to marshal response ...
	2024/09/16 17:28:25 Ready to write response ...
	2024/09/16 17:28:25 Ready to marshal response ...
	2024/09/16 17:28:25 Ready to write response ...
	2024/09/16 17:28:31 Ready to marshal response ...
	2024/09/16 17:28:31 Ready to write response ...
	
	
	==> kernel <==
	 17:28:41 up  1:11,  0 users,  load average: 0.76, 0.43, 0.54
	Linux addons-539053 5.15.0-1068-gcp #76~20.04.1-Ubuntu SMP Tue Aug 20 15:52:45 UTC 2024 x86_64 x86_64 x86_64 GNU/Linux
	PRETTY_NAME="Ubuntu 22.04.4 LTS"
	
	
	==> kube-apiserver [adee2fab665b] <==
	W0916 17:19:19.251510       1 cacher.go:171] Terminating all watchers from cacher jobs.batch.volcano.sh
	W0916 17:19:19.351255       1 cacher.go:171] Terminating all watchers from cacher jobflows.flow.volcano.sh
	W0916 17:19:19.669676       1 cacher.go:171] Terminating all watchers from cacher jobtemplates.flow.volcano.sh
	I0916 17:27:49.597327       1 controller.go:129] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Nothing (removed from the queue).
	I0916 17:27:51.723728       1 controller.go:615] quota admission added evaluator for: volumesnapshots.snapshot.storage.k8s.io
	E0916 17:27:58.033561       1 authentication.go:73] "Unable to authenticate the request" err="[invalid bearer token, serviceaccounts \"local-path-provisioner-service-account\" not found]"
	I0916 17:28:14.407767       1 handler.go:286] Adding GroupVersion gadget.kinvolk.io v1alpha1 to ResourceManager
	W0916 17:28:15.423270       1 cacher.go:171] Terminating all watchers from cacher traces.gadget.kinvolk.io
	I0916 17:28:19.861018       1 controller.go:615] quota admission added evaluator for: ingresses.networking.k8s.io
	I0916 17:28:20.053742       1 alloc.go:330] "allocated clusterIPs" service="default/nginx" clusterIPs={"IPv4":"10.110.93.193"}
	I0916 17:28:25.652699       1 alloc.go:330] "allocated clusterIPs" service="headlamp/headlamp" clusterIPs={"IPv4":"10.110.205.176"}
	I0916 17:28:28.764908       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0916 17:28:28.764977       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I0916 17:28:28.777502       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0916 17:28:28.777554       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I0916 17:28:28.778613       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0916 17:28:28.778652       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I0916 17:28:28.861947       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0916 17:28:28.861997       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I0916 17:28:28.866605       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0916 17:28:28.866641       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	W0916 17:28:29.779344       1 cacher.go:171] Terminating all watchers from cacher volumesnapshotclasses.snapshot.storage.k8s.io
	W0916 17:28:29.866668       1 cacher.go:171] Terminating all watchers from cacher volumesnapshotcontents.snapshot.storage.k8s.io
	W0916 17:28:29.960520       1 cacher.go:171] Terminating all watchers from cacher volumesnapshots.snapshot.storage.k8s.io
	I0916 17:28:31.532682       1 alloc.go:330] "allocated clusterIPs" service="default/hello-world-app" clusterIPs={"IPv4":"10.103.40.186"}
	
	
	==> kube-controller-manager [ddf08198399f] <==
	W0916 17:28:33.984028       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0916 17:28:33.984074       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W0916 17:28:34.058954       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0916 17:28:34.059005       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	I0916 17:28:34.065659       1 job_controller.go:568] "enqueueing job" logger="job-controller" key="ingress-nginx/ingress-nginx-admission-create" delay="0s"
	I0916 17:28:34.067293       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="ingress-nginx/ingress-nginx-controller-bc57996ff" duration="7.327µs"
	I0916 17:28:34.070938       1 job_controller.go:568] "enqueueing job" logger="job-controller" key="ingress-nginx/ingress-nginx-admission-patch" delay="0s"
	W0916 17:28:34.446339       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0916 17:28:34.446384       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	I0916 17:28:35.207677       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="headlamp/headlamp-7b5c95b59d" duration="86.966µs"
	I0916 17:28:35.225080       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="headlamp/headlamp-7b5c95b59d" duration="6.297113ms"
	I0916 17:28:35.225148       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="headlamp/headlamp-7b5c95b59d" duration="37.321µs"
	W0916 17:28:35.437541       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0916 17:28:35.437583       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W0916 17:28:36.314711       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0916 17:28:36.314756       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W0916 17:28:38.037090       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0916 17:28:38.037131       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	I0916 17:28:38.311213       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/hello-world-app-55bf9c44b4" duration="6.108163ms"
	I0916 17:28:38.311337       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/hello-world-app-55bf9c44b4" duration="73.395µs"
	W0916 17:28:38.527605       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0916 17:28:38.527645       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	I0916 17:28:39.948014       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/registry-66c9cd494c" duration="6.666µs"
	W0916 17:28:39.978990       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0916 17:28:39.979030       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	
	
	==> kube-proxy [464a6082c92d] <==
	I0916 17:16:01.252716       1 server_linux.go:66] "Using iptables proxy"
	I0916 17:16:01.762647       1 server.go:677] "Successfully retrieved node IP(s)" IPs=["192.168.49.2"]
	E0916 17:16:01.762715       1 server.go:234] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I0916 17:16:02.262215       1 server.go:243] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I0916 17:16:02.262283       1 server_linux.go:169] "Using iptables Proxier"
	I0916 17:16:02.267440       1 proxier.go:255] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I0916 17:16:02.267789       1 server.go:483] "Version info" version="v1.31.1"
	I0916 17:16:02.267810       1 server.go:485] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0916 17:16:02.348480       1 config.go:199] "Starting service config controller"
	I0916 17:16:02.348530       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0916 17:16:02.348639       1 config.go:105] "Starting endpoint slice config controller"
	I0916 17:16:02.348646       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0916 17:16:02.348818       1 config.go:328] "Starting node config controller"
	I0916 17:16:02.348827       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0916 17:16:02.454376       1 shared_informer.go:320] Caches are synced for node config
	I0916 17:16:02.454423       1 shared_informer.go:320] Caches are synced for service config
	I0916 17:16:02.454448       1 shared_informer.go:320] Caches are synced for endpoint slice config
	
	
	==> kube-scheduler [71598060e153] <==
	E0916 17:15:50.960483       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0916 17:15:50.960008       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E0916 17:15:50.960493       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError"
	E0916 17:15:50.960528       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W0916 17:15:50.959849       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E0916 17:15:50.960581       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError"
	W0916 17:15:50.960624       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E0916 17:15:50.960640       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError"
	W0916 17:15:51.783739       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E0916 17:15:51.783781       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0916 17:15:51.813061       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E0916 17:15:51.813107       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0916 17:15:51.892590       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E0916 17:15:51.892636       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W0916 17:15:51.963473       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E0916 17:15:51.963509       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W0916 17:15:52.015875       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E0916 17:15:52.015922       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError"
	W0916 17:15:52.015880       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E0916 17:15:52.015967       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0916 17:15:52.023098       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E0916 17:15:52.023140       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0916 17:15:52.068599       1 reflector.go:561] runtime/asm_amd64.s:1695: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E0916 17:15:52.068641       1 reflector.go:158] "Unhandled Error" err="runtime/asm_amd64.s:1695: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError"
	I0916 17:15:54.058149       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kubelet <==
	Sep 16 17:28:37 addons-539053 kubelet[2444]: I0916 17:28:37.575222    2444 reconciler_common.go:288] "Volume detached for volume \"kube-api-access-27pt7\" (UniqueName: \"kubernetes.io/projected/7d00cfb5-8716-43fc-a702-f7bfee1398e8-kube-api-access-27pt7\") on node \"addons-539053\" DevicePath \"\""
	Sep 16 17:28:38 addons-539053 kubelet[2444]: I0916 17:28:38.300733    2444 scope.go:117] "RemoveContainer" containerID="076d4be4b58e4948fa5151c3c46abe6d974fb02a9bc025801258f081bab349f3"
	Sep 16 17:28:38 addons-539053 kubelet[2444]: I0916 17:28:38.304827    2444 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="default/hello-world-app-55bf9c44b4-t96vs" podStartSLOduration=2.032126352 podStartE2EDuration="7.304806731s" podCreationTimestamp="2024-09-16 17:28:31 +0000 UTC" firstStartedPulling="2024-09-16 17:28:32.17695026 +0000 UTC m=+758.944708611" lastFinishedPulling="2024-09-16 17:28:37.449630646 +0000 UTC m=+764.217388990" observedRunningTime="2024-09-16 17:28:38.304748894 +0000 UTC m=+765.072507254" watchObservedRunningTime="2024-09-16 17:28:38.304806731 +0000 UTC m=+765.072565094"
	Sep 16 17:28:39 addons-539053 kubelet[2444]: I0916 17:28:39.311810    2444 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7d00cfb5-8716-43fc-a702-f7bfee1398e8" path="/var/lib/kubelet/pods/7d00cfb5-8716-43fc-a702-f7bfee1398e8/volumes"
	Sep 16 17:28:39 addons-539053 kubelet[2444]: I0916 17:28:39.686307    2444 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"gcp-creds\" (UniqueName: \"kubernetes.io/host-path/652bfafb-557c-47cb-954a-a64b55d522e1-gcp-creds\") pod \"652bfafb-557c-47cb-954a-a64b55d522e1\" (UID: \"652bfafb-557c-47cb-954a-a64b55d522e1\") "
	Sep 16 17:28:39 addons-539053 kubelet[2444]: I0916 17:28:39.686372    2444 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7pxcl\" (UniqueName: \"kubernetes.io/projected/652bfafb-557c-47cb-954a-a64b55d522e1-kube-api-access-7pxcl\") pod \"652bfafb-557c-47cb-954a-a64b55d522e1\" (UID: \"652bfafb-557c-47cb-954a-a64b55d522e1\") "
	Sep 16 17:28:39 addons-539053 kubelet[2444]: I0916 17:28:39.686419    2444 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/652bfafb-557c-47cb-954a-a64b55d522e1-gcp-creds" (OuterVolumeSpecName: "gcp-creds") pod "652bfafb-557c-47cb-954a-a64b55d522e1" (UID: "652bfafb-557c-47cb-954a-a64b55d522e1"). InnerVolumeSpecName "gcp-creds". PluginName "kubernetes.io/host-path", VolumeGidValue ""
	Sep 16 17:28:39 addons-539053 kubelet[2444]: I0916 17:28:39.688205    2444 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/652bfafb-557c-47cb-954a-a64b55d522e1-kube-api-access-7pxcl" (OuterVolumeSpecName: "kube-api-access-7pxcl") pod "652bfafb-557c-47cb-954a-a64b55d522e1" (UID: "652bfafb-557c-47cb-954a-a64b55d522e1"). InnerVolumeSpecName "kube-api-access-7pxcl". PluginName "kubernetes.io/projected", VolumeGidValue ""
	Sep 16 17:28:39 addons-539053 kubelet[2444]: I0916 17:28:39.787362    2444 reconciler_common.go:288] "Volume detached for volume \"gcp-creds\" (UniqueName: \"kubernetes.io/host-path/652bfafb-557c-47cb-954a-a64b55d522e1-gcp-creds\") on node \"addons-539053\" DevicePath \"\""
	Sep 16 17:28:39 addons-539053 kubelet[2444]: I0916 17:28:39.787399    2444 reconciler_common.go:288] "Volume detached for volume \"kube-api-access-7pxcl\" (UniqueName: \"kubernetes.io/projected/652bfafb-557c-47cb-954a-a64b55d522e1-kube-api-access-7pxcl\") on node \"addons-539053\" DevicePath \"\""
	Sep 16 17:28:40 addons-539053 kubelet[2444]: E0916 17:28:40.306282    2444 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"busybox\" with ImagePullBackOff: \"Back-off pulling image \\\"gcr.io/k8s-minikube/busybox:1.28.4-glibc\\\"\"" pod="default/busybox" podUID="b96b00eb-2795-4229-96ae-80796f8fb299"
	Sep 16 17:28:40 addons-539053 kubelet[2444]: I0916 17:28:40.331499    2444 scope.go:117] "RemoveContainer" containerID="dfd31e12ab9bfbc878163c2a80b52317b5ad35e716370d52f09f91e8aacd5aba"
	Sep 16 17:28:40 addons-539053 kubelet[2444]: I0916 17:28:40.347552    2444 scope.go:117] "RemoveContainer" containerID="dfd31e12ab9bfbc878163c2a80b52317b5ad35e716370d52f09f91e8aacd5aba"
	Sep 16 17:28:40 addons-539053 kubelet[2444]: E0916 17:28:40.349671    2444 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = Unknown desc = Error response from daemon: No such container: dfd31e12ab9bfbc878163c2a80b52317b5ad35e716370d52f09f91e8aacd5aba" containerID="dfd31e12ab9bfbc878163c2a80b52317b5ad35e716370d52f09f91e8aacd5aba"
	Sep 16 17:28:40 addons-539053 kubelet[2444]: I0916 17:28:40.349716    2444 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"docker","ID":"dfd31e12ab9bfbc878163c2a80b52317b5ad35e716370d52f09f91e8aacd5aba"} err="failed to get container status \"dfd31e12ab9bfbc878163c2a80b52317b5ad35e716370d52f09f91e8aacd5aba\": rpc error: code = Unknown desc = Error response from daemon: No such container: dfd31e12ab9bfbc878163c2a80b52317b5ad35e716370d52f09f91e8aacd5aba"
	Sep 16 17:28:40 addons-539053 kubelet[2444]: I0916 17:28:40.349752    2444 scope.go:117] "RemoveContainer" containerID="4c73f7e2709ed5d4d891821b20b365174723c0b3873e814415e40e2ddc6f9e2f"
	Sep 16 17:28:40 addons-539053 kubelet[2444]: I0916 17:28:40.365440    2444 scope.go:117] "RemoveContainer" containerID="4c73f7e2709ed5d4d891821b20b365174723c0b3873e814415e40e2ddc6f9e2f"
	Sep 16 17:28:40 addons-539053 kubelet[2444]: E0916 17:28:40.366143    2444 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = Unknown desc = Error response from daemon: No such container: 4c73f7e2709ed5d4d891821b20b365174723c0b3873e814415e40e2ddc6f9e2f" containerID="4c73f7e2709ed5d4d891821b20b365174723c0b3873e814415e40e2ddc6f9e2f"
	Sep 16 17:28:40 addons-539053 kubelet[2444]: I0916 17:28:40.366191    2444 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"docker","ID":"4c73f7e2709ed5d4d891821b20b365174723c0b3873e814415e40e2ddc6f9e2f"} err="failed to get container status \"4c73f7e2709ed5d4d891821b20b365174723c0b3873e814415e40e2ddc6f9e2f\": rpc error: code = Unknown desc = Error response from daemon: No such container: 4c73f7e2709ed5d4d891821b20b365174723c0b3873e814415e40e2ddc6f9e2f"
	Sep 16 17:28:40 addons-539053 kubelet[2444]: I0916 17:28:40.391498    2444 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rs54p\" (UniqueName: \"kubernetes.io/projected/556af332-2257-4db0-adcb-aca469cf992d-kube-api-access-rs54p\") pod \"556af332-2257-4db0-adcb-aca469cf992d\" (UID: \"556af332-2257-4db0-adcb-aca469cf992d\") "
	Sep 16 17:28:40 addons-539053 kubelet[2444]: I0916 17:28:40.391560    2444 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pjrql\" (UniqueName: \"kubernetes.io/projected/4849ea19-88f6-4fbc-ba0f-e290ee2d0d80-kube-api-access-pjrql\") pod \"4849ea19-88f6-4fbc-ba0f-e290ee2d0d80\" (UID: \"4849ea19-88f6-4fbc-ba0f-e290ee2d0d80\") "
	Sep 16 17:28:40 addons-539053 kubelet[2444]: I0916 17:28:40.393425    2444 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/556af332-2257-4db0-adcb-aca469cf992d-kube-api-access-rs54p" (OuterVolumeSpecName: "kube-api-access-rs54p") pod "556af332-2257-4db0-adcb-aca469cf992d" (UID: "556af332-2257-4db0-adcb-aca469cf992d"). InnerVolumeSpecName "kube-api-access-rs54p". PluginName "kubernetes.io/projected", VolumeGidValue ""
	Sep 16 17:28:40 addons-539053 kubelet[2444]: I0916 17:28:40.393585    2444 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4849ea19-88f6-4fbc-ba0f-e290ee2d0d80-kube-api-access-pjrql" (OuterVolumeSpecName: "kube-api-access-pjrql") pod "4849ea19-88f6-4fbc-ba0f-e290ee2d0d80" (UID: "4849ea19-88f6-4fbc-ba0f-e290ee2d0d80"). InnerVolumeSpecName "kube-api-access-pjrql". PluginName "kubernetes.io/projected", VolumeGidValue ""
	Sep 16 17:28:40 addons-539053 kubelet[2444]: I0916 17:28:40.491824    2444 reconciler_common.go:288] "Volume detached for volume \"kube-api-access-rs54p\" (UniqueName: \"kubernetes.io/projected/556af332-2257-4db0-adcb-aca469cf992d-kube-api-access-rs54p\") on node \"addons-539053\" DevicePath \"\""
	Sep 16 17:28:40 addons-539053 kubelet[2444]: I0916 17:28:40.491855    2444 reconciler_common.go:288] "Volume detached for volume \"kube-api-access-pjrql\" (UniqueName: \"kubernetes.io/projected/4849ea19-88f6-4fbc-ba0f-e290ee2d0d80-kube-api-access-pjrql\") on node \"addons-539053\" DevicePath \"\""
	
	
	==> storage-provisioner [81f5330bbecc] <==
	I0916 17:16:06.869134       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I0916 17:16:06.953264       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I0916 17:16:06.953322       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I0916 17:16:06.966188       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I0916 17:16:06.966400       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_addons-539053_e435f71f-8eeb-43e9-b83f-f401240355a1!
	I0916 17:16:06.967455       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"9a3863be-6285-47cc-9c29-e5f5df8524de", APIVersion:"v1", ResourceVersion:"617", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' addons-539053_e435f71f-8eeb-43e9-b83f-f401240355a1 became leader
	I0916 17:16:07.067858       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_addons-539053_e435f71f-8eeb-43e9-b83f-f401240355a1!

-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p addons-539053 -n addons-539053
helpers_test.go:261: (dbg) Run:  kubectl --context addons-539053 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:272: non-running pods: busybox
helpers_test.go:274: ======> post-mortem[TestAddons/parallel/Registry]: describe non-running pods <======
helpers_test.go:277: (dbg) Run:  kubectl --context addons-539053 describe pod busybox
helpers_test.go:282: (dbg) kubectl --context addons-539053 describe pod busybox:

-- stdout --
	Name:             busybox
	Namespace:        default
	Priority:         0
	Service Account:  default
	Node:             addons-539053/192.168.49.2
	Start Time:       Mon, 16 Sep 2024 17:19:27 +0000
	Labels:           integration-test=busybox
	Annotations:      <none>
	Status:           Pending
	IP:               10.244.0.28
	IPs:
	  IP:  10.244.0.28
	Containers:
	  busybox:
	    Container ID:  
	    Image:         gcr.io/k8s-minikube/busybox:1.28.4-glibc
	    Image ID:      
	    Port:          <none>
	    Host Port:     <none>
	    Command:
	      sleep
	      3600
	    State:          Waiting
	      Reason:       ImagePullBackOff
	    Ready:          False
	    Restart Count:  0
	    Environment:
	      GOOGLE_APPLICATION_CREDENTIALS:  /google-app-creds.json
	      PROJECT_ID:                      this_is_fake
	      GCP_PROJECT:                     this_is_fake
	      GCLOUD_PROJECT:                  this_is_fake
	      GOOGLE_CLOUD_PROJECT:            this_is_fake
	      CLOUDSDK_CORE_PROJECT:           this_is_fake
	    Mounts:
	      /google-app-creds.json from gcp-creds (ro)
	      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-778rl (ro)
	Conditions:
	  Type                        Status
	  PodReadyToStartContainers   True 
	  Initialized                 True 
	  Ready                       False 
	  ContainersReady             False 
	  PodScheduled                True 
	Volumes:
	  kube-api-access-778rl:
	    Type:                    Projected (a volume that contains injected data from multiple sources)
	    TokenExpirationSeconds:  3607
	    ConfigMapName:           kube-root-ca.crt
	    ConfigMapOptional:       <nil>
	    DownwardAPI:             true
	  gcp-creds:
	    Type:          HostPath (bare host directory volume)
	    Path:          /var/lib/minikube/google_application_credentials.json
	    HostPathType:  File
	QoS Class:         BestEffort
	Node-Selectors:    <none>
	Tolerations:       node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
	                   node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
	Events:
	  Type     Reason     Age                     From               Message
	  ----     ------     ----                    ----               -------
	  Normal   Scheduled  9m14s                   default-scheduler  Successfully assigned default/busybox to addons-539053
	  Warning  Failed     7m52s (x6 over 9m13s)   kubelet            Error: ImagePullBackOff
	  Normal   Pulling    7m39s (x4 over 9m14s)   kubelet            Pulling image "gcr.io/k8s-minikube/busybox:1.28.4-glibc"
	  Warning  Failed     7m39s (x4 over 9m14s)   kubelet            Failed to pull image "gcr.io/k8s-minikube/busybox:1.28.4-glibc": Error response from daemon: Head "https://gcr.io/v2/k8s-minikube/busybox/manifests/1.28.4-glibc": unauthorized: authentication failed
	  Warning  Failed     7m39s (x4 over 9m14s)   kubelet            Error: ErrImagePull
	  Normal   BackOff    4m11s (x21 over 9m13s)  kubelet            Back-off pulling image "gcr.io/k8s-minikube/busybox:1.28.4-glibc"

-- /stdout --
helpers_test.go:285: <<< TestAddons/parallel/Registry FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
--- FAIL: TestAddons/parallel/Registry (72.43s)

Test pass (322/343)

Order passed test Duration
3 TestDownloadOnly/v1.20.0/json-events 35.24
4 TestDownloadOnly/v1.20.0/preload-exists 0
8 TestDownloadOnly/v1.20.0/LogsDuration 0.68
9 TestDownloadOnly/v1.20.0/DeleteAll 0.57
10 TestDownloadOnly/v1.20.0/DeleteAlwaysSucceeds 0.41
12 TestDownloadOnly/v1.31.1/json-events 13.5
13 TestDownloadOnly/v1.31.1/preload-exists 0
17 TestDownloadOnly/v1.31.1/LogsDuration 0.06
18 TestDownloadOnly/v1.31.1/DeleteAll 0.19
19 TestDownloadOnly/v1.31.1/DeleteAlwaysSucceeds 0.12
20 TestDownloadOnlyKic 0.96
21 TestBinaryMirror 0.74
22 TestOffline 47.43
25 TestAddons/PreSetup/EnablingAddonOnNonExistingCluster 0.05
26 TestAddons/PreSetup/DisablingAddonOnNonExistingCluster 0.05
27 TestAddons/Setup 207.19
29 TestAddons/serial/Volcano 42.59
31 TestAddons/serial/GCPAuth/Namespaces 0.11
34 TestAddons/parallel/Ingress 21.44
35 TestAddons/parallel/InspektorGadget 10.59
36 TestAddons/parallel/MetricsServer 5.55
37 TestAddons/parallel/HelmTiller 10.85
39 TestAddons/parallel/CSI 59.91
40 TestAddons/parallel/Headlamp 16.83
41 TestAddons/parallel/CloudSpanner 6.42
42 TestAddons/parallel/LocalPath 55.78
43 TestAddons/parallel/NvidiaDevicePlugin 6.39
44 TestAddons/parallel/Yakd 10.54
45 TestAddons/StoppedEnableDisable 11.07
46 TestCertOptions 24.64
47 TestCertExpiration 231.14
48 TestDockerFlags 37.6
49 TestForceSystemdFlag 37.14
50 TestForceSystemdEnv 29.21
52 TestKVMDriverInstallOrUpdate 9.68
56 TestErrorSpam/setup 23.44
57 TestErrorSpam/start 0.54
58 TestErrorSpam/status 0.81
59 TestErrorSpam/pause 1.1
60 TestErrorSpam/unpause 1.26
61 TestErrorSpam/stop 10.82
64 TestFunctional/serial/CopySyncFile 0
65 TestFunctional/serial/StartWithProxy 33.9
66 TestFunctional/serial/AuditLog 0
67 TestFunctional/serial/SoftStart 32.41
68 TestFunctional/serial/KubeContext 0.04
69 TestFunctional/serial/KubectlGetPods 0.07
72 TestFunctional/serial/CacheCmd/cache/add_remote 2.42
73 TestFunctional/serial/CacheCmd/cache/add_local 1.84
74 TestFunctional/serial/CacheCmd/cache/CacheDelete 0.05
75 TestFunctional/serial/CacheCmd/cache/list 0.05
76 TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node 0.26
77 TestFunctional/serial/CacheCmd/cache/cache_reload 1.23
78 TestFunctional/serial/CacheCmd/cache/delete 0.09
79 TestFunctional/serial/MinikubeKubectlCmd 0.1
80 TestFunctional/serial/MinikubeKubectlCmdDirectly 0.1
81 TestFunctional/serial/ExtraConfig 39.31
82 TestFunctional/serial/ComponentHealth 0.06
83 TestFunctional/serial/LogsCmd 0.93
84 TestFunctional/serial/LogsFileCmd 0.93
85 TestFunctional/serial/InvalidService 4.42
87 TestFunctional/parallel/ConfigCmd 0.41
88 TestFunctional/parallel/DashboardCmd 10.86
89 TestFunctional/parallel/DryRun 0.34
90 TestFunctional/parallel/InternationalLanguage 0.16
91 TestFunctional/parallel/StatusCmd 0.9
95 TestFunctional/parallel/ServiceCmdConnect 13.48
96 TestFunctional/parallel/AddonsCmd 0.14
97 TestFunctional/parallel/PersistentVolumeClaim 34.38
99 TestFunctional/parallel/SSHCmd 0.61
100 TestFunctional/parallel/CpCmd 1.63
101 TestFunctional/parallel/MySQL 24.28
102 TestFunctional/parallel/FileSync 0.24
103 TestFunctional/parallel/CertSync 1.61
107 TestFunctional/parallel/NodeLabels 0.06
109 TestFunctional/parallel/NonActiveRuntimeDisabled 0.22
111 TestFunctional/parallel/License 0.57
113 TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel 0.47
114 TestFunctional/parallel/TunnelCmd/serial/StartTunnel 0
116 TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup 11.22
117 TestFunctional/parallel/ServiceCmd/DeployApp 13.14
118 TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP 0.06
119 TestFunctional/parallel/TunnelCmd/serial/AccessDirect 0
123 TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel 0.11
124 TestFunctional/parallel/ProfileCmd/profile_not_create 0.41
125 TestFunctional/parallel/ProfileCmd/profile_list 0.35
126 TestFunctional/parallel/ProfileCmd/profile_json_output 0.39
127 TestFunctional/parallel/MountCmd/any-port 7.85
128 TestFunctional/parallel/ServiceCmd/List 0.49
129 TestFunctional/parallel/ServiceCmd/JSONOutput 0.49
130 TestFunctional/parallel/ServiceCmd/HTTPS 0.31
131 TestFunctional/parallel/ServiceCmd/Format 0.32
132 TestFunctional/parallel/ServiceCmd/URL 0.39
133 TestFunctional/parallel/Version/short 0.06
134 TestFunctional/parallel/Version/components 0.6
135 TestFunctional/parallel/ImageCommands/ImageListShort 0.2
136 TestFunctional/parallel/ImageCommands/ImageListTable 0.19
137 TestFunctional/parallel/ImageCommands/ImageListJson 0.19
138 TestFunctional/parallel/ImageCommands/ImageListYaml 0.2
139 TestFunctional/parallel/ImageCommands/ImageBuild 7.16
140 TestFunctional/parallel/ImageCommands/Setup 2.66
141 TestFunctional/parallel/ImageCommands/ImageLoadDaemon 1.24
142 TestFunctional/parallel/MountCmd/specific-port 2.06
143 TestFunctional/parallel/ImageCommands/ImageReloadDaemon 0.99
144 TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon 2.09
145 TestFunctional/parallel/MountCmd/VerifyCleanup 1.73
146 TestFunctional/parallel/ImageCommands/ImageSaveToFile 0.33
147 TestFunctional/parallel/DockerEnv/bash 0.89
148 TestFunctional/parallel/ImageCommands/ImageRemove 0.39
149 TestFunctional/parallel/ImageCommands/ImageLoadFromFile 0.56
150 TestFunctional/parallel/UpdateContextCmd/no_changes 0.12
151 TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster 0.12
152 TestFunctional/parallel/UpdateContextCmd/no_clusters 0.13
153 TestFunctional/parallel/ImageCommands/ImageSaveDaemon 0.35
154 TestFunctional/delete_echo-server_images 0.04
155 TestFunctional/delete_my-image_image 0.01
156 TestFunctional/delete_minikube_cached_images 0.01
160 TestMultiControlPlane/serial/StartCluster 99.2
161 TestMultiControlPlane/serial/DeployApp 6.3
162 TestMultiControlPlane/serial/PingHostFromPods 1.03
163 TestMultiControlPlane/serial/AddWorkerNode 23.21
164 TestMultiControlPlane/serial/NodeLabels 0.06
165 TestMultiControlPlane/serial/HAppyAfterClusterStart 0.6
166 TestMultiControlPlane/serial/CopyFile 15.06
167 TestMultiControlPlane/serial/StopSecondaryNode 11.4
168 TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop 0.45
169 TestMultiControlPlane/serial/RestartSecondaryNode 68.28
170 TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart 0.61
171 TestMultiControlPlane/serial/RestartClusterKeepsNodes 199.85
172 TestMultiControlPlane/serial/DeleteSecondaryNode 9.17
173 TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete 0.44
174 TestMultiControlPlane/serial/StopCluster 32.32
175 TestMultiControlPlane/serial/RestartCluster 80.96
176 TestMultiControlPlane/serial/DegradedAfterClusterRestart 0.45
177 TestMultiControlPlane/serial/AddSecondaryNode 35.58
178 TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd 0.62
181 TestImageBuild/serial/Setup 21.43
182 TestImageBuild/serial/NormalBuild 3.75
183 TestImageBuild/serial/BuildWithBuildArg 1.16
184 TestImageBuild/serial/BuildWithDockerIgnore 0.8
185 TestImageBuild/serial/BuildWithSpecifiedDockerfile 0.79
189 TestJSONOutput/start/Command 40.84
190 TestJSONOutput/start/Audit 0
192 TestJSONOutput/start/parallel/DistinctCurrentSteps 0
193 TestJSONOutput/start/parallel/IncreasingCurrentSteps 0
195 TestJSONOutput/pause/Command 0.47
196 TestJSONOutput/pause/Audit 0
198 TestJSONOutput/pause/parallel/DistinctCurrentSteps 0
199 TestJSONOutput/pause/parallel/IncreasingCurrentSteps 0
201 TestJSONOutput/unpause/Command 0.43
202 TestJSONOutput/unpause/Audit 0
204 TestJSONOutput/unpause/parallel/DistinctCurrentSteps 0
205 TestJSONOutput/unpause/parallel/IncreasingCurrentSteps 0
207 TestJSONOutput/stop/Command 10.82
208 TestJSONOutput/stop/Audit 0
210 TestJSONOutput/stop/parallel/DistinctCurrentSteps 0
211 TestJSONOutput/stop/parallel/IncreasingCurrentSteps 0
212 TestErrorJSONOutput 0.19
214 TestKicCustomNetwork/create_custom_network 22.94
215 TestKicCustomNetwork/use_default_bridge_network 24.86
216 TestKicExistingNetwork 22.66
217 TestKicCustomSubnet 23.56
218 TestKicStaticIP 25.49
219 TestMainNoArgs 0.04
220 TestMinikubeProfile 48.14
223 TestMountStart/serial/StartWithMountFirst 7.52
224 TestMountStart/serial/VerifyMountFirst 0.23
225 TestMountStart/serial/StartWithMountSecond 9.97
226 TestMountStart/serial/VerifyMountSecond 0.24
227 TestMountStart/serial/DeleteFirst 1.44
228 TestMountStart/serial/VerifyMountPostDelete 0.23
229 TestMountStart/serial/Stop 1.17
230 TestMountStart/serial/RestartStopped 8.63
231 TestMountStart/serial/VerifyMountPostStop 0.23
234 TestMultiNode/serial/FreshStart2Nodes 73.63
235 TestMultiNode/serial/DeployApp2Nodes 53.28
236 TestMultiNode/serial/PingHostFrom2Pods 0.7
237 TestMultiNode/serial/AddNode 18.37
238 TestMultiNode/serial/MultiNodeLabels 0.06
239 TestMultiNode/serial/ProfileList 0.29
240 TestMultiNode/serial/CopyFile 8.72
241 TestMultiNode/serial/StopNode 2.04
242 TestMultiNode/serial/StartAfterStop 9.78
243 TestMultiNode/serial/RestartKeepsNodes 100.51
244 TestMultiNode/serial/DeleteNode 5.11
245 TestMultiNode/serial/StopMultiNode 21.44
246 TestMultiNode/serial/RestartMultiNode 48.03
247 TestMultiNode/serial/ValidateNameConflict 25.86
252 TestPreload 129.25
254 TestScheduledStopUnix 96.78
255 TestSkaffold 103.64
257 TestInsufficientStorage 12.6
258 TestRunningBinaryUpgrade 62.85
260 TestKubernetesUpgrade 333.51
261 TestMissingContainerUpgrade 181.18
263 TestNoKubernetes/serial/StartNoK8sWithVersion 0.08
264 TestNoKubernetes/serial/StartWithK8s 35.33
265 TestNoKubernetes/serial/StartWithStopK8s 19.24
266 TestStoppedBinaryUpgrade/Setup 3.31
267 TestStoppedBinaryUpgrade/Upgrade 154.19
268 TestNoKubernetes/serial/Start 10.18
269 TestNoKubernetes/serial/VerifyK8sNotRunning 0.24
270 TestNoKubernetes/serial/ProfileList 0.82
271 TestNoKubernetes/serial/Stop 1.18
272 TestNoKubernetes/serial/StartNoArgs 7.83
273 TestNoKubernetes/serial/VerifyK8sNotRunningSecond 0.24
274 TestStoppedBinaryUpgrade/MinikubeLogs 1.1
294 TestPause/serial/Start 38.24
295 TestPause/serial/SecondStartNoReconfiguration 27.03
296 TestNetworkPlugins/group/auto/Start 41.59
297 TestNetworkPlugins/group/kindnet/Start 62.25
298 TestPause/serial/Pause 0.52
299 TestPause/serial/VerifyStatus 0.3
300 TestPause/serial/Unpause 0.47
301 TestPause/serial/PauseAgain 0.63
302 TestPause/serial/DeletePaused 3.21
303 TestPause/serial/VerifyDeletedResources 0.59
304 TestNetworkPlugins/group/calico/Start 62.37
305 TestNetworkPlugins/group/auto/KubeletFlags 0.34
306 TestNetworkPlugins/group/auto/NetCatPod 10.25
307 TestNetworkPlugins/group/auto/DNS 0.18
308 TestNetworkPlugins/group/auto/Localhost 0.18
309 TestNetworkPlugins/group/auto/HairPin 0.17
310 TestNetworkPlugins/group/kindnet/ControllerPod 6.01
311 TestNetworkPlugins/group/kindnet/KubeletFlags 0.3
312 TestNetworkPlugins/group/custom-flannel/Start 38.76
313 TestNetworkPlugins/group/kindnet/NetCatPod 10.22
314 TestNetworkPlugins/group/calico/ControllerPod 6.01
315 TestNetworkPlugins/group/false/Start 34.67
316 TestNetworkPlugins/group/calico/KubeletFlags 0.28
317 TestNetworkPlugins/group/calico/NetCatPod 11.21
318 TestNetworkPlugins/group/kindnet/DNS 0.16
319 TestNetworkPlugins/group/kindnet/Localhost 0.15
320 TestNetworkPlugins/group/kindnet/HairPin 0.14
321 TestNetworkPlugins/group/calico/DNS 0.19
322 TestNetworkPlugins/group/calico/Localhost 0.13
323 TestNetworkPlugins/group/calico/HairPin 0.14
324 TestNetworkPlugins/group/bridge/Start 38.67
325 TestNetworkPlugins/group/custom-flannel/KubeletFlags 0.3
326 TestNetworkPlugins/group/custom-flannel/NetCatPod 13.29
327 TestNetworkPlugins/group/false/KubeletFlags 0.31
328 TestNetworkPlugins/group/false/NetCatPod 10.22
329 TestNetworkPlugins/group/kubenet/Start 64.54
330 TestNetworkPlugins/group/false/DNS 0.13
331 TestNetworkPlugins/group/false/Localhost 0.12
332 TestNetworkPlugins/group/false/HairPin 0.13
333 TestNetworkPlugins/group/custom-flannel/DNS 25.37
334 TestNetworkPlugins/group/flannel/Start 50.67
335 TestNetworkPlugins/group/bridge/KubeletFlags 0.36
336 TestNetworkPlugins/group/bridge/NetCatPod 11.22
337 TestNetworkPlugins/group/custom-flannel/Localhost 0.1
338 TestNetworkPlugins/group/custom-flannel/HairPin 0.1
339 TestNetworkPlugins/group/bridge/DNS 0.13
340 TestNetworkPlugins/group/bridge/Localhost 0.12
341 TestNetworkPlugins/group/bridge/HairPin 0.11
342 TestNetworkPlugins/group/enable-default-cni/Start 44.86
344 TestStartStop/group/old-k8s-version/serial/FirstStart 124.52
345 TestNetworkPlugins/group/kubenet/KubeletFlags 0.29
346 TestNetworkPlugins/group/kubenet/NetCatPod 13.19
347 TestNetworkPlugins/group/kubenet/DNS 0.15
348 TestNetworkPlugins/group/kubenet/Localhost 0.13
349 TestNetworkPlugins/group/kubenet/HairPin 0.11
350 TestNetworkPlugins/group/flannel/ControllerPod 6.01
351 TestNetworkPlugins/group/flannel/KubeletFlags 0.36
352 TestNetworkPlugins/group/flannel/NetCatPod 10.22
353 TestNetworkPlugins/group/flannel/DNS 0.15
354 TestNetworkPlugins/group/flannel/Localhost 0.12
355 TestNetworkPlugins/group/flannel/HairPin 0.11
357 TestStartStop/group/default-k8s-diff-port/serial/FirstStart 37.63
358 TestNetworkPlugins/group/enable-default-cni/KubeletFlags 0.34
359 TestNetworkPlugins/group/enable-default-cni/NetCatPod 9.24
360 TestNetworkPlugins/group/enable-default-cni/DNS 0.16
361 TestNetworkPlugins/group/enable-default-cni/Localhost 0.13
362 TestNetworkPlugins/group/enable-default-cni/HairPin 0.15
364 TestStartStop/group/no-preload/serial/FirstStart 47.42
366 TestStartStop/group/embed-certs/serial/FirstStart 42.23
367 TestStartStop/group/default-k8s-diff-port/serial/DeployApp 12.27
368 TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive 0.81
369 TestStartStop/group/default-k8s-diff-port/serial/Stop 10.85
370 TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop 0.26
371 TestStartStop/group/default-k8s-diff-port/serial/SecondStart 262.7
372 TestStartStop/group/no-preload/serial/DeployApp 9.26
373 TestStartStop/group/no-preload/serial/EnableAddonWhileActive 1.06
374 TestStartStop/group/embed-certs/serial/DeployApp 9.28
375 TestStartStop/group/no-preload/serial/Stop 10.91
376 TestStartStop/group/embed-certs/serial/EnableAddonWhileActive 0.78
377 TestStartStop/group/embed-certs/serial/Stop 10.93
378 TestStartStop/group/no-preload/serial/EnableAddonAfterStop 0.23
379 TestStartStop/group/no-preload/serial/SecondStart 262.67
380 TestStartStop/group/old-k8s-version/serial/DeployApp 8.42
381 TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive 0.96
382 TestStartStop/group/embed-certs/serial/EnableAddonAfterStop 0.32
383 TestStartStop/group/embed-certs/serial/SecondStart 263.39
384 TestStartStop/group/old-k8s-version/serial/Stop 11.25
385 TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop 0.22
386 TestStartStop/group/old-k8s-version/serial/SecondStart 25.39
387 TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop 31.01
388 TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop 5.07
389 TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages 0.2
390 TestStartStop/group/old-k8s-version/serial/Pause 2.24
392 TestStartStop/group/newest-cni/serial/FirstStart 31.84
393 TestStartStop/group/newest-cni/serial/DeployApp 0
394 TestStartStop/group/newest-cni/serial/EnableAddonWhileActive 0.86
395 TestStartStop/group/newest-cni/serial/Stop 10.72
396 TestStartStop/group/newest-cni/serial/EnableAddonAfterStop 0.18
397 TestStartStop/group/newest-cni/serial/SecondStart 15.69
398 TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop 0
399 TestStartStop/group/newest-cni/serial/AddonExistsAfterStop 0
400 TestStartStop/group/newest-cni/serial/VerifyKubernetesImages 0.21
401 TestStartStop/group/newest-cni/serial/Pause 2.29
402 TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop 6.01
403 TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop 5.08
404 TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages 0.21
405 TestStartStop/group/default-k8s-diff-port/serial/Pause 2.18
406 TestStartStop/group/no-preload/serial/UserAppExistsAfterStop 6
407 TestStartStop/group/no-preload/serial/AddonExistsAfterStop 5.07
408 TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop 6.01
409 TestStartStop/group/no-preload/serial/VerifyKubernetesImages 0.2
410 TestStartStop/group/no-preload/serial/Pause 2.18
411 TestStartStop/group/embed-certs/serial/AddonExistsAfterStop 5.07
412 TestStartStop/group/embed-certs/serial/VerifyKubernetesImages 0.2
413 TestStartStop/group/embed-certs/serial/Pause 2.19
TestDownloadOnly/v1.20.0/json-events (35.24s)

=== RUN   TestDownloadOnly/v1.20.0/json-events
aaa_download_only_test.go:81: (dbg) Run:  out/minikube-linux-amd64 start -o=json --download-only -p download-only-329020 --force --alsologtostderr --kubernetes-version=v1.20.0 --container-runtime=docker --driver=docker  --container-runtime=docker
aaa_download_only_test.go:81: (dbg) Done: out/minikube-linux-amd64 start -o=json --download-only -p download-only-329020 --force --alsologtostderr --kubernetes-version=v1.20.0 --container-runtime=docker --driver=docker  --container-runtime=docker: (35.240083059s)
--- PASS: TestDownloadOnly/v1.20.0/json-events (35.24s)

TestDownloadOnly/v1.20.0/preload-exists (0s)

=== RUN   TestDownloadOnly/v1.20.0/preload-exists
--- PASS: TestDownloadOnly/v1.20.0/preload-exists (0.00s)

TestDownloadOnly/v1.20.0/LogsDuration (0.68s)

=== RUN   TestDownloadOnly/v1.20.0/LogsDuration
aaa_download_only_test.go:184: (dbg) Run:  out/minikube-linux-amd64 logs -p download-only-329020
aaa_download_only_test.go:184: (dbg) Non-zero exit: out/minikube-linux-amd64 logs -p download-only-329020: exit status 85 (675.817995ms)

-- stdout --
	
	==> Audit <==
	|---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	| Command |              Args              |       Profile        |  User   | Version |     Start Time      | End Time |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	| start   | -o=json --download-only        | download-only-329020 | jenkins | v1.34.0 | 16 Sep 24 17:14 UTC |          |
	|         | -p download-only-329020        |                      |         |         |                     |          |
	|         | --force --alsologtostderr      |                      |         |         |                     |          |
	|         | --kubernetes-version=v1.20.0   |                      |         |         |                     |          |
	|         | --container-runtime=docker     |                      |         |         |                     |          |
	|         | --driver=docker                |                      |         |         |                     |          |
	|         | --container-runtime=docker     |                      |         |         |                     |          |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	
	
	==> Last Start <==
	Log file created at: 2024/09/16 17:14:24
	Running on machine: ubuntu-20-agent-5
	Binary: Built with gc go1.23.0 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0916 17:14:24.335169  112854 out.go:345] Setting OutFile to fd 1 ...
	I0916 17:14:24.335439  112854 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0916 17:14:24.335449  112854 out.go:358] Setting ErrFile to fd 2...
	I0916 17:14:24.335453  112854 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0916 17:14:24.335647  112854 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19649-105988/.minikube/bin
	W0916 17:14:24.335788  112854 root.go:314] Error reading config file at /home/jenkins/minikube-integration/19649-105988/.minikube/config/config.json: open /home/jenkins/minikube-integration/19649-105988/.minikube/config/config.json: no such file or directory
	I0916 17:14:24.336314  112854 out.go:352] Setting JSON to true
	I0916 17:14:24.337204  112854 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-5","uptime":3404,"bootTime":1726503460,"procs":175,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1068-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0916 17:14:24.337296  112854 start.go:139] virtualization: kvm guest
	I0916 17:14:24.339792  112854 out.go:97] [download-only-329020] minikube v1.34.0 on Ubuntu 20.04 (kvm/amd64)
	W0916 17:14:24.339893  112854 preload.go:293] Failed to list preload files: open /home/jenkins/minikube-integration/19649-105988/.minikube/cache/preloaded-tarball: no such file or directory
	I0916 17:14:24.339947  112854 notify.go:220] Checking for updates...
	I0916 17:14:24.341384  112854 out.go:169] MINIKUBE_LOCATION=19649
	I0916 17:14:24.342821  112854 out.go:169] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0916 17:14:24.344081  112854 out.go:169] KUBECONFIG=/home/jenkins/minikube-integration/19649-105988/kubeconfig
	I0916 17:14:24.345396  112854 out.go:169] MINIKUBE_HOME=/home/jenkins/minikube-integration/19649-105988/.minikube
	I0916 17:14:24.346778  112854 out.go:169] MINIKUBE_BIN=out/minikube-linux-amd64
	W0916 17:14:24.349333  112854 out.go:321] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I0916 17:14:24.349615  112854 driver.go:394] Setting default libvirt URI to qemu:///system
	I0916 17:14:24.370791  112854 docker.go:123] docker version: linux-27.2.1:Docker Engine - Community
	I0916 17:14:24.370857  112854 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0916 17:14:24.416783  112854 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:32 OomKillDisable:true NGoroutines:54 SystemTime:2024-09-16 17:14:24.407756102 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1068-gcp OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:x86
_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33647943680 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-5 Labels:[] ExperimentalBuild:false ServerVersion:27.2.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:7f7fdf5fed64eb6a7caf99b3e12efcf9d60e311c Expected:7f7fdf5fed64eb6a7caf99b3e12efcf9d60e311c} RuncCommit:{ID:v1.1.14-0-g2c9f560 Expected:v1.1.14-0-g2c9f560} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErr
ors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.16.2] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.29.2] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I0916 17:14:24.416885  112854 docker.go:318] overlay module found
	I0916 17:14:24.418702  112854 out.go:97] Using the docker driver based on user configuration
	I0916 17:14:24.418730  112854 start.go:297] selected driver: docker
	I0916 17:14:24.418737  112854 start.go:901] validating driver "docker" against <nil>
	I0916 17:14:24.418815  112854 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0916 17:14:24.464360  112854 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:32 OomKillDisable:true NGoroutines:54 SystemTime:2024-09-16 17:14:24.455760067 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1068-gcp OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:x86
_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33647943680 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-5 Labels:[] ExperimentalBuild:false ServerVersion:27.2.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:7f7fdf5fed64eb6a7caf99b3e12efcf9d60e311c Expected:7f7fdf5fed64eb6a7caf99b3e12efcf9d60e311c} RuncCommit:{ID:v1.1.14-0-g2c9f560 Expected:v1.1.14-0-g2c9f560} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErr
ors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.16.2] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.29.2] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I0916 17:14:24.464540  112854 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0916 17:14:24.465283  112854 start_flags.go:393] Using suggested 8000MB memory alloc based on sys=32089MB, container=32089MB
	I0916 17:14:24.465522  112854 start_flags.go:929] Wait components to verify : map[apiserver:true system_pods:true]
	I0916 17:14:24.467637  112854 out.go:169] Using Docker driver with root privileges
	I0916 17:14:24.469037  112854 cni.go:84] Creating CNI manager for ""
	I0916 17:14:24.469100  112854 cni.go:162] CNI unnecessary in this configuration, recommending no CNI
	I0916 17:14:24.469171  112854 start.go:340] cluster config:
	{Name:download-only-329020 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726481311-19649@sha256:b5dfdcf7ad8b49233db09f1c58aaf52f6522fde64cf16c939b3fc45365d24cdc Memory:8000 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:download-only-329020 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local Co
ntainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0916 17:14:24.470674  112854 out.go:97] Starting "download-only-329020" primary control-plane node in "download-only-329020" cluster
	I0916 17:14:24.470693  112854 cache.go:121] Beginning downloading kic base image for docker with docker
	I0916 17:14:24.472066  112854 out.go:97] Pulling base image v0.0.45-1726481311-19649 ...
	I0916 17:14:24.472087  112854 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime docker
	I0916 17:14:24.472134  112854 image.go:79] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726481311-19649@sha256:b5dfdcf7ad8b49233db09f1c58aaf52f6522fde64cf16c939b3fc45365d24cdc in local docker daemon
	I0916 17:14:24.487510  112854 cache.go:149] Downloading gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726481311-19649@sha256:b5dfdcf7ad8b49233db09f1c58aaf52f6522fde64cf16c939b3fc45365d24cdc to local cache
	I0916 17:14:24.487687  112854 image.go:63] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726481311-19649@sha256:b5dfdcf7ad8b49233db09f1c58aaf52f6522fde64cf16c939b3fc45365d24cdc in local cache directory
	I0916 17:14:24.487789  112854 image.go:148] Writing gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726481311-19649@sha256:b5dfdcf7ad8b49233db09f1c58aaf52f6522fde64cf16c939b3fc45365d24cdc to local cache
	I0916 17:14:24.630195  112854 preload.go:118] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.20.0/preloaded-images-k8s-v18-v1.20.0-docker-overlay2-amd64.tar.lz4
	I0916 17:14:24.630224  112854 cache.go:56] Caching tarball of preloaded images
	I0916 17:14:24.630388  112854 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime docker
	I0916 17:14:24.632370  112854 out.go:97] Downloading Kubernetes v1.20.0 preload ...
	I0916 17:14:24.632388  112854 preload.go:236] getting checksum for preloaded-images-k8s-v18-v1.20.0-docker-overlay2-amd64.tar.lz4 ...
	I0916 17:14:24.789700  112854 download.go:107] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.20.0/preloaded-images-k8s-v18-v1.20.0-docker-overlay2-amd64.tar.lz4?checksum=md5:9a82241e9b8b4ad2b5cca73108f2c7a3 -> /home/jenkins/minikube-integration/19649-105988/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-docker-overlay2-amd64.tar.lz4
	I0916 17:14:37.803499  112854 preload.go:247] saving checksum for preloaded-images-k8s-v18-v1.20.0-docker-overlay2-amd64.tar.lz4 ...
	I0916 17:14:37.803600  112854 preload.go:254] verifying checksum of /home/jenkins/minikube-integration/19649-105988/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-docker-overlay2-amd64.tar.lz4 ...
	I0916 17:14:38.565795  112854 cache.go:59] Finished verifying existence of preloaded tar for v1.20.0 on docker
	I0916 17:14:38.566145  112854 profile.go:143] Saving config to /home/jenkins/minikube-integration/19649-105988/.minikube/profiles/download-only-329020/config.json ...
	I0916 17:14:38.566175  112854 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19649-105988/.minikube/profiles/download-only-329020/config.json: {Name:mkb4bdd21a64352e16afe6b449e43c8cd988deec Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0916 17:14:38.566339  112854 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime docker
	I0916 17:14:38.566510  112854 download.go:107] Downloading: https://dl.k8s.io/release/v1.20.0/bin/linux/amd64/kubectl?checksum=file:https://dl.k8s.io/release/v1.20.0/bin/linux/amd64/kubectl.sha256 -> /home/jenkins/minikube-integration/19649-105988/.minikube/cache/linux/amd64/v1.20.0/kubectl
	
	
	* The control-plane node download-only-329020 host does not exist
	  To start a cluster, run: "minikube start -p download-only-329020"

                                                
                                                
-- /stdout --
aaa_download_only_test.go:185: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.20.0/LogsDuration (0.68s)

                                                
                                    
TestDownloadOnly/v1.20.0/DeleteAll (0.57s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.20.0/DeleteAll
aaa_download_only_test.go:197: (dbg) Run:  out/minikube-linux-amd64 delete --all
--- PASS: TestDownloadOnly/v1.20.0/DeleteAll (0.57s)

                                                
                                    
TestDownloadOnly/v1.20.0/DeleteAlwaysSucceeds (0.41s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.20.0/DeleteAlwaysSucceeds
aaa_download_only_test.go:208: (dbg) Run:  out/minikube-linux-amd64 delete -p download-only-329020
--- PASS: TestDownloadOnly/v1.20.0/DeleteAlwaysSucceeds (0.41s)

                                                
                                    
TestDownloadOnly/v1.31.1/json-events (13.5s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.31.1/json-events
aaa_download_only_test.go:81: (dbg) Run:  out/minikube-linux-amd64 start -o=json --download-only -p download-only-101864 --force --alsologtostderr --kubernetes-version=v1.31.1 --container-runtime=docker --driver=docker  --container-runtime=docker
aaa_download_only_test.go:81: (dbg) Done: out/minikube-linux-amd64 start -o=json --download-only -p download-only-101864 --force --alsologtostderr --kubernetes-version=v1.31.1 --container-runtime=docker --driver=docker  --container-runtime=docker: (13.495100959s)
--- PASS: TestDownloadOnly/v1.31.1/json-events (13.50s)

                                                
                                    
TestDownloadOnly/v1.31.1/preload-exists (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.31.1/preload-exists
--- PASS: TestDownloadOnly/v1.31.1/preload-exists (0.00s)

                                                
                                    
TestDownloadOnly/v1.31.1/LogsDuration (0.06s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.31.1/LogsDuration
aaa_download_only_test.go:184: (dbg) Run:  out/minikube-linux-amd64 logs -p download-only-101864
aaa_download_only_test.go:184: (dbg) Non-zero exit: out/minikube-linux-amd64 logs -p download-only-101864: exit status 85 (58.676852ms)

                                                
                                                
-- stdout --
	
	==> Audit <==
	|---------|--------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| Command |              Args              |       Profile        |  User   | Version |     Start Time      |      End Time       |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| start   | -o=json --download-only        | download-only-329020 | jenkins | v1.34.0 | 16 Sep 24 17:14 UTC |                     |
	|         | -p download-only-329020        |                      |         |         |                     |                     |
	|         | --force --alsologtostderr      |                      |         |         |                     |                     |
	|         | --kubernetes-version=v1.20.0   |                      |         |         |                     |                     |
	|         | --container-runtime=docker     |                      |         |         |                     |                     |
	|         | --driver=docker                |                      |         |         |                     |                     |
	|         | --container-runtime=docker     |                      |         |         |                     |                     |
	| delete  | --all                          | minikube             | jenkins | v1.34.0 | 16 Sep 24 17:15 UTC | 16 Sep 24 17:15 UTC |
	| delete  | -p download-only-329020        | download-only-329020 | jenkins | v1.34.0 | 16 Sep 24 17:15 UTC | 16 Sep 24 17:15 UTC |
	| start   | -o=json --download-only        | download-only-101864 | jenkins | v1.34.0 | 16 Sep 24 17:15 UTC |                     |
	|         | -p download-only-101864        |                      |         |         |                     |                     |
	|         | --force --alsologtostderr      |                      |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.1   |                      |         |         |                     |                     |
	|         | --container-runtime=docker     |                      |         |         |                     |                     |
	|         | --driver=docker                |                      |         |         |                     |                     |
	|         | --container-runtime=docker     |                      |         |         |                     |                     |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/09/16 17:15:01
	Running on machine: ubuntu-20-agent-5
	Binary: Built with gc go1.23.0 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0916 17:15:01.239216  113304 out.go:345] Setting OutFile to fd 1 ...
	I0916 17:15:01.239479  113304 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0916 17:15:01.239489  113304 out.go:358] Setting ErrFile to fd 2...
	I0916 17:15:01.239495  113304 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0916 17:15:01.239682  113304 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19649-105988/.minikube/bin
	I0916 17:15:01.240240  113304 out.go:352] Setting JSON to true
	I0916 17:15:01.241130  113304 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-5","uptime":3441,"bootTime":1726503460,"procs":172,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1068-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0916 17:15:01.241235  113304 start.go:139] virtualization: kvm guest
	I0916 17:15:01.284792  113304 out.go:97] [download-only-101864] minikube v1.34.0 on Ubuntu 20.04 (kvm/amd64)
	I0916 17:15:01.284970  113304 notify.go:220] Checking for updates...
	I0916 17:15:01.368083  113304 out.go:169] MINIKUBE_LOCATION=19649
	I0916 17:15:01.409950  113304 out.go:169] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0916 17:15:01.475265  113304 out.go:169] KUBECONFIG=/home/jenkins/minikube-integration/19649-105988/kubeconfig
	I0916 17:15:01.606586  113304 out.go:169] MINIKUBE_HOME=/home/jenkins/minikube-integration/19649-105988/.minikube
	I0916 17:15:01.622427  113304 out.go:169] MINIKUBE_BIN=out/minikube-linux-amd64
	W0916 17:15:01.625044  113304 out.go:321] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I0916 17:15:01.625291  113304 driver.go:394] Setting default libvirt URI to qemu:///system
	I0916 17:15:01.645499  113304 docker.go:123] docker version: linux-27.2.1:Docker Engine - Community
	I0916 17:15:01.645579  113304 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0916 17:15:01.688433  113304 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:32 OomKillDisable:true NGoroutines:46 SystemTime:2024-09-16 17:15:01.679460667 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1068-gcp OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:x86
_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33647943680 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-5 Labels:[] ExperimentalBuild:false ServerVersion:27.2.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:7f7fdf5fed64eb6a7caf99b3e12efcf9d60e311c Expected:7f7fdf5fed64eb6a7caf99b3e12efcf9d60e311c} RuncCommit:{ID:v1.1.14-0-g2c9f560 Expected:v1.1.14-0-g2c9f560} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErr
ors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.16.2] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.29.2] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I0916 17:15:01.688541  113304 docker.go:318] overlay module found
	I0916 17:15:01.690284  113304 out.go:97] Using the docker driver based on user configuration
	I0916 17:15:01.690309  113304 start.go:297] selected driver: docker
	I0916 17:15:01.690315  113304 start.go:901] validating driver "docker" against <nil>
	I0916 17:15:01.690393  113304 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0916 17:15:01.733734  113304 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:32 OomKillDisable:true NGoroutines:46 SystemTime:2024-09-16 17:15:01.725190895 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1068-gcp OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:x86
_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33647943680 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-5 Labels:[] ExperimentalBuild:false ServerVersion:27.2.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:7f7fdf5fed64eb6a7caf99b3e12efcf9d60e311c Expected:7f7fdf5fed64eb6a7caf99b3e12efcf9d60e311c} RuncCommit:{ID:v1.1.14-0-g2c9f560 Expected:v1.1.14-0-g2c9f560} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErr
ors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.16.2] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.29.2] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I0916 17:15:01.733938  113304 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0916 17:15:01.734477  113304 start_flags.go:393] Using suggested 8000MB memory alloc based on sys=32089MB, container=32089MB
	I0916 17:15:01.734615  113304 start_flags.go:929] Wait components to verify : map[apiserver:true system_pods:true]
	I0916 17:15:01.736570  113304 out.go:169] Using Docker driver with root privileges
	I0916 17:15:01.737996  113304 cni.go:84] Creating CNI manager for ""
	I0916 17:15:01.738059  113304 cni.go:158] "docker" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0916 17:15:01.738094  113304 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0916 17:15:01.738185  113304 start.go:340] cluster config:
	{Name:download-only-101864 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726481311-19649@sha256:b5dfdcf7ad8b49233db09f1c58aaf52f6522fde64cf16c939b3fc45365d24cdc Memory:8000 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:download-only-101864 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local Co
ntainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0916 17:15:01.739553  113304 out.go:97] Starting "download-only-101864" primary control-plane node in "download-only-101864" cluster
	I0916 17:15:01.739583  113304 cache.go:121] Beginning downloading kic base image for docker with docker
	I0916 17:15:01.740813  113304 out.go:97] Pulling base image v0.0.45-1726481311-19649 ...
	I0916 17:15:01.740840  113304 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime docker
	I0916 17:15:01.740943  113304 image.go:79] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726481311-19649@sha256:b5dfdcf7ad8b49233db09f1c58aaf52f6522fde64cf16c939b3fc45365d24cdc in local docker daemon
	I0916 17:15:01.756586  113304 cache.go:149] Downloading gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726481311-19649@sha256:b5dfdcf7ad8b49233db09f1c58aaf52f6522fde64cf16c939b3fc45365d24cdc to local cache
	I0916 17:15:01.756689  113304 image.go:63] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726481311-19649@sha256:b5dfdcf7ad8b49233db09f1c58aaf52f6522fde64cf16c939b3fc45365d24cdc in local cache directory
	I0916 17:15:01.756709  113304 image.go:66] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726481311-19649@sha256:b5dfdcf7ad8b49233db09f1c58aaf52f6522fde64cf16c939b3fc45365d24cdc in local cache directory, skipping pull
	I0916 17:15:01.756719  113304 image.go:135] gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726481311-19649@sha256:b5dfdcf7ad8b49233db09f1c58aaf52f6522fde64cf16c939b3fc45365d24cdc exists in cache, skipping pull
	I0916 17:15:01.756728  113304 cache.go:152] successfully saved gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726481311-19649@sha256:b5dfdcf7ad8b49233db09f1c58aaf52f6522fde64cf16c939b3fc45365d24cdc as a tarball
	I0916 17:15:02.003185  113304 preload.go:118] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.31.1/preloaded-images-k8s-v18-v1.31.1-docker-overlay2-amd64.tar.lz4
	I0916 17:15:02.003222  113304 cache.go:56] Caching tarball of preloaded images
	I0916 17:15:02.003402  113304 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime docker
	I0916 17:15:02.005312  113304 out.go:97] Downloading Kubernetes v1.31.1 preload ...
	I0916 17:15:02.005333  113304 preload.go:236] getting checksum for preloaded-images-k8s-v18-v1.31.1-docker-overlay2-amd64.tar.lz4 ...
	I0916 17:15:02.161738  113304 download.go:107] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.31.1/preloaded-images-k8s-v18-v1.31.1-docker-overlay2-amd64.tar.lz4?checksum=md5:42e9a173dd5f0c45ed1a890dd06aec5a -> /home/jenkins/minikube-integration/19649-105988/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-docker-overlay2-amd64.tar.lz4
	
	
	* The control-plane node download-only-101864 host does not exist
	  To start a cluster, run: "minikube start -p download-only-101864"

-- /stdout --
aaa_download_only_test.go:185: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.31.1/LogsDuration (0.06s)

TestDownloadOnly/v1.31.1/DeleteAll (0.19s)

=== RUN   TestDownloadOnly/v1.31.1/DeleteAll
aaa_download_only_test.go:197: (dbg) Run:  out/minikube-linux-amd64 delete --all
--- PASS: TestDownloadOnly/v1.31.1/DeleteAll (0.19s)

TestDownloadOnly/v1.31.1/DeleteAlwaysSucceeds (0.12s)

=== RUN   TestDownloadOnly/v1.31.1/DeleteAlwaysSucceeds
aaa_download_only_test.go:208: (dbg) Run:  out/minikube-linux-amd64 delete -p download-only-101864
--- PASS: TestDownloadOnly/v1.31.1/DeleteAlwaysSucceeds (0.12s)

TestDownloadOnlyKic (0.96s)

=== RUN   TestDownloadOnlyKic
aaa_download_only_test.go:232: (dbg) Run:  out/minikube-linux-amd64 start --download-only -p download-docker-294705 --alsologtostderr --driver=docker  --container-runtime=docker
helpers_test.go:175: Cleaning up "download-docker-294705" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p download-docker-294705
--- PASS: TestDownloadOnlyKic (0.96s)

TestBinaryMirror (0.74s)

=== RUN   TestBinaryMirror
aaa_download_only_test.go:314: (dbg) Run:  out/minikube-linux-amd64 start --download-only -p binary-mirror-149473 --alsologtostderr --binary-mirror http://127.0.0.1:35485 --driver=docker  --container-runtime=docker
helpers_test.go:175: Cleaning up "binary-mirror-149473" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p binary-mirror-149473
--- PASS: TestBinaryMirror (0.74s)

TestOffline (47.43s)

=== RUN   TestOffline
=== PAUSE TestOffline

=== CONT  TestOffline
aab_offline_test.go:55: (dbg) Run:  out/minikube-linux-amd64 start -p offline-docker-131057 --alsologtostderr -v=1 --memory=2048 --wait=true --driver=docker  --container-runtime=docker
aab_offline_test.go:55: (dbg) Done: out/minikube-linux-amd64 start -p offline-docker-131057 --alsologtostderr -v=1 --memory=2048 --wait=true --driver=docker  --container-runtime=docker: (45.345404885s)
helpers_test.go:175: Cleaning up "offline-docker-131057" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p offline-docker-131057
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p offline-docker-131057: (2.088996449s)
--- PASS: TestOffline (47.43s)

TestAddons/PreSetup/EnablingAddonOnNonExistingCluster (0.05s)

=== RUN   TestAddons/PreSetup/EnablingAddonOnNonExistingCluster
=== PAUSE TestAddons/PreSetup/EnablingAddonOnNonExistingCluster

=== CONT  TestAddons/PreSetup/EnablingAddonOnNonExistingCluster
addons_test.go:1037: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p addons-539053
addons_test.go:1037: (dbg) Non-zero exit: out/minikube-linux-amd64 addons enable dashboard -p addons-539053: exit status 85 (51.455944ms)

-- stdout --
	* Profile "addons-539053" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p addons-539053"

-- /stdout --
--- PASS: TestAddons/PreSetup/EnablingAddonOnNonExistingCluster (0.05s)

TestAddons/PreSetup/DisablingAddonOnNonExistingCluster (0.05s)

=== RUN   TestAddons/PreSetup/DisablingAddonOnNonExistingCluster
=== PAUSE TestAddons/PreSetup/DisablingAddonOnNonExistingCluster

=== CONT  TestAddons/PreSetup/DisablingAddonOnNonExistingCluster
addons_test.go:1048: (dbg) Run:  out/minikube-linux-amd64 addons disable dashboard -p addons-539053
addons_test.go:1048: (dbg) Non-zero exit: out/minikube-linux-amd64 addons disable dashboard -p addons-539053: exit status 85 (50.887127ms)

-- stdout --
	* Profile "addons-539053" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p addons-539053"

-- /stdout --
--- PASS: TestAddons/PreSetup/DisablingAddonOnNonExistingCluster (0.05s)

TestAddons/Setup (207.19s)

=== RUN   TestAddons/Setup
addons_test.go:110: (dbg) Run:  out/minikube-linux-amd64 start -p addons-539053 --wait=true --memory=4000 --alsologtostderr --addons=registry --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=storage-provisioner-rancher --addons=nvidia-device-plugin --addons=yakd --addons=volcano --driver=docker  --container-runtime=docker --addons=ingress --addons=ingress-dns --addons=helm-tiller
addons_test.go:110: (dbg) Done: out/minikube-linux-amd64 start -p addons-539053 --wait=true --memory=4000 --alsologtostderr --addons=registry --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=storage-provisioner-rancher --addons=nvidia-device-plugin --addons=yakd --addons=volcano --driver=docker  --container-runtime=docker --addons=ingress --addons=ingress-dns --addons=helm-tiller: (3m27.187502431s)
--- PASS: TestAddons/Setup (207.19s)

TestAddons/serial/Volcano (42.59s)

=== RUN   TestAddons/serial/Volcano
addons_test.go:897: volcano-scheduler stabilized in 13.037499ms
addons_test.go:905: volcano-admission stabilized in 13.093183ms
addons_test.go:913: volcano-controller stabilized in 13.174602ms
addons_test.go:919: (dbg) TestAddons/serial/Volcano: waiting 6m0s for pods matching "app=volcano-scheduler" in namespace "volcano-system" ...
helpers_test.go:344: "volcano-scheduler-576bc46687-tsjps" [101d91ff-24c7-4618-91c6-a665873c700d] Running
addons_test.go:919: (dbg) TestAddons/serial/Volcano: app=volcano-scheduler healthy within 6.003509522s
addons_test.go:923: (dbg) TestAddons/serial/Volcano: waiting 6m0s for pods matching "app=volcano-admission" in namespace "volcano-system" ...
helpers_test.go:344: "volcano-admission-77d7d48b68-4hql5" [1badd488-604f-4265-8772-a8b7ac7a2307] Running
addons_test.go:923: (dbg) TestAddons/serial/Volcano: app=volcano-admission healthy within 5.003573627s
addons_test.go:927: (dbg) TestAddons/serial/Volcano: waiting 6m0s for pods matching "app=volcano-controller" in namespace "volcano-system" ...
helpers_test.go:344: "volcano-controllers-56675bb4d5-2hxlv" [010a9db5-6831-455e-b8a4-98f50c6587c2] Running
addons_test.go:927: (dbg) TestAddons/serial/Volcano: app=volcano-controller healthy within 5.003484594s
addons_test.go:932: (dbg) Run:  kubectl --context addons-539053 delete -n volcano-system job volcano-admission-init
addons_test.go:938: (dbg) Run:  kubectl --context addons-539053 create -f testdata/vcjob.yaml
addons_test.go:946: (dbg) Run:  kubectl --context addons-539053 get vcjob -n my-volcano
addons_test.go:964: (dbg) TestAddons/serial/Volcano: waiting 3m0s for pods matching "volcano.sh/job-name=test-job" in namespace "my-volcano" ...
helpers_test.go:344: "test-job-nginx-0" [1fe26864-7cdf-4c8f-a967-1a2c5db12b8f] Pending
helpers_test.go:344: "test-job-nginx-0" [1fe26864-7cdf-4c8f-a967-1a2c5db12b8f] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
helpers_test.go:344: "test-job-nginx-0" [1fe26864-7cdf-4c8f-a967-1a2c5db12b8f] Running
addons_test.go:964: (dbg) TestAddons/serial/Volcano: volcano.sh/job-name=test-job healthy within 16.003809413s
addons_test.go:968: (dbg) Run:  out/minikube-linux-amd64 -p addons-539053 addons disable volcano --alsologtostderr -v=1
addons_test.go:968: (dbg) Done: out/minikube-linux-amd64 -p addons-539053 addons disable volcano --alsologtostderr -v=1: (10.250859698s)
--- PASS: TestAddons/serial/Volcano (42.59s)

TestAddons/serial/GCPAuth/Namespaces (0.11s)

=== RUN   TestAddons/serial/GCPAuth/Namespaces
addons_test.go:656: (dbg) Run:  kubectl --context addons-539053 create ns new-namespace
addons_test.go:670: (dbg) Run:  kubectl --context addons-539053 get secret gcp-auth -n new-namespace
--- PASS: TestAddons/serial/GCPAuth/Namespaces (0.11s)

TestAddons/parallel/Ingress (21.44s)

=== RUN   TestAddons/parallel/Ingress
=== PAUSE TestAddons/parallel/Ingress

=== CONT  TestAddons/parallel/Ingress
addons_test.go:209: (dbg) Run:  kubectl --context addons-539053 wait --for=condition=ready --namespace=ingress-nginx pod --selector=app.kubernetes.io/component=controller --timeout=90s
addons_test.go:234: (dbg) Run:  kubectl --context addons-539053 replace --force -f testdata/nginx-ingress-v1.yaml
addons_test.go:247: (dbg) Run:  kubectl --context addons-539053 replace --force -f testdata/nginx-pod-svc.yaml
addons_test.go:252: (dbg) TestAddons/parallel/Ingress: waiting 8m0s for pods matching "run=nginx" in namespace "default" ...
helpers_test.go:344: "nginx" [df5bf92d-b229-423e-a7fe-4e5868c7c1b8] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
helpers_test.go:344: "nginx" [df5bf92d-b229-423e-a7fe-4e5868c7c1b8] Running
addons_test.go:252: (dbg) TestAddons/parallel/Ingress: run=nginx healthy within 11.003293134s
addons_test.go:264: (dbg) Run:  out/minikube-linux-amd64 -p addons-539053 ssh "curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'"
addons_test.go:288: (dbg) Run:  kubectl --context addons-539053 replace --force -f testdata/ingress-dns-example-v1.yaml
addons_test.go:293: (dbg) Run:  out/minikube-linux-amd64 -p addons-539053 ip
addons_test.go:299: (dbg) Run:  nslookup hello-john.test 192.168.49.2
addons_test.go:308: (dbg) Run:  out/minikube-linux-amd64 -p addons-539053 addons disable ingress-dns --alsologtostderr -v=1
addons_test.go:308: (dbg) Done: out/minikube-linux-amd64 -p addons-539053 addons disable ingress-dns --alsologtostderr -v=1: (1.447132658s)
addons_test.go:313: (dbg) Run:  out/minikube-linux-amd64 -p addons-539053 addons disable ingress --alsologtostderr -v=1
addons_test.go:313: (dbg) Done: out/minikube-linux-amd64 -p addons-539053 addons disable ingress --alsologtostderr -v=1: (7.854733127s)
--- PASS: TestAddons/parallel/Ingress (21.44s)

TestAddons/parallel/InspektorGadget (10.59s)

=== RUN   TestAddons/parallel/InspektorGadget
=== PAUSE TestAddons/parallel/InspektorGadget

=== CONT  TestAddons/parallel/InspektorGadget
addons_test.go:848: (dbg) TestAddons/parallel/InspektorGadget: waiting 8m0s for pods matching "k8s-app=gadget" in namespace "gadget" ...
helpers_test.go:344: "gadget-l6zvq" [5f39e778-b07b-4c62-9d79-f21e128a8bf2] Running / Ready:ContainersNotReady (containers with unready status: [gadget]) / ContainersReady:ContainersNotReady (containers with unready status: [gadget])
addons_test.go:848: (dbg) TestAddons/parallel/InspektorGadget: k8s-app=gadget healthy within 5.004275642s
addons_test.go:851: (dbg) Run:  out/minikube-linux-amd64 addons disable inspektor-gadget -p addons-539053
addons_test.go:851: (dbg) Done: out/minikube-linux-amd64 addons disable inspektor-gadget -p addons-539053: (5.589100199s)
--- PASS: TestAddons/parallel/InspektorGadget (10.59s)

TestAddons/parallel/MetricsServer (5.55s)

=== RUN   TestAddons/parallel/MetricsServer
=== PAUSE TestAddons/parallel/MetricsServer

=== CONT  TestAddons/parallel/MetricsServer
addons_test.go:409: metrics-server stabilized in 2.841558ms
addons_test.go:411: (dbg) TestAddons/parallel/MetricsServer: waiting 6m0s for pods matching "k8s-app=metrics-server" in namespace "kube-system" ...
helpers_test.go:344: "metrics-server-84c5f94fbc-26pvn" [27fac5e6-437c-408b-b822-b4e2b393d323] Running
addons_test.go:411: (dbg) TestAddons/parallel/MetricsServer: k8s-app=metrics-server healthy within 5.003445906s
addons_test.go:417: (dbg) Run:  kubectl --context addons-539053 top pods -n kube-system
addons_test.go:434: (dbg) Run:  out/minikube-linux-amd64 -p addons-539053 addons disable metrics-server --alsologtostderr -v=1
--- PASS: TestAddons/parallel/MetricsServer (5.55s)

TestAddons/parallel/HelmTiller (10.85s)

=== RUN   TestAddons/parallel/HelmTiller
=== PAUSE TestAddons/parallel/HelmTiller

=== CONT  TestAddons/parallel/HelmTiller
addons_test.go:458: tiller-deploy stabilized in 2.335085ms
addons_test.go:460: (dbg) TestAddons/parallel/HelmTiller: waiting 6m0s for pods matching "app=helm" in namespace "kube-system" ...
helpers_test.go:344: "tiller-deploy-b48cc5f79-42m8q" [f1e7c66d-de05-47ef-b306-073ce6ee059d] Running
addons_test.go:460: (dbg) TestAddons/parallel/HelmTiller: app=helm healthy within 5.003895051s
addons_test.go:475: (dbg) Run:  kubectl --context addons-539053 run --rm helm-test --restart=Never --image=docker.io/alpine/helm:2.16.3 -it --namespace=kube-system -- version
addons_test.go:475: (dbg) Done: kubectl --context addons-539053 run --rm helm-test --restart=Never --image=docker.io/alpine/helm:2.16.3 -it --namespace=kube-system -- version: (5.373567174s)
addons_test.go:492: (dbg) Run:  out/minikube-linux-amd64 -p addons-539053 addons disable helm-tiller --alsologtostderr -v=1
--- PASS: TestAddons/parallel/HelmTiller (10.85s)

TestAddons/parallel/CSI (59.91s)

=== RUN   TestAddons/parallel/CSI
=== PAUSE TestAddons/parallel/CSI

=== CONT  TestAddons/parallel/CSI
addons_test.go:567: csi-hostpath-driver pods stabilized in 4.428523ms
addons_test.go:570: (dbg) Run:  kubectl --context addons-539053 create -f testdata/csi-hostpath-driver/pvc.yaml
addons_test.go:575: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pvc "hpvc" in namespace "default" ...
helpers_test.go:394: (dbg) Run:  kubectl --context addons-539053 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-539053 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-539053 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-539053 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-539053 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-539053 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-539053 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-539053 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-539053 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-539053 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-539053 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-539053 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-539053 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-539053 get pvc hpvc -o jsonpath={.status.phase} -n default
addons_test.go:580: (dbg) Run:  kubectl --context addons-539053 create -f testdata/csi-hostpath-driver/pv-pod.yaml
addons_test.go:585: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pods matching "app=task-pv-pod" in namespace "default" ...
helpers_test.go:344: "task-pv-pod" [50fbac86-b1be-47a7-8a49-ee01a4650093] Pending
helpers_test.go:344: "task-pv-pod" [50fbac86-b1be-47a7-8a49-ee01a4650093] Pending / Ready:ContainersNotReady (containers with unready status: [task-pv-container]) / ContainersReady:ContainersNotReady (containers with unready status: [task-pv-container])
helpers_test.go:344: "task-pv-pod" [50fbac86-b1be-47a7-8a49-ee01a4650093] Running
addons_test.go:585: (dbg) TestAddons/parallel/CSI: app=task-pv-pod healthy within 9.002671013s
addons_test.go:590: (dbg) Run:  kubectl --context addons-539053 create -f testdata/csi-hostpath-driver/snapshot.yaml
addons_test.go:595: (dbg) TestAddons/parallel/CSI: waiting 6m0s for volume snapshot "new-snapshot-demo" in namespace "default" ...
helpers_test.go:419: (dbg) Run:  kubectl --context addons-539053 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
helpers_test.go:419: (dbg) Run:  kubectl --context addons-539053 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
addons_test.go:600: (dbg) Run:  kubectl --context addons-539053 delete pod task-pv-pod
addons_test.go:606: (dbg) Run:  kubectl --context addons-539053 delete pvc hpvc
addons_test.go:612: (dbg) Run:  kubectl --context addons-539053 create -f testdata/csi-hostpath-driver/pvc-restore.yaml
addons_test.go:617: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pvc "hpvc-restore" in namespace "default" ...
helpers_test.go:394: (dbg) Run:  kubectl --context addons-539053 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-539053 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-539053 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-539053 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-539053 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-539053 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-539053 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-539053 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-539053 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-539053 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-539053 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-539053 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-539053 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-539053 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-539053 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-539053 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-539053 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-539053 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-539053 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-539053 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-539053 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
addons_test.go:622: (dbg) Run:  kubectl --context addons-539053 create -f testdata/csi-hostpath-driver/pv-pod-restore.yaml
addons_test.go:627: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pods matching "app=task-pv-pod-restore" in namespace "default" ...
helpers_test.go:344: "task-pv-pod-restore" [ab011793-50e6-427d-90e2-199ef038cfaa] Pending
helpers_test.go:344: "task-pv-pod-restore" [ab011793-50e6-427d-90e2-199ef038cfaa] Pending / Ready:ContainersNotReady (containers with unready status: [task-pv-container]) / ContainersReady:ContainersNotReady (containers with unready status: [task-pv-container])
helpers_test.go:344: "task-pv-pod-restore" [ab011793-50e6-427d-90e2-199ef038cfaa] Running
addons_test.go:627: (dbg) TestAddons/parallel/CSI: app=task-pv-pod-restore healthy within 7.003268409s
addons_test.go:632: (dbg) Run:  kubectl --context addons-539053 delete pod task-pv-pod-restore
addons_test.go:636: (dbg) Run:  kubectl --context addons-539053 delete pvc hpvc-restore
addons_test.go:640: (dbg) Run:  kubectl --context addons-539053 delete volumesnapshot new-snapshot-demo
addons_test.go:644: (dbg) Run:  out/minikube-linux-amd64 -p addons-539053 addons disable csi-hostpath-driver --alsologtostderr -v=1
addons_test.go:644: (dbg) Done: out/minikube-linux-amd64 -p addons-539053 addons disable csi-hostpath-driver --alsologtostderr -v=1: (6.417422237s)
addons_test.go:648: (dbg) Run:  out/minikube-linux-amd64 -p addons-539053 addons disable volumesnapshots --alsologtostderr -v=1
--- PASS: TestAddons/parallel/CSI (59.91s)

TestAddons/parallel/Headlamp (16.83s)

=== RUN   TestAddons/parallel/Headlamp
=== PAUSE TestAddons/parallel/Headlamp

=== CONT  TestAddons/parallel/Headlamp
addons_test.go:830: (dbg) Run:  out/minikube-linux-amd64 addons enable headlamp -p addons-539053 --alsologtostderr -v=1
addons_test.go:835: (dbg) TestAddons/parallel/Headlamp: waiting 8m0s for pods matching "app.kubernetes.io/name=headlamp" in namespace "headlamp" ...
helpers_test.go:344: "headlamp-7b5c95b59d-fbdl8" [3c7782ae-b07a-4e57-96be-04677a613ebe] Pending / Ready:ContainersNotReady (containers with unready status: [headlamp]) / ContainersReady:ContainersNotReady (containers with unready status: [headlamp])
helpers_test.go:344: "headlamp-7b5c95b59d-fbdl8" [3c7782ae-b07a-4e57-96be-04677a613ebe] Running
addons_test.go:835: (dbg) TestAddons/parallel/Headlamp: app.kubernetes.io/name=headlamp healthy within 16.003271226s
addons_test.go:839: (dbg) Run:  out/minikube-linux-amd64 -p addons-539053 addons disable headlamp --alsologtostderr -v=1
--- PASS: TestAddons/parallel/Headlamp (16.83s)

TestAddons/parallel/CloudSpanner (6.42s)

=== RUN   TestAddons/parallel/CloudSpanner
=== PAUSE TestAddons/parallel/CloudSpanner

=== CONT  TestAddons/parallel/CloudSpanner
addons_test.go:867: (dbg) TestAddons/parallel/CloudSpanner: waiting 6m0s for pods matching "app=cloud-spanner-emulator" in namespace "default" ...
helpers_test.go:344: "cloud-spanner-emulator-769b77f747-7z458" [7a628e81-a164-45a1-afd5-7ee936f9341b] Running
addons_test.go:867: (dbg) TestAddons/parallel/CloudSpanner: app=cloud-spanner-emulator healthy within 6.002881003s
addons_test.go:870: (dbg) Run:  out/minikube-linux-amd64 addons disable cloud-spanner -p addons-539053
--- PASS: TestAddons/parallel/CloudSpanner (6.42s)

TestAddons/parallel/LocalPath (55.78s)

=== RUN   TestAddons/parallel/LocalPath
=== PAUSE TestAddons/parallel/LocalPath

=== CONT  TestAddons/parallel/LocalPath
addons_test.go:982: (dbg) Run:  kubectl --context addons-539053 apply -f testdata/storage-provisioner-rancher/pvc.yaml
addons_test.go:988: (dbg) Run:  kubectl --context addons-539053 apply -f testdata/storage-provisioner-rancher/pod.yaml
addons_test.go:992: (dbg) TestAddons/parallel/LocalPath: waiting 5m0s for pvc "test-pvc" in namespace "default" ...
helpers_test.go:394: (dbg) Run:  kubectl --context addons-539053 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-539053 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-539053 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-539053 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-539053 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-539053 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-539053 get pvc test-pvc -o jsonpath={.status.phase} -n default
addons_test.go:995: (dbg) TestAddons/parallel/LocalPath: waiting 3m0s for pods matching "run=test-local-path" in namespace "default" ...
helpers_test.go:344: "test-local-path" [2774c956-505e-4b0b-9d0f-3be4982c59da] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:344: "test-local-path" [2774c956-505e-4b0b-9d0f-3be4982c59da] Pending / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
helpers_test.go:344: "test-local-path" [2774c956-505e-4b0b-9d0f-3be4982c59da] Succeeded / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
addons_test.go:995: (dbg) TestAddons/parallel/LocalPath: run=test-local-path healthy within 6.003173234s
addons_test.go:1000: (dbg) Run:  kubectl --context addons-539053 get pvc test-pvc -o=json
addons_test.go:1009: (dbg) Run:  out/minikube-linux-amd64 -p addons-539053 ssh "cat /opt/local-path-provisioner/pvc-1389ca84-3e21-4c35-b54d-991231b2f504_default_test-pvc/file1"
addons_test.go:1021: (dbg) Run:  kubectl --context addons-539053 delete pod test-local-path
addons_test.go:1025: (dbg) Run:  kubectl --context addons-539053 delete pvc test-pvc
addons_test.go:1029: (dbg) Run:  out/minikube-linux-amd64 -p addons-539053 addons disable storage-provisioner-rancher --alsologtostderr -v=1
addons_test.go:1029: (dbg) Done: out/minikube-linux-amd64 -p addons-539053 addons disable storage-provisioner-rancher --alsologtostderr -v=1: (42.951784449s)
--- PASS: TestAddons/parallel/LocalPath (55.78s)

TestAddons/parallel/NvidiaDevicePlugin (6.39s)

=== RUN   TestAddons/parallel/NvidiaDevicePlugin
=== PAUSE TestAddons/parallel/NvidiaDevicePlugin

=== CONT  TestAddons/parallel/NvidiaDevicePlugin
addons_test.go:1061: (dbg) TestAddons/parallel/NvidiaDevicePlugin: waiting 6m0s for pods matching "name=nvidia-device-plugin-ds" in namespace "kube-system" ...
helpers_test.go:344: "nvidia-device-plugin-daemonset-vz86s" [aff62fc3-161b-49c5-9c01-0794bb7a44ae] Running
addons_test.go:1061: (dbg) TestAddons/parallel/NvidiaDevicePlugin: name=nvidia-device-plugin-ds healthy within 6.003700715s
addons_test.go:1064: (dbg) Run:  out/minikube-linux-amd64 addons disable nvidia-device-plugin -p addons-539053
--- PASS: TestAddons/parallel/NvidiaDevicePlugin (6.39s)

TestAddons/parallel/Yakd (10.54s)

=== RUN   TestAddons/parallel/Yakd
=== PAUSE TestAddons/parallel/Yakd

=== CONT  TestAddons/parallel/Yakd
addons_test.go:1072: (dbg) TestAddons/parallel/Yakd: waiting 2m0s for pods matching "app.kubernetes.io/name=yakd-dashboard" in namespace "yakd-dashboard" ...
helpers_test.go:344: "yakd-dashboard-67d98fc6b-4r2gd" [7f9f9809-d977-4bda-a0da-ab91a4af6f84] Running
addons_test.go:1072: (dbg) TestAddons/parallel/Yakd: app.kubernetes.io/name=yakd-dashboard healthy within 5.004057858s
addons_test.go:1076: (dbg) Run:  out/minikube-linux-amd64 -p addons-539053 addons disable yakd --alsologtostderr -v=1
addons_test.go:1076: (dbg) Done: out/minikube-linux-amd64 -p addons-539053 addons disable yakd --alsologtostderr -v=1: (5.537227278s)
--- PASS: TestAddons/parallel/Yakd (10.54s)

TestAddons/StoppedEnableDisable (11.07s)

=== RUN   TestAddons/StoppedEnableDisable
addons_test.go:174: (dbg) Run:  out/minikube-linux-amd64 stop -p addons-539053
addons_test.go:174: (dbg) Done: out/minikube-linux-amd64 stop -p addons-539053: (10.837121023s)
addons_test.go:178: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p addons-539053
addons_test.go:182: (dbg) Run:  out/minikube-linux-amd64 addons disable dashboard -p addons-539053
addons_test.go:187: (dbg) Run:  out/minikube-linux-amd64 addons disable gvisor -p addons-539053
--- PASS: TestAddons/StoppedEnableDisable (11.07s)

TestCertOptions (24.64s)

=== RUN   TestCertOptions
=== PAUSE TestCertOptions

=== CONT  TestCertOptions
cert_options_test.go:49: (dbg) Run:  out/minikube-linux-amd64 start -p cert-options-091173 --memory=2048 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=docker  --container-runtime=docker
cert_options_test.go:49: (dbg) Done: out/minikube-linux-amd64 start -p cert-options-091173 --memory=2048 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=docker  --container-runtime=docker: (22.06178835s)
cert_options_test.go:60: (dbg) Run:  out/minikube-linux-amd64 -p cert-options-091173 ssh "openssl x509 -text -noout -in /var/lib/minikube/certs/apiserver.crt"
cert_options_test.go:88: (dbg) Run:  kubectl --context cert-options-091173 config view
cert_options_test.go:100: (dbg) Run:  out/minikube-linux-amd64 ssh -p cert-options-091173 -- "sudo cat /etc/kubernetes/admin.conf"
helpers_test.go:175: Cleaning up "cert-options-091173" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p cert-options-091173
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p cert-options-091173: (2.03896116s)
--- PASS: TestCertOptions (24.64s)

TestCertExpiration (231.14s)

=== RUN   TestCertExpiration
=== PAUSE TestCertExpiration

=== CONT  TestCertExpiration
cert_options_test.go:123: (dbg) Run:  out/minikube-linux-amd64 start -p cert-expiration-820655 --memory=2048 --cert-expiration=3m --driver=docker  --container-runtime=docker
cert_options_test.go:123: (dbg) Done: out/minikube-linux-amd64 start -p cert-expiration-820655 --memory=2048 --cert-expiration=3m --driver=docker  --container-runtime=docker: (25.70981783s)
E0916 18:01:33.733543  112842 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19649-105988/.minikube/profiles/functional-623374/client.crt: no such file or directory" logger="UnhandledError"
cert_options_test.go:131: (dbg) Run:  out/minikube-linux-amd64 start -p cert-expiration-820655 --memory=2048 --cert-expiration=8760h --driver=docker  --container-runtime=docker
E0916 18:03:40.528385  112842 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19649-105988/.minikube/profiles/skaffold-258004/client.crt: no such file or directory" logger="UnhandledError"
E0916 18:03:44.245578  112842 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19649-105988/.minikube/profiles/addons-539053/client.crt: no such file or directory" logger="UnhandledError"
E0916 18:03:50.770262  112842 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19649-105988/.minikube/profiles/skaffold-258004/client.crt: no such file or directory" logger="UnhandledError"
cert_options_test.go:131: (dbg) Done: out/minikube-linux-amd64 start -p cert-expiration-820655 --memory=2048 --cert-expiration=8760h --driver=docker  --container-runtime=docker: (22.208333771s)
helpers_test.go:175: Cleaning up "cert-expiration-820655" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p cert-expiration-820655
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p cert-expiration-820655: (3.219673143s)
--- PASS: TestCertExpiration (231.14s)

TestDockerFlags (37.6s)

=== RUN   TestDockerFlags
=== PAUSE TestDockerFlags

=== CONT  TestDockerFlags
docker_test.go:51: (dbg) Run:  out/minikube-linux-amd64 start -p docker-flags-198310 --cache-images=false --memory=2048 --install-addons=false --wait=false --docker-env=FOO=BAR --docker-env=BAZ=BAT --docker-opt=debug --docker-opt=icc=true --alsologtostderr -v=5 --driver=docker  --container-runtime=docker
docker_test.go:51: (dbg) Done: out/minikube-linux-amd64 start -p docker-flags-198310 --cache-images=false --memory=2048 --install-addons=false --wait=false --docker-env=FOO=BAR --docker-env=BAZ=BAT --docker-opt=debug --docker-opt=icc=true --alsologtostderr -v=5 --driver=docker  --container-runtime=docker: (34.705521108s)
docker_test.go:56: (dbg) Run:  out/minikube-linux-amd64 -p docker-flags-198310 ssh "sudo systemctl show docker --property=Environment --no-pager"
docker_test.go:67: (dbg) Run:  out/minikube-linux-amd64 -p docker-flags-198310 ssh "sudo systemctl show docker --property=ExecStart --no-pager"
helpers_test.go:175: Cleaning up "docker-flags-198310" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p docker-flags-198310
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p docker-flags-198310: (2.209809305s)
--- PASS: TestDockerFlags (37.60s)

TestForceSystemdFlag (37.14s)

=== RUN   TestForceSystemdFlag
=== PAUSE TestForceSystemdFlag

=== CONT  TestForceSystemdFlag
docker_test.go:91: (dbg) Run:  out/minikube-linux-amd64 start -p force-systemd-flag-286093 --memory=2048 --force-systemd --alsologtostderr -v=5 --driver=docker  --container-runtime=docker
docker_test.go:91: (dbg) Done: out/minikube-linux-amd64 start -p force-systemd-flag-286093 --memory=2048 --force-systemd --alsologtostderr -v=5 --driver=docker  --container-runtime=docker: (34.438730877s)
docker_test.go:110: (dbg) Run:  out/minikube-linux-amd64 -p force-systemd-flag-286093 ssh "docker info --format {{.CgroupDriver}}"
helpers_test.go:175: Cleaning up "force-systemd-flag-286093" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p force-systemd-flag-286093
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p force-systemd-flag-286093: (2.33297458s)
--- PASS: TestForceSystemdFlag (37.14s)

TestForceSystemdEnv (29.21s)

=== RUN   TestForceSystemdEnv
=== PAUSE TestForceSystemdEnv

=== CONT  TestForceSystemdEnv
docker_test.go:155: (dbg) Run:  out/minikube-linux-amd64 start -p force-systemd-env-108643 --memory=2048 --alsologtostderr -v=5 --driver=docker  --container-runtime=docker
docker_test.go:155: (dbg) Done: out/minikube-linux-amd64 start -p force-systemd-env-108643 --memory=2048 --alsologtostderr -v=5 --driver=docker  --container-runtime=docker: (26.874684141s)
docker_test.go:110: (dbg) Run:  out/minikube-linux-amd64 -p force-systemd-env-108643 ssh "docker info --format {{.CgroupDriver}}"
helpers_test.go:175: Cleaning up "force-systemd-env-108643" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p force-systemd-env-108643
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p force-systemd-env-108643: (2.052814176s)
--- PASS: TestForceSystemdEnv (29.21s)

TestKVMDriverInstallOrUpdate (9.68s)

=== RUN   TestKVMDriverInstallOrUpdate
=== PAUSE TestKVMDriverInstallOrUpdate

=== CONT  TestKVMDriverInstallOrUpdate
--- PASS: TestKVMDriverInstallOrUpdate (9.68s)

TestErrorSpam/setup (23.44s)

=== RUN   TestErrorSpam/setup
error_spam_test.go:81: (dbg) Run:  out/minikube-linux-amd64 start -p nospam-854700 -n=1 --memory=2250 --wait=false --log_dir=/tmp/nospam-854700 --driver=docker  --container-runtime=docker
error_spam_test.go:81: (dbg) Done: out/minikube-linux-amd64 start -p nospam-854700 -n=1 --memory=2250 --wait=false --log_dir=/tmp/nospam-854700 --driver=docker  --container-runtime=docker: (23.437610528s)
--- PASS: TestErrorSpam/setup (23.44s)

TestErrorSpam/start (0.54s)

=== RUN   TestErrorSpam/start
error_spam_test.go:216: Cleaning up 1 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-854700 --log_dir /tmp/nospam-854700 start --dry-run
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-854700 --log_dir /tmp/nospam-854700 start --dry-run
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-amd64 -p nospam-854700 --log_dir /tmp/nospam-854700 start --dry-run
--- PASS: TestErrorSpam/start (0.54s)

TestErrorSpam/status (0.81s)

=== RUN   TestErrorSpam/status
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-854700 --log_dir /tmp/nospam-854700 status
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-854700 --log_dir /tmp/nospam-854700 status
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-amd64 -p nospam-854700 --log_dir /tmp/nospam-854700 status
--- PASS: TestErrorSpam/status (0.81s)

TestErrorSpam/pause (1.1s)

=== RUN   TestErrorSpam/pause
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-854700 --log_dir /tmp/nospam-854700 pause
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-854700 --log_dir /tmp/nospam-854700 pause
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-amd64 -p nospam-854700 --log_dir /tmp/nospam-854700 pause
--- PASS: TestErrorSpam/pause (1.10s)

TestErrorSpam/unpause (1.26s)

=== RUN   TestErrorSpam/unpause
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-854700 --log_dir /tmp/nospam-854700 unpause
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-854700 --log_dir /tmp/nospam-854700 unpause
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-amd64 -p nospam-854700 --log_dir /tmp/nospam-854700 unpause
--- PASS: TestErrorSpam/unpause (1.26s)

TestErrorSpam/stop (10.82s)

=== RUN   TestErrorSpam/stop
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-854700 --log_dir /tmp/nospam-854700 stop
error_spam_test.go:159: (dbg) Done: out/minikube-linux-amd64 -p nospam-854700 --log_dir /tmp/nospam-854700 stop: (10.646992633s)
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-854700 --log_dir /tmp/nospam-854700 stop
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-amd64 -p nospam-854700 --log_dir /tmp/nospam-854700 stop
--- PASS: TestErrorSpam/stop (10.82s)

TestFunctional/serial/CopySyncFile (0s)

=== RUN   TestFunctional/serial/CopySyncFile
functional_test.go:1855: local sync path: /home/jenkins/minikube-integration/19649-105988/.minikube/files/etc/test/nested/copy/112842/hosts
--- PASS: TestFunctional/serial/CopySyncFile (0.00s)

TestFunctional/serial/StartWithProxy (33.9s)

=== RUN   TestFunctional/serial/StartWithProxy
functional_test.go:2234: (dbg) Run:  out/minikube-linux-amd64 start -p functional-623374 --memory=4000 --apiserver-port=8441 --wait=all --driver=docker  --container-runtime=docker
functional_test.go:2234: (dbg) Done: out/minikube-linux-amd64 start -p functional-623374 --memory=4000 --apiserver-port=8441 --wait=all --driver=docker  --container-runtime=docker: (33.900288833s)
--- PASS: TestFunctional/serial/StartWithProxy (33.90s)

TestFunctional/serial/AuditLog (0s)

=== RUN   TestFunctional/serial/AuditLog
--- PASS: TestFunctional/serial/AuditLog (0.00s)

TestFunctional/serial/SoftStart (32.41s)

=== RUN   TestFunctional/serial/SoftStart
functional_test.go:659: (dbg) Run:  out/minikube-linux-amd64 start -p functional-623374 --alsologtostderr -v=8
functional_test.go:659: (dbg) Done: out/minikube-linux-amd64 start -p functional-623374 --alsologtostderr -v=8: (32.408586811s)
functional_test.go:663: soft start took 32.409399153s for "functional-623374" cluster.
--- PASS: TestFunctional/serial/SoftStart (32.41s)

TestFunctional/serial/KubeContext (0.04s)

=== RUN   TestFunctional/serial/KubeContext
functional_test.go:681: (dbg) Run:  kubectl config current-context
--- PASS: TestFunctional/serial/KubeContext (0.04s)

TestFunctional/serial/KubectlGetPods (0.07s)

=== RUN   TestFunctional/serial/KubectlGetPods
functional_test.go:696: (dbg) Run:  kubectl --context functional-623374 get po -A
--- PASS: TestFunctional/serial/KubectlGetPods (0.07s)

TestFunctional/serial/CacheCmd/cache/add_remote (2.42s)

=== RUN   TestFunctional/serial/CacheCmd/cache/add_remote
functional_test.go:1049: (dbg) Run:  out/minikube-linux-amd64 -p functional-623374 cache add registry.k8s.io/pause:3.1
functional_test.go:1049: (dbg) Run:  out/minikube-linux-amd64 -p functional-623374 cache add registry.k8s.io/pause:3.3
functional_test.go:1049: (dbg) Run:  out/minikube-linux-amd64 -p functional-623374 cache add registry.k8s.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/add_remote (2.42s)

TestFunctional/serial/CacheCmd/cache/add_local (1.84s)

=== RUN   TestFunctional/serial/CacheCmd/cache/add_local
functional_test.go:1077: (dbg) Run:  docker build -t minikube-local-cache-test:functional-623374 /tmp/TestFunctionalserialCacheCmdcacheadd_local3624108532/001
functional_test.go:1089: (dbg) Run:  out/minikube-linux-amd64 -p functional-623374 cache add minikube-local-cache-test:functional-623374
functional_test.go:1089: (dbg) Done: out/minikube-linux-amd64 -p functional-623374 cache add minikube-local-cache-test:functional-623374: (1.519059423s)
functional_test.go:1094: (dbg) Run:  out/minikube-linux-amd64 -p functional-623374 cache delete minikube-local-cache-test:functional-623374
functional_test.go:1083: (dbg) Run:  docker rmi minikube-local-cache-test:functional-623374
--- PASS: TestFunctional/serial/CacheCmd/cache/add_local (1.84s)

TestFunctional/serial/CacheCmd/cache/CacheDelete (0.05s)

=== RUN   TestFunctional/serial/CacheCmd/cache/CacheDelete
functional_test.go:1102: (dbg) Run:  out/minikube-linux-amd64 cache delete registry.k8s.io/pause:3.3
--- PASS: TestFunctional/serial/CacheCmd/cache/CacheDelete (0.05s)

TestFunctional/serial/CacheCmd/cache/list (0.05s)

=== RUN   TestFunctional/serial/CacheCmd/cache/list
functional_test.go:1110: (dbg) Run:  out/minikube-linux-amd64 cache list
--- PASS: TestFunctional/serial/CacheCmd/cache/list (0.05s)

TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node (0.26s)

=== RUN   TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node
functional_test.go:1124: (dbg) Run:  out/minikube-linux-amd64 -p functional-623374 ssh sudo crictl images
--- PASS: TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node (0.26s)

TestFunctional/serial/CacheCmd/cache/cache_reload (1.23s)

=== RUN   TestFunctional/serial/CacheCmd/cache/cache_reload
functional_test.go:1147: (dbg) Run:  out/minikube-linux-amd64 -p functional-623374 ssh sudo docker rmi registry.k8s.io/pause:latest
functional_test.go:1153: (dbg) Run:  out/minikube-linux-amd64 -p functional-623374 ssh sudo crictl inspecti registry.k8s.io/pause:latest
functional_test.go:1153: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-623374 ssh sudo crictl inspecti registry.k8s.io/pause:latest: exit status 1 (256.129907ms)

-- stdout --
	FATA[0000] no such image "registry.k8s.io/pause:latest" present 

-- /stdout --
** stderr ** 
	ssh: Process exited with status 1

** /stderr **
functional_test.go:1158: (dbg) Run:  out/minikube-linux-amd64 -p functional-623374 cache reload
functional_test.go:1163: (dbg) Run:  out/minikube-linux-amd64 -p functional-623374 ssh sudo crictl inspecti registry.k8s.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/cache_reload (1.23s)

TestFunctional/serial/CacheCmd/cache/delete (0.09s)

=== RUN   TestFunctional/serial/CacheCmd/cache/delete
functional_test.go:1172: (dbg) Run:  out/minikube-linux-amd64 cache delete registry.k8s.io/pause:3.1
functional_test.go:1172: (dbg) Run:  out/minikube-linux-amd64 cache delete registry.k8s.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/delete (0.09s)

TestFunctional/serial/MinikubeKubectlCmd (0.1s)

=== RUN   TestFunctional/serial/MinikubeKubectlCmd
functional_test.go:716: (dbg) Run:  out/minikube-linux-amd64 -p functional-623374 kubectl -- --context functional-623374 get pods
--- PASS: TestFunctional/serial/MinikubeKubectlCmd (0.10s)

TestFunctional/serial/MinikubeKubectlCmdDirectly (0.1s)

=== RUN   TestFunctional/serial/MinikubeKubectlCmdDirectly
functional_test.go:741: (dbg) Run:  out/kubectl --context functional-623374 get pods
--- PASS: TestFunctional/serial/MinikubeKubectlCmdDirectly (0.10s)

TestFunctional/serial/ExtraConfig (39.31s)

=== RUN   TestFunctional/serial/ExtraConfig
functional_test.go:757: (dbg) Run:  out/minikube-linux-amd64 start -p functional-623374 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all
functional_test.go:757: (dbg) Done: out/minikube-linux-amd64 start -p functional-623374 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all: (39.307152104s)
functional_test.go:761: restart took 39.307286644s for "functional-623374" cluster.
--- PASS: TestFunctional/serial/ExtraConfig (39.31s)

TestFunctional/serial/ComponentHealth (0.06s)

=== RUN   TestFunctional/serial/ComponentHealth
functional_test.go:810: (dbg) Run:  kubectl --context functional-623374 get po -l tier=control-plane -n kube-system -o=json
functional_test.go:825: etcd phase: Running
functional_test.go:835: etcd status: Ready
functional_test.go:825: kube-apiserver phase: Running
functional_test.go:835: kube-apiserver status: Ready
functional_test.go:825: kube-controller-manager phase: Running
functional_test.go:835: kube-controller-manager status: Ready
functional_test.go:825: kube-scheduler phase: Running
functional_test.go:835: kube-scheduler status: Ready
--- PASS: TestFunctional/serial/ComponentHealth (0.06s)

TestFunctional/serial/LogsCmd (0.93s)

=== RUN   TestFunctional/serial/LogsCmd
functional_test.go:1236: (dbg) Run:  out/minikube-linux-amd64 -p functional-623374 logs
--- PASS: TestFunctional/serial/LogsCmd (0.93s)

TestFunctional/serial/LogsFileCmd (0.93s)

=== RUN   TestFunctional/serial/LogsFileCmd
functional_test.go:1250: (dbg) Run:  out/minikube-linux-amd64 -p functional-623374 logs --file /tmp/TestFunctionalserialLogsFileCmd2110703091/001/logs.txt
--- PASS: TestFunctional/serial/LogsFileCmd (0.93s)

TestFunctional/serial/InvalidService (4.42s)

=== RUN   TestFunctional/serial/InvalidService
functional_test.go:2321: (dbg) Run:  kubectl --context functional-623374 apply -f testdata/invalidsvc.yaml
functional_test.go:2335: (dbg) Run:  out/minikube-linux-amd64 service invalid-svc -p functional-623374
functional_test.go:2335: (dbg) Non-zero exit: out/minikube-linux-amd64 service invalid-svc -p functional-623374: exit status 115 (300.003809ms)

-- stdout --
	|-----------|-------------|-------------|---------------------------|
	| NAMESPACE |    NAME     | TARGET PORT |            URL            |
	|-----------|-------------|-------------|---------------------------|
	| default   | invalid-svc |          80 | http://192.168.49.2:30682 |
	|-----------|-------------|-------------|---------------------------|

-- /stdout --
** stderr ** 
	X Exiting due to SVC_UNREACHABLE: service not available: no running pod for service invalid-svc found
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_service_96b204199e3191fa1740d4430b018a3c8028d52d_0.log                 │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
functional_test.go:2327: (dbg) Run:  kubectl --context functional-623374 delete -f testdata/invalidsvc.yaml
--- PASS: TestFunctional/serial/InvalidService (4.42s)

TestFunctional/parallel/ConfigCmd (0.41s)

=== RUN   TestFunctional/parallel/ConfigCmd
=== PAUSE TestFunctional/parallel/ConfigCmd

=== CONT  TestFunctional/parallel/ConfigCmd
functional_test.go:1199: (dbg) Run:  out/minikube-linux-amd64 -p functional-623374 config unset cpus
functional_test.go:1199: (dbg) Run:  out/minikube-linux-amd64 -p functional-623374 config get cpus
functional_test.go:1199: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-623374 config get cpus: exit status 14 (102.122429ms)

** stderr ** 
	Error: specified key could not be found in config

** /stderr **
functional_test.go:1199: (dbg) Run:  out/minikube-linux-amd64 -p functional-623374 config set cpus 2
functional_test.go:1199: (dbg) Run:  out/minikube-linux-amd64 -p functional-623374 config get cpus
functional_test.go:1199: (dbg) Run:  out/minikube-linux-amd64 -p functional-623374 config unset cpus
functional_test.go:1199: (dbg) Run:  out/minikube-linux-amd64 -p functional-623374 config get cpus
functional_test.go:1199: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-623374 config get cpus: exit status 14 (58.567397ms)

** stderr ** 
	Error: specified key could not be found in config

** /stderr **
--- PASS: TestFunctional/parallel/ConfigCmd (0.41s)
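The log above exercises `minikube config set/unset/get` round-trips, where `get` on an unset key fails with exit status 14. The semantics can be sketched locally with a flat key=value file standing in for minikube's config store; the file format and helper names here are illustrative, not minikube's actual implementation:

```shell
#!/bin/sh
# Stand-in for the profile config store: a flat key=value file.
cfg=$(mktemp)

config_set()   { printf '%s=%s\n' "$1" "$2" >>"$cfg"; }
config_unset() { grep -v "^$1=" "$cfg" >"$cfg.tmp" || true; mv "$cfg.tmp" "$cfg"; }
# Like `config get`, fails (exit 14) when the key is absent.
config_get()   {
  val=$(grep "^$1=" "$cfg" | cut -d= -f2)
  [ -n "$val" ] || { echo "Error: specified key could not be found in config" >&2; return 14; }
  echo "$val"
}

config_get cpus || echo "exit=$?"   # not set yet -> exit=14
config_set cpus 2
config_get cpus                     # -> 2
config_unset cpus
config_get cpus || echo "exit=$?"   # unset again -> exit=14
```

This mirrors the test's sequence: `get` fails both before any `set` and after a subsequent `unset`, while succeeding in between.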

TestFunctional/parallel/DashboardCmd (10.86s)

=== RUN   TestFunctional/parallel/DashboardCmd
=== PAUSE TestFunctional/parallel/DashboardCmd

=== CONT  TestFunctional/parallel/DashboardCmd
functional_test.go:905: (dbg) daemon: [out/minikube-linux-amd64 dashboard --url --port 36195 -p functional-623374 --alsologtostderr -v=1]
functional_test.go:910: (dbg) stopping [out/minikube-linux-amd64 dashboard --url --port 36195 -p functional-623374 --alsologtostderr -v=1] ...
helpers_test.go:508: unable to kill pid 165494: os: process already finished
--- PASS: TestFunctional/parallel/DashboardCmd (10.86s)

TestFunctional/parallel/DryRun (0.34s)

=== RUN   TestFunctional/parallel/DryRun
=== PAUSE TestFunctional/parallel/DryRun

=== CONT  TestFunctional/parallel/DryRun
functional_test.go:974: (dbg) Run:  out/minikube-linux-amd64 start -p functional-623374 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=docker
functional_test.go:974: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p functional-623374 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=docker: exit status 23 (150.687314ms)

-- stdout --
	* [functional-623374] minikube v1.34.0 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=19649
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/19649-105988/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/19649-105988/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the docker driver based on existing profile
	
	

-- /stdout --
** stderr ** 
	I0916 17:31:48.673197  164711 out.go:345] Setting OutFile to fd 1 ...
	I0916 17:31:48.673586  164711 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0916 17:31:48.673599  164711 out.go:358] Setting ErrFile to fd 2...
	I0916 17:31:48.673607  164711 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0916 17:31:48.674101  164711 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19649-105988/.minikube/bin
	I0916 17:31:48.674940  164711 out.go:352] Setting JSON to false
	I0916 17:31:48.676624  164711 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-5","uptime":4449,"bootTime":1726503460,"procs":407,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1068-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0916 17:31:48.676824  164711 start.go:139] virtualization: kvm guest
	I0916 17:31:48.679426  164711 out.go:177] * [functional-623374] minikube v1.34.0 on Ubuntu 20.04 (kvm/amd64)
	I0916 17:31:48.680758  164711 notify.go:220] Checking for updates...
	I0916 17:31:48.680783  164711 out.go:177]   - MINIKUBE_LOCATION=19649
	I0916 17:31:48.682037  164711 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0916 17:31:48.683310  164711 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19649-105988/kubeconfig
	I0916 17:31:48.685086  164711 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19649-105988/.minikube
	I0916 17:31:48.686329  164711 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0916 17:31:48.687586  164711 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0916 17:31:48.689039  164711 config.go:182] Loaded profile config "functional-623374": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I0916 17:31:48.689498  164711 driver.go:394] Setting default libvirt URI to qemu:///system
	I0916 17:31:48.712175  164711 docker.go:123] docker version: linux-27.2.1:Docker Engine - Community
	I0916 17:31:48.712275  164711 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0916 17:31:48.761674  164711 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:2 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:35 OomKillDisable:true NGoroutines:54 SystemTime:2024-09-16 17:31:48.752389629 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1068-gcp OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:x86
_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33647943680 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-5 Labels:[] ExperimentalBuild:false ServerVersion:27.2.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:7f7fdf5fed64eb6a7caf99b3e12efcf9d60e311c Expected:7f7fdf5fed64eb6a7caf99b3e12efcf9d60e311c} RuncCommit:{ID:v1.1.14-0-g2c9f560 Expected:v1.1.14-0-g2c9f560} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErr
ors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.16.2] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.29.2] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I0916 17:31:48.761782  164711 docker.go:318] overlay module found
	I0916 17:31:48.763539  164711 out.go:177] * Using the docker driver based on existing profile
	I0916 17:31:48.764811  164711 start.go:297] selected driver: docker
	I0916 17:31:48.764827  164711 start.go:901] validating driver "docker" against &{Name:functional-623374 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726481311-19649@sha256:b5dfdcf7ad8b49233db09f1c58aaf52f6522fde64cf16c939b3fc45365d24cdc Memory:4000 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:functional-623374 Namespace:default APIServerHAVIP: APIServerNa
me:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker Mou
ntIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0916 17:31:48.764942  164711 start.go:912] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0916 17:31:48.767082  164711 out.go:201] 
	W0916 17:31:48.768456  164711 out.go:270] X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	I0916 17:31:48.769602  164711 out.go:201] 

** /stderr **
functional_test.go:991: (dbg) Run:  out/minikube-linux-amd64 start -p functional-623374 --dry-run --alsologtostderr -v=1 --driver=docker  --container-runtime=docker
--- PASS: TestFunctional/parallel/DryRun (0.34s)
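The dry-run above exits with status 23 because the requested memory is below minikube's floor; note the mixed units in the message (250MiB requested vs. a 1800MB minimum). Converting both to bytes makes the gap concrete (whether minikube compares in MiB or MB internally is not shown in this log):

```shell
# 250 MiB (binary) vs. 1800 MB (decimal), both expressed in bytes.
req_bytes=$((250 * 1024 * 1024))    # 250 MiB = 262144000 bytes
min_bytes=$((1800 * 1000 * 1000))   # 1800 MB = 1800000000 bytes
if [ "$req_bytes" -lt "$min_bytes" ]; then
  echo "insufficient: requested $req_bytes bytes < minimum $min_bytes bytes"
fi
```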

TestFunctional/parallel/InternationalLanguage (0.16s)

=== RUN   TestFunctional/parallel/InternationalLanguage
=== PAUSE TestFunctional/parallel/InternationalLanguage

=== CONT  TestFunctional/parallel/InternationalLanguage
functional_test.go:1020: (dbg) Run:  out/minikube-linux-amd64 start -p functional-623374 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=docker
functional_test.go:1020: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p functional-623374 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=docker: exit status 23 (163.267437ms)

-- stdout --
	* [functional-623374] minikube v1.34.0 sur Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=19649
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/19649-105988/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/19649-105988/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Utilisation du pilote docker basé sur le profil existant
	
	

-- /stdout --
** stderr ** 
	I0916 17:31:48.506223  164531 out.go:345] Setting OutFile to fd 1 ...
	I0916 17:31:48.506373  164531 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0916 17:31:48.506386  164531 out.go:358] Setting ErrFile to fd 2...
	I0916 17:31:48.506392  164531 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0916 17:31:48.506648  164531 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19649-105988/.minikube/bin
	I0916 17:31:48.507190  164531 out.go:352] Setting JSON to false
	I0916 17:31:48.508450  164531 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-5","uptime":4448,"bootTime":1726503460,"procs":407,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1068-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0916 17:31:48.508516  164531 start.go:139] virtualization: kvm guest
	I0916 17:31:48.510948  164531 out.go:177] * [functional-623374] minikube v1.34.0 sur Ubuntu 20.04 (kvm/amd64)
	I0916 17:31:48.512368  164531 out.go:177]   - MINIKUBE_LOCATION=19649
	I0916 17:31:48.512394  164531 notify.go:220] Checking for updates...
	I0916 17:31:48.515026  164531 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0916 17:31:48.516291  164531 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19649-105988/kubeconfig
	I0916 17:31:48.517549  164531 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19649-105988/.minikube
	I0916 17:31:48.518835  164531 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0916 17:31:48.520189  164531 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0916 17:31:48.522105  164531 config.go:182] Loaded profile config "functional-623374": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I0916 17:31:48.522842  164531 driver.go:394] Setting default libvirt URI to qemu:///system
	I0916 17:31:48.545946  164531 docker.go:123] docker version: linux-27.2.1:Docker Engine - Community
	I0916 17:31:48.546034  164531 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0916 17:31:48.608355  164531 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:2 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:35 OomKillDisable:true NGoroutines:54 SystemTime:2024-09-16 17:31:48.596657731 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1068-gcp OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:x86
_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33647943680 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-5 Labels:[] ExperimentalBuild:false ServerVersion:27.2.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:7f7fdf5fed64eb6a7caf99b3e12efcf9d60e311c Expected:7f7fdf5fed64eb6a7caf99b3e12efcf9d60e311c} RuncCommit:{ID:v1.1.14-0-g2c9f560 Expected:v1.1.14-0-g2c9f560} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErr
ors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.16.2] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.29.2] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I0916 17:31:48.608451  164531 docker.go:318] overlay module found
	I0916 17:31:48.610470  164531 out.go:177] * Utilisation du pilote docker basé sur le profil existant
	I0916 17:31:48.611872  164531 start.go:297] selected driver: docker
	I0916 17:31:48.611891  164531 start.go:901] validating driver "docker" against &{Name:functional-623374 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726481311-19649@sha256:b5dfdcf7ad8b49233db09f1c58aaf52f6522fde64cf16c939b3fc45365d24cdc Memory:4000 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:functional-623374 Namespace:default APIServerHAVIP: APIServerNa
me:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker Mou
ntIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0916 17:31:48.612025  164531 start.go:912] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0916 17:31:48.614570  164531 out.go:201] 
	W0916 17:31:48.615832  164531 out.go:270] X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	I0916 17:31:48.617373  164531 out.go:201] 

** /stderr **
--- PASS: TestFunctional/parallel/InternationalLanguage (0.16s)

TestFunctional/parallel/StatusCmd (0.9s)

=== RUN   TestFunctional/parallel/StatusCmd
=== PAUSE TestFunctional/parallel/StatusCmd

=== CONT  TestFunctional/parallel/StatusCmd
functional_test.go:854: (dbg) Run:  out/minikube-linux-amd64 -p functional-623374 status
functional_test.go:860: (dbg) Run:  out/minikube-linux-amd64 -p functional-623374 status -f host:{{.Host}},kublet:{{.Kubelet}},apiserver:{{.APIServer}},kubeconfig:{{.Kubeconfig}}
functional_test.go:872: (dbg) Run:  out/minikube-linux-amd64 -p functional-623374 status -o json
--- PASS: TestFunctional/parallel/StatusCmd (0.90s)

TestFunctional/parallel/ServiceCmdConnect (13.48s)

=== RUN   TestFunctional/parallel/ServiceCmdConnect
=== PAUSE TestFunctional/parallel/ServiceCmdConnect

=== CONT  TestFunctional/parallel/ServiceCmdConnect
functional_test.go:1629: (dbg) Run:  kubectl --context functional-623374 create deployment hello-node-connect --image=registry.k8s.io/echoserver:1.8
functional_test.go:1635: (dbg) Run:  kubectl --context functional-623374 expose deployment hello-node-connect --type=NodePort --port=8080
functional_test.go:1640: (dbg) TestFunctional/parallel/ServiceCmdConnect: waiting 10m0s for pods matching "app=hello-node-connect" in namespace "default" ...
helpers_test.go:344: "hello-node-connect-67bdd5bbb4-5f2c4" [012e5043-9892-4036-b369-56a6aa1f874e] Pending / Ready:ContainersNotReady (containers with unready status: [echoserver]) / ContainersReady:ContainersNotReady (containers with unready status: [echoserver])
helpers_test.go:344: "hello-node-connect-67bdd5bbb4-5f2c4" [012e5043-9892-4036-b369-56a6aa1f874e] Running
functional_test.go:1640: (dbg) TestFunctional/parallel/ServiceCmdConnect: app=hello-node-connect healthy within 13.004222168s
functional_test.go:1649: (dbg) Run:  out/minikube-linux-amd64 -p functional-623374 service hello-node-connect --url
functional_test.go:1655: found endpoint for hello-node-connect: http://192.168.49.2:32195
functional_test.go:1675: http://192.168.49.2:32195: success! body:

Hostname: hello-node-connect-67bdd5bbb4-5f2c4

Pod Information:
	-no pod information available-

Server values:
	server_version=nginx: 1.13.3 - lua: 10008

Request Information:
	client_address=10.244.0.1
	method=GET
	real path=/
	query=
	request_version=1.1
	request_uri=http://192.168.49.2:8080/

Request Headers:
	accept-encoding=gzip
	host=192.168.49.2:32195
	user-agent=Go-http-client/1.1

Request Body:
	-no body in request-

--- PASS: TestFunctional/parallel/ServiceCmdConnect (13.48s)
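The echoserver body above can be checked mechanically. This sketch reproduces a fragment of the logged response in a heredoc and extracts fields from it; in the live test the body would instead come from an HTTP GET against the `minikube service hello-node-connect --url` endpoint (http://192.168.49.2:32195 in this run):

```shell
# Fragment of the echoserver response logged above.
body=$(cat <<'EOF'
Hostname: hello-node-connect-67bdd5bbb4-5f2c4
Request Information:
  client_address=10.244.0.1
  method=GET
  request_uri=http://192.168.49.2:8080/
EOF
)
hostname=$(printf '%s\n' "$body" | sed -n 's/^Hostname: //p')
method=$(printf '%s\n' "$body" | sed -n 's/^[[:space:]]*method=//p')
echo "$hostname"   # hello-node-connect-67bdd5bbb4-5f2c4
echo "$method"     # GET
```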

TestFunctional/parallel/AddonsCmd (0.14s)

=== RUN   TestFunctional/parallel/AddonsCmd
=== PAUSE TestFunctional/parallel/AddonsCmd

=== CONT  TestFunctional/parallel/AddonsCmd
functional_test.go:1690: (dbg) Run:  out/minikube-linux-amd64 -p functional-623374 addons list
functional_test.go:1702: (dbg) Run:  out/minikube-linux-amd64 -p functional-623374 addons list -o json
--- PASS: TestFunctional/parallel/AddonsCmd (0.14s)

TestFunctional/parallel/PersistentVolumeClaim (34.38s)

=== RUN   TestFunctional/parallel/PersistentVolumeClaim
=== PAUSE TestFunctional/parallel/PersistentVolumeClaim

=== CONT  TestFunctional/parallel/PersistentVolumeClaim
functional_test_pvc_test.go:44: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 4m0s for pods matching "integration-test=storage-provisioner" in namespace "kube-system" ...
helpers_test.go:344: "storage-provisioner" [9e047fec-e335-4887-b741-3c748db8cec8] Running
functional_test_pvc_test.go:44: (dbg) TestFunctional/parallel/PersistentVolumeClaim: integration-test=storage-provisioner healthy within 6.003576263s
functional_test_pvc_test.go:49: (dbg) Run:  kubectl --context functional-623374 get storageclass -o=json
functional_test_pvc_test.go:69: (dbg) Run:  kubectl --context functional-623374 apply -f testdata/storage-provisioner/pvc.yaml
functional_test_pvc_test.go:76: (dbg) Run:  kubectl --context functional-623374 get pvc myclaim -o=json
functional_test_pvc_test.go:125: (dbg) Run:  kubectl --context functional-623374 apply -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 3m0s for pods matching "test=storage-provisioner" in namespace "default" ...
helpers_test.go:344: "sp-pod" [9a3b133f-d3ab-4fcb-801c-fc2ab314aed3] Pending
helpers_test.go:344: "sp-pod" [9a3b133f-d3ab-4fcb-801c-fc2ab314aed3] Pending / Ready:ContainersNotReady (containers with unready status: [myfrontend]) / ContainersReady:ContainersNotReady (containers with unready status: [myfrontend])
helpers_test.go:344: "sp-pod" [9a3b133f-d3ab-4fcb-801c-fc2ab314aed3] Running
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: test=storage-provisioner healthy within 13.003855619s
functional_test_pvc_test.go:100: (dbg) Run:  kubectl --context functional-623374 exec sp-pod -- touch /tmp/mount/foo
functional_test_pvc_test.go:106: (dbg) Run:  kubectl --context functional-623374 delete -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:106: (dbg) Done: kubectl --context functional-623374 delete -f testdata/storage-provisioner/pod.yaml: (1.498617869s)
functional_test_pvc_test.go:125: (dbg) Run:  kubectl --context functional-623374 apply -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 3m0s for pods matching "test=storage-provisioner" in namespace "default" ...
helpers_test.go:344: "sp-pod" [32a3c521-cc0f-4f68-8808-c78eab6915cb] Pending
helpers_test.go:344: "sp-pod" [32a3c521-cc0f-4f68-8808-c78eab6915cb] Pending / Ready:ContainersNotReady (containers with unready status: [myfrontend]) / ContainersReady:ContainersNotReady (containers with unready status: [myfrontend])
helpers_test.go:344: "sp-pod" [32a3c521-cc0f-4f68-8808-c78eab6915cb] Running
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: test=storage-provisioner healthy within 13.089807294s
functional_test_pvc_test.go:114: (dbg) Run:  kubectl --context functional-623374 exec sp-pod -- ls /tmp/mount
--- PASS: TestFunctional/parallel/PersistentVolumeClaim (34.38s)

TestFunctional/parallel/SSHCmd (0.61s)

=== RUN   TestFunctional/parallel/SSHCmd
=== PAUSE TestFunctional/parallel/SSHCmd

=== CONT  TestFunctional/parallel/SSHCmd
functional_test.go:1725: (dbg) Run:  out/minikube-linux-amd64 -p functional-623374 ssh "echo hello"
functional_test.go:1742: (dbg) Run:  out/minikube-linux-amd64 -p functional-623374 ssh "cat /etc/hostname"
--- PASS: TestFunctional/parallel/SSHCmd (0.61s)

TestFunctional/parallel/CpCmd (1.63s)

=== RUN   TestFunctional/parallel/CpCmd
=== PAUSE TestFunctional/parallel/CpCmd

=== CONT  TestFunctional/parallel/CpCmd
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p functional-623374 cp testdata/cp-test.txt /home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p functional-623374 ssh -n functional-623374 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p functional-623374 cp functional-623374:/home/docker/cp-test.txt /tmp/TestFunctionalparallelCpCmd750056873/001/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p functional-623374 ssh -n functional-623374 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p functional-623374 cp testdata/cp-test.txt /tmp/does/not/exist/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p functional-623374 ssh -n functional-623374 "sudo cat /tmp/does/not/exist/cp-test.txt"
--- PASS: TestFunctional/parallel/CpCmd (1.63s)

TestFunctional/parallel/MySQL (24.28s)

=== RUN   TestFunctional/parallel/MySQL
=== PAUSE TestFunctional/parallel/MySQL

=== CONT  TestFunctional/parallel/MySQL
functional_test.go:1793: (dbg) Run:  kubectl --context functional-623374 replace --force -f testdata/mysql.yaml
functional_test.go:1799: (dbg) TestFunctional/parallel/MySQL: waiting 10m0s for pods matching "app=mysql" in namespace "default" ...
helpers_test.go:344: "mysql-6cdb49bbb-pp7br" [885d3d90-b536-4a49-a484-1f95e1d056bc] Pending / Ready:ContainersNotReady (containers with unready status: [mysql]) / ContainersReady:ContainersNotReady (containers with unready status: [mysql])
helpers_test.go:344: "mysql-6cdb49bbb-pp7br" [885d3d90-b536-4a49-a484-1f95e1d056bc] Running
functional_test.go:1799: (dbg) TestFunctional/parallel/MySQL: app=mysql healthy within 22.00363504s
functional_test.go:1807: (dbg) Run:  kubectl --context functional-623374 exec mysql-6cdb49bbb-pp7br -- mysql -ppassword -e "show databases;"
functional_test.go:1807: (dbg) Non-zero exit: kubectl --context functional-623374 exec mysql-6cdb49bbb-pp7br -- mysql -ppassword -e "show databases;": exit status 1 (109.424623ms)

** stderr ** 
	mysql: [Warning] Using a password on the command line interface can be insecure.
	ERROR 2002 (HY000): Can't connect to local MySQL server through socket '/var/run/mysqld/mysqld.sock' (2)
	command terminated with exit code 1

** /stderr **
functional_test.go:1807: (dbg) Run:  kubectl --context functional-623374 exec mysql-6cdb49bbb-pp7br -- mysql -ppassword -e "show databases;"
functional_test.go:1807: (dbg) Non-zero exit: kubectl --context functional-623374 exec mysql-6cdb49bbb-pp7br -- mysql -ppassword -e "show databases;": exit status 1 (103.853375ms)

** stderr ** 
	mysql: [Warning] Using a password on the command line interface can be insecure.
	ERROR 2002 (HY000): Can't connect to local MySQL server through socket '/var/run/mysqld/mysqld.sock' (2)
	command terminated with exit code 1

** /stderr **
functional_test.go:1807: (dbg) Run:  kubectl --context functional-623374 exec mysql-6cdb49bbb-pp7br -- mysql -ppassword -e "show databases;"
--- PASS: TestFunctional/parallel/MySQL (24.28s)

TestFunctional/parallel/FileSync (0.24s)

=== RUN   TestFunctional/parallel/FileSync
=== PAUSE TestFunctional/parallel/FileSync

=== CONT  TestFunctional/parallel/FileSync
functional_test.go:1929: Checking for existence of /etc/test/nested/copy/112842/hosts within VM
functional_test.go:1931: (dbg) Run:  out/minikube-linux-amd64 -p functional-623374 ssh "sudo cat /etc/test/nested/copy/112842/hosts"
functional_test.go:1936: file sync test content: Test file for checking file sync process
--- PASS: TestFunctional/parallel/FileSync (0.24s)

TestFunctional/parallel/CertSync (1.61s)

=== RUN   TestFunctional/parallel/CertSync
=== PAUSE TestFunctional/parallel/CertSync

=== CONT  TestFunctional/parallel/CertSync
functional_test.go:1972: Checking for existence of /etc/ssl/certs/112842.pem within VM
functional_test.go:1973: (dbg) Run:  out/minikube-linux-amd64 -p functional-623374 ssh "sudo cat /etc/ssl/certs/112842.pem"
2024/09/16 17:31:59 [DEBUG] GET http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/
functional_test.go:1972: Checking for existence of /usr/share/ca-certificates/112842.pem within VM
functional_test.go:1973: (dbg) Run:  out/minikube-linux-amd64 -p functional-623374 ssh "sudo cat /usr/share/ca-certificates/112842.pem"
functional_test.go:1972: Checking for existence of /etc/ssl/certs/51391683.0 within VM
functional_test.go:1973: (dbg) Run:  out/minikube-linux-amd64 -p functional-623374 ssh "sudo cat /etc/ssl/certs/51391683.0"
functional_test.go:1999: Checking for existence of /etc/ssl/certs/1128422.pem within VM
functional_test.go:2000: (dbg) Run:  out/minikube-linux-amd64 -p functional-623374 ssh "sudo cat /etc/ssl/certs/1128422.pem"
functional_test.go:1999: Checking for existence of /usr/share/ca-certificates/1128422.pem within VM
functional_test.go:2000: (dbg) Run:  out/minikube-linux-amd64 -p functional-623374 ssh "sudo cat /usr/share/ca-certificates/1128422.pem"
functional_test.go:1999: Checking for existence of /etc/ssl/certs/3ec20f2e.0 within VM
functional_test.go:2000: (dbg) Run:  out/minikube-linux-amd64 -p functional-623374 ssh "sudo cat /etc/ssl/certs/3ec20f2e.0"
--- PASS: TestFunctional/parallel/CertSync (1.61s)

TestFunctional/parallel/NodeLabels (0.06s)

=== RUN   TestFunctional/parallel/NodeLabels
=== PAUSE TestFunctional/parallel/NodeLabels

=== CONT  TestFunctional/parallel/NodeLabels
functional_test.go:219: (dbg) Run:  kubectl --context functional-623374 get nodes --output=go-template "--template='{{range $k, $v := (index .items 0).metadata.labels}}{{$k}} {{end}}'"
--- PASS: TestFunctional/parallel/NodeLabels (0.06s)

TestFunctional/parallel/NonActiveRuntimeDisabled (0.22s)

=== RUN   TestFunctional/parallel/NonActiveRuntimeDisabled
=== PAUSE TestFunctional/parallel/NonActiveRuntimeDisabled

=== CONT  TestFunctional/parallel/NonActiveRuntimeDisabled
functional_test.go:2027: (dbg) Run:  out/minikube-linux-amd64 -p functional-623374 ssh "sudo systemctl is-active crio"
functional_test.go:2027: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-623374 ssh "sudo systemctl is-active crio": exit status 1 (224.259738ms)

-- stdout --
	inactive

-- /stdout --
** stderr ** 
	ssh: Process exited with status 3

** /stderr **
--- PASS: TestFunctional/parallel/NonActiveRuntimeDisabled (0.22s)

TestFunctional/parallel/License (0.57s)

=== RUN   TestFunctional/parallel/License
=== PAUSE TestFunctional/parallel/License

=== CONT  TestFunctional/parallel/License
functional_test.go:2288: (dbg) Run:  out/minikube-linux-amd64 license
--- PASS: TestFunctional/parallel/License (0.57s)

TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel (0.47s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel
functional_test_tunnel_test.go:154: (dbg) daemon: [out/minikube-linux-amd64 -p functional-623374 tunnel --alsologtostderr]
functional_test_tunnel_test.go:154: (dbg) daemon: [out/minikube-linux-amd64 -p functional-623374 tunnel --alsologtostderr]
functional_test_tunnel_test.go:194: (dbg) stopping [out/minikube-linux-amd64 -p functional-623374 tunnel --alsologtostderr] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
functional_test_tunnel_test.go:194: (dbg) stopping [out/minikube-linux-amd64 -p functional-623374 tunnel --alsologtostderr] ...
helpers_test.go:508: unable to kill pid 161536: os: process already finished
helpers_test.go:502: unable to terminate pid 161156: os: process already finished
--- PASS: TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel (0.47s)

TestFunctional/parallel/TunnelCmd/serial/StartTunnel (0s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/StartTunnel
functional_test_tunnel_test.go:129: (dbg) daemon: [out/minikube-linux-amd64 -p functional-623374 tunnel --alsologtostderr]
--- PASS: TestFunctional/parallel/TunnelCmd/serial/StartTunnel (0.00s)

TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup (11.22s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup
functional_test_tunnel_test.go:212: (dbg) Run:  kubectl --context functional-623374 apply -f testdata/testsvc.yaml
functional_test_tunnel_test.go:216: (dbg) TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup: waiting 4m0s for pods matching "run=nginx-svc" in namespace "default" ...
helpers_test.go:344: "nginx-svc" [b3303752-620d-4445-8012-bfebebb32996] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
helpers_test.go:344: "nginx-svc" [b3303752-620d-4445-8012-bfebebb32996] Running
functional_test_tunnel_test.go:216: (dbg) TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup: run=nginx-svc healthy within 11.004010463s
--- PASS: TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup (11.22s)

TestFunctional/parallel/ServiceCmd/DeployApp (13.14s)

=== RUN   TestFunctional/parallel/ServiceCmd/DeployApp
functional_test.go:1439: (dbg) Run:  kubectl --context functional-623374 create deployment hello-node --image=registry.k8s.io/echoserver:1.8
functional_test.go:1445: (dbg) Run:  kubectl --context functional-623374 expose deployment hello-node --type=NodePort --port=8080
functional_test.go:1450: (dbg) TestFunctional/parallel/ServiceCmd/DeployApp: waiting 10m0s for pods matching "app=hello-node" in namespace "default" ...
helpers_test.go:344: "hello-node-6b9f76b5c7-cqxpf" [a2f710f5-2c2b-4f28-9ec0-132650a3b733] Pending / Ready:ContainersNotReady (containers with unready status: [echoserver]) / ContainersReady:ContainersNotReady (containers with unready status: [echoserver])
helpers_test.go:344: "hello-node-6b9f76b5c7-cqxpf" [a2f710f5-2c2b-4f28-9ec0-132650a3b733] Running
functional_test.go:1450: (dbg) TestFunctional/parallel/ServiceCmd/DeployApp: app=hello-node healthy within 13.004926553s
--- PASS: TestFunctional/parallel/ServiceCmd/DeployApp (13.14s)

TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP (0.06s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP
functional_test_tunnel_test.go:234: (dbg) Run:  kubectl --context functional-623374 get svc nginx-svc -o jsonpath={.status.loadBalancer.ingress[0].ip}
--- PASS: TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP (0.06s)

TestFunctional/parallel/TunnelCmd/serial/AccessDirect (0s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/AccessDirect
functional_test_tunnel_test.go:299: tunnel at http://10.104.126.148 is working!
--- PASS: TestFunctional/parallel/TunnelCmd/serial/AccessDirect (0.00s)

TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel (0.11s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel
functional_test_tunnel_test.go:434: (dbg) stopping [out/minikube-linux-amd64 -p functional-623374 tunnel --alsologtostderr] ...
--- PASS: TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel (0.11s)

TestFunctional/parallel/ProfileCmd/profile_not_create (0.41s)

=== RUN   TestFunctional/parallel/ProfileCmd/profile_not_create
functional_test.go:1270: (dbg) Run:  out/minikube-linux-amd64 profile lis
functional_test.go:1275: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestFunctional/parallel/ProfileCmd/profile_not_create (0.41s)

TestFunctional/parallel/ProfileCmd/profile_list (0.35s)

=== RUN   TestFunctional/parallel/ProfileCmd/profile_list
functional_test.go:1310: (dbg) Run:  out/minikube-linux-amd64 profile list
functional_test.go:1315: Took "303.771368ms" to run "out/minikube-linux-amd64 profile list"
functional_test.go:1324: (dbg) Run:  out/minikube-linux-amd64 profile list -l
functional_test.go:1329: Took "46.6009ms" to run "out/minikube-linux-amd64 profile list -l"
--- PASS: TestFunctional/parallel/ProfileCmd/profile_list (0.35s)

TestFunctional/parallel/ProfileCmd/profile_json_output (0.39s)

=== RUN   TestFunctional/parallel/ProfileCmd/profile_json_output
functional_test.go:1361: (dbg) Run:  out/minikube-linux-amd64 profile list -o json
functional_test.go:1366: Took "340.090903ms" to run "out/minikube-linux-amd64 profile list -o json"
functional_test.go:1374: (dbg) Run:  out/minikube-linux-amd64 profile list -o json --light
functional_test.go:1379: Took "50.16535ms" to run "out/minikube-linux-amd64 profile list -o json --light"
--- PASS: TestFunctional/parallel/ProfileCmd/profile_json_output (0.39s)

TestFunctional/parallel/MountCmd/any-port (7.85s)

=== RUN   TestFunctional/parallel/MountCmd/any-port
functional_test_mount_test.go:73: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-623374 /tmp/TestFunctionalparallelMountCmdany-port3098083806/001:/mount-9p --alsologtostderr -v=1]
functional_test_mount_test.go:107: wrote "test-1726507906332571227" to /tmp/TestFunctionalparallelMountCmdany-port3098083806/001/created-by-test
functional_test_mount_test.go:107: wrote "test-1726507906332571227" to /tmp/TestFunctionalparallelMountCmdany-port3098083806/001/created-by-test-removed-by-pod
functional_test_mount_test.go:107: wrote "test-1726507906332571227" to /tmp/TestFunctionalparallelMountCmdany-port3098083806/001/test-1726507906332571227
functional_test_mount_test.go:115: (dbg) Run:  out/minikube-linux-amd64 -p functional-623374 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:115: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-623374 ssh "findmnt -T /mount-9p | grep 9p": exit status 1 (275.118616ms)

** stderr ** 
	ssh: Process exited with status 1

** /stderr **
functional_test_mount_test.go:115: (dbg) Run:  out/minikube-linux-amd64 -p functional-623374 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:129: (dbg) Run:  out/minikube-linux-amd64 -p functional-623374 ssh -- ls -la /mount-9p
functional_test_mount_test.go:133: guest mount directory contents
total 2
-rw-r--r-- 1 docker docker 24 Sep 16 17:31 created-by-test
-rw-r--r-- 1 docker docker 24 Sep 16 17:31 created-by-test-removed-by-pod
-rw-r--r-- 1 docker docker 24 Sep 16 17:31 test-1726507906332571227
functional_test_mount_test.go:137: (dbg) Run:  out/minikube-linux-amd64 -p functional-623374 ssh cat /mount-9p/test-1726507906332571227
functional_test_mount_test.go:148: (dbg) Run:  kubectl --context functional-623374 replace --force -f testdata/busybox-mount-test.yaml
functional_test_mount_test.go:153: (dbg) TestFunctional/parallel/MountCmd/any-port: waiting 4m0s for pods matching "integration-test=busybox-mount" in namespace "default" ...
helpers_test.go:344: "busybox-mount" [a75b5544-63a0-4095-b428-67168a49e68e] Pending / Ready:ContainersNotReady (containers with unready status: [mount-munger]) / ContainersReady:ContainersNotReady (containers with unready status: [mount-munger])
helpers_test.go:344: "busybox-mount" [a75b5544-63a0-4095-b428-67168a49e68e] Pending / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
helpers_test.go:344: "busybox-mount" [a75b5544-63a0-4095-b428-67168a49e68e] Succeeded / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
functional_test_mount_test.go:153: (dbg) TestFunctional/parallel/MountCmd/any-port: integration-test=busybox-mount healthy within 5.004077036s
functional_test_mount_test.go:169: (dbg) Run:  kubectl --context functional-623374 logs busybox-mount
functional_test_mount_test.go:181: (dbg) Run:  out/minikube-linux-amd64 -p functional-623374 ssh stat /mount-9p/created-by-test
functional_test_mount_test.go:181: (dbg) Run:  out/minikube-linux-amd64 -p functional-623374 ssh stat /mount-9p/created-by-pod
functional_test_mount_test.go:90: (dbg) Run:  out/minikube-linux-amd64 -p functional-623374 ssh "sudo umount -f /mount-9p"
functional_test_mount_test.go:94: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-623374 /tmp/TestFunctionalparallelMountCmdany-port3098083806/001:/mount-9p --alsologtostderr -v=1] ...
--- PASS: TestFunctional/parallel/MountCmd/any-port (7.85s)

TestFunctional/parallel/ServiceCmd/List (0.49s)

=== RUN   TestFunctional/parallel/ServiceCmd/List
functional_test.go:1459: (dbg) Run:  out/minikube-linux-amd64 -p functional-623374 service list
--- PASS: TestFunctional/parallel/ServiceCmd/List (0.49s)

TestFunctional/parallel/ServiceCmd/JSONOutput (0.49s)

=== RUN   TestFunctional/parallel/ServiceCmd/JSONOutput
functional_test.go:1489: (dbg) Run:  out/minikube-linux-amd64 -p functional-623374 service list -o json
functional_test.go:1494: Took "493.269651ms" to run "out/minikube-linux-amd64 -p functional-623374 service list -o json"
--- PASS: TestFunctional/parallel/ServiceCmd/JSONOutput (0.49s)

TestFunctional/parallel/ServiceCmd/HTTPS (0.31s)

=== RUN   TestFunctional/parallel/ServiceCmd/HTTPS
functional_test.go:1509: (dbg) Run:  out/minikube-linux-amd64 -p functional-623374 service --namespace=default --https --url hello-node
functional_test.go:1522: found endpoint: https://192.168.49.2:32091
--- PASS: TestFunctional/parallel/ServiceCmd/HTTPS (0.31s)

TestFunctional/parallel/ServiceCmd/Format (0.32s)

=== RUN   TestFunctional/parallel/ServiceCmd/Format
functional_test.go:1540: (dbg) Run:  out/minikube-linux-amd64 -p functional-623374 service hello-node --url --format={{.IP}}
--- PASS: TestFunctional/parallel/ServiceCmd/Format (0.32s)

TestFunctional/parallel/ServiceCmd/URL (0.39s)

=== RUN   TestFunctional/parallel/ServiceCmd/URL
functional_test.go:1559: (dbg) Run:  out/minikube-linux-amd64 -p functional-623374 service hello-node --url
functional_test.go:1565: found endpoint for hello-node: http://192.168.49.2:32091
--- PASS: TestFunctional/parallel/ServiceCmd/URL (0.39s)

TestFunctional/parallel/Version/short (0.06s)

=== RUN   TestFunctional/parallel/Version/short
=== PAUSE TestFunctional/parallel/Version/short

=== CONT  TestFunctional/parallel/Version/short
functional_test.go:2256: (dbg) Run:  out/minikube-linux-amd64 -p functional-623374 version --short
--- PASS: TestFunctional/parallel/Version/short (0.06s)

TestFunctional/parallel/Version/components (0.6s)

=== RUN   TestFunctional/parallel/Version/components
=== PAUSE TestFunctional/parallel/Version/components

=== CONT  TestFunctional/parallel/Version/components
functional_test.go:2270: (dbg) Run:  out/minikube-linux-amd64 -p functional-623374 version -o=json --components
--- PASS: TestFunctional/parallel/Version/components (0.60s)

TestFunctional/parallel/ImageCommands/ImageListShort (0.2s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageListShort
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListShort

=== CONT  TestFunctional/parallel/ImageCommands/ImageListShort
functional_test.go:261: (dbg) Run:  out/minikube-linux-amd64 -p functional-623374 image ls --format short --alsologtostderr
functional_test.go:266: (dbg) Stdout: out/minikube-linux-amd64 -p functional-623374 image ls --format short --alsologtostderr:
registry.k8s.io/pause:latest
registry.k8s.io/pause:3.3
registry.k8s.io/pause:3.10
registry.k8s.io/pause:3.1
registry.k8s.io/kube-scheduler:v1.31.1
registry.k8s.io/kube-proxy:v1.31.1
registry.k8s.io/kube-controller-manager:v1.31.1
registry.k8s.io/kube-apiserver:v1.31.1
registry.k8s.io/etcd:3.5.15-0
registry.k8s.io/echoserver:1.8
registry.k8s.io/coredns/coredns:v1.11.3
gcr.io/k8s-minikube/storage-provisioner:v5
gcr.io/k8s-minikube/busybox:1.28.4-glibc
docker.io/library/nginx:latest
docker.io/library/nginx:alpine
docker.io/library/minikube-local-cache-test:functional-623374
docker.io/kubernetesui/metrics-scraper:<none>
docker.io/kubernetesui/dashboard:<none>
docker.io/kicbase/echo-server:functional-623374
functional_test.go:269: (dbg) Stderr: out/minikube-linux-amd64 -p functional-623374 image ls --format short --alsologtostderr:
I0916 17:32:00.891763  169856 out.go:345] Setting OutFile to fd 1 ...
I0916 17:32:00.891930  169856 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0916 17:32:00.891944  169856 out.go:358] Setting ErrFile to fd 2...
I0916 17:32:00.891951  169856 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0916 17:32:00.892271  169856 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19649-105988/.minikube/bin
I0916 17:32:00.893143  169856 config.go:182] Loaded profile config "functional-623374": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.31.1
I0916 17:32:00.893302  169856 config.go:182] Loaded profile config "functional-623374": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.31.1
I0916 17:32:00.893824  169856 cli_runner.go:164] Run: docker container inspect functional-623374 --format={{.State.Status}}
I0916 17:32:00.911959  169856 ssh_runner.go:195] Run: systemctl --version
I0916 17:32:00.912000  169856 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-623374
I0916 17:32:00.928563  169856 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32778 SSHKeyPath:/home/jenkins/minikube-integration/19649-105988/.minikube/machines/functional-623374/id_rsa Username:docker}
I0916 17:32:01.018021  169856 ssh_runner.go:195] Run: docker images --no-trunc --format "{{json .}}"
--- PASS: TestFunctional/parallel/ImageCommands/ImageListShort (0.20s)

TestFunctional/parallel/ImageCommands/ImageListTable (0.19s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageListTable
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListTable

=== CONT  TestFunctional/parallel/ImageCommands/ImageListTable
functional_test.go:261: (dbg) Run:  out/minikube-linux-amd64 -p functional-623374 image ls --format table --alsologtostderr
functional_test.go:266: (dbg) Stdout: out/minikube-linux-amd64 -p functional-623374 image ls --format table --alsologtostderr:
|---------------------------------------------|-------------------|---------------|--------|
|                    Image                    |        Tag        |   Image ID    |  Size  |
|---------------------------------------------|-------------------|---------------|--------|
| docker.io/library/nginx                     | latest            | 39286ab8a5e14 | 188MB  |
| docker.io/kicbase/echo-server               | functional-623374 | 9056ab77afb8e | 4.94MB |
| registry.k8s.io/pause                       | 3.1               | da86e6ba6ca19 | 742kB  |
| registry.k8s.io/kube-scheduler              | v1.31.1           | 9aa1fad941575 | 67.4MB |
| docker.io/library/nginx                     | alpine            | c7b4f26a7d93f | 43.2MB |
| registry.k8s.io/kube-proxy                  | v1.31.1           | 60c005f310ff3 | 91.5MB |
| docker.io/kubernetesui/dashboard            | <none>            | 07655ddf2eebe | 246MB  |
| docker.io/kubernetesui/metrics-scraper      | <none>            | 115053965e86b | 43.8MB |
| registry.k8s.io/pause                       | latest            | 350b164e7ae1d | 240kB  |
| docker.io/library/minikube-local-cache-test | functional-623374 | 8d1592a56e86b | 30B    |
| registry.k8s.io/kube-controller-manager     | v1.31.1           | 175ffd71cce3d | 88.4MB |
| registry.k8s.io/etcd                        | 3.5.15-0          | 2e96e5913fc06 | 148MB  |
| registry.k8s.io/pause                       | 3.10              | 873ed75102791 | 736kB  |
| gcr.io/k8s-minikube/storage-provisioner     | v5                | 6e38f40d628db | 31.5MB |
| registry.k8s.io/pause                       | 3.3               | 0184c1613d929 | 683kB  |
| registry.k8s.io/kube-apiserver              | v1.31.1           | 6bab7719df100 | 94.2MB |
| registry.k8s.io/coredns/coredns             | v1.11.3           | c69fa2e9cbf5f | 61.8MB |
| gcr.io/k8s-minikube/busybox                 | 1.28.4-glibc      | 56cc512116c8f | 4.4MB  |
| registry.k8s.io/echoserver                  | 1.8               | 82e4c8a736a4f | 95.4MB |
|---------------------------------------------|-------------------|---------------|--------|
functional_test.go:269: (dbg) Stderr: out/minikube-linux-amd64 -p functional-623374 image ls --format table --alsologtostderr:
I0916 17:32:01.148298  170037 out.go:345] Setting OutFile to fd 1 ...
I0916 17:32:01.148538  170037 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0916 17:32:01.148546  170037 out.go:358] Setting ErrFile to fd 2...
I0916 17:32:01.148550  170037 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0916 17:32:01.148704  170037 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19649-105988/.minikube/bin
I0916 17:32:01.149288  170037 config.go:182] Loaded profile config "functional-623374": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.31.1
I0916 17:32:01.149384  170037 config.go:182] Loaded profile config "functional-623374": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.31.1
I0916 17:32:01.149775  170037 cli_runner.go:164] Run: docker container inspect functional-623374 --format={{.State.Status}}
I0916 17:32:01.166114  170037 ssh_runner.go:195] Run: systemctl --version
I0916 17:32:01.166155  170037 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-623374
I0916 17:32:01.182955  170037 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32778 SSHKeyPath:/home/jenkins/minikube-integration/19649-105988/.minikube/machines/functional-623374/id_rsa Username:docker}
I0916 17:32:01.266186  170037 ssh_runner.go:195] Run: docker images --no-trunc --format "{{json .}}"
--- PASS: TestFunctional/parallel/ImageCommands/ImageListTable (0.19s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageListJson (0.19s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageListJson
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListJson

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ImageCommands/ImageListJson
functional_test.go:261: (dbg) Run:  out/minikube-linux-amd64 -p functional-623374 image ls --format json --alsologtostderr
functional_test.go:266: (dbg) Stdout: out/minikube-linux-amd64 -p functional-623374 image ls --format json --alsologtostderr:
[{"id":"9056ab77afb8e18e04303f11000a9d31b3f16b74c59475b899ae1b342d328d30","repoDigests":[],"repoTags":["docker.io/kicbase/echo-server:functional-623374"],"size":"4940000"},{"id":"56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c","repoDigests":[],"repoTags":["gcr.io/k8s-minikube/busybox:1.28.4-glibc"],"size":"4400000"},{"id":"9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b","repoDigests":[],"repoTags":["registry.k8s.io/kube-scheduler:v1.31.1"],"size":"67400000"},{"id":"60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561","repoDigests":[],"repoTags":["registry.k8s.io/kube-proxy:v1.31.1"],"size":"91500000"},{"id":"c7b4f26a7d93f4f1f276c51adb03ef0df54a82de89f254a9aec5c18bf0e45ee9","repoDigests":[],"repoTags":["docker.io/library/nginx:alpine"],"size":"43200000"},{"id":"39286ab8a5e14aeaf5fdd6e2fac76e0c8d31a0c07224f0ee5e6be502f12e93f3","repoDigests":[],"repoTags":["docker.io/library/nginx:latest"],"size":"188000000"},{"id":"c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6","repoDigests":[],"repoTags":["registry.k8s.io/coredns/coredns:v1.11.3"],"size":"61800000"},{"id":"07655ddf2eebe5d250f7a72c25f638b27126805d61779741b4e62e69ba080558","repoDigests":[],"repoTags":["docker.io/kubernetesui/dashboard:\u003cnone\u003e"],"size":"246000000"},{"id":"350b164e7ae1dcddeffadd65c76226c9b6dc5553f5179153fb0e36b78f2a5e06","repoDigests":[],"repoTags":["registry.k8s.io/pause:latest"],"size":"240000"},{"id":"6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562","repoDigests":[],"repoTags":["gcr.io/k8s-minikube/storage-provisioner:v5"],"size":"31500000"},{"id":"873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136","repoDigests":[],"repoTags":["registry.k8s.io/pause:3.10"],"size":"736000"},{"id":"0184c1613d92931126feb4c548e5da11015513b9e4c104e7305ee8b53b50a9da","repoDigests":[],"repoTags":["registry.k8s.io/pause:3.3"],"size":"683000"},{"id":"da86e6ba6ca197bf6bc5e9d900febd906b133eaa4750e6bed647b0fbe50ed43e","repoDigests":[],"repoTags":["registry.k8s.io/pause:3.1"],"size":"742000"},{"id":"82e4c8a736a4fcf22b5ef9f6a4ff6207064c7187d7694bf97bd561605a538410","repoDigests":[],"repoTags":["registry.k8s.io/echoserver:1.8"],"size":"95400000"},{"id":"8d1592a56e86b91d46ac09c82731de0b75c207623f7fa59a626fd9f1b850885a","repoDigests":[],"repoTags":["docker.io/library/minikube-local-cache-test:functional-623374"],"size":"30"},{"id":"6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee","repoDigests":[],"repoTags":["registry.k8s.io/kube-apiserver:v1.31.1"],"size":"94200000"},{"id":"175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1","repoDigests":[],"repoTags":["registry.k8s.io/kube-controller-manager:v1.31.1"],"size":"88400000"},{"id":"2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4","repoDigests":[],"repoTags":["registry.k8s.io/etcd:3.5.15-0"],"size":"148000000"},{"id":"115053965e86b2df4d78af78d7951b8644839d20a03820c6df59a261103315f7","repoDigests":[],"repoTags":["docker.io/kubernetesui/metrics-scraper:\u003cnone\u003e"],"size":"43800000"}]
functional_test.go:269: (dbg) Stderr: out/minikube-linux-amd64 -p functional-623374 image ls --format json --alsologtostderr:
I0916 17:32:01.085347  169993 out.go:345] Setting OutFile to fd 1 ...
I0916 17:32:01.085636  169993 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0916 17:32:01.085647  169993 out.go:358] Setting ErrFile to fd 2...
I0916 17:32:01.085653  169993 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0916 17:32:01.085875  169993 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19649-105988/.minikube/bin
I0916 17:32:01.086554  169993 config.go:182] Loaded profile config "functional-623374": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.31.1
I0916 17:32:01.086686  169993 config.go:182] Loaded profile config "functional-623374": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.31.1
I0916 17:32:01.087082  169993 cli_runner.go:164] Run: docker container inspect functional-623374 --format={{.State.Status}}
I0916 17:32:01.104721  169993 ssh_runner.go:195] Run: systemctl --version
I0916 17:32:01.104774  169993 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-623374
I0916 17:32:01.121712  169993 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32778 SSHKeyPath:/home/jenkins/minikube-integration/19649-105988/.minikube/machines/functional-623374/id_rsa Username:docker}
I0916 17:32:01.210106  169993 ssh_runner.go:195] Run: docker images --no-trunc --format "{{json .}}"
--- PASS: TestFunctional/parallel/ImageCommands/ImageListJson (0.19s)
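Editor's note: for anyone post-processing this report, the `image ls --format json` output above is a flat JSON array of image records (`id`, `repoDigests`, `repoTags`, `size`, with `size` as a string of bytes). A minimal parsing sketch, using a two-record sample copied from the output above:

```python
import json

# Sample copied from the `image ls --format json` output in this report.
sample = '''[
  {"id": "873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136",
   "repoDigests": [], "repoTags": ["registry.k8s.io/pause:3.10"], "size": "736000"},
  {"id": "0184c1613d92931126feb4c548e5da11015513b9e4c104e7305ee8b53b50a9da",
   "repoDigests": [], "repoTags": ["registry.k8s.io/pause:3.3"], "size": "683000"}
]'''

images = json.loads(sample)

# Index image sizes by repo tag; note each "size" must be converted from string to int.
by_tag = {tag: int(img["size"]) for img in images for tag in img["repoTags"]}
print(by_tag["registry.k8s.io/pause:3.10"])  # 736000
```

In practice the JSON would come from `out/minikube-linux-amd64 -p <profile> image ls --format json` rather than an inline string; the record shape is the same.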

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageListYaml (0.2s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageListYaml
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListYaml

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ImageCommands/ImageListYaml
functional_test.go:261: (dbg) Run:  out/minikube-linux-amd64 -p functional-623374 image ls --format yaml --alsologtostderr
functional_test.go:266: (dbg) Stdout: out/minikube-linux-amd64 -p functional-623374 image ls --format yaml --alsologtostderr:
- id: 175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1
repoDigests: []
repoTags:
- registry.k8s.io/kube-controller-manager:v1.31.1
size: "88400000"
- id: 115053965e86b2df4d78af78d7951b8644839d20a03820c6df59a261103315f7
repoDigests: []
repoTags:
- docker.io/kubernetesui/metrics-scraper:<none>
size: "43800000"
- id: 0184c1613d92931126feb4c548e5da11015513b9e4c104e7305ee8b53b50a9da
repoDigests: []
repoTags:
- registry.k8s.io/pause:3.3
size: "683000"
- id: 56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c
repoDigests: []
repoTags:
- gcr.io/k8s-minikube/busybox:1.28.4-glibc
size: "4400000"
- id: 6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562
repoDigests: []
repoTags:
- gcr.io/k8s-minikube/storage-provisioner:v5
size: "31500000"
- id: 350b164e7ae1dcddeffadd65c76226c9b6dc5553f5179153fb0e36b78f2a5e06
repoDigests: []
repoTags:
- registry.k8s.io/pause:latest
size: "240000"
- id: 6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee
repoDigests: []
repoTags:
- registry.k8s.io/kube-apiserver:v1.31.1
size: "94200000"
- id: 39286ab8a5e14aeaf5fdd6e2fac76e0c8d31a0c07224f0ee5e6be502f12e93f3
repoDigests: []
repoTags:
- docker.io/library/nginx:latest
size: "188000000"
- id: c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6
repoDigests: []
repoTags:
- registry.k8s.io/coredns/coredns:v1.11.3
size: "61800000"
- id: 2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4
repoDigests: []
repoTags:
- registry.k8s.io/etcd:3.5.15-0
size: "148000000"
- id: 07655ddf2eebe5d250f7a72c25f638b27126805d61779741b4e62e69ba080558
repoDigests: []
repoTags:
- docker.io/kubernetesui/dashboard:<none>
size: "246000000"
- id: da86e6ba6ca197bf6bc5e9d900febd906b133eaa4750e6bed647b0fbe50ed43e
repoDigests: []
repoTags:
- registry.k8s.io/pause:3.1
size: "742000"
- id: 82e4c8a736a4fcf22b5ef9f6a4ff6207064c7187d7694bf97bd561605a538410
repoDigests: []
repoTags:
- registry.k8s.io/echoserver:1.8
size: "95400000"
- id: 8d1592a56e86b91d46ac09c82731de0b75c207623f7fa59a626fd9f1b850885a
repoDigests: []
repoTags:
- docker.io/library/minikube-local-cache-test:functional-623374
size: "30"
- id: 9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b
repoDigests: []
repoTags:
- registry.k8s.io/kube-scheduler:v1.31.1
size: "67400000"
- id: c7b4f26a7d93f4f1f276c51adb03ef0df54a82de89f254a9aec5c18bf0e45ee9
repoDigests: []
repoTags:
- docker.io/library/nginx:alpine
size: "43200000"
- id: 873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136
repoDigests: []
repoTags:
- registry.k8s.io/pause:3.10
size: "736000"
- id: 9056ab77afb8e18e04303f11000a9d31b3f16b74c59475b899ae1b342d328d30
repoDigests: []
repoTags:
- docker.io/kicbase/echo-server:functional-623374
size: "4940000"
- id: 60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561
repoDigests: []
repoTags:
- registry.k8s.io/kube-proxy:v1.31.1
size: "91500000"

                                                
                                                
functional_test.go:269: (dbg) Stderr: out/minikube-linux-amd64 -p functional-623374 image ls --format yaml --alsologtostderr:
I0916 17:32:01.279659  170100 out.go:345] Setting OutFile to fd 1 ...
I0916 17:32:01.279946  170100 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0916 17:32:01.279957  170100 out.go:358] Setting ErrFile to fd 2...
I0916 17:32:01.279963  170100 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0916 17:32:01.280201  170100 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19649-105988/.minikube/bin
I0916 17:32:01.280864  170100 config.go:182] Loaded profile config "functional-623374": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.31.1
I0916 17:32:01.281004  170100 config.go:182] Loaded profile config "functional-623374": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.31.1
I0916 17:32:01.281389  170100 cli_runner.go:164] Run: docker container inspect functional-623374 --format={{.State.Status}}
I0916 17:32:01.299169  170100 ssh_runner.go:195] Run: systemctl --version
I0916 17:32:01.299211  170100 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-623374
I0916 17:32:01.317408  170100 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32778 SSHKeyPath:/home/jenkins/minikube-integration/19649-105988/.minikube/machines/functional-623374/id_rsa Username:docker}
I0916 17:32:01.406322  170100 ssh_runner.go:195] Run: docker images --no-trunc --format "{{json .}}"
--- PASS: TestFunctional/parallel/ImageCommands/ImageListYaml (0.20s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageBuild (7.16s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageBuild
=== PAUSE TestFunctional/parallel/ImageCommands/ImageBuild

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ImageCommands/ImageBuild
functional_test.go:308: (dbg) Run:  out/minikube-linux-amd64 -p functional-623374 ssh pgrep buildkitd
functional_test.go:308: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-623374 ssh pgrep buildkitd: exit status 1 (245.035076ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
functional_test.go:315: (dbg) Run:  out/minikube-linux-amd64 -p functional-623374 image build -t localhost/my-image:functional-623374 testdata/build --alsologtostderr
functional_test.go:315: (dbg) Done: out/minikube-linux-amd64 -p functional-623374 image build -t localhost/my-image:functional-623374 testdata/build --alsologtostderr: (6.687578592s)
functional_test.go:323: (dbg) Stderr: out/minikube-linux-amd64 -p functional-623374 image build -t localhost/my-image:functional-623374 testdata/build --alsologtostderr:
I0916 17:32:01.588826  170338 out.go:345] Setting OutFile to fd 1 ...
I0916 17:32:01.588964  170338 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0916 17:32:01.588973  170338 out.go:358] Setting ErrFile to fd 2...
I0916 17:32:01.588977  170338 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0916 17:32:01.589138  170338 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19649-105988/.minikube/bin
I0916 17:32:01.589769  170338 config.go:182] Loaded profile config "functional-623374": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.31.1
I0916 17:32:01.590352  170338 config.go:182] Loaded profile config "functional-623374": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.31.1
I0916 17:32:01.590755  170338 cli_runner.go:164] Run: docker container inspect functional-623374 --format={{.State.Status}}
I0916 17:32:01.607262  170338 ssh_runner.go:195] Run: systemctl --version
I0916 17:32:01.607314  170338 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-623374
I0916 17:32:01.623877  170338 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32778 SSHKeyPath:/home/jenkins/minikube-integration/19649-105988/.minikube/machines/functional-623374/id_rsa Username:docker}
I0916 17:32:01.710334  170338 build_images.go:161] Building image from path: /tmp/build.689685377.tar
I0916 17:32:01.710394  170338 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/build
I0916 17:32:01.718308  170338 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/build/build.689685377.tar
I0916 17:32:01.721202  170338 ssh_runner.go:352] existence check for /var/lib/minikube/build/build.689685377.tar: stat -c "%s %y" /var/lib/minikube/build/build.689685377.tar: Process exited with status 1
stdout:

                                                
                                                
stderr:
stat: cannot statx '/var/lib/minikube/build/build.689685377.tar': No such file or directory
I0916 17:32:01.721227  170338 ssh_runner.go:362] scp /tmp/build.689685377.tar --> /var/lib/minikube/build/build.689685377.tar (3072 bytes)
I0916 17:32:01.742178  170338 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/build/build.689685377
I0916 17:32:01.749737  170338 ssh_runner.go:195] Run: sudo tar -C /var/lib/minikube/build/build.689685377 -xf /var/lib/minikube/build/build.689685377.tar
I0916 17:32:01.757441  170338 docker.go:360] Building image: /var/lib/minikube/build/build.689685377
I0916 17:32:01.757497  170338 ssh_runner.go:195] Run: docker build -t localhost/my-image:functional-623374 /var/lib/minikube/build/build.689685377
#0 building with "default" instance using docker driver

                                                
                                                
#1 [internal] load build definition from Dockerfile
#1 transferring dockerfile: 97B done
#1 DONE 0.0s

                                                
                                                
#2 [internal] load metadata for gcr.io/k8s-minikube/busybox:latest
#2 DONE 2.5s

                                                
                                                
#3 [internal] load .dockerignore
#3 transferring context: 2B done
#3 DONE 0.0s

                                                
                                                
#4 [internal] load build context
#4 transferring context: 62B done
#4 DONE 0.0s

                                                
                                                
#5 [1/3] FROM gcr.io/k8s-minikube/busybox:latest@sha256:ca5ae90100d50772da31f3b5016209e25ad61972404e2ccd83d44f10dee7e79b
#5 resolve gcr.io/k8s-minikube/busybox:latest@sha256:ca5ae90100d50772da31f3b5016209e25ad61972404e2ccd83d44f10dee7e79b 0.0s done
#5 sha256:ca5ae90100d50772da31f3b5016209e25ad61972404e2ccd83d44f10dee7e79b 770B / 770B done
#5 sha256:62ffc2ed7554e4c6d360bce40bbcf196573dd27c4ce080641a2c59867e732dee 527B / 527B done
#5 sha256:beae173ccac6ad749f76713cf4440fe3d21d1043fe616dfbe30775815d1d0f6a 1.46kB / 1.46kB done
#5 sha256:5cc84ad355aaa64f46ea9c7bbcc319a9d808ab15088a27209c9e70ef86e5a2aa 0B / 772.79kB 1.2s
#5 sha256:5cc84ad355aaa64f46ea9c7bbcc319a9d808ab15088a27209c9e70ef86e5a2aa 772.79kB / 772.79kB 2.5s done
#5 extracting sha256:5cc84ad355aaa64f46ea9c7bbcc319a9d808ab15088a27209c9e70ef86e5a2aa 0.0s done
#5 DONE 2.6s

                                                
                                                
#6 [2/3] RUN true
#6 DONE 1.0s

                                                
                                                
#7 [3/3] ADD content.txt /
#7 DONE 0.0s

                                                
                                                
#8 exporting to image
#8 exporting layers 0.0s done
#8 writing image sha256:5337fec44bc20100c94fd68df0f83fc96d4e2464dc3d34878c91be39f942f2be done
#8 naming to localhost/my-image:functional-623374 done
#8 DONE 0.0s
I0916 17:32:08.200930  170338 ssh_runner.go:235] Completed: docker build -t localhost/my-image:functional-623374 /var/lib/minikube/build/build.689685377: (6.44340234s)
I0916 17:32:08.201021  170338 ssh_runner.go:195] Run: sudo rm -rf /var/lib/minikube/build/build.689685377
I0916 17:32:08.211940  170338 ssh_runner.go:195] Run: sudo rm -f /var/lib/minikube/build/build.689685377.tar
I0916 17:32:08.221155  170338 build_images.go:217] Built localhost/my-image:functional-623374 from /tmp/build.689685377.tar
I0916 17:32:08.221189  170338 build_images.go:133] succeeded building to: functional-623374
I0916 17:32:08.221195  170338 build_images.go:134] failed building to: 
functional_test.go:451: (dbg) Run:  out/minikube-linux-amd64 -p functional-623374 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageBuild (7.16s)
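Editor's note: the BuildKit log above (steps #5 through #7) implies a three-instruction Dockerfile under `testdata/build`. The file itself is not included in this report, so the following reconstruction is an assumption inferred from the logged build steps:

```dockerfile
# Assumed contents of testdata/build/Dockerfile, inferred from build steps #5-#7:
# a busybox base image, a trivial RUN layer, and one ADD of the 62B build context.
FROM gcr.io/k8s-minikube/busybox:latest
RUN true
ADD content.txt /
```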

                                                
                                    
TestFunctional/parallel/ImageCommands/Setup (2.66s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/Setup
functional_test.go:342: (dbg) Run:  docker pull kicbase/echo-server:1.0
functional_test.go:342: (dbg) Done: docker pull kicbase/echo-server:1.0: (2.633150174s)
functional_test.go:347: (dbg) Run:  docker tag kicbase/echo-server:1.0 kicbase/echo-server:functional-623374
--- PASS: TestFunctional/parallel/ImageCommands/Setup (2.66s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageLoadDaemon (1.24s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageLoadDaemon
functional_test.go:355: (dbg) Run:  out/minikube-linux-amd64 -p functional-623374 image load --daemon kicbase/echo-server:functional-623374 --alsologtostderr
functional_test.go:355: (dbg) Done: out/minikube-linux-amd64 -p functional-623374 image load --daemon kicbase/echo-server:functional-623374 --alsologtostderr: (1.023601712s)
functional_test.go:451: (dbg) Run:  out/minikube-linux-amd64 -p functional-623374 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageLoadDaemon (1.24s)

                                                
                                    
TestFunctional/parallel/MountCmd/specific-port (2.06s)

                                                
                                                
=== RUN   TestFunctional/parallel/MountCmd/specific-port
functional_test_mount_test.go:213: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-623374 /tmp/TestFunctionalparallelMountCmdspecific-port64363558/001:/mount-9p --alsologtostderr -v=1 --port 46464]
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-linux-amd64 -p functional-623374 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:243: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-623374 ssh "findmnt -T /mount-9p | grep 9p": exit status 1 (378.15622ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-linux-amd64 -p functional-623374 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:257: (dbg) Run:  out/minikube-linux-amd64 -p functional-623374 ssh -- ls -la /mount-9p
functional_test_mount_test.go:261: guest mount directory contents
total 0
functional_test_mount_test.go:263: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-623374 /tmp/TestFunctionalparallelMountCmdspecific-port64363558/001:/mount-9p --alsologtostderr -v=1 --port 46464] ...
functional_test_mount_test.go:264: reading mount text
functional_test_mount_test.go:278: done reading mount text
functional_test_mount_test.go:230: (dbg) Run:  out/minikube-linux-amd64 -p functional-623374 ssh "sudo umount -f /mount-9p"
functional_test_mount_test.go:230: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-623374 ssh "sudo umount -f /mount-9p": exit status 1 (260.1385ms)

                                                
                                                
-- stdout --
	umount: /mount-9p: not mounted.

                                                
                                                
-- /stdout --
** stderr ** 
	ssh: Process exited with status 32

                                                
                                                
** /stderr **
functional_test_mount_test.go:232: "out/minikube-linux-amd64 -p functional-623374 ssh \"sudo umount -f /mount-9p\"": exit status 1
functional_test_mount_test.go:234: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-623374 /tmp/TestFunctionalparallelMountCmdspecific-port64363558/001:/mount-9p --alsologtostderr -v=1 --port 46464] ...
--- PASS: TestFunctional/parallel/MountCmd/specific-port (2.06s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageReloadDaemon (0.99s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageReloadDaemon
functional_test.go:365: (dbg) Run:  out/minikube-linux-amd64 -p functional-623374 image load --daemon kicbase/echo-server:functional-623374 --alsologtostderr
functional_test.go:451: (dbg) Run:  out/minikube-linux-amd64 -p functional-623374 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageReloadDaemon (0.99s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon (2.09s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon
functional_test.go:235: (dbg) Run:  docker pull kicbase/echo-server:latest
functional_test.go:235: (dbg) Done: docker pull kicbase/echo-server:latest: (1.216444582s)
functional_test.go:240: (dbg) Run:  docker tag kicbase/echo-server:latest kicbase/echo-server:functional-623374
functional_test.go:245: (dbg) Run:  out/minikube-linux-amd64 -p functional-623374 image load --daemon kicbase/echo-server:functional-623374 --alsologtostderr
functional_test.go:451: (dbg) Run:  out/minikube-linux-amd64 -p functional-623374 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon (2.09s)

                                                
                                    
TestFunctional/parallel/MountCmd/VerifyCleanup (1.73s)

                                                
                                                
=== RUN   TestFunctional/parallel/MountCmd/VerifyCleanup
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-623374 /tmp/TestFunctionalparallelMountCmdVerifyCleanup737391450/001:/mount1 --alsologtostderr -v=1]
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-623374 /tmp/TestFunctionalparallelMountCmdVerifyCleanup737391450/001:/mount2 --alsologtostderr -v=1]
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-623374 /tmp/TestFunctionalparallelMountCmdVerifyCleanup737391450/001:/mount3 --alsologtostderr -v=1]
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-amd64 -p functional-623374 ssh "findmnt -T" /mount1
functional_test_mount_test.go:325: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-623374 ssh "findmnt -T" /mount1: exit status 1 (361.348976ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-amd64 -p functional-623374 ssh "findmnt -T" /mount1
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-amd64 -p functional-623374 ssh "findmnt -T" /mount2
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-amd64 -p functional-623374 ssh "findmnt -T" /mount3
functional_test_mount_test.go:370: (dbg) Run:  out/minikube-linux-amd64 mount -p functional-623374 --kill=true
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-623374 /tmp/TestFunctionalparallelMountCmdVerifyCleanup737391450/001:/mount1 --alsologtostderr -v=1] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-623374 /tmp/TestFunctionalparallelMountCmdVerifyCleanup737391450/001:/mount2 --alsologtostderr -v=1] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-623374 /tmp/TestFunctionalparallelMountCmdVerifyCleanup737391450/001:/mount3 --alsologtostderr -v=1] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
--- PASS: TestFunctional/parallel/MountCmd/VerifyCleanup (1.73s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageSaveToFile (0.33s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageSaveToFile
functional_test.go:380: (dbg) Run:  out/minikube-linux-amd64 -p functional-623374 image save kicbase/echo-server:functional-623374 /home/jenkins/workspace/Docker_Linux_integration/echo-server-save.tar --alsologtostderr
--- PASS: TestFunctional/parallel/ImageCommands/ImageSaveToFile (0.33s)

                                                
                                    
TestFunctional/parallel/DockerEnv/bash (0.89s)

                                                
                                                
=== RUN   TestFunctional/parallel/DockerEnv/bash
functional_test.go:499: (dbg) Run:  /bin/bash -c "eval $(out/minikube-linux-amd64 -p functional-623374 docker-env) && out/minikube-linux-amd64 status -p functional-623374"
functional_test.go:522: (dbg) Run:  /bin/bash -c "eval $(out/minikube-linux-amd64 -p functional-623374 docker-env) && docker images"
--- PASS: TestFunctional/parallel/DockerEnv/bash (0.89s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageRemove (0.39s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageRemove
functional_test.go:392: (dbg) Run:  out/minikube-linux-amd64 -p functional-623374 image rm kicbase/echo-server:functional-623374 --alsologtostderr
functional_test.go:451: (dbg) Run:  out/minikube-linux-amd64 -p functional-623374 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageRemove (0.39s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageLoadFromFile (0.56s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageLoadFromFile
functional_test.go:409: (dbg) Run:  out/minikube-linux-amd64 -p functional-623374 image load /home/jenkins/workspace/Docker_Linux_integration/echo-server-save.tar --alsologtostderr
functional_test.go:451: (dbg) Run:  out/minikube-linux-amd64 -p functional-623374 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageLoadFromFile (0.56s)

                                                
                                    
TestFunctional/parallel/UpdateContextCmd/no_changes (0.12s)

                                                
                                                
=== RUN   TestFunctional/parallel/UpdateContextCmd/no_changes
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_changes

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/UpdateContextCmd/no_changes
functional_test.go:2119: (dbg) Run:  out/minikube-linux-amd64 -p functional-623374 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_changes (0.12s)

                                                
                                    
TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster (0.12s)

                                                
                                                
=== RUN   TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
functional_test.go:2119: (dbg) Run:  out/minikube-linux-amd64 -p functional-623374 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster (0.12s)

                                                
                                    
TestFunctional/parallel/UpdateContextCmd/no_clusters (0.13s)

                                                
                                                
=== RUN   TestFunctional/parallel/UpdateContextCmd/no_clusters
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_clusters

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/UpdateContextCmd/no_clusters
functional_test.go:2119: (dbg) Run:  out/minikube-linux-amd64 -p functional-623374 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_clusters (0.13s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageSaveDaemon (0.35s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageSaveDaemon
functional_test.go:419: (dbg) Run:  docker rmi kicbase/echo-server:functional-623374
functional_test.go:424: (dbg) Run:  out/minikube-linux-amd64 -p functional-623374 image save --daemon kicbase/echo-server:functional-623374 --alsologtostderr
functional_test.go:432: (dbg) Run:  docker image inspect kicbase/echo-server:functional-623374
--- PASS: TestFunctional/parallel/ImageCommands/ImageSaveDaemon (0.35s)

                                                
                                    
TestFunctional/delete_echo-server_images (0.04s)

                                                
                                                
=== RUN   TestFunctional/delete_echo-server_images
functional_test.go:190: (dbg) Run:  docker rmi -f kicbase/echo-server:1.0
functional_test.go:190: (dbg) Run:  docker rmi -f kicbase/echo-server:functional-623374
--- PASS: TestFunctional/delete_echo-server_images (0.04s)

                                                
                                    
TestFunctional/delete_my-image_image (0.01s)

                                                
                                                
=== RUN   TestFunctional/delete_my-image_image
functional_test.go:198: (dbg) Run:  docker rmi -f localhost/my-image:functional-623374
--- PASS: TestFunctional/delete_my-image_image (0.01s)

                                                
                                    
TestFunctional/delete_minikube_cached_images (0.01s)

                                                
                                                
=== RUN   TestFunctional/delete_minikube_cached_images
functional_test.go:206: (dbg) Run:  docker rmi -f minikube-local-cache-test:functional-623374
--- PASS: TestFunctional/delete_minikube_cached_images (0.01s)

                                                
                                    
TestMultiControlPlane/serial/StartCluster (99.2s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/StartCluster
ha_test.go:101: (dbg) Run:  out/minikube-linux-amd64 start -p ha-271535 --wait=true --memory=2200 --ha -v=7 --alsologtostderr --driver=docker  --container-runtime=docker
E0916 17:33:44.245680  112842 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19649-105988/.minikube/profiles/addons-539053/client.crt: no such file or directory" logger="UnhandledError"
E0916 17:33:44.252469  112842 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19649-105988/.minikube/profiles/addons-539053/client.crt: no such file or directory" logger="UnhandledError"
E0916 17:33:44.263825  112842 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19649-105988/.minikube/profiles/addons-539053/client.crt: no such file or directory" logger="UnhandledError"
E0916 17:33:44.285342  112842 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19649-105988/.minikube/profiles/addons-539053/client.crt: no such file or directory" logger="UnhandledError"
E0916 17:33:44.326688  112842 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19649-105988/.minikube/profiles/addons-539053/client.crt: no such file or directory" logger="UnhandledError"
E0916 17:33:44.408145  112842 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19649-105988/.minikube/profiles/addons-539053/client.crt: no such file or directory" logger="UnhandledError"
E0916 17:33:44.569821  112842 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19649-105988/.minikube/profiles/addons-539053/client.crt: no such file or directory" logger="UnhandledError"
E0916 17:33:44.891311  112842 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19649-105988/.minikube/profiles/addons-539053/client.crt: no such file or directory" logger="UnhandledError"
E0916 17:33:45.533331  112842 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19649-105988/.minikube/profiles/addons-539053/client.crt: no such file or directory" logger="UnhandledError"
E0916 17:33:46.815337  112842 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19649-105988/.minikube/profiles/addons-539053/client.crt: no such file or directory" logger="UnhandledError"
E0916 17:33:49.377387  112842 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19649-105988/.minikube/profiles/addons-539053/client.crt: no such file or directory" logger="UnhandledError"
E0916 17:33:54.499286  112842 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19649-105988/.minikube/profiles/addons-539053/client.crt: no such file or directory" logger="UnhandledError"
E0916 17:34:04.741176  112842 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19649-105988/.minikube/profiles/addons-539053/client.crt: no such file or directory" logger="UnhandledError"
ha_test.go:101: (dbg) Done: out/minikube-linux-amd64 start -p ha-271535 --wait=true --memory=2200 --ha -v=7 --alsologtostderr --driver=docker  --container-runtime=docker: (1m38.548930999s)
ha_test.go:107: (dbg) Run:  out/minikube-linux-amd64 -p ha-271535 status -v=7 --alsologtostderr
--- PASS: TestMultiControlPlane/serial/StartCluster (99.20s)

                                                
                                    
TestMultiControlPlane/serial/DeployApp (6.3s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/DeployApp
ha_test.go:128: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-271535 -- apply -f ./testdata/ha/ha-pod-dns-test.yaml
ha_test.go:133: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-271535 -- rollout status deployment/busybox
ha_test.go:133: (dbg) Done: out/minikube-linux-amd64 kubectl -p ha-271535 -- rollout status deployment/busybox: (4.203075065s)
ha_test.go:140: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-271535 -- get pods -o jsonpath='{.items[*].status.podIP}'
ha_test.go:163: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-271535 -- get pods -o jsonpath='{.items[*].metadata.name}'
ha_test.go:171: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-271535 -- exec busybox-7dff88458-dfhd8 -- nslookup kubernetes.io
ha_test.go:171: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-271535 -- exec busybox-7dff88458-qhtsb -- nslookup kubernetes.io
ha_test.go:171: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-271535 -- exec busybox-7dff88458-rc2t8 -- nslookup kubernetes.io
ha_test.go:181: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-271535 -- exec busybox-7dff88458-dfhd8 -- nslookup kubernetes.default
ha_test.go:181: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-271535 -- exec busybox-7dff88458-qhtsb -- nslookup kubernetes.default
ha_test.go:181: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-271535 -- exec busybox-7dff88458-rc2t8 -- nslookup kubernetes.default
ha_test.go:189: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-271535 -- exec busybox-7dff88458-dfhd8 -- nslookup kubernetes.default.svc.cluster.local
ha_test.go:189: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-271535 -- exec busybox-7dff88458-qhtsb -- nslookup kubernetes.default.svc.cluster.local
ha_test.go:189: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-271535 -- exec busybox-7dff88458-rc2t8 -- nslookup kubernetes.default.svc.cluster.local
--- PASS: TestMultiControlPlane/serial/DeployApp (6.30s)

                                                
                                    
TestMultiControlPlane/serial/PingHostFromPods (1.03s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/PingHostFromPods
ha_test.go:199: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-271535 -- get pods -o jsonpath='{.items[*].metadata.name}'
ha_test.go:207: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-271535 -- exec busybox-7dff88458-dfhd8 -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
ha_test.go:218: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-271535 -- exec busybox-7dff88458-dfhd8 -- sh -c "ping -c 1 192.168.49.1"
ha_test.go:207: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-271535 -- exec busybox-7dff88458-qhtsb -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
ha_test.go:218: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-271535 -- exec busybox-7dff88458-qhtsb -- sh -c "ping -c 1 192.168.49.1"
ha_test.go:207: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-271535 -- exec busybox-7dff88458-rc2t8 -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
ha_test.go:218: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-271535 -- exec busybox-7dff88458-rc2t8 -- sh -c "ping -c 1 192.168.49.1"
--- PASS: TestMultiControlPlane/serial/PingHostFromPods (1.03s)
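The PingHostFromPods steps above resolve host.minikube.internal inside each busybox pod and extract the gateway address with the pipeline `awk 'NR==5' | cut -d' ' -f3`. A minimal local sketch of that extraction on canned busybox-style nslookup output (the server and addresses below are illustrative, not taken from this run):

```shell
# Canned busybox-style nslookup output; line 5 carries the resolved address.
nslookup_output='Server:    10.96.0.10
Address:   10.96.0.10:53

Name:      host.minikube.internal
Address 1: 192.168.49.1 host.minikube.internal'

# Same pipeline the test runs inside the pod: pick line 5, take field 3.
ip=$(printf '%s\n' "$nslookup_output" | awk 'NR==5' | cut -d' ' -f3)
echo "$ip"
```

The extracted address is what the follow-up `ping -c 1` targets.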

                                                
                                    
TestMultiControlPlane/serial/AddWorkerNode (23.21s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/AddWorkerNode
ha_test.go:228: (dbg) Run:  out/minikube-linux-amd64 node add -p ha-271535 -v=7 --alsologtostderr
E0916 17:34:25.223233  112842 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19649-105988/.minikube/profiles/addons-539053/client.crt: no such file or directory" logger="UnhandledError"
ha_test.go:228: (dbg) Done: out/minikube-linux-amd64 node add -p ha-271535 -v=7 --alsologtostderr: (22.433891482s)
ha_test.go:234: (dbg) Run:  out/minikube-linux-amd64 -p ha-271535 status -v=7 --alsologtostderr
--- PASS: TestMultiControlPlane/serial/AddWorkerNode (23.21s)

                                                
                                    
TestMultiControlPlane/serial/NodeLabels (0.06s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/NodeLabels
ha_test.go:255: (dbg) Run:  kubectl --context ha-271535 get nodes -o "jsonpath=[{range .items[*]}{.metadata.labels},{end}]"
--- PASS: TestMultiControlPlane/serial/NodeLabels (0.06s)

                                                
                                    
TestMultiControlPlane/serial/HAppyAfterClusterStart (0.6s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/HAppyAfterClusterStart
ha_test.go:281: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiControlPlane/serial/HAppyAfterClusterStart (0.60s)

                                                
                                    
TestMultiControlPlane/serial/CopyFile (15.06s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/CopyFile
ha_test.go:326: (dbg) Run:  out/minikube-linux-amd64 -p ha-271535 status --output json -v=7 --alsologtostderr
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-271535 cp testdata/cp-test.txt ha-271535:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-271535 ssh -n ha-271535 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-271535 cp ha-271535:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile952033790/001/cp-test_ha-271535.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-271535 ssh -n ha-271535 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-271535 cp ha-271535:/home/docker/cp-test.txt ha-271535-m02:/home/docker/cp-test_ha-271535_ha-271535-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-271535 ssh -n ha-271535 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-271535 ssh -n ha-271535-m02 "sudo cat /home/docker/cp-test_ha-271535_ha-271535-m02.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-271535 cp ha-271535:/home/docker/cp-test.txt ha-271535-m03:/home/docker/cp-test_ha-271535_ha-271535-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-271535 ssh -n ha-271535 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-271535 ssh -n ha-271535-m03 "sudo cat /home/docker/cp-test_ha-271535_ha-271535-m03.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-271535 cp ha-271535:/home/docker/cp-test.txt ha-271535-m04:/home/docker/cp-test_ha-271535_ha-271535-m04.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-271535 ssh -n ha-271535 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-271535 ssh -n ha-271535-m04 "sudo cat /home/docker/cp-test_ha-271535_ha-271535-m04.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-271535 cp testdata/cp-test.txt ha-271535-m02:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-271535 ssh -n ha-271535-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-271535 cp ha-271535-m02:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile952033790/001/cp-test_ha-271535-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-271535 ssh -n ha-271535-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-271535 cp ha-271535-m02:/home/docker/cp-test.txt ha-271535:/home/docker/cp-test_ha-271535-m02_ha-271535.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-271535 ssh -n ha-271535-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-271535 ssh -n ha-271535 "sudo cat /home/docker/cp-test_ha-271535-m02_ha-271535.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-271535 cp ha-271535-m02:/home/docker/cp-test.txt ha-271535-m03:/home/docker/cp-test_ha-271535-m02_ha-271535-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-271535 ssh -n ha-271535-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-271535 ssh -n ha-271535-m03 "sudo cat /home/docker/cp-test_ha-271535-m02_ha-271535-m03.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-271535 cp ha-271535-m02:/home/docker/cp-test.txt ha-271535-m04:/home/docker/cp-test_ha-271535-m02_ha-271535-m04.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-271535 ssh -n ha-271535-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-271535 ssh -n ha-271535-m04 "sudo cat /home/docker/cp-test_ha-271535-m02_ha-271535-m04.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-271535 cp testdata/cp-test.txt ha-271535-m03:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-271535 ssh -n ha-271535-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-271535 cp ha-271535-m03:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile952033790/001/cp-test_ha-271535-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-271535 ssh -n ha-271535-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-271535 cp ha-271535-m03:/home/docker/cp-test.txt ha-271535:/home/docker/cp-test_ha-271535-m03_ha-271535.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-271535 ssh -n ha-271535-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-271535 ssh -n ha-271535 "sudo cat /home/docker/cp-test_ha-271535-m03_ha-271535.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-271535 cp ha-271535-m03:/home/docker/cp-test.txt ha-271535-m02:/home/docker/cp-test_ha-271535-m03_ha-271535-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-271535 ssh -n ha-271535-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-271535 ssh -n ha-271535-m02 "sudo cat /home/docker/cp-test_ha-271535-m03_ha-271535-m02.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-271535 cp ha-271535-m03:/home/docker/cp-test.txt ha-271535-m04:/home/docker/cp-test_ha-271535-m03_ha-271535-m04.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-271535 ssh -n ha-271535-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-271535 ssh -n ha-271535-m04 "sudo cat /home/docker/cp-test_ha-271535-m03_ha-271535-m04.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-271535 cp testdata/cp-test.txt ha-271535-m04:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-271535 ssh -n ha-271535-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-271535 cp ha-271535-m04:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile952033790/001/cp-test_ha-271535-m04.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-271535 ssh -n ha-271535-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-271535 cp ha-271535-m04:/home/docker/cp-test.txt ha-271535:/home/docker/cp-test_ha-271535-m04_ha-271535.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-271535 ssh -n ha-271535-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-271535 ssh -n ha-271535 "sudo cat /home/docker/cp-test_ha-271535-m04_ha-271535.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-271535 cp ha-271535-m04:/home/docker/cp-test.txt ha-271535-m02:/home/docker/cp-test_ha-271535-m04_ha-271535-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-271535 ssh -n ha-271535-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-271535 ssh -n ha-271535-m02 "sudo cat /home/docker/cp-test_ha-271535-m04_ha-271535-m02.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-271535 cp ha-271535-m04:/home/docker/cp-test.txt ha-271535-m03:/home/docker/cp-test_ha-271535-m04_ha-271535-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-271535 ssh -n ha-271535-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-271535 ssh -n ha-271535-m03 "sudo cat /home/docker/cp-test_ha-271535-m04_ha-271535-m03.txt"
--- PASS: TestMultiControlPlane/serial/CopyFile (15.06s)
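Every CopyFile step above follows the same round-trip pattern: copy a known payload to a node with `minikube cp`, read it back via `minikube ssh ... sudo cat`, and compare. A minimal local sketch of that verification pattern, with plain `cp` and `cmp` standing in for the minikube commands (file names are illustrative):

```shell
# Write a known payload, "copy it to the node", read it back, and compare.
workdir=$(mktemp -d)
printf 'cp-test payload\n' > "$workdir/cp-test.txt"

# Stand-in for: minikube -p <profile> cp testdata/cp-test.txt <node>:/home/docker/cp-test.txt
cp "$workdir/cp-test.txt" "$workdir/cp-test_roundtrip.txt"

# Stand-in for: minikube -p <profile> ssh -n <node> "sudo cat /home/docker/cp-test.txt"
if cmp -s "$workdir/cp-test.txt" "$workdir/cp-test_roundtrip.txt"; then
  result=verified
else
  result=mismatch
fi
echo "$result"
rm -r "$workdir"
```

The real test repeats this for every source/destination node pair, which is why the log shows the same `cp`/`ssh` couplet many times.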

                                                
                                    
TestMultiControlPlane/serial/StopSecondaryNode (11.4s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/StopSecondaryNode
ha_test.go:363: (dbg) Run:  out/minikube-linux-amd64 -p ha-271535 node stop m02 -v=7 --alsologtostderr
ha_test.go:363: (dbg) Done: out/minikube-linux-amd64 -p ha-271535 node stop m02 -v=7 --alsologtostderr: (10.782134013s)
ha_test.go:369: (dbg) Run:  out/minikube-linux-amd64 -p ha-271535 status -v=7 --alsologtostderr
ha_test.go:369: (dbg) Non-zero exit: out/minikube-linux-amd64 -p ha-271535 status -v=7 --alsologtostderr: exit status 7 (620.596233ms)

                                                
                                                
-- stdout --
	ha-271535
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-271535-m02
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	ha-271535-m03
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-271535-m04
	type: Worker
	host: Running
	kubelet: Running
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0916 17:35:02.581492  197723 out.go:345] Setting OutFile to fd 1 ...
	I0916 17:35:02.581734  197723 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0916 17:35:02.581743  197723 out.go:358] Setting ErrFile to fd 2...
	I0916 17:35:02.581747  197723 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0916 17:35:02.581935  197723 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19649-105988/.minikube/bin
	I0916 17:35:02.582141  197723 out.go:352] Setting JSON to false
	I0916 17:35:02.582172  197723 mustload.go:65] Loading cluster: ha-271535
	I0916 17:35:02.582286  197723 notify.go:220] Checking for updates...
	I0916 17:35:02.582594  197723 config.go:182] Loaded profile config "ha-271535": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I0916 17:35:02.582610  197723 status.go:255] checking status of ha-271535 ...
	I0916 17:35:02.582999  197723 cli_runner.go:164] Run: docker container inspect ha-271535 --format={{.State.Status}}
	I0916 17:35:02.601020  197723 status.go:330] ha-271535 host status = "Running" (err=<nil>)
	I0916 17:35:02.601044  197723 host.go:66] Checking if "ha-271535" exists ...
	I0916 17:35:02.601299  197723 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-271535
	I0916 17:35:02.618485  197723 host.go:66] Checking if "ha-271535" exists ...
	I0916 17:35:02.618718  197723 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0916 17:35:02.618762  197723 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-271535
	I0916 17:35:02.633912  197723 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32783 SSHKeyPath:/home/jenkins/minikube-integration/19649-105988/.minikube/machines/ha-271535/id_rsa Username:docker}
	I0916 17:35:02.718969  197723 ssh_runner.go:195] Run: systemctl --version
	I0916 17:35:02.722852  197723 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0916 17:35:02.733100  197723 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0916 17:35:02.780858  197723 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:4 ContainersRunning:3 ContainersPaused:0 ContainersStopped:1 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:56 OomKillDisable:true NGoroutines:73 SystemTime:2024-09-16 17:35:02.77064926 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1068-gcp OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33647943680 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-5 Labels:[] ExperimentalBuild:false ServerVersion:27.2.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:7f7fdf5fed64eb6a7caf99b3e12efcf9d60e311c Expected:7f7fdf5fed64eb6a7caf99b3e12efcf9d60e311c} RuncCommit:{ID:v1.1.14-0-g2c9f560 Expected:v1.1.14-0-g2c9f560} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.16.2] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.29.2] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I0916 17:35:02.781565  197723 kubeconfig.go:125] found "ha-271535" server: "https://192.168.49.254:8443"
	I0916 17:35:02.781612  197723 api_server.go:166] Checking apiserver status ...
	I0916 17:35:02.781666  197723 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0916 17:35:02.792533  197723 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/2382/cgroup
	I0916 17:35:02.800947  197723 api_server.go:182] apiserver freezer: "5:freezer:/docker/1b291f6034170d6128d882901537235424a86b74e7957ea92588a06a4f02356c/kubepods/burstable/pode84ad78b59805d30f319f4e34159ddc1/af33208f9fc940dd89c52f931bdf3df183318f94a809a85b8f6f48409fa228ff"
	I0916 17:35:02.801002  197723 ssh_runner.go:195] Run: sudo cat /sys/fs/cgroup/freezer/docker/1b291f6034170d6128d882901537235424a86b74e7957ea92588a06a4f02356c/kubepods/burstable/pode84ad78b59805d30f319f4e34159ddc1/af33208f9fc940dd89c52f931bdf3df183318f94a809a85b8f6f48409fa228ff/freezer.state
	I0916 17:35:02.808539  197723 api_server.go:204] freezer state: "THAWED"
	I0916 17:35:02.808566  197723 api_server.go:253] Checking apiserver healthz at https://192.168.49.254:8443/healthz ...
	I0916 17:35:02.813531  197723 api_server.go:279] https://192.168.49.254:8443/healthz returned 200:
	ok
	I0916 17:35:02.813551  197723 status.go:422] ha-271535 apiserver status = Running (err=<nil>)
	I0916 17:35:02.813560  197723 status.go:257] ha-271535 status: &{Name:ha-271535 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0916 17:35:02.813579  197723 status.go:255] checking status of ha-271535-m02 ...
	I0916 17:35:02.813822  197723 cli_runner.go:164] Run: docker container inspect ha-271535-m02 --format={{.State.Status}}
	I0916 17:35:02.830256  197723 status.go:330] ha-271535-m02 host status = "Stopped" (err=<nil>)
	I0916 17:35:02.830286  197723 status.go:343] host is not running, skipping remaining checks
	I0916 17:35:02.830292  197723 status.go:257] ha-271535-m02 status: &{Name:ha-271535-m02 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0916 17:35:02.830316  197723 status.go:255] checking status of ha-271535-m03 ...
	I0916 17:35:02.830591  197723 cli_runner.go:164] Run: docker container inspect ha-271535-m03 --format={{.State.Status}}
	I0916 17:35:02.846823  197723 status.go:330] ha-271535-m03 host status = "Running" (err=<nil>)
	I0916 17:35:02.846847  197723 host.go:66] Checking if "ha-271535-m03" exists ...
	I0916 17:35:02.847141  197723 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-271535-m03
	I0916 17:35:02.863744  197723 host.go:66] Checking if "ha-271535-m03" exists ...
	I0916 17:35:02.863988  197723 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0916 17:35:02.864026  197723 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-271535-m03
	I0916 17:35:02.879400  197723 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32793 SSHKeyPath:/home/jenkins/minikube-integration/19649-105988/.minikube/machines/ha-271535-m03/id_rsa Username:docker}
	I0916 17:35:02.966857  197723 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0916 17:35:02.977300  197723 kubeconfig.go:125] found "ha-271535" server: "https://192.168.49.254:8443"
	I0916 17:35:02.977328  197723 api_server.go:166] Checking apiserver status ...
	I0916 17:35:02.977365  197723 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0916 17:35:02.987138  197723 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/2264/cgroup
	I0916 17:35:02.995166  197723 api_server.go:182] apiserver freezer: "5:freezer:/docker/5d13d5e5e2d6a0879a22fc100beb0279371aebf64c85262481e4a63c62eef1d6/kubepods/burstable/pod64a8a025b0dea85500116033dbe3d47c/ae309e9423482dd7d4b7cd6ca8d09cc2e04faa58c218f9e94b59c60299e4692f"
	I0916 17:35:02.995236  197723 ssh_runner.go:195] Run: sudo cat /sys/fs/cgroup/freezer/docker/5d13d5e5e2d6a0879a22fc100beb0279371aebf64c85262481e4a63c62eef1d6/kubepods/burstable/pod64a8a025b0dea85500116033dbe3d47c/ae309e9423482dd7d4b7cd6ca8d09cc2e04faa58c218f9e94b59c60299e4692f/freezer.state
	I0916 17:35:03.002515  197723 api_server.go:204] freezer state: "THAWED"
	I0916 17:35:03.002537  197723 api_server.go:253] Checking apiserver healthz at https://192.168.49.254:8443/healthz ...
	I0916 17:35:03.006183  197723 api_server.go:279] https://192.168.49.254:8443/healthz returned 200:
	ok
	I0916 17:35:03.006205  197723 status.go:422] ha-271535-m03 apiserver status = Running (err=<nil>)
	I0916 17:35:03.006214  197723 status.go:257] ha-271535-m03 status: &{Name:ha-271535-m03 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0916 17:35:03.006229  197723 status.go:255] checking status of ha-271535-m04 ...
	I0916 17:35:03.006446  197723 cli_runner.go:164] Run: docker container inspect ha-271535-m04 --format={{.State.Status}}
	I0916 17:35:03.022838  197723 status.go:330] ha-271535-m04 host status = "Running" (err=<nil>)
	I0916 17:35:03.022885  197723 host.go:66] Checking if "ha-271535-m04" exists ...
	I0916 17:35:03.023255  197723 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-271535-m04
	I0916 17:35:03.039571  197723 host.go:66] Checking if "ha-271535-m04" exists ...
	I0916 17:35:03.039887  197723 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0916 17:35:03.039933  197723 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-271535-m04
	I0916 17:35:03.056716  197723 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32798 SSHKeyPath:/home/jenkins/minikube-integration/19649-105988/.minikube/machines/ha-271535-m04/id_rsa Username:docker}
	I0916 17:35:03.146837  197723 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0916 17:35:03.157141  197723 status.go:257] ha-271535-m04 status: &{Name:ha-271535-m04 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
--- PASS: TestMultiControlPlane/serial/StopSecondaryNode (11.40s)

TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop (0.45s)

=== RUN   TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop
ha_test.go:390: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop (0.45s)

TestMultiControlPlane/serial/RestartSecondaryNode (68.28s)

=== RUN   TestMultiControlPlane/serial/RestartSecondaryNode
ha_test.go:420: (dbg) Run:  out/minikube-linux-amd64 -p ha-271535 node start m02 -v=7 --alsologtostderr
E0916 17:35:06.185301  112842 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19649-105988/.minikube/profiles/addons-539053/client.crt: no such file or directory" logger="UnhandledError"
ha_test.go:420: (dbg) Done: out/minikube-linux-amd64 -p ha-271535 node start m02 -v=7 --alsologtostderr: (1m7.430700584s)
ha_test.go:428: (dbg) Run:  out/minikube-linux-amd64 -p ha-271535 status -v=7 --alsologtostderr
ha_test.go:448: (dbg) Run:  kubectl get nodes
--- PASS: TestMultiControlPlane/serial/RestartSecondaryNode (68.28s)

TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart (0.61s)

=== RUN   TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart
ha_test.go:281: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart (0.61s)

TestMultiControlPlane/serial/RestartClusterKeepsNodes (199.85s)

=== RUN   TestMultiControlPlane/serial/RestartClusterKeepsNodes
ha_test.go:456: (dbg) Run:  out/minikube-linux-amd64 node list -p ha-271535 -v=7 --alsologtostderr
ha_test.go:462: (dbg) Run:  out/minikube-linux-amd64 stop -p ha-271535 -v=7 --alsologtostderr
E0916 17:36:28.108477  112842 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19649-105988/.minikube/profiles/addons-539053/client.crt: no such file or directory" logger="UnhandledError"
E0916 17:36:33.734287  112842 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19649-105988/.minikube/profiles/functional-623374/client.crt: no such file or directory" logger="UnhandledError"
E0916 17:36:33.740641  112842 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19649-105988/.minikube/profiles/functional-623374/client.crt: no such file or directory" logger="UnhandledError"
E0916 17:36:33.752056  112842 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19649-105988/.minikube/profiles/functional-623374/client.crt: no such file or directory" logger="UnhandledError"
E0916 17:36:33.773812  112842 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19649-105988/.minikube/profiles/functional-623374/client.crt: no such file or directory" logger="UnhandledError"
E0916 17:36:33.815238  112842 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19649-105988/.minikube/profiles/functional-623374/client.crt: no such file or directory" logger="UnhandledError"
E0916 17:36:33.896656  112842 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19649-105988/.minikube/profiles/functional-623374/client.crt: no such file or directory" logger="UnhandledError"
E0916 17:36:34.058241  112842 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19649-105988/.minikube/profiles/functional-623374/client.crt: no such file or directory" logger="UnhandledError"
E0916 17:36:34.380277  112842 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19649-105988/.minikube/profiles/functional-623374/client.crt: no such file or directory" logger="UnhandledError"
E0916 17:36:35.022481  112842 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19649-105988/.minikube/profiles/functional-623374/client.crt: no such file or directory" logger="UnhandledError"
E0916 17:36:36.304758  112842 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19649-105988/.minikube/profiles/functional-623374/client.crt: no such file or directory" logger="UnhandledError"
E0916 17:36:38.866235  112842 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19649-105988/.minikube/profiles/functional-623374/client.crt: no such file or directory" logger="UnhandledError"
E0916 17:36:43.988033  112842 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19649-105988/.minikube/profiles/functional-623374/client.crt: no such file or directory" logger="UnhandledError"
ha_test.go:462: (dbg) Done: out/minikube-linux-amd64 stop -p ha-271535 -v=7 --alsologtostderr: (33.762171662s)
ha_test.go:467: (dbg) Run:  out/minikube-linux-amd64 start -p ha-271535 --wait=true -v=7 --alsologtostderr
E0916 17:36:54.229984  112842 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19649-105988/.minikube/profiles/functional-623374/client.crt: no such file or directory" logger="UnhandledError"
E0916 17:37:14.712062  112842 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19649-105988/.minikube/profiles/functional-623374/client.crt: no such file or directory" logger="UnhandledError"
E0916 17:37:55.674242  112842 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19649-105988/.minikube/profiles/functional-623374/client.crt: no such file or directory" logger="UnhandledError"
E0916 17:38:44.245432  112842 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19649-105988/.minikube/profiles/addons-539053/client.crt: no such file or directory" logger="UnhandledError"
E0916 17:39:11.950302  112842 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19649-105988/.minikube/profiles/addons-539053/client.crt: no such file or directory" logger="UnhandledError"
E0916 17:39:17.596339  112842 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19649-105988/.minikube/profiles/functional-623374/client.crt: no such file or directory" logger="UnhandledError"
ha_test.go:467: (dbg) Done: out/minikube-linux-amd64 start -p ha-271535 --wait=true -v=7 --alsologtostderr: (2m45.990064895s)
ha_test.go:472: (dbg) Run:  out/minikube-linux-amd64 node list -p ha-271535
--- PASS: TestMultiControlPlane/serial/RestartClusterKeepsNodes (199.85s)

TestMultiControlPlane/serial/DeleteSecondaryNode (9.17s)

=== RUN   TestMultiControlPlane/serial/DeleteSecondaryNode
ha_test.go:487: (dbg) Run:  out/minikube-linux-amd64 -p ha-271535 node delete m03 -v=7 --alsologtostderr
ha_test.go:487: (dbg) Done: out/minikube-linux-amd64 -p ha-271535 node delete m03 -v=7 --alsologtostderr: (8.452381204s)
ha_test.go:493: (dbg) Run:  out/minikube-linux-amd64 -p ha-271535 status -v=7 --alsologtostderr
ha_test.go:511: (dbg) Run:  kubectl get nodes
ha_test.go:519: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiControlPlane/serial/DeleteSecondaryNode (9.17s)

TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete (0.44s)

=== RUN   TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete
ha_test.go:390: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete (0.44s)

TestMultiControlPlane/serial/StopCluster (32.32s)

=== RUN   TestMultiControlPlane/serial/StopCluster
ha_test.go:531: (dbg) Run:  out/minikube-linux-amd64 -p ha-271535 stop -v=7 --alsologtostderr
ha_test.go:531: (dbg) Done: out/minikube-linux-amd64 -p ha-271535 stop -v=7 --alsologtostderr: (32.223098162s)
ha_test.go:537: (dbg) Run:  out/minikube-linux-amd64 -p ha-271535 status -v=7 --alsologtostderr
ha_test.go:537: (dbg) Non-zero exit: out/minikube-linux-amd64 -p ha-271535 status -v=7 --alsologtostderr: exit status 7 (98.23018ms)

-- stdout --
	ha-271535
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	ha-271535-m02
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	ha-271535-m04
	type: Worker
	host: Stopped
	kubelet: Stopped
	

-- /stdout --
** stderr ** 
	I0916 17:40:14.215828  227344 out.go:345] Setting OutFile to fd 1 ...
	I0916 17:40:14.216202  227344 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0916 17:40:14.216214  227344 out.go:358] Setting ErrFile to fd 2...
	I0916 17:40:14.216220  227344 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0916 17:40:14.216492  227344 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19649-105988/.minikube/bin
	I0916 17:40:14.216699  227344 out.go:352] Setting JSON to false
	I0916 17:40:14.216735  227344 mustload.go:65] Loading cluster: ha-271535
	I0916 17:40:14.216841  227344 notify.go:220] Checking for updates...
	I0916 17:40:14.217297  227344 config.go:182] Loaded profile config "ha-271535": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I0916 17:40:14.217397  227344 status.go:255] checking status of ha-271535 ...
	I0916 17:40:14.217987  227344 cli_runner.go:164] Run: docker container inspect ha-271535 --format={{.State.Status}}
	I0916 17:40:14.237438  227344 status.go:330] ha-271535 host status = "Stopped" (err=<nil>)
	I0916 17:40:14.237457  227344 status.go:343] host is not running, skipping remaining checks
	I0916 17:40:14.237463  227344 status.go:257] ha-271535 status: &{Name:ha-271535 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0916 17:40:14.237497  227344 status.go:255] checking status of ha-271535-m02 ...
	I0916 17:40:14.237750  227344 cli_runner.go:164] Run: docker container inspect ha-271535-m02 --format={{.State.Status}}
	I0916 17:40:14.253870  227344 status.go:330] ha-271535-m02 host status = "Stopped" (err=<nil>)
	I0916 17:40:14.253889  227344 status.go:343] host is not running, skipping remaining checks
	I0916 17:40:14.253895  227344 status.go:257] ha-271535-m02 status: &{Name:ha-271535-m02 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0916 17:40:14.253911  227344 status.go:255] checking status of ha-271535-m04 ...
	I0916 17:40:14.254209  227344 cli_runner.go:164] Run: docker container inspect ha-271535-m04 --format={{.State.Status}}
	I0916 17:40:14.269785  227344 status.go:330] ha-271535-m04 host status = "Stopped" (err=<nil>)
	I0916 17:40:14.269803  227344 status.go:343] host is not running, skipping remaining checks
	I0916 17:40:14.269809  227344 status.go:257] ha-271535-m04 status: &{Name:ha-271535-m04 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
--- PASS: TestMultiControlPlane/serial/StopCluster (32.32s)

TestMultiControlPlane/serial/RestartCluster (80.96s)

=== RUN   TestMultiControlPlane/serial/RestartCluster
ha_test.go:560: (dbg) Run:  out/minikube-linux-amd64 start -p ha-271535 --wait=true -v=7 --alsologtostderr --driver=docker  --container-runtime=docker
E0916 17:41:33.732767  112842 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19649-105988/.minikube/profiles/functional-623374/client.crt: no such file or directory" logger="UnhandledError"
ha_test.go:560: (dbg) Done: out/minikube-linux-amd64 start -p ha-271535 --wait=true -v=7 --alsologtostderr --driver=docker  --container-runtime=docker: (1m20.229650796s)
ha_test.go:566: (dbg) Run:  out/minikube-linux-amd64 -p ha-271535 status -v=7 --alsologtostderr
ha_test.go:584: (dbg) Run:  kubectl get nodes
ha_test.go:592: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiControlPlane/serial/RestartCluster (80.96s)

TestMultiControlPlane/serial/DegradedAfterClusterRestart (0.45s)

=== RUN   TestMultiControlPlane/serial/DegradedAfterClusterRestart
ha_test.go:390: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiControlPlane/serial/DegradedAfterClusterRestart (0.45s)

TestMultiControlPlane/serial/AddSecondaryNode (35.58s)

=== RUN   TestMultiControlPlane/serial/AddSecondaryNode
ha_test.go:605: (dbg) Run:  out/minikube-linux-amd64 node add -p ha-271535 --control-plane -v=7 --alsologtostderr
E0916 17:42:01.438172  112842 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19649-105988/.minikube/profiles/functional-623374/client.crt: no such file or directory" logger="UnhandledError"
ha_test.go:605: (dbg) Done: out/minikube-linux-amd64 node add -p ha-271535 --control-plane -v=7 --alsologtostderr: (34.771282239s)
ha_test.go:611: (dbg) Run:  out/minikube-linux-amd64 -p ha-271535 status -v=7 --alsologtostderr
--- PASS: TestMultiControlPlane/serial/AddSecondaryNode (35.58s)

TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd (0.62s)

=== RUN   TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd
ha_test.go:281: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd (0.62s)

TestImageBuild/serial/Setup (21.43s)

=== RUN   TestImageBuild/serial/Setup
image_test.go:69: (dbg) Run:  out/minikube-linux-amd64 start -p image-235285 --driver=docker  --container-runtime=docker
image_test.go:69: (dbg) Done: out/minikube-linux-amd64 start -p image-235285 --driver=docker  --container-runtime=docker: (21.427263803s)
--- PASS: TestImageBuild/serial/Setup (21.43s)

TestImageBuild/serial/NormalBuild (3.75s)

=== RUN   TestImageBuild/serial/NormalBuild
image_test.go:78: (dbg) Run:  out/minikube-linux-amd64 image build -t aaa:latest ./testdata/image-build/test-normal -p image-235285
image_test.go:78: (dbg) Done: out/minikube-linux-amd64 image build -t aaa:latest ./testdata/image-build/test-normal -p image-235285: (3.752418646s)
--- PASS: TestImageBuild/serial/NormalBuild (3.75s)

TestImageBuild/serial/BuildWithBuildArg (1.16s)

=== RUN   TestImageBuild/serial/BuildWithBuildArg
image_test.go:99: (dbg) Run:  out/minikube-linux-amd64 image build -t aaa:latest --build-opt=build-arg=ENV_A=test_env_str --build-opt=no-cache ./testdata/image-build/test-arg -p image-235285
image_test.go:99: (dbg) Done: out/minikube-linux-amd64 image build -t aaa:latest --build-opt=build-arg=ENV_A=test_env_str --build-opt=no-cache ./testdata/image-build/test-arg -p image-235285: (1.161249164s)
--- PASS: TestImageBuild/serial/BuildWithBuildArg (1.16s)

TestImageBuild/serial/BuildWithDockerIgnore (0.80s)

=== RUN   TestImageBuild/serial/BuildWithDockerIgnore
image_test.go:133: (dbg) Run:  out/minikube-linux-amd64 image build -t aaa:latest ./testdata/image-build/test-normal --build-opt=no-cache -p image-235285
--- PASS: TestImageBuild/serial/BuildWithDockerIgnore (0.80s)

TestImageBuild/serial/BuildWithSpecifiedDockerfile (0.79s)

=== RUN   TestImageBuild/serial/BuildWithSpecifiedDockerfile
image_test.go:88: (dbg) Run:  out/minikube-linux-amd64 image build -t aaa:latest -f inner/Dockerfile ./testdata/image-build/test-f -p image-235285
--- PASS: TestImageBuild/serial/BuildWithSpecifiedDockerfile (0.79s)

TestJSONOutput/start/Command (40.84s)

=== RUN   TestJSONOutput/start/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-amd64 start -p json-output-875025 --output=json --user=testUser --memory=2200 --wait=true --driver=docker  --container-runtime=docker
json_output_test.go:63: (dbg) Done: out/minikube-linux-amd64 start -p json-output-875025 --output=json --user=testUser --memory=2200 --wait=true --driver=docker  --container-runtime=docker: (40.844386046s)
--- PASS: TestJSONOutput/start/Command (40.84s)

TestJSONOutput/start/Audit (0s)

=== RUN   TestJSONOutput/start/Audit
--- PASS: TestJSONOutput/start/Audit (0.00s)

TestJSONOutput/start/parallel/DistinctCurrentSteps (0s)

=== RUN   TestJSONOutput/start/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/start/parallel/DistinctCurrentSteps

=== CONT  TestJSONOutput/start/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/start/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/start/parallel/IncreasingCurrentSteps (0s)

=== RUN   TestJSONOutput/start/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/start/parallel/IncreasingCurrentSteps

=== CONT  TestJSONOutput/start/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/start/parallel/IncreasingCurrentSteps (0.00s)

TestJSONOutput/pause/Command (0.47s)

=== RUN   TestJSONOutput/pause/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-amd64 pause -p json-output-875025 --output=json --user=testUser
--- PASS: TestJSONOutput/pause/Command (0.47s)

TestJSONOutput/pause/Audit (0s)

=== RUN   TestJSONOutput/pause/Audit
--- PASS: TestJSONOutput/pause/Audit (0.00s)

TestJSONOutput/pause/parallel/DistinctCurrentSteps (0s)

=== RUN   TestJSONOutput/pause/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/pause/parallel/DistinctCurrentSteps

=== CONT  TestJSONOutput/pause/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/pause/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/pause/parallel/IncreasingCurrentSteps (0s)

=== RUN   TestJSONOutput/pause/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/pause/parallel/IncreasingCurrentSteps

=== CONT  TestJSONOutput/pause/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/pause/parallel/IncreasingCurrentSteps (0.00s)

TestJSONOutput/unpause/Command (0.43s)

=== RUN   TestJSONOutput/unpause/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-amd64 unpause -p json-output-875025 --output=json --user=testUser
--- PASS: TestJSONOutput/unpause/Command (0.43s)

TestJSONOutput/unpause/Audit (0s)

=== RUN   TestJSONOutput/unpause/Audit
--- PASS: TestJSONOutput/unpause/Audit (0.00s)

TestJSONOutput/unpause/parallel/DistinctCurrentSteps (0s)

=== RUN   TestJSONOutput/unpause/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/unpause/parallel/DistinctCurrentSteps

=== CONT  TestJSONOutput/unpause/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/unpause/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/unpause/parallel/IncreasingCurrentSteps (0s)

=== RUN   TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/unpause/parallel/IncreasingCurrentSteps

=== CONT  TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/unpause/parallel/IncreasingCurrentSteps (0.00s)

TestJSONOutput/stop/Command (10.82s)

=== RUN   TestJSONOutput/stop/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-amd64 stop -p json-output-875025 --output=json --user=testUser
json_output_test.go:63: (dbg) Done: out/minikube-linux-amd64 stop -p json-output-875025 --output=json --user=testUser: (10.820329289s)
--- PASS: TestJSONOutput/stop/Command (10.82s)

TestJSONOutput/stop/Audit (0s)

=== RUN   TestJSONOutput/stop/Audit
--- PASS: TestJSONOutput/stop/Audit (0.00s)

TestJSONOutput/stop/parallel/DistinctCurrentSteps (0s)

=== RUN   TestJSONOutput/stop/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/stop/parallel/DistinctCurrentSteps

=== CONT  TestJSONOutput/stop/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/stop/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/stop/parallel/IncreasingCurrentSteps (0s)

=== RUN   TestJSONOutput/stop/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/stop/parallel/IncreasingCurrentSteps

=== CONT  TestJSONOutput/stop/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/stop/parallel/IncreasingCurrentSteps (0.00s)

TestErrorJSONOutput (0.19s)

=== RUN   TestErrorJSONOutput
json_output_test.go:160: (dbg) Run:  out/minikube-linux-amd64 start -p json-output-error-717984 --memory=2200 --output=json --wait=true --driver=fail
json_output_test.go:160: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p json-output-error-717984 --memory=2200 --output=json --wait=true --driver=fail: exit status 56 (61.749278ms)

-- stdout --
	{"specversion":"1.0","id":"2e387e36-9be0-4511-b257-c73ba7475300","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"0","message":"[json-output-error-717984] minikube v1.34.0 on Ubuntu 20.04 (kvm/amd64)","name":"Initial Minikube Setup","totalsteps":"19"}}
	{"specversion":"1.0","id":"122673bd-9fcc-4ba3-b10a-a7e82b9dff8f","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_LOCATION=19649"}}
	{"specversion":"1.0","id":"d0fa6c61-e294-42f4-bca1-c0d3fcc952cc","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true"}}
	{"specversion":"1.0","id":"b0e8b4d9-1675-4239-aa0a-bf44f71619cd","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"KUBECONFIG=/home/jenkins/minikube-integration/19649-105988/kubeconfig"}}
	{"specversion":"1.0","id":"509657d0-1dc7-48ba-9d0d-490954c090e4","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_HOME=/home/jenkins/minikube-integration/19649-105988/.minikube"}}
	{"specversion":"1.0","id":"ccc08c96-dc36-4049-9608-b7f2ea7bc7bb","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_BIN=out/minikube-linux-amd64"}}
	{"specversion":"1.0","id":"36d0fbbd-dbe5-478f-bfa9-abe5f3335f8e","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_FORCE_SYSTEMD="}}
	{"specversion":"1.0","id":"d8e5457f-ad4a-437a-9fa6-d2243881f43c","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"advice":"","exitcode":"56","issues":"","message":"The driver 'fail' is not supported on linux/amd64","name":"DRV_UNSUPPORTED_OS","url":""}}

-- /stdout --
helpers_test.go:175: Cleaning up "json-output-error-717984" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p json-output-error-717984
--- PASS: TestErrorJSONOutput (0.19s)

TestKicCustomNetwork/create_custom_network (22.94s)

=== RUN   TestKicCustomNetwork/create_custom_network
kic_custom_network_test.go:57: (dbg) Run:  out/minikube-linux-amd64 start -p docker-network-903098 --network=
E0916 17:43:44.244856  112842 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19649-105988/.minikube/profiles/addons-539053/client.crt: no such file or directory" logger="UnhandledError"
kic_custom_network_test.go:57: (dbg) Done: out/minikube-linux-amd64 start -p docker-network-903098 --network=: (20.982035312s)
kic_custom_network_test.go:150: (dbg) Run:  docker network ls --format {{.Name}}
helpers_test.go:175: Cleaning up "docker-network-903098" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p docker-network-903098
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p docker-network-903098: (1.942828417s)
--- PASS: TestKicCustomNetwork/create_custom_network (22.94s)

TestKicCustomNetwork/use_default_bridge_network (24.86s)

=== RUN   TestKicCustomNetwork/use_default_bridge_network
kic_custom_network_test.go:57: (dbg) Run:  out/minikube-linux-amd64 start -p docker-network-315236 --network=bridge
kic_custom_network_test.go:57: (dbg) Done: out/minikube-linux-amd64 start -p docker-network-315236 --network=bridge: (23.031141693s)
kic_custom_network_test.go:150: (dbg) Run:  docker network ls --format {{.Name}}
helpers_test.go:175: Cleaning up "docker-network-315236" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p docker-network-315236
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p docker-network-315236: (1.805845064s)
--- PASS: TestKicCustomNetwork/use_default_bridge_network (24.86s)

TestKicExistingNetwork (22.66s)

=== RUN   TestKicExistingNetwork
kic_custom_network_test.go:150: (dbg) Run:  docker network ls --format {{.Name}}
kic_custom_network_test.go:93: (dbg) Run:  out/minikube-linux-amd64 start -p existing-network-900459 --network=existing-network
kic_custom_network_test.go:93: (dbg) Done: out/minikube-linux-amd64 start -p existing-network-900459 --network=existing-network: (20.635617711s)
helpers_test.go:175: Cleaning up "existing-network-900459" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p existing-network-900459
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p existing-network-900459: (1.8875483s)
--- PASS: TestKicExistingNetwork (22.66s)

TestKicCustomSubnet (23.56s)

=== RUN   TestKicCustomSubnet
kic_custom_network_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p custom-subnet-906004 --subnet=192.168.60.0/24
kic_custom_network_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p custom-subnet-906004 --subnet=192.168.60.0/24: (21.535282861s)
kic_custom_network_test.go:161: (dbg) Run:  docker network inspect custom-subnet-906004 --format "{{(index .IPAM.Config 0).Subnet}}"
helpers_test.go:175: Cleaning up "custom-subnet-906004" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p custom-subnet-906004
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p custom-subnet-906004: (2.012043071s)
--- PASS: TestKicCustomSubnet (23.56s)

TestKicStaticIP (25.49s)

=== RUN   TestKicStaticIP
kic_custom_network_test.go:132: (dbg) Run:  out/minikube-linux-amd64 start -p static-ip-907250 --static-ip=192.168.200.200
kic_custom_network_test.go:132: (dbg) Done: out/minikube-linux-amd64 start -p static-ip-907250 --static-ip=192.168.200.200: (23.355817026s)
kic_custom_network_test.go:138: (dbg) Run:  out/minikube-linux-amd64 -p static-ip-907250 ip
helpers_test.go:175: Cleaning up "static-ip-907250" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p static-ip-907250
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p static-ip-907250: (2.014905993s)
--- PASS: TestKicStaticIP (25.49s)

TestMainNoArgs (0.04s)

=== RUN   TestMainNoArgs
main_test.go:68: (dbg) Run:  out/minikube-linux-amd64
--- PASS: TestMainNoArgs (0.04s)

TestMinikubeProfile (48.14s)

=== RUN   TestMinikubeProfile
minikube_profile_test.go:44: (dbg) Run:  out/minikube-linux-amd64 start -p first-517824 --driver=docker  --container-runtime=docker
minikube_profile_test.go:44: (dbg) Done: out/minikube-linux-amd64 start -p first-517824 --driver=docker  --container-runtime=docker: (21.247079449s)
minikube_profile_test.go:44: (dbg) Run:  out/minikube-linux-amd64 start -p second-530479 --driver=docker  --container-runtime=docker
minikube_profile_test.go:44: (dbg) Done: out/minikube-linux-amd64 start -p second-530479 --driver=docker  --container-runtime=docker: (21.886614471s)
minikube_profile_test.go:51: (dbg) Run:  out/minikube-linux-amd64 profile first-517824
minikube_profile_test.go:55: (dbg) Run:  out/minikube-linux-amd64 profile list -ojson
minikube_profile_test.go:51: (dbg) Run:  out/minikube-linux-amd64 profile second-530479
minikube_profile_test.go:55: (dbg) Run:  out/minikube-linux-amd64 profile list -ojson
helpers_test.go:175: Cleaning up "second-530479" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p second-530479
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p second-530479: (2.012069711s)
helpers_test.go:175: Cleaning up "first-517824" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p first-517824
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p first-517824: (1.997144331s)
--- PASS: TestMinikubeProfile (48.14s)

TestMountStart/serial/StartWithMountFirst (7.52s)

=== RUN   TestMountStart/serial/StartWithMountFirst
mount_start_test.go:98: (dbg) Run:  out/minikube-linux-amd64 start -p mount-start-1-527869 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=docker  --container-runtime=docker
E0916 17:46:33.734253  112842 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19649-105988/.minikube/profiles/functional-623374/client.crt: no such file or directory" logger="UnhandledError"
mount_start_test.go:98: (dbg) Done: out/minikube-linux-amd64 start -p mount-start-1-527869 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=docker  --container-runtime=docker: (6.523749474s)
--- PASS: TestMountStart/serial/StartWithMountFirst (7.52s)

TestMountStart/serial/VerifyMountFirst (0.23s)

=== RUN   TestMountStart/serial/VerifyMountFirst
mount_start_test.go:114: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-1-527869 ssh -- ls /minikube-host
--- PASS: TestMountStart/serial/VerifyMountFirst (0.23s)

TestMountStart/serial/StartWithMountSecond (9.97s)

=== RUN   TestMountStart/serial/StartWithMountSecond
mount_start_test.go:98: (dbg) Run:  out/minikube-linux-amd64 start -p mount-start-2-541108 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46465 --mount-uid 0 --no-kubernetes --driver=docker  --container-runtime=docker
mount_start_test.go:98: (dbg) Done: out/minikube-linux-amd64 start -p mount-start-2-541108 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46465 --mount-uid 0 --no-kubernetes --driver=docker  --container-runtime=docker: (8.973352081s)
--- PASS: TestMountStart/serial/StartWithMountSecond (9.97s)

TestMountStart/serial/VerifyMountSecond (0.24s)

=== RUN   TestMountStart/serial/VerifyMountSecond
mount_start_test.go:114: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-541108 ssh -- ls /minikube-host
--- PASS: TestMountStart/serial/VerifyMountSecond (0.24s)

TestMountStart/serial/DeleteFirst (1.44s)

=== RUN   TestMountStart/serial/DeleteFirst
pause_test.go:132: (dbg) Run:  out/minikube-linux-amd64 delete -p mount-start-1-527869 --alsologtostderr -v=5
pause_test.go:132: (dbg) Done: out/minikube-linux-amd64 delete -p mount-start-1-527869 --alsologtostderr -v=5: (1.441644529s)
--- PASS: TestMountStart/serial/DeleteFirst (1.44s)

TestMountStart/serial/VerifyMountPostDelete (0.23s)

=== RUN   TestMountStart/serial/VerifyMountPostDelete
mount_start_test.go:114: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-541108 ssh -- ls /minikube-host
--- PASS: TestMountStart/serial/VerifyMountPostDelete (0.23s)

TestMountStart/serial/Stop (1.17s)

=== RUN   TestMountStart/serial/Stop
mount_start_test.go:155: (dbg) Run:  out/minikube-linux-amd64 stop -p mount-start-2-541108
mount_start_test.go:155: (dbg) Done: out/minikube-linux-amd64 stop -p mount-start-2-541108: (1.165065612s)
--- PASS: TestMountStart/serial/Stop (1.17s)

TestMountStart/serial/RestartStopped (8.63s)

=== RUN   TestMountStart/serial/RestartStopped
mount_start_test.go:166: (dbg) Run:  out/minikube-linux-amd64 start -p mount-start-2-541108
mount_start_test.go:166: (dbg) Done: out/minikube-linux-amd64 start -p mount-start-2-541108: (7.628366372s)
--- PASS: TestMountStart/serial/RestartStopped (8.63s)

TestMountStart/serial/VerifyMountPostStop (0.23s)

=== RUN   TestMountStart/serial/VerifyMountPostStop
mount_start_test.go:114: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-541108 ssh -- ls /minikube-host
--- PASS: TestMountStart/serial/VerifyMountPostStop (0.23s)

TestMultiNode/serial/FreshStart2Nodes (73.63s)

=== RUN   TestMultiNode/serial/FreshStart2Nodes
multinode_test.go:96: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-664989 --wait=true --memory=2200 --nodes=2 -v=8 --alsologtostderr --driver=docker  --container-runtime=docker
multinode_test.go:96: (dbg) Done: out/minikube-linux-amd64 start -p multinode-664989 --wait=true --memory=2200 --nodes=2 -v=8 --alsologtostderr --driver=docker  --container-runtime=docker: (1m13.209461572s)
multinode_test.go:102: (dbg) Run:  out/minikube-linux-amd64 -p multinode-664989 status --alsologtostderr
--- PASS: TestMultiNode/serial/FreshStart2Nodes (73.63s)

TestMultiNode/serial/DeployApp2Nodes (53.28s)

=== RUN   TestMultiNode/serial/DeployApp2Nodes
multinode_test.go:493: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-664989 -- apply -f ./testdata/multinodes/multinode-pod-dns-test.yaml
multinode_test.go:498: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-664989 -- rollout status deployment/busybox
multinode_test.go:498: (dbg) Done: out/minikube-linux-amd64 kubectl -p multinode-664989 -- rollout status deployment/busybox: (4.225392207s)
multinode_test.go:505: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-664989 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:514: expected 2 Pod IPs but got 1 (may be temporary), output: "\n-- stdout --\n\t'10.244.0.3'\n\n-- /stdout --"
multinode_test.go:505: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-664989 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:514: expected 2 Pod IPs but got 1 (may be temporary), output: "\n-- stdout --\n\t'10.244.0.3'\n\n-- /stdout --"
multinode_test.go:505: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-664989 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:514: expected 2 Pod IPs but got 1 (may be temporary), output: "\n-- stdout --\n\t'10.244.0.3'\n\n-- /stdout --"
multinode_test.go:505: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-664989 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:514: expected 2 Pod IPs but got 1 (may be temporary), output: "\n-- stdout --\n\t'10.244.0.3'\n\n-- /stdout --"
multinode_test.go:505: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-664989 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:514: expected 2 Pod IPs but got 1 (may be temporary), output: "\n-- stdout --\n\t'10.244.0.3'\n\n-- /stdout --"
multinode_test.go:505: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-664989 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:514: expected 2 Pod IPs but got 1 (may be temporary), output: "\n-- stdout --\n\t'10.244.0.3'\n\n-- /stdout --"
multinode_test.go:505: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-664989 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:514: expected 2 Pod IPs but got 1 (may be temporary), output: "\n-- stdout --\n\t'10.244.0.3'\n\n-- /stdout --"
E0916 17:48:44.244928  112842 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19649-105988/.minikube/profiles/addons-539053/client.crt: no such file or directory" logger="UnhandledError"
multinode_test.go:505: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-664989 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:514: expected 2 Pod IPs but got 1 (may be temporary), output: "\n-- stdout --\n\t'10.244.0.3'\n\n-- /stdout --"
multinode_test.go:505: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-664989 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:528: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-664989 -- get pods -o jsonpath='{.items[*].metadata.name}'
multinode_test.go:536: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-664989 -- exec busybox-7dff88458-bvz29 -- nslookup kubernetes.io
multinode_test.go:536: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-664989 -- exec busybox-7dff88458-m2gf9 -- nslookup kubernetes.io
multinode_test.go:546: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-664989 -- exec busybox-7dff88458-bvz29 -- nslookup kubernetes.default
multinode_test.go:546: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-664989 -- exec busybox-7dff88458-m2gf9 -- nslookup kubernetes.default
multinode_test.go:554: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-664989 -- exec busybox-7dff88458-bvz29 -- nslookup kubernetes.default.svc.cluster.local
multinode_test.go:554: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-664989 -- exec busybox-7dff88458-m2gf9 -- nslookup kubernetes.default.svc.cluster.local
--- PASS: TestMultiNode/serial/DeployApp2Nodes (53.28s)
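The repeated "expected 2 Pod IPs but got 1 (may be temporary)" lines above are the test polling the jsonpath query until both busybox replicas (one per node) report a pod IP. A minimal sketch of that readiness check, with hypothetical sample strings standing in for the real `kubectl get pods -o jsonpath='{.items[*].status.podIP}'` output:

```shell
# Hypothetical samples of the jsonpath output; the query prints one
# space-separated IP per scheduled busybox replica.
ips_before="10.244.0.3"              # only the first replica has an IP yet
ips_after="10.244.0.3 10.244.1.2"    # both replicas scheduled, one per node

count_ips() { echo "$1" | wc -w; }

# The test retries while the count is below 2, then proceeds to the
# nslookup checks once both replicas are addressable.
if [ "$(count_ips "$ips_before")" -lt 2 ]; then
    echo "got 1 IP (may be temporary), retrying"
fi
if [ "$(count_ips "$ips_after")" -eq 2 ]; then
    echo "2 pod IPs, deployment spread across both nodes"
fi
```

In the run above the second IP appeared within the retry window, so the test still passed.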

TestMultiNode/serial/PingHostFrom2Pods (0.7s)

=== RUN   TestMultiNode/serial/PingHostFrom2Pods
multinode_test.go:564: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-664989 -- get pods -o jsonpath='{.items[*].metadata.name}'
multinode_test.go:572: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-664989 -- exec busybox-7dff88458-bvz29 -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
multinode_test.go:583: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-664989 -- exec busybox-7dff88458-bvz29 -- sh -c "ping -c 1 192.168.67.1"
multinode_test.go:572: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-664989 -- exec busybox-7dff88458-m2gf9 -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
multinode_test.go:583: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-664989 -- exec busybox-7dff88458-m2gf9 -- sh -c "ping -c 1 192.168.67.1"
--- PASS: TestMultiNode/serial/PingHostFrom2Pods (0.70s)
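The `nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3` pipeline run in each pod above extracts the host gateway IP from busybox-style nslookup output: line 5 is the answer's Address line, and its third space-separated field is the IP. A sketch using a hypothetical reconstruction of that output (field positions assume busybox's `Address 1:` format):

```shell
# Hypothetical busybox nslookup output; the real test captures this
# inside the pod before running `ping -c 1` against the extracted IP.
output='Server:    10.96.0.10
Address 1: 10.96.0.10 kube-dns.kube-system.svc.cluster.local

Name:      host.minikube.internal
Address 1: 192.168.67.1'

# NR==5 selects the answer line; fields are "Address", "1:", and the IP.
host_ip=$(echo "$output" | awk 'NR==5' | cut -d' ' -f3)
echo "$host_ip"
```

This format dependence is why the extraction is pinned to a fixed line and field rather than a name lookup.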

TestMultiNode/serial/AddNode (18.37s)

=== RUN   TestMultiNode/serial/AddNode
multinode_test.go:121: (dbg) Run:  out/minikube-linux-amd64 node add -p multinode-664989 -v 3 --alsologtostderr
multinode_test.go:121: (dbg) Done: out/minikube-linux-amd64 node add -p multinode-664989 -v 3 --alsologtostderr: (17.802910504s)
multinode_test.go:127: (dbg) Run:  out/minikube-linux-amd64 -p multinode-664989 status --alsologtostderr
--- PASS: TestMultiNode/serial/AddNode (18.37s)

TestMultiNode/serial/MultiNodeLabels (0.06s)

=== RUN   TestMultiNode/serial/MultiNodeLabels
multinode_test.go:221: (dbg) Run:  kubectl --context multinode-664989 get nodes -o "jsonpath=[{range .items[*]}{.metadata.labels},{end}]"
--- PASS: TestMultiNode/serial/MultiNodeLabels (0.06s)

TestMultiNode/serial/ProfileList (0.29s)

=== RUN   TestMultiNode/serial/ProfileList
multinode_test.go:143: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiNode/serial/ProfileList (0.29s)

TestMultiNode/serial/CopyFile (8.72s)

=== RUN   TestMultiNode/serial/CopyFile
multinode_test.go:184: (dbg) Run:  out/minikube-linux-amd64 -p multinode-664989 status --output json --alsologtostderr
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-664989 cp testdata/cp-test.txt multinode-664989:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-664989 ssh -n multinode-664989 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-664989 cp multinode-664989:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile2419946094/001/cp-test_multinode-664989.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-664989 ssh -n multinode-664989 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-664989 cp multinode-664989:/home/docker/cp-test.txt multinode-664989-m02:/home/docker/cp-test_multinode-664989_multinode-664989-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-664989 ssh -n multinode-664989 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-664989 ssh -n multinode-664989-m02 "sudo cat /home/docker/cp-test_multinode-664989_multinode-664989-m02.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-664989 cp multinode-664989:/home/docker/cp-test.txt multinode-664989-m03:/home/docker/cp-test_multinode-664989_multinode-664989-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-664989 ssh -n multinode-664989 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-664989 ssh -n multinode-664989-m03 "sudo cat /home/docker/cp-test_multinode-664989_multinode-664989-m03.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-664989 cp testdata/cp-test.txt multinode-664989-m02:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-664989 ssh -n multinode-664989-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-664989 cp multinode-664989-m02:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile2419946094/001/cp-test_multinode-664989-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-664989 ssh -n multinode-664989-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-664989 cp multinode-664989-m02:/home/docker/cp-test.txt multinode-664989:/home/docker/cp-test_multinode-664989-m02_multinode-664989.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-664989 ssh -n multinode-664989-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-664989 ssh -n multinode-664989 "sudo cat /home/docker/cp-test_multinode-664989-m02_multinode-664989.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-664989 cp multinode-664989-m02:/home/docker/cp-test.txt multinode-664989-m03:/home/docker/cp-test_multinode-664989-m02_multinode-664989-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-664989 ssh -n multinode-664989-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-664989 ssh -n multinode-664989-m03 "sudo cat /home/docker/cp-test_multinode-664989-m02_multinode-664989-m03.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-664989 cp testdata/cp-test.txt multinode-664989-m03:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-664989 ssh -n multinode-664989-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-664989 cp multinode-664989-m03:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile2419946094/001/cp-test_multinode-664989-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-664989 ssh -n multinode-664989-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-664989 cp multinode-664989-m03:/home/docker/cp-test.txt multinode-664989:/home/docker/cp-test_multinode-664989-m03_multinode-664989.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-664989 ssh -n multinode-664989-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-664989 ssh -n multinode-664989 "sudo cat /home/docker/cp-test_multinode-664989-m03_multinode-664989.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-664989 cp multinode-664989-m03:/home/docker/cp-test.txt multinode-664989-m02:/home/docker/cp-test_multinode-664989-m03_multinode-664989-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-664989 ssh -n multinode-664989-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-664989 ssh -n multinode-664989-m02 "sudo cat /home/docker/cp-test_multinode-664989-m03_multinode-664989-m02.txt"
--- PASS: TestMultiNode/serial/CopyFile (8.72s)

TestMultiNode/serial/StopNode (2.04s)

=== RUN   TestMultiNode/serial/StopNode
multinode_test.go:248: (dbg) Run:  out/minikube-linux-amd64 -p multinode-664989 node stop m03
multinode_test.go:248: (dbg) Done: out/minikube-linux-amd64 -p multinode-664989 node stop m03: (1.169352569s)
multinode_test.go:254: (dbg) Run:  out/minikube-linux-amd64 -p multinode-664989 status
multinode_test.go:254: (dbg) Non-zero exit: out/minikube-linux-amd64 -p multinode-664989 status: exit status 7 (429.936219ms)

-- stdout --
	multinode-664989
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	multinode-664989-m02
	type: Worker
	host: Running
	kubelet: Running
	
	multinode-664989-m03
	type: Worker
	host: Stopped
	kubelet: Stopped
	
-- /stdout --
multinode_test.go:261: (dbg) Run:  out/minikube-linux-amd64 -p multinode-664989 status --alsologtostderr
multinode_test.go:261: (dbg) Non-zero exit: out/minikube-linux-amd64 -p multinode-664989 status --alsologtostderr: exit status 7 (436.868694ms)

-- stdout --
	multinode-664989
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	multinode-664989-m02
	type: Worker
	host: Running
	kubelet: Running
	
	multinode-664989-m03
	type: Worker
	host: Stopped
	kubelet: Stopped
	
-- /stdout --
** stderr ** 
	I0916 17:49:39.395013  314262 out.go:345] Setting OutFile to fd 1 ...
	I0916 17:49:39.395270  314262 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0916 17:49:39.395278  314262 out.go:358] Setting ErrFile to fd 2...
	I0916 17:49:39.395283  314262 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0916 17:49:39.395440  314262 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19649-105988/.minikube/bin
	I0916 17:49:39.395592  314262 out.go:352] Setting JSON to false
	I0916 17:49:39.395618  314262 mustload.go:65] Loading cluster: multinode-664989
	I0916 17:49:39.395656  314262 notify.go:220] Checking for updates...
	I0916 17:49:39.396003  314262 config.go:182] Loaded profile config "multinode-664989": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I0916 17:49:39.396018  314262 status.go:255] checking status of multinode-664989 ...
	I0916 17:49:39.396386  314262 cli_runner.go:164] Run: docker container inspect multinode-664989 --format={{.State.Status}}
	I0916 17:49:39.413999  314262 status.go:330] multinode-664989 host status = "Running" (err=<nil>)
	I0916 17:49:39.414039  314262 host.go:66] Checking if "multinode-664989" exists ...
	I0916 17:49:39.414305  314262 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-664989
	I0916 17:49:39.429613  314262 host.go:66] Checking if "multinode-664989" exists ...
	I0916 17:49:39.429881  314262 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0916 17:49:39.429926  314262 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-664989
	I0916 17:49:39.446287  314262 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32908 SSHKeyPath:/home/jenkins/minikube-integration/19649-105988/.minikube/machines/multinode-664989/id_rsa Username:docker}
	I0916 17:49:39.538852  314262 ssh_runner.go:195] Run: systemctl --version
	I0916 17:49:39.542548  314262 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0916 17:49:39.552719  314262 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0916 17:49:39.599116  314262 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:3 ContainersRunning:2 ContainersPaused:0 ContainersStopped:1 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:44 OomKillDisable:true NGoroutines:63 SystemTime:2024-09-16 17:49:39.590177534 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1068-gcp OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33647943680 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-5 Labels:[] ExperimentalBuild:false ServerVersion:27.2.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:7f7fdf5fed64eb6a7caf99b3e12efcf9d60e311c Expected:7f7fdf5fed64eb6a7caf99b3e12efcf9d60e311c} RuncCommit:{ID:v1.1.14-0-g2c9f560 Expected:v1.1.14-0-g2c9f560} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.16.2] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.29.2] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I0916 17:49:39.599700  314262 kubeconfig.go:125] found "multinode-664989" server: "https://192.168.67.2:8443"
	I0916 17:49:39.599736  314262 api_server.go:166] Checking apiserver status ...
	I0916 17:49:39.599778  314262 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0916 17:49:39.610362  314262 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/2349/cgroup
	I0916 17:49:39.618863  314262 api_server.go:182] apiserver freezer: "5:freezer:/docker/828461b018c4f2c88b3bc02e866f00d97af2a7b2e1f73326f055dd98b52ab854/kubepods/burstable/pod2a86918581bff1e674c7d98809adf33b/f4e81ac8eb78aef785ed47af39a03f0d74eb9ba0d3a28d8bdb4687a061700bec"
	I0916 17:49:39.618933  314262 ssh_runner.go:195] Run: sudo cat /sys/fs/cgroup/freezer/docker/828461b018c4f2c88b3bc02e866f00d97af2a7b2e1f73326f055dd98b52ab854/kubepods/burstable/pod2a86918581bff1e674c7d98809adf33b/f4e81ac8eb78aef785ed47af39a03f0d74eb9ba0d3a28d8bdb4687a061700bec/freezer.state
	I0916 17:49:39.626287  314262 api_server.go:204] freezer state: "THAWED"
	I0916 17:49:39.626309  314262 api_server.go:253] Checking apiserver healthz at https://192.168.67.2:8443/healthz ...
	I0916 17:49:39.629835  314262 api_server.go:279] https://192.168.67.2:8443/healthz returned 200:
	ok
	I0916 17:49:39.629854  314262 status.go:422] multinode-664989 apiserver status = Running (err=<nil>)
	I0916 17:49:39.629864  314262 status.go:257] multinode-664989 status: &{Name:multinode-664989 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0916 17:49:39.629879  314262 status.go:255] checking status of multinode-664989-m02 ...
	I0916 17:49:39.630127  314262 cli_runner.go:164] Run: docker container inspect multinode-664989-m02 --format={{.State.Status}}
	I0916 17:49:39.646828  314262 status.go:330] multinode-664989-m02 host status = "Running" (err=<nil>)
	I0916 17:49:39.646851  314262 host.go:66] Checking if "multinode-664989-m02" exists ...
	I0916 17:49:39.647099  314262 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-664989-m02
	I0916 17:49:39.661899  314262 host.go:66] Checking if "multinode-664989-m02" exists ...
	I0916 17:49:39.662164  314262 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0916 17:49:39.662206  314262 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-664989-m02
	I0916 17:49:39.677703  314262 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32913 SSHKeyPath:/home/jenkins/minikube-integration/19649-105988/.minikube/machines/multinode-664989-m02/id_rsa Username:docker}
	I0916 17:49:39.762714  314262 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0916 17:49:39.772565  314262 status.go:257] multinode-664989-m02 status: &{Name:multinode-664989-m02 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}
	I0916 17:49:39.772595  314262 status.go:255] checking status of multinode-664989-m03 ...
	I0916 17:49:39.772839  314262 cli_runner.go:164] Run: docker container inspect multinode-664989-m03 --format={{.State.Status}}
	I0916 17:49:39.789250  314262 status.go:330] multinode-664989-m03 host status = "Stopped" (err=<nil>)
	I0916 17:49:39.789273  314262 status.go:343] host is not running, skipping remaining checks
	I0916 17:49:39.789280  314262 status.go:257] multinode-664989-m03 status: &{Name:multinode-664989-m03 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
--- PASS: TestMultiNode/serial/StopNode (2.04s)

TestMultiNode/serial/StartAfterStop (9.78s)

=== RUN   TestMultiNode/serial/StartAfterStop
multinode_test.go:282: (dbg) Run:  out/minikube-linux-amd64 -p multinode-664989 node start m03 -v=7 --alsologtostderr
multinode_test.go:282: (dbg) Done: out/minikube-linux-amd64 -p multinode-664989 node start m03 -v=7 --alsologtostderr: (9.151703047s)
multinode_test.go:290: (dbg) Run:  out/minikube-linux-amd64 -p multinode-664989 status -v=7 --alsologtostderr
multinode_test.go:306: (dbg) Run:  kubectl get nodes
--- PASS: TestMultiNode/serial/StartAfterStop (9.78s)

TestMultiNode/serial/RestartKeepsNodes (100.51s)

=== RUN   TestMultiNode/serial/RestartKeepsNodes
multinode_test.go:314: (dbg) Run:  out/minikube-linux-amd64 node list -p multinode-664989
multinode_test.go:321: (dbg) Run:  out/minikube-linux-amd64 stop -p multinode-664989
E0916 17:50:07.314020  112842 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19649-105988/.minikube/profiles/addons-539053/client.crt: no such file or directory" logger="UnhandledError"
multinode_test.go:321: (dbg) Done: out/minikube-linux-amd64 stop -p multinode-664989: (22.361674544s)
multinode_test.go:326: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-664989 --wait=true -v=8 --alsologtostderr
multinode_test.go:326: (dbg) Done: out/minikube-linux-amd64 start -p multinode-664989 --wait=true -v=8 --alsologtostderr: (1m18.06442846s)
multinode_test.go:331: (dbg) Run:  out/minikube-linux-amd64 node list -p multinode-664989
--- PASS: TestMultiNode/serial/RestartKeepsNodes (100.51s)

TestMultiNode/serial/DeleteNode (5.11s)

=== RUN   TestMultiNode/serial/DeleteNode
multinode_test.go:416: (dbg) Run:  out/minikube-linux-amd64 -p multinode-664989 node delete m03
E0916 17:51:33.733362  112842 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19649-105988/.minikube/profiles/functional-623374/client.crt: no such file or directory" logger="UnhandledError"
multinode_test.go:416: (dbg) Done: out/minikube-linux-amd64 -p multinode-664989 node delete m03: (4.57711196s)
multinode_test.go:422: (dbg) Run:  out/minikube-linux-amd64 -p multinode-664989 status --alsologtostderr
multinode_test.go:436: (dbg) Run:  kubectl get nodes
multinode_test.go:444: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiNode/serial/DeleteNode (5.11s)

TestMultiNode/serial/StopMultiNode (21.44s)

=== RUN   TestMultiNode/serial/StopMultiNode
multinode_test.go:345: (dbg) Run:  out/minikube-linux-amd64 -p multinode-664989 stop
multinode_test.go:345: (dbg) Done: out/minikube-linux-amd64 -p multinode-664989 stop: (21.280712239s)
multinode_test.go:351: (dbg) Run:  out/minikube-linux-amd64 -p multinode-664989 status
multinode_test.go:351: (dbg) Non-zero exit: out/minikube-linux-amd64 -p multinode-664989 status: exit status 7 (81.100589ms)

-- stdout --
	multinode-664989
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	multinode-664989-m02
	type: Worker
	host: Stopped
	kubelet: Stopped
	

-- /stdout --
multinode_test.go:358: (dbg) Run:  out/minikube-linux-amd64 -p multinode-664989 status --alsologtostderr
multinode_test.go:358: (dbg) Non-zero exit: out/minikube-linux-amd64 -p multinode-664989 status --alsologtostderr: exit status 7 (75.6857ms)

-- stdout --
	multinode-664989
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	multinode-664989-m02
	type: Worker
	host: Stopped
	kubelet: Stopped
	

-- /stdout --
** stderr ** 
	I0916 17:51:56.602260  329661 out.go:345] Setting OutFile to fd 1 ...
	I0916 17:51:56.602367  329661 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0916 17:51:56.602377  329661 out.go:358] Setting ErrFile to fd 2...
	I0916 17:51:56.602382  329661 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0916 17:51:56.602562  329661 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19649-105988/.minikube/bin
	I0916 17:51:56.602736  329661 out.go:352] Setting JSON to false
	I0916 17:51:56.602766  329661 mustload.go:65] Loading cluster: multinode-664989
	I0916 17:51:56.602803  329661 notify.go:220] Checking for updates...
	I0916 17:51:56.603131  329661 config.go:182] Loaded profile config "multinode-664989": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I0916 17:51:56.603146  329661 status.go:255] checking status of multinode-664989 ...
	I0916 17:51:56.603549  329661 cli_runner.go:164] Run: docker container inspect multinode-664989 --format={{.State.Status}}
	I0916 17:51:56.620059  329661 status.go:330] multinode-664989 host status = "Stopped" (err=<nil>)
	I0916 17:51:56.620091  329661 status.go:343] host is not running, skipping remaining checks
	I0916 17:51:56.620098  329661 status.go:257] multinode-664989 status: &{Name:multinode-664989 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0916 17:51:56.620139  329661 status.go:255] checking status of multinode-664989-m02 ...
	I0916 17:51:56.620385  329661 cli_runner.go:164] Run: docker container inspect multinode-664989-m02 --format={{.State.Status}}
	I0916 17:51:56.635717  329661 status.go:330] multinode-664989-m02 host status = "Stopped" (err=<nil>)
	I0916 17:51:56.635740  329661 status.go:343] host is not running, skipping remaining checks
	I0916 17:51:56.635747  329661 status.go:257] multinode-664989-m02 status: &{Name:multinode-664989-m02 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
--- PASS: TestMultiNode/serial/StopMultiNode (21.44s)

TestMultiNode/serial/RestartMultiNode (48.03s)

=== RUN   TestMultiNode/serial/RestartMultiNode
multinode_test.go:376: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-664989 --wait=true -v=8 --alsologtostderr --driver=docker  --container-runtime=docker
multinode_test.go:376: (dbg) Done: out/minikube-linux-amd64 start -p multinode-664989 --wait=true -v=8 --alsologtostderr --driver=docker  --container-runtime=docker: (47.493757682s)
multinode_test.go:382: (dbg) Run:  out/minikube-linux-amd64 -p multinode-664989 status --alsologtostderr
multinode_test.go:396: (dbg) Run:  kubectl get nodes
multinode_test.go:404: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiNode/serial/RestartMultiNode (48.03s)

TestMultiNode/serial/ValidateNameConflict (25.86s)

=== RUN   TestMultiNode/serial/ValidateNameConflict
multinode_test.go:455: (dbg) Run:  out/minikube-linux-amd64 node list -p multinode-664989
multinode_test.go:464: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-664989-m02 --driver=docker  --container-runtime=docker
multinode_test.go:464: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p multinode-664989-m02 --driver=docker  --container-runtime=docker: exit status 14 (62.256329ms)

-- stdout --
	* [multinode-664989-m02] minikube v1.34.0 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=19649
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/19649-105988/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/19649-105988/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	
	

-- /stdout --
** stderr ** 
	! Profile name 'multinode-664989-m02' is duplicated with machine name 'multinode-664989-m02' in profile 'multinode-664989'
	X Exiting due to MK_USAGE: Profile name should be unique

** /stderr **
multinode_test.go:472: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-664989-m03 --driver=docker  --container-runtime=docker
E0916 17:52:56.800951  112842 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19649-105988/.minikube/profiles/functional-623374/client.crt: no such file or directory" logger="UnhandledError"
multinode_test.go:472: (dbg) Done: out/minikube-linux-amd64 start -p multinode-664989-m03 --driver=docker  --container-runtime=docker: (23.470297876s)
multinode_test.go:479: (dbg) Run:  out/minikube-linux-amd64 node add -p multinode-664989
multinode_test.go:479: (dbg) Non-zero exit: out/minikube-linux-amd64 node add -p multinode-664989: exit status 80 (259.563515ms)

-- stdout --
	* Adding node m03 to cluster multinode-664989 as [worker]
	
	

-- /stdout --
** stderr ** 
	X Exiting due to GUEST_NODE_ADD: failed to add node: Node multinode-664989-m03 already exists in multinode-664989-m03 profile
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_node_040ea7097fd6ed71e65be9a474587f81f0ccd21d_0.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
multinode_test.go:484: (dbg) Run:  out/minikube-linux-amd64 delete -p multinode-664989-m03
multinode_test.go:484: (dbg) Done: out/minikube-linux-amd64 delete -p multinode-664989-m03: (2.024611668s)
--- PASS: TestMultiNode/serial/ValidateNameConflict (25.86s)

TestPreload (129.25s)

=== RUN   TestPreload
preload_test.go:44: (dbg) Run:  out/minikube-linux-amd64 start -p test-preload-240612 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=docker --kubernetes-version=v1.24.4
E0916 17:53:44.245329  112842 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19649-105988/.minikube/profiles/addons-539053/client.crt: no such file or directory" logger="UnhandledError"
preload_test.go:44: (dbg) Done: out/minikube-linux-amd64 start -p test-preload-240612 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=docker --kubernetes-version=v1.24.4: (1m18.289765422s)
preload_test.go:52: (dbg) Run:  out/minikube-linux-amd64 -p test-preload-240612 image pull gcr.io/k8s-minikube/busybox
preload_test.go:52: (dbg) Done: out/minikube-linux-amd64 -p test-preload-240612 image pull gcr.io/k8s-minikube/busybox: (2.273773699s)
preload_test.go:58: (dbg) Run:  out/minikube-linux-amd64 stop -p test-preload-240612
preload_test.go:58: (dbg) Done: out/minikube-linux-amd64 stop -p test-preload-240612: (10.705940995s)
preload_test.go:66: (dbg) Run:  out/minikube-linux-amd64 start -p test-preload-240612 --memory=2200 --alsologtostderr -v=1 --wait=true --driver=docker  --container-runtime=docker
preload_test.go:66: (dbg) Done: out/minikube-linux-amd64 start -p test-preload-240612 --memory=2200 --alsologtostderr -v=1 --wait=true --driver=docker  --container-runtime=docker: (35.684875665s)
preload_test.go:71: (dbg) Run:  out/minikube-linux-amd64 -p test-preload-240612 image list
helpers_test.go:175: Cleaning up "test-preload-240612" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p test-preload-240612
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p test-preload-240612: (2.099919945s)
--- PASS: TestPreload (129.25s)

TestScheduledStopUnix (96.78s)

=== RUN   TestScheduledStopUnix
scheduled_stop_test.go:128: (dbg) Run:  out/minikube-linux-amd64 start -p scheduled-stop-151098 --memory=2048 --driver=docker  --container-runtime=docker
scheduled_stop_test.go:128: (dbg) Done: out/minikube-linux-amd64 start -p scheduled-stop-151098 --memory=2048 --driver=docker  --container-runtime=docker: (23.964715059s)
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-amd64 stop -p scheduled-stop-151098 --schedule 5m
scheduled_stop_test.go:191: (dbg) Run:  out/minikube-linux-amd64 status --format={{.TimeToStop}} -p scheduled-stop-151098 -n scheduled-stop-151098
scheduled_stop_test.go:169: signal error was:  <nil>
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-amd64 stop -p scheduled-stop-151098 --schedule 15s
scheduled_stop_test.go:169: signal error was:  os: process already finished
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-amd64 stop -p scheduled-stop-151098 --cancel-scheduled
scheduled_stop_test.go:176: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p scheduled-stop-151098 -n scheduled-stop-151098
scheduled_stop_test.go:205: (dbg) Run:  out/minikube-linux-amd64 status -p scheduled-stop-151098
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-amd64 stop -p scheduled-stop-151098 --schedule 15s
scheduled_stop_test.go:169: signal error was:  os: process already finished
E0916 17:56:33.734257  112842 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19649-105988/.minikube/profiles/functional-623374/client.crt: no such file or directory" logger="UnhandledError"
scheduled_stop_test.go:205: (dbg) Run:  out/minikube-linux-amd64 status -p scheduled-stop-151098
scheduled_stop_test.go:205: (dbg) Non-zero exit: out/minikube-linux-amd64 status -p scheduled-stop-151098: exit status 7 (62.124435ms)

-- stdout --
	scheduled-stop-151098
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

-- /stdout --
scheduled_stop_test.go:176: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p scheduled-stop-151098 -n scheduled-stop-151098
scheduled_stop_test.go:176: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p scheduled-stop-151098 -n scheduled-stop-151098: exit status 7 (60.725481ms)

-- stdout --
	Stopped

-- /stdout --
scheduled_stop_test.go:176: status error: exit status 7 (may be ok)
helpers_test.go:175: Cleaning up "scheduled-stop-151098" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p scheduled-stop-151098
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p scheduled-stop-151098: (1.593742023s)
--- PASS: TestScheduledStopUnix (96.78s)

TestSkaffold (103.64s)

=== RUN   TestSkaffold
skaffold_test.go:59: (dbg) Run:  /tmp/skaffold.exe1001281070 version
skaffold_test.go:63: skaffold version: v2.13.2
skaffold_test.go:66: (dbg) Run:  out/minikube-linux-amd64 start -p skaffold-258004 --memory=2600 --driver=docker  --container-runtime=docker
skaffold_test.go:66: (dbg) Done: out/minikube-linux-amd64 start -p skaffold-258004 --memory=2600 --driver=docker  --container-runtime=docker: (21.121729852s)
skaffold_test.go:86: copying out/minikube-linux-amd64 to /home/jenkins/workspace/Docker_Linux_integration/out/minikube
skaffold_test.go:105: (dbg) Run:  /tmp/skaffold.exe1001281070 run --minikube-profile skaffold-258004 --kube-context skaffold-258004 --status-check=true --port-forward=false --interactive=false
skaffold_test.go:105: (dbg) Done: /tmp/skaffold.exe1001281070 run --minikube-profile skaffold-258004 --kube-context skaffold-258004 --status-check=true --port-forward=false --interactive=false: (1m4.766699787s)
skaffold_test.go:111: (dbg) TestSkaffold: waiting 1m0s for pods matching "app=leeroy-app" in namespace "default" ...
helpers_test.go:344: "leeroy-app-77595c98b5-rdwzj" [f4313c85-4fec-4374-8179-ec7fc8e47c96] Running
skaffold_test.go:111: (dbg) TestSkaffold: app=leeroy-app healthy within 6.003055532s
skaffold_test.go:114: (dbg) TestSkaffold: waiting 1m0s for pods matching "app=leeroy-web" in namespace "default" ...
helpers_test.go:344: "leeroy-web-ff8c9b5d7-cf7ff" [b782ee32-3cdd-493a-9b17-715bd07cb617] Running
skaffold_test.go:114: (dbg) TestSkaffold: app=leeroy-web healthy within 5.004089508s
helpers_test.go:175: Cleaning up "skaffold-258004" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p skaffold-258004
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p skaffold-258004: (2.713401245s)
--- PASS: TestSkaffold (103.64s)

TestInsufficientStorage (12.6s)

=== RUN   TestInsufficientStorage
status_test.go:50: (dbg) Run:  out/minikube-linux-amd64 start -p insufficient-storage-800474 --memory=2048 --output=json --wait=true --driver=docker  --container-runtime=docker
E0916 17:58:44.245317  112842 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19649-105988/.minikube/profiles/addons-539053/client.crt: no such file or directory" logger="UnhandledError"
status_test.go:50: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p insufficient-storage-800474 --memory=2048 --output=json --wait=true --driver=docker  --container-runtime=docker: exit status 26 (10.482420449s)

-- stdout --
	{"specversion":"1.0","id":"d33828ca-396b-4b23-b65a-91bab165a009","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"0","message":"[insufficient-storage-800474] minikube v1.34.0 on Ubuntu 20.04 (kvm/amd64)","name":"Initial Minikube Setup","totalsteps":"19"}}
	{"specversion":"1.0","id":"8ea82cb2-3ef6-40b6-8238-0c984d1c206d","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_LOCATION=19649"}}
	{"specversion":"1.0","id":"5ee27dc8-3ce9-4b7d-a84b-0ba71437883e","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true"}}
	{"specversion":"1.0","id":"3f6fe3f6-9c1d-4fb8-b457-674a25c725c3","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"KUBECONFIG=/home/jenkins/minikube-integration/19649-105988/kubeconfig"}}
	{"specversion":"1.0","id":"2122f0e6-6957-4f74-ba8d-03063f2d6c83","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_HOME=/home/jenkins/minikube-integration/19649-105988/.minikube"}}
	{"specversion":"1.0","id":"f8b110d4-7c65-43f9-ba30-25fcc69b3bbf","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_BIN=out/minikube-linux-amd64"}}
	{"specversion":"1.0","id":"8ad04090-028b-457f-80cc-f473392c9575","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_FORCE_SYSTEMD="}}
	{"specversion":"1.0","id":"40f3ae6c-0ff4-41e9-8ae8-64303cf7d756","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_TEST_STORAGE_CAPACITY=100"}}
	{"specversion":"1.0","id":"32fd100b-91ef-4bcf-8d5e-ba7cad54bf3e","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_TEST_AVAILABLE_STORAGE=19"}}
	{"specversion":"1.0","id":"4925e014-ae8c-43ab-9a97-b89b1e33e59c","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"1","message":"Using the docker driver based on user configuration","name":"Selecting Driver","totalsteps":"19"}}
	{"specversion":"1.0","id":"04fca61f-bd76-4e5e-9c72-9199e27f526d","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"Using Docker driver with root privileges"}}
	{"specversion":"1.0","id":"af84c569-357d-4bd2-a4e5-58d97819c380","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"3","message":"Starting \"insufficient-storage-800474\" primary control-plane node in \"insufficient-storage-800474\" cluster","name":"Starting Node","totalsteps":"19"}}
	{"specversion":"1.0","id":"6ebca0af-990a-4042-90fa-4922db58cf7f","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"5","message":"Pulling base image v0.0.45-1726481311-19649 ...","name":"Pulling Base Image","totalsteps":"19"}}
	{"specversion":"1.0","id":"a8fccde7-e8f2-4844-bc74-d56be9441fb8","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"8","message":"Creating docker container (CPUs=2, Memory=2048MB) ...","name":"Creating Container","totalsteps":"19"}}
	{"specversion":"1.0","id":"c8d6a810-174d-48c8-a989-4357e89570f1","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"advice":"Try one or more of the following to free up space on the device:\n\t\n\t\t\t1. Run \"docker system prune\" to remove unused Docker data (optionally with \"-a\")\n\t\t\t2. Increase the storage allocated to Docker for Desktop by clicking on:\n\t\t\t\tDocker icon \u003e Preferences \u003e Resources \u003e Disk Image Size\n\t\t\t3. Run \"minikube ssh -- docker system prune\" if using the Docker container runtime","exitcode":"26","issues":"https://github.com/kubernetes/minikube/issues/9024","message":"Docker is out of disk space! (/var is at 100% of capacity). You can pass '--force' to skip this check.","name":"RSRC_DOCKER_STORAGE","url":""}}

-- /stdout --
status_test.go:76: (dbg) Run:  out/minikube-linux-amd64 status -p insufficient-storage-800474 --output=json --layout=cluster
status_test.go:76: (dbg) Non-zero exit: out/minikube-linux-amd64 status -p insufficient-storage-800474 --output=json --layout=cluster: exit status 7 (245.370482ms)

-- stdout --
	{"Name":"insufficient-storage-800474","StatusCode":507,"StatusName":"InsufficientStorage","StatusDetail":"/var is almost out of disk space","Step":"Creating Container","StepDetail":"Creating docker container (CPUs=2, Memory=2048MB) ...","BinaryVersion":"v1.34.0","Components":{"kubeconfig":{"Name":"kubeconfig","StatusCode":500,"StatusName":"Error"}},"Nodes":[{"Name":"insufficient-storage-800474","StatusCode":507,"StatusName":"InsufficientStorage","Components":{"apiserver":{"Name":"apiserver","StatusCode":405,"StatusName":"Stopped"},"kubelet":{"Name":"kubelet","StatusCode":405,"StatusName":"Stopped"}}}]}

-- /stdout --
** stderr ** 
	E0916 17:58:54.722030  370035 status.go:417] kubeconfig endpoint: get endpoint: "insufficient-storage-800474" does not appear in /home/jenkins/minikube-integration/19649-105988/kubeconfig

** /stderr **
status_test.go:76: (dbg) Run:  out/minikube-linux-amd64 status -p insufficient-storage-800474 --output=json --layout=cluster
status_test.go:76: (dbg) Non-zero exit: out/minikube-linux-amd64 status -p insufficient-storage-800474 --output=json --layout=cluster: exit status 7 (241.990846ms)

-- stdout --
	{"Name":"insufficient-storage-800474","StatusCode":507,"StatusName":"InsufficientStorage","StatusDetail":"/var is almost out of disk space","BinaryVersion":"v1.34.0","Components":{"kubeconfig":{"Name":"kubeconfig","StatusCode":500,"StatusName":"Error"}},"Nodes":[{"Name":"insufficient-storage-800474","StatusCode":507,"StatusName":"InsufficientStorage","Components":{"apiserver":{"Name":"apiserver","StatusCode":405,"StatusName":"Stopped"},"kubelet":{"Name":"kubelet","StatusCode":405,"StatusName":"Stopped"}}}]}

                                                
                                                
-- /stdout --
** stderr ** 
	E0916 17:58:54.964598  370134 status.go:417] kubeconfig endpoint: get endpoint: "insufficient-storage-800474" does not appear in /home/jenkins/minikube-integration/19649-105988/kubeconfig
	E0916 17:58:54.973861  370134 status.go:560] unable to read event log: stat: stat /home/jenkins/minikube-integration/19649-105988/.minikube/profiles/insufficient-storage-800474/events.json: no such file or directory

                                                
                                                
** /stderr **
helpers_test.go:175: Cleaning up "insufficient-storage-800474" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p insufficient-storage-800474
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p insufficient-storage-800474: (1.63043952s)
--- PASS: TestInsufficientStorage (12.60s)

                                                
                                    
TestRunningBinaryUpgrade (62.85s)

                                                
                                                
=== RUN   TestRunningBinaryUpgrade
=== PAUSE TestRunningBinaryUpgrade

                                                
                                                

                                                
                                                
=== CONT  TestRunningBinaryUpgrade
version_upgrade_test.go:120: (dbg) Run:  /tmp/minikube-v1.26.0.1496973030 start -p running-upgrade-254577 --memory=2200 --vm-driver=docker  --container-runtime=docker
E0916 18:03:30.274884  112842 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19649-105988/.minikube/profiles/skaffold-258004/client.crt: no such file or directory" logger="UnhandledError"
E0916 18:03:30.281309  112842 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19649-105988/.minikube/profiles/skaffold-258004/client.crt: no such file or directory" logger="UnhandledError"
E0916 18:03:30.292752  112842 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19649-105988/.minikube/profiles/skaffold-258004/client.crt: no such file or directory" logger="UnhandledError"
E0916 18:03:30.314172  112842 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19649-105988/.minikube/profiles/skaffold-258004/client.crt: no such file or directory" logger="UnhandledError"
E0916 18:03:30.355446  112842 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19649-105988/.minikube/profiles/skaffold-258004/client.crt: no such file or directory" logger="UnhandledError"
E0916 18:03:30.436830  112842 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19649-105988/.minikube/profiles/skaffold-258004/client.crt: no such file or directory" logger="UnhandledError"
E0916 18:03:30.598420  112842 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19649-105988/.minikube/profiles/skaffold-258004/client.crt: no such file or directory" logger="UnhandledError"
E0916 18:03:30.920147  112842 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19649-105988/.minikube/profiles/skaffold-258004/client.crt: no such file or directory" logger="UnhandledError"
E0916 18:03:31.562245  112842 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19649-105988/.minikube/profiles/skaffold-258004/client.crt: no such file or directory" logger="UnhandledError"
E0916 18:03:32.844566  112842 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19649-105988/.minikube/profiles/skaffold-258004/client.crt: no such file or directory" logger="UnhandledError"
E0916 18:03:35.406284  112842 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19649-105988/.minikube/profiles/skaffold-258004/client.crt: no such file or directory" logger="UnhandledError"
version_upgrade_test.go:120: (dbg) Done: /tmp/minikube-v1.26.0.1496973030 start -p running-upgrade-254577 --memory=2200 --vm-driver=docker  --container-runtime=docker: (35.457342518s)
version_upgrade_test.go:130: (dbg) Run:  out/minikube-linux-amd64 start -p running-upgrade-254577 --memory=2200 --alsologtostderr -v=1 --driver=docker  --container-runtime=docker
version_upgrade_test.go:130: (dbg) Done: out/minikube-linux-amd64 start -p running-upgrade-254577 --memory=2200 --alsologtostderr -v=1 --driver=docker  --container-runtime=docker: (21.969352322s)
helpers_test.go:175: Cleaning up "running-upgrade-254577" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p running-upgrade-254577
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p running-upgrade-254577: (2.191022535s)
--- PASS: TestRunningBinaryUpgrade (62.85s)

                                                
                                    
TestKubernetesUpgrade (333.51s)

                                                
                                                
=== RUN   TestKubernetesUpgrade
=== PAUSE TestKubernetesUpgrade

                                                
                                                

                                                
                                                
=== CONT  TestKubernetesUpgrade
version_upgrade_test.go:222: (dbg) Run:  out/minikube-linux-amd64 start -p kubernetes-upgrade-878785 --memory=2200 --kubernetes-version=v1.20.0 --alsologtostderr -v=1 --driver=docker  --container-runtime=docker
version_upgrade_test.go:222: (dbg) Done: out/minikube-linux-amd64 start -p kubernetes-upgrade-878785 --memory=2200 --kubernetes-version=v1.20.0 --alsologtostderr -v=1 --driver=docker  --container-runtime=docker: (30.986601125s)
version_upgrade_test.go:227: (dbg) Run:  out/minikube-linux-amd64 stop -p kubernetes-upgrade-878785
version_upgrade_test.go:227: (dbg) Done: out/minikube-linux-amd64 stop -p kubernetes-upgrade-878785: (10.690800829s)
version_upgrade_test.go:232: (dbg) Run:  out/minikube-linux-amd64 -p kubernetes-upgrade-878785 status --format={{.Host}}
version_upgrade_test.go:232: (dbg) Non-zero exit: out/minikube-linux-amd64 -p kubernetes-upgrade-878785 status --format={{.Host}}: exit status 7 (67.935983ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
version_upgrade_test.go:234: status error: exit status 7 (may be ok)
version_upgrade_test.go:243: (dbg) Run:  out/minikube-linux-amd64 start -p kubernetes-upgrade-878785 --memory=2200 --kubernetes-version=v1.31.1 --alsologtostderr -v=1 --driver=docker  --container-runtime=docker
version_upgrade_test.go:243: (dbg) Done: out/minikube-linux-amd64 start -p kubernetes-upgrade-878785 --memory=2200 --kubernetes-version=v1.31.1 --alsologtostderr -v=1 --driver=docker  --container-runtime=docker: (4m27.379208317s)
version_upgrade_test.go:248: (dbg) Run:  kubectl --context kubernetes-upgrade-878785 version --output=json
version_upgrade_test.go:267: Attempting to downgrade Kubernetes (should fail)
version_upgrade_test.go:269: (dbg) Run:  out/minikube-linux-amd64 start -p kubernetes-upgrade-878785 --memory=2200 --kubernetes-version=v1.20.0 --driver=docker  --container-runtime=docker
version_upgrade_test.go:269: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p kubernetes-upgrade-878785 --memory=2200 --kubernetes-version=v1.20.0 --driver=docker  --container-runtime=docker: exit status 106 (83.965934ms)

                                                
                                                
-- stdout --
	* [kubernetes-upgrade-878785] minikube v1.34.0 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=19649
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/19649-105988/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/19649-105988/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to K8S_DOWNGRADE_UNSUPPORTED: Unable to safely downgrade existing Kubernetes v1.31.1 cluster to v1.20.0
	* Suggestion: 
	
	    1) Recreate the cluster with Kubernetes 1.20.0, by running:
	    
	    minikube delete -p kubernetes-upgrade-878785
	    minikube start -p kubernetes-upgrade-878785 --kubernetes-version=v1.20.0
	    
	    2) Create a second cluster with Kubernetes 1.20.0, by running:
	    
	    minikube start -p kubernetes-upgrade-8787852 --kubernetes-version=v1.20.0
	    
	    3) Use the existing cluster at version Kubernetes 1.31.1, by running:
	    
	    minikube start -p kubernetes-upgrade-878785 --kubernetes-version=v1.31.1
	    

                                                
                                                
** /stderr **
version_upgrade_test.go:273: Attempting restart after unsuccessful downgrade
version_upgrade_test.go:275: (dbg) Run:  out/minikube-linux-amd64 start -p kubernetes-upgrade-878785 --memory=2200 --kubernetes-version=v1.31.1 --alsologtostderr -v=1 --driver=docker  --container-runtime=docker
version_upgrade_test.go:275: (dbg) Done: out/minikube-linux-amd64 start -p kubernetes-upgrade-878785 --memory=2200 --kubernetes-version=v1.31.1 --alsologtostderr -v=1 --driver=docker  --container-runtime=docker: (20.474481217s)
helpers_test.go:175: Cleaning up "kubernetes-upgrade-878785" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p kubernetes-upgrade-878785
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p kubernetes-upgrade-878785: (3.761910615s)
--- PASS: TestKubernetesUpgrade (333.51s)

                                                
                                    
TestMissingContainerUpgrade (181.18s)

                                                
                                                
=== RUN   TestMissingContainerUpgrade
=== PAUSE TestMissingContainerUpgrade

                                                
                                                

                                                
                                                
=== CONT  TestMissingContainerUpgrade
version_upgrade_test.go:309: (dbg) Run:  /tmp/minikube-v1.26.0.1921535446 start -p missing-upgrade-508767 --memory=2200 --driver=docker  --container-runtime=docker
version_upgrade_test.go:309: (dbg) Done: /tmp/minikube-v1.26.0.1921535446 start -p missing-upgrade-508767 --memory=2200 --driver=docker  --container-runtime=docker: (1m59.436819061s)
version_upgrade_test.go:318: (dbg) Run:  docker stop missing-upgrade-508767
version_upgrade_test.go:318: (dbg) Done: docker stop missing-upgrade-508767: (10.375852345s)
version_upgrade_test.go:323: (dbg) Run:  docker rm missing-upgrade-508767
version_upgrade_test.go:329: (dbg) Run:  out/minikube-linux-amd64 start -p missing-upgrade-508767 --memory=2200 --alsologtostderr -v=1 --driver=docker  --container-runtime=docker
version_upgrade_test.go:329: (dbg) Done: out/minikube-linux-amd64 start -p missing-upgrade-508767 --memory=2200 --alsologtostderr -v=1 --driver=docker  --container-runtime=docker: (46.255983735s)
helpers_test.go:175: Cleaning up "missing-upgrade-508767" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p missing-upgrade-508767
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p missing-upgrade-508767: (2.131249751s)
--- PASS: TestMissingContainerUpgrade (181.18s)

                                                
                                    
TestNoKubernetes/serial/StartNoK8sWithVersion (0.08s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/StartNoK8sWithVersion
no_kubernetes_test.go:83: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-144370 --no-kubernetes --kubernetes-version=1.20 --driver=docker  --container-runtime=docker
no_kubernetes_test.go:83: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p NoKubernetes-144370 --no-kubernetes --kubernetes-version=1.20 --driver=docker  --container-runtime=docker: exit status 14 (81.81258ms)

                                                
                                                
-- stdout --
	* [NoKubernetes-144370] minikube v1.34.0 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=19649
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/19649-105988/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/19649-105988/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to MK_USAGE: cannot specify --kubernetes-version with --no-kubernetes,
	to unset a global config run:
	
	$ minikube config unset kubernetes-version

                                                
                                                
** /stderr **
--- PASS: TestNoKubernetes/serial/StartNoK8sWithVersion (0.08s)

                                                
                                    
TestNoKubernetes/serial/StartWithK8s (35.33s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/StartWithK8s
no_kubernetes_test.go:95: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-144370 --driver=docker  --container-runtime=docker
no_kubernetes_test.go:95: (dbg) Done: out/minikube-linux-amd64 start -p NoKubernetes-144370 --driver=docker  --container-runtime=docker: (34.990799106s)
no_kubernetes_test.go:200: (dbg) Run:  out/minikube-linux-amd64 -p NoKubernetes-144370 status -o json
--- PASS: TestNoKubernetes/serial/StartWithK8s (35.33s)

                                                
                                    
TestNoKubernetes/serial/StartWithStopK8s (19.24s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/StartWithStopK8s
no_kubernetes_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-144370 --no-kubernetes --driver=docker  --container-runtime=docker
no_kubernetes_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p NoKubernetes-144370 --no-kubernetes --driver=docker  --container-runtime=docker: (17.283652784s)
no_kubernetes_test.go:200: (dbg) Run:  out/minikube-linux-amd64 -p NoKubernetes-144370 status -o json
no_kubernetes_test.go:200: (dbg) Non-zero exit: out/minikube-linux-amd64 -p NoKubernetes-144370 status -o json: exit status 2 (310.272816ms)

                                                
                                                
-- stdout --
	{"Name":"NoKubernetes-144370","Host":"Running","Kubelet":"Stopped","APIServer":"Stopped","Kubeconfig":"Configured","Worker":false}

                                                
                                                
-- /stdout --
no_kubernetes_test.go:124: (dbg) Run:  out/minikube-linux-amd64 delete -p NoKubernetes-144370
no_kubernetes_test.go:124: (dbg) Done: out/minikube-linux-amd64 delete -p NoKubernetes-144370: (1.649130288s)
--- PASS: TestNoKubernetes/serial/StartWithStopK8s (19.24s)

                                                
                                    
TestStoppedBinaryUpgrade/Setup (3.31s)

                                                
                                                
=== RUN   TestStoppedBinaryUpgrade/Setup
--- PASS: TestStoppedBinaryUpgrade/Setup (3.31s)

                                                
                                    
TestStoppedBinaryUpgrade/Upgrade (154.19s)

                                                
                                                
=== RUN   TestStoppedBinaryUpgrade/Upgrade
version_upgrade_test.go:183: (dbg) Run:  /tmp/minikube-v1.26.0.406492226 start -p stopped-upgrade-384199 --memory=2200 --vm-driver=docker  --container-runtime=docker
version_upgrade_test.go:183: (dbg) Done: /tmp/minikube-v1.26.0.406492226 start -p stopped-upgrade-384199 --memory=2200 --vm-driver=docker  --container-runtime=docker: (1m58.03467325s)
version_upgrade_test.go:192: (dbg) Run:  /tmp/minikube-v1.26.0.406492226 -p stopped-upgrade-384199 stop
version_upgrade_test.go:192: (dbg) Done: /tmp/minikube-v1.26.0.406492226 -p stopped-upgrade-384199 stop: (10.840519806s)
version_upgrade_test.go:198: (dbg) Run:  out/minikube-linux-amd64 start -p stopped-upgrade-384199 --memory=2200 --alsologtostderr -v=1 --driver=docker  --container-runtime=docker
version_upgrade_test.go:198: (dbg) Done: out/minikube-linux-amd64 start -p stopped-upgrade-384199 --memory=2200 --alsologtostderr -v=1 --driver=docker  --container-runtime=docker: (25.31911116s)
--- PASS: TestStoppedBinaryUpgrade/Upgrade (154.19s)

                                                
                                    
TestNoKubernetes/serial/Start (10.18s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/Start
no_kubernetes_test.go:136: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-144370 --no-kubernetes --driver=docker  --container-runtime=docker
no_kubernetes_test.go:136: (dbg) Done: out/minikube-linux-amd64 start -p NoKubernetes-144370 --no-kubernetes --driver=docker  --container-runtime=docker: (10.184026421s)
--- PASS: TestNoKubernetes/serial/Start (10.18s)

                                                
                                    
TestNoKubernetes/serial/VerifyK8sNotRunning (0.24s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/VerifyK8sNotRunning
no_kubernetes_test.go:147: (dbg) Run:  out/minikube-linux-amd64 ssh -p NoKubernetes-144370 "sudo systemctl is-active --quiet service kubelet"
no_kubernetes_test.go:147: (dbg) Non-zero exit: out/minikube-linux-amd64 ssh -p NoKubernetes-144370 "sudo systemctl is-active --quiet service kubelet": exit status 1 (241.822781ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 3

                                                
                                                
** /stderr **
--- PASS: TestNoKubernetes/serial/VerifyK8sNotRunning (0.24s)

                                                
                                    
TestNoKubernetes/serial/ProfileList (0.82s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/ProfileList
no_kubernetes_test.go:169: (dbg) Run:  out/minikube-linux-amd64 profile list
no_kubernetes_test.go:179: (dbg) Run:  out/minikube-linux-amd64 profile list --output=json
--- PASS: TestNoKubernetes/serial/ProfileList (0.82s)

                                                
                                    
TestNoKubernetes/serial/Stop (1.18s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/Stop
no_kubernetes_test.go:158: (dbg) Run:  out/minikube-linux-amd64 stop -p NoKubernetes-144370
no_kubernetes_test.go:158: (dbg) Done: out/minikube-linux-amd64 stop -p NoKubernetes-144370: (1.175548106s)
--- PASS: TestNoKubernetes/serial/Stop (1.18s)

                                                
                                    
TestNoKubernetes/serial/StartNoArgs (7.83s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/StartNoArgs
no_kubernetes_test.go:191: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-144370 --driver=docker  --container-runtime=docker
no_kubernetes_test.go:191: (dbg) Done: out/minikube-linux-amd64 start -p NoKubernetes-144370 --driver=docker  --container-runtime=docker: (7.827583287s)
--- PASS: TestNoKubernetes/serial/StartNoArgs (7.83s)

                                                
                                    
TestNoKubernetes/serial/VerifyK8sNotRunningSecond (0.24s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/VerifyK8sNotRunningSecond
no_kubernetes_test.go:147: (dbg) Run:  out/minikube-linux-amd64 ssh -p NoKubernetes-144370 "sudo systemctl is-active --quiet service kubelet"
no_kubernetes_test.go:147: (dbg) Non-zero exit: out/minikube-linux-amd64 ssh -p NoKubernetes-144370 "sudo systemctl is-active --quiet service kubelet": exit status 1 (240.099679ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 3

                                                
                                                
** /stderr **
--- PASS: TestNoKubernetes/serial/VerifyK8sNotRunningSecond (0.24s)

                                                
                                    
TestStoppedBinaryUpgrade/MinikubeLogs (1.1s)

                                                
                                                
=== RUN   TestStoppedBinaryUpgrade/MinikubeLogs
version_upgrade_test.go:206: (dbg) Run:  out/minikube-linux-amd64 logs -p stopped-upgrade-384199
version_upgrade_test.go:206: (dbg) Done: out/minikube-linux-amd64 logs -p stopped-upgrade-384199: (1.102470356s)
--- PASS: TestStoppedBinaryUpgrade/MinikubeLogs (1.10s)

                                                
                                    
TestPause/serial/Start (38.24s)

                                                
                                                
=== RUN   TestPause/serial/Start
pause_test.go:80: (dbg) Run:  out/minikube-linux-amd64 start -p pause-704014 --memory=2048 --install-addons=false --wait=all --driver=docker  --container-runtime=docker
pause_test.go:80: (dbg) Done: out/minikube-linux-amd64 start -p pause-704014 --memory=2048 --install-addons=false --wait=all --driver=docker  --container-runtime=docker: (38.24066398s)
--- PASS: TestPause/serial/Start (38.24s)

                                                
                                    
TestPause/serial/SecondStartNoReconfiguration (27.03s)

                                                
                                                
=== RUN   TestPause/serial/SecondStartNoReconfiguration
pause_test.go:92: (dbg) Run:  out/minikube-linux-amd64 start -p pause-704014 --alsologtostderr -v=1 --driver=docker  --container-runtime=docker
pause_test.go:92: (dbg) Done: out/minikube-linux-amd64 start -p pause-704014 --alsologtostderr -v=1 --driver=docker  --container-runtime=docker: (27.014226417s)
--- PASS: TestPause/serial/SecondStartNoReconfiguration (27.03s)

                                                
                                    
TestNetworkPlugins/group/auto/Start (41.59s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/auto/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p auto-045754 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --driver=docker  --container-runtime=docker
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p auto-045754 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --driver=docker  --container-runtime=docker: (41.590485649s)
--- PASS: TestNetworkPlugins/group/auto/Start (41.59s)

                                                
                                    
TestNetworkPlugins/group/kindnet/Start (62.25s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p kindnet-045754 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=kindnet --driver=docker  --container-runtime=docker
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p kindnet-045754 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=kindnet --driver=docker  --container-runtime=docker: (1m2.253304758s)
--- PASS: TestNetworkPlugins/group/kindnet/Start (62.25s)

                                                
                                    
TestPause/serial/Pause (0.52s)

                                                
                                                
=== RUN   TestPause/serial/Pause
pause_test.go:110: (dbg) Run:  out/minikube-linux-amd64 pause -p pause-704014 --alsologtostderr -v=5
--- PASS: TestPause/serial/Pause (0.52s)

                                                
                                    
TestPause/serial/VerifyStatus (0.3s)

                                                
                                                
=== RUN   TestPause/serial/VerifyStatus
status_test.go:76: (dbg) Run:  out/minikube-linux-amd64 status -p pause-704014 --output=json --layout=cluster
status_test.go:76: (dbg) Non-zero exit: out/minikube-linux-amd64 status -p pause-704014 --output=json --layout=cluster: exit status 2 (295.190896ms)

                                                
                                                
-- stdout --
	{"Name":"pause-704014","StatusCode":418,"StatusName":"Paused","Step":"Done","StepDetail":"* Paused 12 containers in: kube-system, kubernetes-dashboard, storage-gluster, istio-operator","BinaryVersion":"v1.34.0","Components":{"kubeconfig":{"Name":"kubeconfig","StatusCode":200,"StatusName":"OK"}},"Nodes":[{"Name":"pause-704014","StatusCode":200,"StatusName":"OK","Components":{"apiserver":{"Name":"apiserver","StatusCode":418,"StatusName":"Paused"},"kubelet":{"Name":"kubelet","StatusCode":405,"StatusName":"Stopped"}}}]}

                                                
                                                
-- /stdout --
--- PASS: TestPause/serial/VerifyStatus (0.30s)

                                                
                                    
TestPause/serial/Unpause (0.47s)

                                                
                                                
=== RUN   TestPause/serial/Unpause
pause_test.go:121: (dbg) Run:  out/minikube-linux-amd64 unpause -p pause-704014 --alsologtostderr -v=5
--- PASS: TestPause/serial/Unpause (0.47s)

                                                
                                    
TestPause/serial/PauseAgain (0.63s)

                                                
                                                
=== RUN   TestPause/serial/PauseAgain
pause_test.go:110: (dbg) Run:  out/minikube-linux-amd64 pause -p pause-704014 --alsologtostderr -v=5
--- PASS: TestPause/serial/PauseAgain (0.63s)

                                                
                                    
TestPause/serial/DeletePaused (3.21s)

                                                
                                                
=== RUN   TestPause/serial/DeletePaused
pause_test.go:132: (dbg) Run:  out/minikube-linux-amd64 delete -p pause-704014 --alsologtostderr -v=5
pause_test.go:132: (dbg) Done: out/minikube-linux-amd64 delete -p pause-704014 --alsologtostderr -v=5: (3.214356894s)
--- PASS: TestPause/serial/DeletePaused (3.21s)

                                                
                                    
TestPause/serial/VerifyDeletedResources (0.59s)

                                                
                                                
=== RUN   TestPause/serial/VerifyDeletedResources
pause_test.go:142: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
pause_test.go:168: (dbg) Run:  docker ps -a
pause_test.go:173: (dbg) Run:  docker volume inspect pause-704014
pause_test.go:173: (dbg) Non-zero exit: docker volume inspect pause-704014: exit status 1 (17.911872ms)

                                                
                                                
-- stdout --
	[]

                                                
                                                
-- /stdout --
** stderr ** 
	Error response from daemon: get pause-704014: no such volume

                                                
                                                
** /stderr **
pause_test.go:178: (dbg) Run:  docker network ls
--- PASS: TestPause/serial/VerifyDeletedResources (0.59s)

                                                
                                    
TestNetworkPlugins/group/calico/Start (62.37s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p calico-045754 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=calico --driver=docker  --container-runtime=docker
E0916 18:04:11.251993  112842 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19649-105988/.minikube/profiles/skaffold-258004/client.crt: no such file or directory" logger="UnhandledError"
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p calico-045754 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=calico --driver=docker  --container-runtime=docker: (1m2.366266679s)
--- PASS: TestNetworkPlugins/group/calico/Start (62.37s)

                                                
                                    
TestNetworkPlugins/group/auto/KubeletFlags (0.34s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/auto/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p auto-045754 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/auto/KubeletFlags (0.34s)

                                                
                                    
TestNetworkPlugins/group/auto/NetCatPod (10.25s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/auto/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context auto-045754 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/auto/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-6fc964789b-hmx52" [29f5055c-b992-4a5d-86ac-9a54de161753] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-6fc964789b-hmx52" [29f5055c-b992-4a5d-86ac-9a54de161753] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/auto/NetCatPod: app=netcat healthy within 10.003876054s
--- PASS: TestNetworkPlugins/group/auto/NetCatPod (10.25s)

                                                
                                    
TestNetworkPlugins/group/auto/DNS (0.18s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/auto/DNS
net_test.go:175: (dbg) Run:  kubectl --context auto-045754 exec deployment/netcat -- nslookup kubernetes.default
E0916 18:04:52.214258  112842 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19649-105988/.minikube/profiles/skaffold-258004/client.crt: no such file or directory" logger="UnhandledError"
--- PASS: TestNetworkPlugins/group/auto/DNS (0.18s)

TestNetworkPlugins/group/auto/Localhost (0.18s)

=== RUN   TestNetworkPlugins/group/auto/Localhost
net_test.go:194: (dbg) Run:  kubectl --context auto-045754 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/auto/Localhost (0.18s)

TestNetworkPlugins/group/auto/HairPin (0.17s)

=== RUN   TestNetworkPlugins/group/auto/HairPin
net_test.go:264: (dbg) Run:  kubectl --context auto-045754 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/auto/HairPin (0.17s)

TestNetworkPlugins/group/kindnet/ControllerPod (6.01s)

=== RUN   TestNetworkPlugins/group/kindnet/ControllerPod
net_test.go:120: (dbg) TestNetworkPlugins/group/kindnet/ControllerPod: waiting 10m0s for pods matching "app=kindnet" in namespace "kube-system" ...
helpers_test.go:344: "kindnet-gq7bh" [ee57bdc9-205f-47d0-972f-7e81f952fcc9] Running
net_test.go:120: (dbg) TestNetworkPlugins/group/kindnet/ControllerPod: app=kindnet healthy within 6.003525802s
--- PASS: TestNetworkPlugins/group/kindnet/ControllerPod (6.01s)

TestNetworkPlugins/group/kindnet/KubeletFlags (0.3s)

=== RUN   TestNetworkPlugins/group/kindnet/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p kindnet-045754 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/kindnet/KubeletFlags (0.30s)

TestNetworkPlugins/group/custom-flannel/Start (38.76s)

=== RUN   TestNetworkPlugins/group/custom-flannel/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p custom-flannel-045754 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=testdata/kube-flannel.yaml --driver=docker  --container-runtime=docker
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p custom-flannel-045754 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=testdata/kube-flannel.yaml --driver=docker  --container-runtime=docker: (38.759363496s)
--- PASS: TestNetworkPlugins/group/custom-flannel/Start (38.76s)

TestNetworkPlugins/group/kindnet/NetCatPod (10.22s)

=== RUN   TestNetworkPlugins/group/kindnet/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context kindnet-045754 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/kindnet/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-6fc964789b-2f2z2" [6bbb63f0-7bdc-40fb-9217-5a0f0fb5a2af] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-6fc964789b-2f2z2" [6bbb63f0-7bdc-40fb-9217-5a0f0fb5a2af] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/kindnet/NetCatPod: app=netcat healthy within 10.00429332s
--- PASS: TestNetworkPlugins/group/kindnet/NetCatPod (10.22s)

TestNetworkPlugins/group/calico/ControllerPod (6.01s)

=== RUN   TestNetworkPlugins/group/calico/ControllerPod
net_test.go:120: (dbg) TestNetworkPlugins/group/calico/ControllerPod: waiting 10m0s for pods matching "k8s-app=calico-node" in namespace "kube-system" ...
helpers_test.go:344: "calico-node-qxh5h" [90efa4f6-5f6c-4214-b1c1-65fd6a1b36ff] Running
net_test.go:120: (dbg) TestNetworkPlugins/group/calico/ControllerPod: k8s-app=calico-node healthy within 6.004817095s
--- PASS: TestNetworkPlugins/group/calico/ControllerPod (6.01s)

TestNetworkPlugins/group/false/Start (34.67s)

=== RUN   TestNetworkPlugins/group/false/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p false-045754 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=false --driver=docker  --container-runtime=docker
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p false-045754 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=false --driver=docker  --container-runtime=docker: (34.673225433s)
--- PASS: TestNetworkPlugins/group/false/Start (34.67s)

TestNetworkPlugins/group/calico/KubeletFlags (0.28s)

=== RUN   TestNetworkPlugins/group/calico/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p calico-045754 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/calico/KubeletFlags (0.28s)

TestNetworkPlugins/group/calico/NetCatPod (11.21s)

=== RUN   TestNetworkPlugins/group/calico/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context calico-045754 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/calico/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-6fc964789b-7xnrq" [2af86926-7f1d-4127-b6fc-28a9cdee9a80] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-6fc964789b-7xnrq" [2af86926-7f1d-4127-b6fc-28a9cdee9a80] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/calico/NetCatPod: app=netcat healthy within 11.004116687s
--- PASS: TestNetworkPlugins/group/calico/NetCatPod (11.21s)

TestNetworkPlugins/group/kindnet/DNS (0.16s)

=== RUN   TestNetworkPlugins/group/kindnet/DNS
net_test.go:175: (dbg) Run:  kubectl --context kindnet-045754 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/kindnet/DNS (0.16s)

TestNetworkPlugins/group/kindnet/Localhost (0.15s)

=== RUN   TestNetworkPlugins/group/kindnet/Localhost
net_test.go:194: (dbg) Run:  kubectl --context kindnet-045754 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/kindnet/Localhost (0.15s)

TestNetworkPlugins/group/kindnet/HairPin (0.14s)

=== RUN   TestNetworkPlugins/group/kindnet/HairPin
net_test.go:264: (dbg) Run:  kubectl --context kindnet-045754 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/kindnet/HairPin (0.14s)

TestNetworkPlugins/group/calico/DNS (0.19s)

=== RUN   TestNetworkPlugins/group/calico/DNS
net_test.go:175: (dbg) Run:  kubectl --context calico-045754 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/calico/DNS (0.19s)

TestNetworkPlugins/group/calico/Localhost (0.13s)

=== RUN   TestNetworkPlugins/group/calico/Localhost
net_test.go:194: (dbg) Run:  kubectl --context calico-045754 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/calico/Localhost (0.13s)

TestNetworkPlugins/group/calico/HairPin (0.14s)

=== RUN   TestNetworkPlugins/group/calico/HairPin
net_test.go:264: (dbg) Run:  kubectl --context calico-045754 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/calico/HairPin (0.14s)

TestNetworkPlugins/group/bridge/Start (38.67s)

=== RUN   TestNetworkPlugins/group/bridge/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p bridge-045754 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=bridge --driver=docker  --container-runtime=docker
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p bridge-045754 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=bridge --driver=docker  --container-runtime=docker: (38.673953489s)
--- PASS: TestNetworkPlugins/group/bridge/Start (38.67s)

TestNetworkPlugins/group/custom-flannel/KubeletFlags (0.3s)

=== RUN   TestNetworkPlugins/group/custom-flannel/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p custom-flannel-045754 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/custom-flannel/KubeletFlags (0.30s)

TestNetworkPlugins/group/custom-flannel/NetCatPod (13.29s)

=== RUN   TestNetworkPlugins/group/custom-flannel/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context custom-flannel-045754 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/custom-flannel/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-6fc964789b-vrjvf" [2a2dcf1a-3091-4044-a691-f2a45f039214] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-6fc964789b-vrjvf" [2a2dcf1a-3091-4044-a691-f2a45f039214] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/custom-flannel/NetCatPod: app=netcat healthy within 13.0038957s
--- PASS: TestNetworkPlugins/group/custom-flannel/NetCatPod (13.29s)

TestNetworkPlugins/group/false/KubeletFlags (0.31s)

=== RUN   TestNetworkPlugins/group/false/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p false-045754 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/false/KubeletFlags (0.31s)

TestNetworkPlugins/group/false/NetCatPod (10.22s)

=== RUN   TestNetworkPlugins/group/false/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context false-045754 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/false/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-6fc964789b-8d65n" [c2e7dc78-6827-4279-8f86-1c27a71501d7] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-6fc964789b-8d65n" [c2e7dc78-6827-4279-8f86-1c27a71501d7] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/false/NetCatPod: app=netcat healthy within 10.004035361s
--- PASS: TestNetworkPlugins/group/false/NetCatPod (10.22s)

TestNetworkPlugins/group/kubenet/Start (64.54s)

=== RUN   TestNetworkPlugins/group/kubenet/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p kubenet-045754 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --network-plugin=kubenet --driver=docker  --container-runtime=docker
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p kubenet-045754 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --network-plugin=kubenet --driver=docker  --container-runtime=docker: (1m4.54102342s)
--- PASS: TestNetworkPlugins/group/kubenet/Start (64.54s)

TestNetworkPlugins/group/false/DNS (0.13s)

=== RUN   TestNetworkPlugins/group/false/DNS
net_test.go:175: (dbg) Run:  kubectl --context false-045754 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/false/DNS (0.13s)

TestNetworkPlugins/group/false/Localhost (0.12s)

=== RUN   TestNetworkPlugins/group/false/Localhost
net_test.go:194: (dbg) Run:  kubectl --context false-045754 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/false/Localhost (0.12s)

TestNetworkPlugins/group/false/HairPin (0.13s)

=== RUN   TestNetworkPlugins/group/false/HairPin
net_test.go:264: (dbg) Run:  kubectl --context false-045754 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/false/HairPin (0.13s)

TestNetworkPlugins/group/custom-flannel/DNS (25.37s)

=== RUN   TestNetworkPlugins/group/custom-flannel/DNS
net_test.go:175: (dbg) Run:  kubectl --context custom-flannel-045754 exec deployment/netcat -- nslookup kubernetes.default
E0916 18:06:14.136102  112842 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19649-105988/.minikube/profiles/skaffold-258004/client.crt: no such file or directory" logger="UnhandledError"
net_test.go:175: (dbg) Non-zero exit: kubectl --context custom-flannel-045754 exec deployment/netcat -- nslookup kubernetes.default: exit status 1 (15.140062062s)

-- stdout --
	;; connection timed out; no servers could be reached

-- /stdout --
** stderr ** 
	command terminated with exit code 1

** /stderr **
net_test.go:175: (dbg) Run:  kubectl --context custom-flannel-045754 exec deployment/netcat -- nslookup kubernetes.default
net_test.go:175: (dbg) Non-zero exit: kubectl --context custom-flannel-045754 exec deployment/netcat -- nslookup kubernetes.default: exit status 137 (7.358944235s)

** stderr ** 
	command terminated with exit code 137

** /stderr **
net_test.go:175: (dbg) Run:  kubectl --context custom-flannel-045754 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/custom-flannel/DNS (25.37s)

TestNetworkPlugins/group/flannel/Start (50.67s)

=== RUN   TestNetworkPlugins/group/flannel/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p flannel-045754 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=flannel --driver=docker  --container-runtime=docker
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p flannel-045754 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=flannel --driver=docker  --container-runtime=docker: (50.671167243s)
--- PASS: TestNetworkPlugins/group/flannel/Start (50.67s)

TestNetworkPlugins/group/bridge/KubeletFlags (0.36s)

=== RUN   TestNetworkPlugins/group/bridge/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p bridge-045754 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/bridge/KubeletFlags (0.36s)

TestNetworkPlugins/group/bridge/NetCatPod (11.22s)

=== RUN   TestNetworkPlugins/group/bridge/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context bridge-045754 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/bridge/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-6fc964789b-hlss6" [b8ea1188-63cc-4152-a168-4c585768c1d9] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-6fc964789b-hlss6" [b8ea1188-63cc-4152-a168-4c585768c1d9] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/bridge/NetCatPod: app=netcat healthy within 11.003498647s
--- PASS: TestNetworkPlugins/group/bridge/NetCatPod (11.22s)

TestNetworkPlugins/group/custom-flannel/Localhost (0.1s)

=== RUN   TestNetworkPlugins/group/custom-flannel/Localhost
net_test.go:194: (dbg) Run:  kubectl --context custom-flannel-045754 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/custom-flannel/Localhost (0.10s)

TestNetworkPlugins/group/custom-flannel/HairPin (0.1s)

=== RUN   TestNetworkPlugins/group/custom-flannel/HairPin
net_test.go:264: (dbg) Run:  kubectl --context custom-flannel-045754 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/custom-flannel/HairPin (0.10s)

TestNetworkPlugins/group/bridge/DNS (0.13s)

=== RUN   TestNetworkPlugins/group/bridge/DNS
net_test.go:175: (dbg) Run:  kubectl --context bridge-045754 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/bridge/DNS (0.13s)

TestNetworkPlugins/group/bridge/Localhost (0.12s)

=== RUN   TestNetworkPlugins/group/bridge/Localhost
net_test.go:194: (dbg) Run:  kubectl --context bridge-045754 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/bridge/Localhost (0.12s)

TestNetworkPlugins/group/bridge/HairPin (0.11s)

=== RUN   TestNetworkPlugins/group/bridge/HairPin
net_test.go:264: (dbg) Run:  kubectl --context bridge-045754 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/bridge/HairPin (0.11s)

TestNetworkPlugins/group/enable-default-cni/Start (44.86s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p enable-default-cni-045754 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --enable-default-cni=true --driver=docker  --container-runtime=docker
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p enable-default-cni-045754 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --enable-default-cni=true --driver=docker  --container-runtime=docker: (44.859155893s)
--- PASS: TestNetworkPlugins/group/enable-default-cni/Start (44.86s)

TestStartStop/group/old-k8s-version/serial/FirstStart (124.52s)

=== RUN   TestStartStop/group/old-k8s-version/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-linux-amd64 start -p old-k8s-version-061237 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=docker --kubernetes-version=v1.20.0
start_stop_delete_test.go:186: (dbg) Done: out/minikube-linux-amd64 start -p old-k8s-version-061237 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=docker --kubernetes-version=v1.20.0: (2m4.521379601s)
--- PASS: TestStartStop/group/old-k8s-version/serial/FirstStart (124.52s)

TestNetworkPlugins/group/kubenet/KubeletFlags (0.29s)

=== RUN   TestNetworkPlugins/group/kubenet/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p kubenet-045754 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/kubenet/KubeletFlags (0.29s)

TestNetworkPlugins/group/kubenet/NetCatPod (13.19s)

=== RUN   TestNetworkPlugins/group/kubenet/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context kubenet-045754 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/kubenet/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-6fc964789b-zh4bb" [660aaeb6-7cfa-4735-a4aa-ef44fcf4f486] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-6fc964789b-zh4bb" [660aaeb6-7cfa-4735-a4aa-ef44fcf4f486] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/kubenet/NetCatPod: app=netcat healthy within 13.002944225s
--- PASS: TestNetworkPlugins/group/kubenet/NetCatPod (13.19s)

TestNetworkPlugins/group/kubenet/DNS (0.15s)

=== RUN   TestNetworkPlugins/group/kubenet/DNS
net_test.go:175: (dbg) Run:  kubectl --context kubenet-045754 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/kubenet/DNS (0.15s)

TestNetworkPlugins/group/kubenet/Localhost (0.13s)

=== RUN   TestNetworkPlugins/group/kubenet/Localhost
net_test.go:194: (dbg) Run:  kubectl --context kubenet-045754 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/kubenet/Localhost (0.13s)

TestNetworkPlugins/group/kubenet/HairPin (0.11s)

=== RUN   TestNetworkPlugins/group/kubenet/HairPin
net_test.go:264: (dbg) Run:  kubectl --context kubenet-045754 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/kubenet/HairPin (0.11s)

TestNetworkPlugins/group/flannel/ControllerPod (6.01s)

=== RUN   TestNetworkPlugins/group/flannel/ControllerPod
net_test.go:120: (dbg) TestNetworkPlugins/group/flannel/ControllerPod: waiting 10m0s for pods matching "app=flannel" in namespace "kube-flannel" ...
helpers_test.go:344: "kube-flannel-ds-c4sw8" [c0784727-f9f0-47e2-8835-3a7a1b2eb869] Running
net_test.go:120: (dbg) TestNetworkPlugins/group/flannel/ControllerPod: app=flannel healthy within 6.003834134s
--- PASS: TestNetworkPlugins/group/flannel/ControllerPod (6.01s)

TestNetworkPlugins/group/flannel/KubeletFlags (0.36s)

=== RUN   TestNetworkPlugins/group/flannel/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p flannel-045754 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/flannel/KubeletFlags (0.36s)

TestNetworkPlugins/group/flannel/NetCatPod (10.22s)

=== RUN   TestNetworkPlugins/group/flannel/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context flannel-045754 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/flannel/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-6fc964789b-27dxn" [ef2dba84-4323-4ba1-9aee-2b3afaeae2c6] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-6fc964789b-27dxn" [ef2dba84-4323-4ba1-9aee-2b3afaeae2c6] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/flannel/NetCatPod: app=netcat healthy within 10.002865703s
--- PASS: TestNetworkPlugins/group/flannel/NetCatPod (10.22s)

TestNetworkPlugins/group/flannel/DNS (0.15s)

=== RUN   TestNetworkPlugins/group/flannel/DNS
net_test.go:175: (dbg) Run:  kubectl --context flannel-045754 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/flannel/DNS (0.15s)

TestNetworkPlugins/group/flannel/Localhost (0.12s)

=== RUN   TestNetworkPlugins/group/flannel/Localhost
net_test.go:194: (dbg) Run:  kubectl --context flannel-045754 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/flannel/Localhost (0.12s)

TestNetworkPlugins/group/flannel/HairPin (0.11s)

=== RUN   TestNetworkPlugins/group/flannel/HairPin
net_test.go:264: (dbg) Run:  kubectl --context flannel-045754 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/flannel/HairPin (0.11s)

TestStartStop/group/default-k8s-diff-port/serial/FirstStart (37.63s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-linux-amd64 start -p default-k8s-diff-port-781540 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=docker --kubernetes-version=v1.31.1
start_stop_delete_test.go:186: (dbg) Done: out/minikube-linux-amd64 start -p default-k8s-diff-port-781540 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=docker --kubernetes-version=v1.31.1: (37.631883798s)
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/FirstStart (37.63s)

                                                
                                    
TestNetworkPlugins/group/enable-default-cni/KubeletFlags (0.34s)
=== RUN   TestNetworkPlugins/group/enable-default-cni/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p enable-default-cni-045754 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/enable-default-cni/KubeletFlags (0.34s)

                                                
                                    
TestNetworkPlugins/group/enable-default-cni/NetCatPod (9.24s)
=== RUN   TestNetworkPlugins/group/enable-default-cni/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context enable-default-cni-045754 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/enable-default-cni/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-6fc964789b-g6stm" [ee7127e7-2c9d-4f7a-a7d5-54b0dbc375d7] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-6fc964789b-g6stm" [ee7127e7-2c9d-4f7a-a7d5-54b0dbc375d7] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/enable-default-cni/NetCatPod: app=netcat healthy within 9.004368805s
--- PASS: TestNetworkPlugins/group/enable-default-cni/NetCatPod (9.24s)

                                                
                                    
TestNetworkPlugins/group/enable-default-cni/DNS (0.16s)
=== RUN   TestNetworkPlugins/group/enable-default-cni/DNS
net_test.go:175: (dbg) Run:  kubectl --context enable-default-cni-045754 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/enable-default-cni/DNS (0.16s)

                                                
                                    
TestNetworkPlugins/group/enable-default-cni/Localhost (0.13s)
=== RUN   TestNetworkPlugins/group/enable-default-cni/Localhost
net_test.go:194: (dbg) Run:  kubectl --context enable-default-cni-045754 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/enable-default-cni/Localhost (0.13s)

                                                
                                    
TestNetworkPlugins/group/enable-default-cni/HairPin (0.15s)
=== RUN   TestNetworkPlugins/group/enable-default-cni/HairPin
net_test.go:264: (dbg) Run:  kubectl --context enable-default-cni-045754 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/enable-default-cni/HairPin (0.15s)
E0916 18:11:33.041572  112842 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19649-105988/.minikube/profiles/custom-flannel-045754/client.crt: no such file or directory" logger="UnhandledError"
E0916 18:11:33.377191  112842 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19649-105988/.minikube/profiles/bridge-045754/client.crt: no such file or directory" logger="UnhandledError"
E0916 18:11:33.717780  112842 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19649-105988/.minikube/profiles/false-045754/client.crt: no such file or directory" logger="UnhandledError"
E0916 18:11:33.733113  112842 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19649-105988/.minikube/profiles/functional-623374/client.crt: no such file or directory" logger="UnhandledError"
E0916 18:11:35.505555  112842 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19649-105988/.minikube/profiles/calico-045754/client.crt: no such file or directory" logger="UnhandledError"
E0916 18:11:43.618954  112842 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19649-105988/.minikube/profiles/bridge-045754/client.crt: no such file or directory" logger="UnhandledError"
E0916 18:11:58.125823  112842 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19649-105988/.minikube/profiles/kubenet-045754/client.crt: no such file or directory" logger="UnhandledError"
E0916 18:11:58.132281  112842 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19649-105988/.minikube/profiles/kubenet-045754/client.crt: no such file or directory" logger="UnhandledError"
E0916 18:11:58.143654  112842 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19649-105988/.minikube/profiles/kubenet-045754/client.crt: no such file or directory" logger="UnhandledError"
E0916 18:11:58.165034  112842 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19649-105988/.minikube/profiles/kubenet-045754/client.crt: no such file or directory" logger="UnhandledError"
E0916 18:11:58.206476  112842 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19649-105988/.minikube/profiles/kubenet-045754/client.crt: no such file or directory" logger="UnhandledError"
E0916 18:11:58.287913  112842 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19649-105988/.minikube/profiles/kubenet-045754/client.crt: no such file or directory" logger="UnhandledError"
E0916 18:11:58.449417  112842 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19649-105988/.minikube/profiles/kubenet-045754/client.crt: no such file or directory" logger="UnhandledError"
E0916 18:11:58.771528  112842 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19649-105988/.minikube/profiles/kubenet-045754/client.crt: no such file or directory" logger="UnhandledError"
E0916 18:11:59.412953  112842 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19649-105988/.minikube/profiles/kubenet-045754/client.crt: no such file or directory" logger="UnhandledError"
E0916 18:12:00.694225  112842 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19649-105988/.minikube/profiles/kubenet-045754/client.crt: no such file or directory" logger="UnhandledError"
E0916 18:12:03.256420  112842 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19649-105988/.minikube/profiles/kubenet-045754/client.crt: no such file or directory" logger="UnhandledError"
E0916 18:12:04.100815  112842 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19649-105988/.minikube/profiles/bridge-045754/client.crt: no such file or directory" logger="UnhandledError"
E0916 18:12:08.378273  112842 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19649-105988/.minikube/profiles/kubenet-045754/client.crt: no such file or directory" logger="UnhandledError"
E0916 18:12:12.639058  112842 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19649-105988/.minikube/profiles/flannel-045754/client.crt: no such file or directory" logger="UnhandledError"
E0916 18:12:12.645422  112842 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19649-105988/.minikube/profiles/flannel-045754/client.crt: no such file or directory" logger="UnhandledError"
E0916 18:12:12.656777  112842 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19649-105988/.minikube/profiles/flannel-045754/client.crt: no such file or directory" logger="UnhandledError"
E0916 18:12:12.678167  112842 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19649-105988/.minikube/profiles/flannel-045754/client.crt: no such file or directory" logger="UnhandledError"
E0916 18:12:12.719541  112842 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19649-105988/.minikube/profiles/flannel-045754/client.crt: no such file or directory" logger="UnhandledError"
E0916 18:12:12.801089  112842 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19649-105988/.minikube/profiles/flannel-045754/client.crt: no such file or directory" logger="UnhandledError"
E0916 18:12:12.962587  112842 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19649-105988/.minikube/profiles/flannel-045754/client.crt: no such file or directory" logger="UnhandledError"
E0916 18:12:13.284409  112842 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19649-105988/.minikube/profiles/flannel-045754/client.crt: no such file or directory" logger="UnhandledError"
E0916 18:12:13.925961  112842 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19649-105988/.minikube/profiles/flannel-045754/client.crt: no such file or directory" logger="UnhandledError"
E0916 18:12:14.003385  112842 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19649-105988/.minikube/profiles/custom-flannel-045754/client.crt: no such file or directory" logger="UnhandledError"
E0916 18:12:14.679655  112842 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19649-105988/.minikube/profiles/false-045754/client.crt: no such file or directory" logger="UnhandledError"
E0916 18:12:15.208141  112842 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19649-105988/.minikube/profiles/flannel-045754/client.crt: no such file or directory" logger="UnhandledError"
E0916 18:12:17.769768  112842 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19649-105988/.minikube/profiles/flannel-045754/client.crt: no such file or directory" logger="UnhandledError"
E0916 18:12:18.619600  112842 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19649-105988/.minikube/profiles/kubenet-045754/client.crt: no such file or directory" logger="UnhandledError"
E0916 18:12:22.891652  112842 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19649-105988/.minikube/profiles/flannel-045754/client.crt: no such file or directory" logger="UnhandledError"
E0916 18:12:25.991267  112842 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19649-105988/.minikube/profiles/auto-045754/client.crt: no such file or directory" logger="UnhandledError"
E0916 18:12:33.133535  112842 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19649-105988/.minikube/profiles/flannel-045754/client.crt: no such file or directory" logger="UnhandledError"
E0916 18:12:36.089866  112842 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19649-105988/.minikube/profiles/enable-default-cni-045754/client.crt: no such file or directory" logger="UnhandledError"
E0916 18:12:36.096251  112842 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19649-105988/.minikube/profiles/enable-default-cni-045754/client.crt: no such file or directory" logger="UnhandledError"
E0916 18:12:36.107584  112842 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19649-105988/.minikube/profiles/enable-default-cni-045754/client.crt: no such file or directory" logger="UnhandledError"
E0916 18:12:36.128912  112842 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19649-105988/.minikube/profiles/enable-default-cni-045754/client.crt: no such file or directory" logger="UnhandledError"
E0916 18:12:36.170350  112842 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19649-105988/.minikube/profiles/enable-default-cni-045754/client.crt: no such file or directory" logger="UnhandledError"
E0916 18:12:36.251833  112842 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19649-105988/.minikube/profiles/enable-default-cni-045754/client.crt: no such file or directory" logger="UnhandledError"
E0916 18:12:36.413401  112842 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19649-105988/.minikube/profiles/enable-default-cni-045754/client.crt: no such file or directory" logger="UnhandledError"
E0916 18:12:36.735302  112842 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19649-105988/.minikube/profiles/enable-default-cni-045754/client.crt: no such file or directory" logger="UnhandledError"
E0916 18:12:37.377127  112842 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19649-105988/.minikube/profiles/enable-default-cni-045754/client.crt: no such file or directory" logger="UnhandledError"
E0916 18:12:38.659314  112842 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19649-105988/.minikube/profiles/enable-default-cni-045754/client.crt: no such file or directory" logger="UnhandledError"
E0916 18:12:39.100901  112842 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19649-105988/.minikube/profiles/kubenet-045754/client.crt: no such file or directory" logger="UnhandledError"
E0916 18:12:41.221168  112842 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19649-105988/.minikube/profiles/enable-default-cni-045754/client.crt: no such file or directory" logger="UnhandledError"
E0916 18:12:45.063014  112842 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19649-105988/.minikube/profiles/bridge-045754/client.crt: no such file or directory" logger="UnhandledError"
E0916 18:12:46.343194  112842 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19649-105988/.minikube/profiles/enable-default-cni-045754/client.crt: no such file or directory" logger="UnhandledError"
E0916 18:12:50.504570  112842 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19649-105988/.minikube/profiles/kindnet-045754/client.crt: no such file or directory" logger="UnhandledError"
E0916 18:12:53.615512  112842 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19649-105988/.minikube/profiles/flannel-045754/client.crt: no such file or directory" logger="UnhandledError"

                                                
                                    
TestStartStop/group/no-preload/serial/FirstStart (47.42s)
=== RUN   TestStartStop/group/no-preload/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-linux-amd64 start -p no-preload-568140 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=docker --kubernetes-version=v1.31.1
start_stop_delete_test.go:186: (dbg) Done: out/minikube-linux-amd64 start -p no-preload-568140 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=docker --kubernetes-version=v1.31.1: (47.42248003s)
--- PASS: TestStartStop/group/no-preload/serial/FirstStart (47.42s)

                                                
                                    
TestStartStop/group/embed-certs/serial/FirstStart (42.23s)
=== RUN   TestStartStop/group/embed-certs/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-linux-amd64 start -p embed-certs-418003 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=docker --kubernetes-version=v1.31.1
start_stop_delete_test.go:186: (dbg) Done: out/minikube-linux-amd64 start -p embed-certs-418003 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=docker --kubernetes-version=v1.31.1: (42.231197167s)
--- PASS: TestStartStop/group/embed-certs/serial/FirstStart (42.23s)

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/DeployApp (12.27s)
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context default-k8s-diff-port-781540 create -f testdata/busybox.yaml
start_stop_delete_test.go:196: (dbg) TestStartStop/group/default-k8s-diff-port/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:344: "busybox" [79bb34e3-03cd-4d60-894e-26a3b827c4a0] Pending
helpers_test.go:344: "busybox" [79bb34e3-03cd-4d60-894e-26a3b827c4a0] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:344: "busybox" [79bb34e3-03cd-4d60-894e-26a3b827c4a0] Running
start_stop_delete_test.go:196: (dbg) TestStartStop/group/default-k8s-diff-port/serial/DeployApp: integration-test=busybox healthy within 12.004329646s
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context default-k8s-diff-port-781540 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/DeployApp (12.27s)

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive (0.81s)
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p default-k8s-diff-port-781540 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context default-k8s-diff-port-781540 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive (0.81s)

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/Stop (10.85s)
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-linux-amd64 stop -p default-k8s-diff-port-781540 --alsologtostderr -v=3
E0916 18:08:30.275163  112842 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19649-105988/.minikube/profiles/skaffold-258004/client.crt: no such file or directory" logger="UnhandledError"
start_stop_delete_test.go:228: (dbg) Done: out/minikube-linux-amd64 stop -p default-k8s-diff-port-781540 --alsologtostderr -v=3: (10.850154917s)
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/Stop (10.85s)

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop (0.26s)
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-781540 -n default-k8s-diff-port-781540
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-781540 -n default-k8s-diff-port-781540: exit status 7 (143.104614ms)

                                                
                                                
-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p default-k8s-diff-port-781540 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop (0.26s)

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/SecondStart (262.70s)
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-linux-amd64 start -p default-k8s-diff-port-781540 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=docker --kubernetes-version=v1.31.1
start_stop_delete_test.go:256: (dbg) Done: out/minikube-linux-amd64 start -p default-k8s-diff-port-781540 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=docker --kubernetes-version=v1.31.1: (4m22.410516585s)
start_stop_delete_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-781540 -n default-k8s-diff-port-781540
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/SecondStart (262.70s)

                                                
                                    
TestStartStop/group/no-preload/serial/DeployApp (9.26s)
=== RUN   TestStartStop/group/no-preload/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context no-preload-568140 create -f testdata/busybox.yaml
start_stop_delete_test.go:196: (dbg) TestStartStop/group/no-preload/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:344: "busybox" [0e6cea69-5959-4109-9af9-208ca449ac6b] Pending
helpers_test.go:344: "busybox" [0e6cea69-5959-4109-9af9-208ca449ac6b] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:344: "busybox" [0e6cea69-5959-4109-9af9-208ca449ac6b] Running
E0916 18:08:44.244889  112842 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19649-105988/.minikube/profiles/addons-539053/client.crt: no such file or directory" logger="UnhandledError"
start_stop_delete_test.go:196: (dbg) TestStartStop/group/no-preload/serial/DeployApp: integration-test=busybox healthy within 9.003324805s
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context no-preload-568140 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/no-preload/serial/DeployApp (9.26s)

                                                
                                    
TestStartStop/group/no-preload/serial/EnableAddonWhileActive (1.06s)
=== RUN   TestStartStop/group/no-preload/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p no-preload-568140 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context no-preload-568140 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/no-preload/serial/EnableAddonWhileActive (1.06s)

                                                
                                    
TestStartStop/group/embed-certs/serial/DeployApp (9.28s)
=== RUN   TestStartStop/group/embed-certs/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context embed-certs-418003 create -f testdata/busybox.yaml
start_stop_delete_test.go:196: (dbg) TestStartStop/group/embed-certs/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:344: "busybox" [9fea52ec-8a81-4016-af39-26121e26b43b] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:344: "busybox" [9fea52ec-8a81-4016-af39-26121e26b43b] Running
start_stop_delete_test.go:196: (dbg) TestStartStop/group/embed-certs/serial/DeployApp: integration-test=busybox healthy within 9.004387788s
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context embed-certs-418003 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/embed-certs/serial/DeployApp (9.28s)

                                                
                                    
TestStartStop/group/no-preload/serial/Stop (10.91s)
=== RUN   TestStartStop/group/no-preload/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-linux-amd64 stop -p no-preload-568140 --alsologtostderr -v=3
start_stop_delete_test.go:228: (dbg) Done: out/minikube-linux-amd64 stop -p no-preload-568140 --alsologtostderr -v=3: (10.90974207s)
--- PASS: TestStartStop/group/no-preload/serial/Stop (10.91s)

                                                
                                    
TestStartStop/group/embed-certs/serial/EnableAddonWhileActive (0.78s)
=== RUN   TestStartStop/group/embed-certs/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p embed-certs-418003 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context embed-certs-418003 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/embed-certs/serial/EnableAddonWhileActive (0.78s)

                                                
                                    
TestStartStop/group/embed-certs/serial/Stop (10.93s)
=== RUN   TestStartStop/group/embed-certs/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-linux-amd64 stop -p embed-certs-418003 --alsologtostderr -v=3
E0916 18:08:57.977602  112842 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19649-105988/.minikube/profiles/skaffold-258004/client.crt: no such file or directory" logger="UnhandledError"
start_stop_delete_test.go:228: (dbg) Done: out/minikube-linux-amd64 stop -p embed-certs-418003 --alsologtostderr -v=3: (10.929624774s)
--- PASS: TestStartStop/group/embed-certs/serial/Stop (10.93s)

                                                
                                    
TestStartStop/group/no-preload/serial/EnableAddonAfterStop (0.23s)
=== RUN   TestStartStop/group/no-preload/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-568140 -n no-preload-568140
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-568140 -n no-preload-568140: exit status 7 (120.823577ms)

                                                
                                                
-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p no-preload-568140 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/no-preload/serial/EnableAddonAfterStop (0.23s)

                                                
                                    
TestStartStop/group/no-preload/serial/SecondStart (262.67s)
=== RUN   TestStartStop/group/no-preload/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-linux-amd64 start -p no-preload-568140 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=docker --kubernetes-version=v1.31.1
start_stop_delete_test.go:256: (dbg) Done: out/minikube-linux-amd64 start -p no-preload-568140 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=docker --kubernetes-version=v1.31.1: (4m22.38311248s)
start_stop_delete_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-568140 -n no-preload-568140
--- PASS: TestStartStop/group/no-preload/serial/SecondStart (262.67s)

                                                
                                    
TestStartStop/group/old-k8s-version/serial/DeployApp (8.42s)
=== RUN   TestStartStop/group/old-k8s-version/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context old-k8s-version-061237 create -f testdata/busybox.yaml
start_stop_delete_test.go:196: (dbg) TestStartStop/group/old-k8s-version/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:344: "busybox" [5bd051fc-1cbb-40e7-9431-44c8bfe8409f] Pending
helpers_test.go:344: "busybox" [5bd051fc-1cbb-40e7-9431-44c8bfe8409f] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:344: "busybox" [5bd051fc-1cbb-40e7-9431-44c8bfe8409f] Running
start_stop_delete_test.go:196: (dbg) TestStartStop/group/old-k8s-version/serial/DeployApp: integration-test=busybox healthy within 8.003438692s
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context old-k8s-version-061237 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/old-k8s-version/serial/DeployApp (8.42s)

                                                
                                    
TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive (0.96s)
=== RUN   TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p old-k8s-version-061237 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context old-k8s-version-061237 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive (0.96s)

                                                
                                    
TestStartStop/group/embed-certs/serial/EnableAddonAfterStop (0.32s)
=== RUN   TestStartStop/group/embed-certs/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-418003 -n embed-certs-418003
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-418003 -n embed-certs-418003: exit status 7 (152.434455ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p embed-certs-418003 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/embed-certs/serial/EnableAddonAfterStop (0.32s)

                                                
                                    
TestStartStop/group/embed-certs/serial/SecondStart (263.39s)
=== RUN   TestStartStop/group/embed-certs/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-linux-amd64 start -p embed-certs-418003 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=docker --kubernetes-version=v1.31.1
start_stop_delete_test.go:256: (dbg) Done: out/minikube-linux-amd64 start -p embed-certs-418003 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=docker --kubernetes-version=v1.31.1: (4m23.116817929s)
start_stop_delete_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-418003 -n embed-certs-418003
--- PASS: TestStartStop/group/embed-certs/serial/SecondStart (263.39s)

                                                
                                    
TestStartStop/group/old-k8s-version/serial/Stop (11.25s)
=== RUN   TestStartStop/group/old-k8s-version/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-linux-amd64 stop -p old-k8s-version-061237 --alsologtostderr -v=3
start_stop_delete_test.go:228: (dbg) Done: out/minikube-linux-amd64 stop -p old-k8s-version-061237 --alsologtostderr -v=3: (11.254213343s)
--- PASS: TestStartStop/group/old-k8s-version/serial/Stop (11.25s)

                                                
                                    
TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop (0.22s)
=== RUN   TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-061237 -n old-k8s-version-061237
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-061237 -n old-k8s-version-061237: exit status 7 (78.91511ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p old-k8s-version-061237 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop (0.22s)

                                                
                                    
TestStartStop/group/old-k8s-version/serial/SecondStart (25.39s)
=== RUN   TestStartStop/group/old-k8s-version/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-linux-amd64 start -p old-k8s-version-061237 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=docker --kubernetes-version=v1.20.0
E0916 18:09:36.802761  112842 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19649-105988/.minikube/profiles/functional-623374/client.crt: no such file or directory" logger="UnhandledError"
E0916 18:09:42.131651  112842 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19649-105988/.minikube/profiles/auto-045754/client.crt: no such file or directory" logger="UnhandledError"
E0916 18:09:42.138035  112842 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19649-105988/.minikube/profiles/auto-045754/client.crt: no such file or directory" logger="UnhandledError"
E0916 18:09:42.149464  112842 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19649-105988/.minikube/profiles/auto-045754/client.crt: no such file or directory" logger="UnhandledError"
E0916 18:09:42.170887  112842 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19649-105988/.minikube/profiles/auto-045754/client.crt: no such file or directory" logger="UnhandledError"
E0916 18:09:42.212345  112842 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19649-105988/.minikube/profiles/auto-045754/client.crt: no such file or directory" logger="UnhandledError"
E0916 18:09:42.293955  112842 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19649-105988/.minikube/profiles/auto-045754/client.crt: no such file or directory" logger="UnhandledError"
E0916 18:09:42.455528  112842 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19649-105988/.minikube/profiles/auto-045754/client.crt: no such file or directory" logger="UnhandledError"
E0916 18:09:42.776937  112842 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19649-105988/.minikube/profiles/auto-045754/client.crt: no such file or directory" logger="UnhandledError"
E0916 18:09:43.418507  112842 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19649-105988/.minikube/profiles/auto-045754/client.crt: no such file or directory" logger="UnhandledError"
E0916 18:09:44.700821  112842 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19649-105988/.minikube/profiles/auto-045754/client.crt: no such file or directory" logger="UnhandledError"
start_stop_delete_test.go:256: (dbg) Done: out/minikube-linux-amd64 start -p old-k8s-version-061237 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=docker --kubernetes-version=v1.20.0: (25.091330201s)
start_stop_delete_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-061237 -n old-k8s-version-061237
--- PASS: TestStartStop/group/old-k8s-version/serial/SecondStart (25.39s)

                                                
                                    
TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop (31.01s)
=== RUN   TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop
start_stop_delete_test.go:274: (dbg) TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
E0916 18:09:47.262197  112842 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19649-105988/.minikube/profiles/auto-045754/client.crt: no such file or directory" logger="UnhandledError"
E0916 18:09:52.383935  112842 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19649-105988/.minikube/profiles/auto-045754/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:344: "kubernetes-dashboard-cd95d586-jj5sg" [1a98c0f1-4384-4d15-8c39-6dd6a5493437] Pending / Ready:ContainersNotReady (containers with unready status: [kubernetes-dashboard]) / ContainersReady:ContainersNotReady (containers with unready status: [kubernetes-dashboard])
E0916 18:10:02.625562  112842 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19649-105988/.minikube/profiles/auto-045754/client.crt: no such file or directory" logger="UnhandledError"
E0916 18:10:06.644180  112842 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19649-105988/.minikube/profiles/kindnet-045754/client.crt: no such file or directory" logger="UnhandledError"
E0916 18:10:06.650526  112842 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19649-105988/.minikube/profiles/kindnet-045754/client.crt: no such file or directory" logger="UnhandledError"
E0916 18:10:06.661971  112842 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19649-105988/.minikube/profiles/kindnet-045754/client.crt: no such file or directory" logger="UnhandledError"
E0916 18:10:06.683300  112842 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19649-105988/.minikube/profiles/kindnet-045754/client.crt: no such file or directory" logger="UnhandledError"
E0916 18:10:06.724740  112842 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19649-105988/.minikube/profiles/kindnet-045754/client.crt: no such file or directory" logger="UnhandledError"
E0916 18:10:06.806332  112842 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19649-105988/.minikube/profiles/kindnet-045754/client.crt: no such file or directory" logger="UnhandledError"
E0916 18:10:06.967921  112842 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19649-105988/.minikube/profiles/kindnet-045754/client.crt: no such file or directory" logger="UnhandledError"
E0916 18:10:07.289727  112842 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19649-105988/.minikube/profiles/kindnet-045754/client.crt: no such file or directory" logger="UnhandledError"
E0916 18:10:07.931826  112842 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19649-105988/.minikube/profiles/kindnet-045754/client.crt: no such file or directory" logger="UnhandledError"
E0916 18:10:09.213760  112842 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19649-105988/.minikube/profiles/kindnet-045754/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:344: "kubernetes-dashboard-cd95d586-jj5sg" [1a98c0f1-4384-4d15-8c39-6dd6a5493437] Running
E0916 18:10:11.775532  112842 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19649-105988/.minikube/profiles/kindnet-045754/client.crt: no such file or directory" logger="UnhandledError"
E0916 18:10:13.567716  112842 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19649-105988/.minikube/profiles/calico-045754/client.crt: no such file or directory" logger="UnhandledError"
E0916 18:10:13.574133  112842 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19649-105988/.minikube/profiles/calico-045754/client.crt: no such file or directory" logger="UnhandledError"
E0916 18:10:13.585493  112842 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19649-105988/.minikube/profiles/calico-045754/client.crt: no such file or directory" logger="UnhandledError"
E0916 18:10:13.606846  112842 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19649-105988/.minikube/profiles/calico-045754/client.crt: no such file or directory" logger="UnhandledError"
E0916 18:10:13.648263  112842 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19649-105988/.minikube/profiles/calico-045754/client.crt: no such file or directory" logger="UnhandledError"
E0916 18:10:13.729718  112842 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19649-105988/.minikube/profiles/calico-045754/client.crt: no such file or directory" logger="UnhandledError"
E0916 18:10:13.891269  112842 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19649-105988/.minikube/profiles/calico-045754/client.crt: no such file or directory" logger="UnhandledError"
E0916 18:10:14.212989  112842 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19649-105988/.minikube/profiles/calico-045754/client.crt: no such file or directory" logger="UnhandledError"
E0916 18:10:14.854340  112842 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19649-105988/.minikube/profiles/calico-045754/client.crt: no such file or directory" logger="UnhandledError"
E0916 18:10:16.136297  112842 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19649-105988/.minikube/profiles/calico-045754/client.crt: no such file or directory" logger="UnhandledError"
start_stop_delete_test.go:274: (dbg) TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 31.003886229s
--- PASS: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop (31.01s)

                                                
                                    
TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop (5.07s)
=== RUN   TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop
start_stop_delete_test.go:287: (dbg) TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-cd95d586-jj5sg" [1a98c0f1-4384-4d15-8c39-6dd6a5493437] Running
E0916 18:10:16.897795  112842 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19649-105988/.minikube/profiles/kindnet-045754/client.crt: no such file or directory" logger="UnhandledError"
E0916 18:10:18.698161  112842 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19649-105988/.minikube/profiles/calico-045754/client.crt: no such file or directory" logger="UnhandledError"
start_stop_delete_test.go:287: (dbg) TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.003614675s
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context old-k8s-version-061237 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop (5.07s)

                                                
                                    
TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages (0.2s)
=== RUN   TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-linux-amd64 -p old-k8s-version-061237 image list --format=json
start_stop_delete_test.go:304: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
--- PASS: TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages (0.20s)

                                                
                                    
TestStartStop/group/old-k8s-version/serial/Pause (2.24s)
=== RUN   TestStartStop/group/old-k8s-version/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 pause -p old-k8s-version-061237 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-061237 -n old-k8s-version-061237
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-061237 -n old-k8s-version-061237: exit status 2 (279.23236ms)

-- stdout --
	Paused

-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p old-k8s-version-061237 -n old-k8s-version-061237
E0916 18:10:23.107344  112842 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19649-105988/.minikube/profiles/auto-045754/client.crt: no such file or directory" logger="UnhandledError"
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Kubelet}} -p old-k8s-version-061237 -n old-k8s-version-061237: exit status 2 (276.191101ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 unpause -p old-k8s-version-061237 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-061237 -n old-k8s-version-061237
E0916 18:10:23.819856  112842 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19649-105988/.minikube/profiles/calico-045754/client.crt: no such file or directory" logger="UnhandledError"
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p old-k8s-version-061237 -n old-k8s-version-061237
--- PASS: TestStartStop/group/old-k8s-version/serial/Pause (2.24s)

                                                
                                    
TestStartStop/group/newest-cni/serial/FirstStart (31.84s)
=== RUN   TestStartStop/group/newest-cni/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-linux-amd64 start -p newest-cni-580036 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=docker --kubernetes-version=v1.31.1
E0916 18:10:27.139032  112842 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19649-105988/.minikube/profiles/kindnet-045754/client.crt: no such file or directory" logger="UnhandledError"
E0916 18:10:34.061695  112842 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19649-105988/.minikube/profiles/calico-045754/client.crt: no such file or directory" logger="UnhandledError"
E0916 18:10:47.621089  112842 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19649-105988/.minikube/profiles/kindnet-045754/client.crt: no such file or directory" logger="UnhandledError"
E0916 18:10:52.062963  112842 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19649-105988/.minikube/profiles/custom-flannel-045754/client.crt: no such file or directory" logger="UnhandledError"
E0916 18:10:52.069346  112842 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19649-105988/.minikube/profiles/custom-flannel-045754/client.crt: no such file or directory" logger="UnhandledError"
E0916 18:10:52.080725  112842 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19649-105988/.minikube/profiles/custom-flannel-045754/client.crt: no such file or directory" logger="UnhandledError"
E0916 18:10:52.102171  112842 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19649-105988/.minikube/profiles/custom-flannel-045754/client.crt: no such file or directory" logger="UnhandledError"
E0916 18:10:52.143776  112842 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19649-105988/.minikube/profiles/custom-flannel-045754/client.crt: no such file or directory" logger="UnhandledError"
E0916 18:10:52.225224  112842 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19649-105988/.minikube/profiles/custom-flannel-045754/client.crt: no such file or directory" logger="UnhandledError"
E0916 18:10:52.387188  112842 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19649-105988/.minikube/profiles/custom-flannel-045754/client.crt: no such file or directory" logger="UnhandledError"
E0916 18:10:52.708591  112842 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19649-105988/.minikube/profiles/custom-flannel-045754/client.crt: no such file or directory" logger="UnhandledError"
E0916 18:10:52.742132  112842 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19649-105988/.minikube/profiles/false-045754/client.crt: no such file or directory" logger="UnhandledError"
E0916 18:10:52.748550  112842 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19649-105988/.minikube/profiles/false-045754/client.crt: no such file or directory" logger="UnhandledError"
E0916 18:10:52.759977  112842 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19649-105988/.minikube/profiles/false-045754/client.crt: no such file or directory" logger="UnhandledError"
E0916 18:10:52.781343  112842 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19649-105988/.minikube/profiles/false-045754/client.crt: no such file or directory" logger="UnhandledError"
E0916 18:10:52.822707  112842 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19649-105988/.minikube/profiles/false-045754/client.crt: no such file or directory" logger="UnhandledError"
E0916 18:10:52.904086  112842 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19649-105988/.minikube/profiles/false-045754/client.crt: no such file or directory" logger="UnhandledError"
E0916 18:10:53.065677  112842 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19649-105988/.minikube/profiles/false-045754/client.crt: no such file or directory" logger="UnhandledError"
E0916 18:10:53.350713  112842 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19649-105988/.minikube/profiles/custom-flannel-045754/client.crt: no such file or directory" logger="UnhandledError"
E0916 18:10:53.386992  112842 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19649-105988/.minikube/profiles/false-045754/client.crt: no such file or directory" logger="UnhandledError"
E0916 18:10:54.028767  112842 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19649-105988/.minikube/profiles/false-045754/client.crt: no such file or directory" logger="UnhandledError"
E0916 18:10:54.543218  112842 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19649-105988/.minikube/profiles/calico-045754/client.crt: no such file or directory" logger="UnhandledError"
E0916 18:10:54.632836  112842 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19649-105988/.minikube/profiles/custom-flannel-045754/client.crt: no such file or directory" logger="UnhandledError"
E0916 18:10:55.310408  112842 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19649-105988/.minikube/profiles/false-045754/client.crt: no such file or directory" logger="UnhandledError"
E0916 18:10:57.195082  112842 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19649-105988/.minikube/profiles/custom-flannel-045754/client.crt: no such file or directory" logger="UnhandledError"
E0916 18:10:57.872249  112842 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19649-105988/.minikube/profiles/false-045754/client.crt: no such file or directory" logger="UnhandledError"
start_stop_delete_test.go:186: (dbg) Done: out/minikube-linux-amd64 start -p newest-cni-580036 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=docker --kubernetes-version=v1.31.1: (31.837060524s)
--- PASS: TestStartStop/group/newest-cni/serial/FirstStart (31.84s)

                                                
                                    
TestStartStop/group/newest-cni/serial/DeployApp (0s)
=== RUN   TestStartStop/group/newest-cni/serial/DeployApp
--- PASS: TestStartStop/group/newest-cni/serial/DeployApp (0.00s)

                                                
                                    
TestStartStop/group/newest-cni/serial/EnableAddonWhileActive (0.86s)
=== RUN   TestStartStop/group/newest-cni/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p newest-cni-580036 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:211: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/EnableAddonWhileActive (0.86s)

                                                
                                    
TestStartStop/group/newest-cni/serial/Stop (10.72s)
=== RUN   TestStartStop/group/newest-cni/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-linux-amd64 stop -p newest-cni-580036 --alsologtostderr -v=3
E0916 18:11:02.317213  112842 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19649-105988/.minikube/profiles/custom-flannel-045754/client.crt: no such file or directory" logger="UnhandledError"
E0916 18:11:02.994548  112842 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19649-105988/.minikube/profiles/false-045754/client.crt: no such file or directory" logger="UnhandledError"
E0916 18:11:04.069615  112842 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19649-105988/.minikube/profiles/auto-045754/client.crt: no such file or directory" logger="UnhandledError"
start_stop_delete_test.go:228: (dbg) Done: out/minikube-linux-amd64 stop -p newest-cni-580036 --alsologtostderr -v=3: (10.718541957s)
--- PASS: TestStartStop/group/newest-cni/serial/Stop (10.72s)

                                                
                                    
TestStartStop/group/newest-cni/serial/EnableAddonAfterStop (0.18s)
=== RUN   TestStartStop/group/newest-cni/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p newest-cni-580036 -n newest-cni-580036
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p newest-cni-580036 -n newest-cni-580036: exit status 7 (67.929998ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p newest-cni-580036 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/newest-cni/serial/EnableAddonAfterStop (0.18s)

TestStartStop/group/newest-cni/serial/SecondStart (15.69s)

=== RUN   TestStartStop/group/newest-cni/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-linux-amd64 start -p newest-cni-580036 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=docker --kubernetes-version=v1.31.1
E0916 18:11:12.559338  112842 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19649-105988/.minikube/profiles/custom-flannel-045754/client.crt: no such file or directory" logger="UnhandledError"
E0916 18:11:13.235994  112842 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19649-105988/.minikube/profiles/false-045754/client.crt: no such file or directory" logger="UnhandledError"
E0916 18:11:23.123447  112842 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19649-105988/.minikube/profiles/bridge-045754/client.crt: no such file or directory" logger="UnhandledError"
E0916 18:11:23.129829  112842 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19649-105988/.minikube/profiles/bridge-045754/client.crt: no such file or directory" logger="UnhandledError"
E0916 18:11:23.141247  112842 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19649-105988/.minikube/profiles/bridge-045754/client.crt: no such file or directory" logger="UnhandledError"
E0916 18:11:23.162683  112842 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19649-105988/.minikube/profiles/bridge-045754/client.crt: no such file or directory" logger="UnhandledError"
E0916 18:11:23.204157  112842 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19649-105988/.minikube/profiles/bridge-045754/client.crt: no such file or directory" logger="UnhandledError"
E0916 18:11:23.285622  112842 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19649-105988/.minikube/profiles/bridge-045754/client.crt: no such file or directory" logger="UnhandledError"
E0916 18:11:23.447179  112842 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19649-105988/.minikube/profiles/bridge-045754/client.crt: no such file or directory" logger="UnhandledError"
E0916 18:11:23.769028  112842 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19649-105988/.minikube/profiles/bridge-045754/client.crt: no such file or directory" logger="UnhandledError"
E0916 18:11:24.411020  112842 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19649-105988/.minikube/profiles/bridge-045754/client.crt: no such file or directory" logger="UnhandledError"
start_stop_delete_test.go:256: (dbg) Done: out/minikube-linux-amd64 start -p newest-cni-580036 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=docker --kubernetes-version=v1.31.1: (15.367974371s)
start_stop_delete_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p newest-cni-580036 -n newest-cni-580036
E0916 18:11:25.693076  112842 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19649-105988/.minikube/profiles/bridge-045754/client.crt: no such file or directory" logger="UnhandledError"
--- PASS: TestStartStop/group/newest-cni/serial/SecondStart (15.69s)

TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop (0s)

=== RUN   TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop
start_stop_delete_test.go:273: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop (0.00s)

TestStartStop/group/newest-cni/serial/AddonExistsAfterStop (0s)

=== RUN   TestStartStop/group/newest-cni/serial/AddonExistsAfterStop
start_stop_delete_test.go:284: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/AddonExistsAfterStop (0.00s)

TestStartStop/group/newest-cni/serial/VerifyKubernetesImages (0.21s)

=== RUN   TestStartStop/group/newest-cni/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-linux-amd64 -p newest-cni-580036 image list --format=json
--- PASS: TestStartStop/group/newest-cni/serial/VerifyKubernetesImages (0.21s)

TestStartStop/group/newest-cni/serial/Pause (2.29s)

=== RUN   TestStartStop/group/newest-cni/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 pause -p newest-cni-580036 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p newest-cni-580036 -n newest-cni-580036
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p newest-cni-580036 -n newest-cni-580036: exit status 2 (274.397433ms)
-- stdout --
	Paused
-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p newest-cni-580036 -n newest-cni-580036
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Kubelet}} -p newest-cni-580036 -n newest-cni-580036: exit status 2 (279.860571ms)
-- stdout --
	Stopped
-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 unpause -p newest-cni-580036 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p newest-cni-580036 -n newest-cni-580036
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p newest-cni-580036 -n newest-cni-580036
E0916 18:11:28.255090  112842 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19649-105988/.minikube/profiles/bridge-045754/client.crt: no such file or directory" logger="UnhandledError"
--- PASS: TestStartStop/group/newest-cni/serial/Pause (2.29s)

TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop (6.01s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop
start_stop_delete_test.go:274: (dbg) TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-695b96c756-rgsv5" [ce101bcd-32f8-4efc-b713-54da7231dd34] Running
E0916 18:12:56.585323  112842 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19649-105988/.minikube/profiles/enable-default-cni-045754/client.crt: no such file or directory" logger="UnhandledError"
E0916 18:12:57.426884  112842 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19649-105988/.minikube/profiles/calico-045754/client.crt: no such file or directory" logger="UnhandledError"
start_stop_delete_test.go:274: (dbg) TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 6.00383908s
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop (6.01s)

TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop (5.08s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop
start_stop_delete_test.go:287: (dbg) TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-695b96c756-rgsv5" [ce101bcd-32f8-4efc-b713-54da7231dd34] Running
start_stop_delete_test.go:287: (dbg) TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.003694612s
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context default-k8s-diff-port-781540 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop (5.08s)

TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages (0.21s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-linux-amd64 -p default-k8s-diff-port-781540 image list --format=json
start_stop_delete_test.go:304: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages (0.21s)

TestStartStop/group/default-k8s-diff-port/serial/Pause (2.18s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 pause -p default-k8s-diff-port-781540 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p default-k8s-diff-port-781540 -n default-k8s-diff-port-781540
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p default-k8s-diff-port-781540 -n default-k8s-diff-port-781540: exit status 2 (271.91469ms)
-- stdout --
	Paused
-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p default-k8s-diff-port-781540 -n default-k8s-diff-port-781540
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Kubelet}} -p default-k8s-diff-port-781540 -n default-k8s-diff-port-781540: exit status 2 (276.03608ms)
-- stdout --
	Stopped
-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 unpause -p default-k8s-diff-port-781540 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p default-k8s-diff-port-781540 -n default-k8s-diff-port-781540
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p default-k8s-diff-port-781540 -n default-k8s-diff-port-781540
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/Pause (2.18s)

TestStartStop/group/no-preload/serial/UserAppExistsAfterStop (6s)

=== RUN   TestStartStop/group/no-preload/serial/UserAppExistsAfterStop
start_stop_delete_test.go:274: (dbg) TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-695b96c756-29x5p" [c1d9ae1b-0b00-49af-998d-6ecc5c2d3854] Running
start_stop_delete_test.go:274: (dbg) TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 6.00381396s
--- PASS: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop (6.00s)

TestStartStop/group/no-preload/serial/AddonExistsAfterStop (5.07s)

=== RUN   TestStartStop/group/no-preload/serial/AddonExistsAfterStop
start_stop_delete_test.go:287: (dbg) TestStartStop/group/no-preload/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-695b96c756-29x5p" [c1d9ae1b-0b00-49af-998d-6ecc5c2d3854] Running
E0916 18:13:30.275298  112842 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19649-105988/.minikube/profiles/skaffold-258004/client.crt: no such file or directory" logger="UnhandledError"
start_stop_delete_test.go:287: (dbg) TestStartStop/group/no-preload/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.004336385s
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context no-preload-568140 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/no-preload/serial/AddonExistsAfterStop (5.07s)

TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop (6.01s)

=== RUN   TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop
start_stop_delete_test.go:274: (dbg) TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-695b96c756-t5m2v" [ffd5a0d8-1b08-4d19-b658-ae41f6f516c9] Running
start_stop_delete_test.go:274: (dbg) TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 6.00380879s
--- PASS: TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop (6.01s)

TestStartStop/group/no-preload/serial/VerifyKubernetesImages (0.2s)

=== RUN   TestStartStop/group/no-preload/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-linux-amd64 -p no-preload-568140 image list --format=json
start_stop_delete_test.go:304: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
--- PASS: TestStartStop/group/no-preload/serial/VerifyKubernetesImages (0.20s)

TestStartStop/group/no-preload/serial/Pause (2.18s)

=== RUN   TestStartStop/group/no-preload/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 pause -p no-preload-568140 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p no-preload-568140 -n no-preload-568140
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p no-preload-568140 -n no-preload-568140: exit status 2 (273.676636ms)
-- stdout --
	Paused
-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p no-preload-568140 -n no-preload-568140
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Kubelet}} -p no-preload-568140 -n no-preload-568140: exit status 2 (266.302227ms)
-- stdout --
	Stopped
-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 unpause -p no-preload-568140 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p no-preload-568140 -n no-preload-568140
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p no-preload-568140 -n no-preload-568140
E0916 18:13:34.577756  112842 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19649-105988/.minikube/profiles/flannel-045754/client.crt: no such file or directory" logger="UnhandledError"
--- PASS: TestStartStop/group/no-preload/serial/Pause (2.18s)

TestStartStop/group/embed-certs/serial/AddonExistsAfterStop (5.07s)

=== RUN   TestStartStop/group/embed-certs/serial/AddonExistsAfterStop
start_stop_delete_test.go:287: (dbg) TestStartStop/group/embed-certs/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-695b96c756-t5m2v" [ffd5a0d8-1b08-4d19-b658-ae41f6f516c9] Running
E0916 18:13:38.610745  112842 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19649-105988/.minikube/profiles/no-preload-568140/client.crt: no such file or directory" logger="UnhandledError"
E0916 18:13:39.892500  112842 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19649-105988/.minikube/profiles/no-preload-568140/client.crt: no such file or directory" logger="UnhandledError"
E0916 18:13:42.454727  112842 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19649-105988/.minikube/profiles/no-preload-568140/client.crt: no such file or directory" logger="UnhandledError"
start_stop_delete_test.go:287: (dbg) TestStartStop/group/embed-certs/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.004314593s
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context embed-certs-418003 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/embed-certs/serial/AddonExistsAfterStop (5.07s)

TestStartStop/group/embed-certs/serial/VerifyKubernetesImages (0.2s)

=== RUN   TestStartStop/group/embed-certs/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-linux-amd64 -p embed-certs-418003 image list --format=json
start_stop_delete_test.go:304: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
--- PASS: TestStartStop/group/embed-certs/serial/VerifyKubernetesImages (0.20s)

TestStartStop/group/embed-certs/serial/Pause (2.19s)

=== RUN   TestStartStop/group/embed-certs/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 pause -p embed-certs-418003 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p embed-certs-418003 -n embed-certs-418003
E0916 18:13:44.244764  112842 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19649-105988/.minikube/profiles/addons-539053/client.crt: no such file or directory" logger="UnhandledError"
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p embed-certs-418003 -n embed-certs-418003: exit status 2 (266.819878ms)
-- stdout --
	Paused
-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p embed-certs-418003 -n embed-certs-418003
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Kubelet}} -p embed-certs-418003 -n embed-certs-418003: exit status 2 (267.265478ms)
-- stdout --
	Stopped
-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 unpause -p embed-certs-418003 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p embed-certs-418003 -n embed-certs-418003
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p embed-certs-418003 -n embed-certs-418003
--- PASS: TestStartStop/group/embed-certs/serial/Pause (2.19s)

Test skip (20/343)

TestDownloadOnly/v1.20.0/cached-images (0s)

=== RUN   TestDownloadOnly/v1.20.0/cached-images
aaa_download_only_test.go:129: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.20.0/cached-images (0.00s)

TestDownloadOnly/v1.20.0/binaries (0s)

=== RUN   TestDownloadOnly/v1.20.0/binaries
aaa_download_only_test.go:151: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.20.0/binaries (0.00s)

TestDownloadOnly/v1.20.0/kubectl (0s)

=== RUN   TestDownloadOnly/v1.20.0/kubectl
aaa_download_only_test.go:167: Test for darwin and windows
--- SKIP: TestDownloadOnly/v1.20.0/kubectl (0.00s)

TestDownloadOnly/v1.31.1/cached-images (0s)

=== RUN   TestDownloadOnly/v1.31.1/cached-images
aaa_download_only_test.go:129: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.31.1/cached-images (0.00s)

TestDownloadOnly/v1.31.1/binaries (0s)

=== RUN   TestDownloadOnly/v1.31.1/binaries
aaa_download_only_test.go:151: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.31.1/binaries (0.00s)

TestDownloadOnly/v1.31.1/kubectl (0s)

=== RUN   TestDownloadOnly/v1.31.1/kubectl
aaa_download_only_test.go:167: Test for darwin and windows
--- SKIP: TestDownloadOnly/v1.31.1/kubectl (0.00s)

TestAddons/parallel/Olm (0s)

=== RUN   TestAddons/parallel/Olm
=== PAUSE TestAddons/parallel/Olm
=== CONT  TestAddons/parallel/Olm
addons_test.go:500: Skipping OLM addon test until https://github.com/operator-framework/operator-lifecycle-manager/issues/2534 is resolved
--- SKIP: TestAddons/parallel/Olm (0.00s)

TestDockerEnvContainerd (0s)

=== RUN   TestDockerEnvContainerd
docker_test.go:170: running with docker true linux amd64
docker_test.go:172: skipping: TestDockerEnvContainerd can only be run with the containerd runtime on Docker driver
--- SKIP: TestDockerEnvContainerd (0.00s)

TestHyperKitDriverInstallOrUpdate (0s)

=== RUN   TestHyperKitDriverInstallOrUpdate
driver_install_or_update_test.go:105: Skip if not darwin.
--- SKIP: TestHyperKitDriverInstallOrUpdate (0.00s)

TestHyperkitDriverSkipUpgrade (0s)

=== RUN   TestHyperkitDriverSkipUpgrade
driver_install_or_update_test.go:169: Skip if not darwin.
--- SKIP: TestHyperkitDriverSkipUpgrade (0.00s)

TestFunctional/parallel/PodmanEnv (0s)

=== RUN   TestFunctional/parallel/PodmanEnv
=== PAUSE TestFunctional/parallel/PodmanEnv
=== CONT  TestFunctional/parallel/PodmanEnv
functional_test.go:550: only validate podman env with docker container runtime, currently testing docker
--- SKIP: TestFunctional/parallel/PodmanEnv (0.00s)

TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig (0s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig
functional_test_tunnel_test.go:99: DNS forwarding is only supported for Hyperkit on Darwin, skipping test DNS forwarding
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig (0.00s)

TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil
functional_test_tunnel_test.go:99: DNS forwarding is only supported for Hyperkit on Darwin, skipping test DNS forwarding
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0.00s)

TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS (0s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS
functional_test_tunnel_test.go:99: DNS forwarding is only supported for Hyperkit on Darwin, skipping test DNS forwarding
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS (0.00s)

TestGvisorAddon (0s)

=== RUN   TestGvisorAddon
gvisor_addon_test.go:34: skipping test because --gvisor=false
--- SKIP: TestGvisorAddon (0.00s)

TestImageBuild/serial/validateImageBuildWithBuildEnv (0s)

=== RUN   TestImageBuild/serial/validateImageBuildWithBuildEnv
image_test.go:114: skipping due to https://github.com/kubernetes/minikube/issues/12431
--- SKIP: TestImageBuild/serial/validateImageBuildWithBuildEnv (0.00s)

TestChangeNoneUser (0s)

=== RUN   TestChangeNoneUser
none_test.go:38: Test requires none driver and SUDO_USER env to not be empty
--- SKIP: TestChangeNoneUser (0.00s)

TestScheduledStopWindows (0s)

=== RUN   TestScheduledStopWindows
scheduled_stop_test.go:42: test only runs on windows
--- SKIP: TestScheduledStopWindows (0.00s)

TestNetworkPlugins/group/cilium (3.55s)

=== RUN   TestNetworkPlugins/group/cilium
net_test.go:102: Skipping the test as it's interfering with other tests and is outdated
panic.go:629: 
----------------------- debugLogs start: cilium-045754 [pass: true] --------------------------------
>>> netcat: nslookup kubernetes.default:
Error in configuration: context was not found for specified context: cilium-045754

>>> netcat: nslookup debug kubernetes.default a-records:
Error in configuration: context was not found for specified context: cilium-045754

>>> netcat: dig search kubernetes.default:
Error in configuration: context was not found for specified context: cilium-045754

>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local udp/53:
Error in configuration: context was not found for specified context: cilium-045754

>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local tcp/53:
Error in configuration: context was not found for specified context: cilium-045754

>>> netcat: nc 10.96.0.10 udp/53:
Error in configuration: context was not found for specified context: cilium-045754

>>> netcat: nc 10.96.0.10 tcp/53:
Error in configuration: context was not found for specified context: cilium-045754

>>> netcat: /etc/nsswitch.conf:
Error in configuration: context was not found for specified context: cilium-045754

>>> netcat: /etc/hosts:
Error in configuration: context was not found for specified context: cilium-045754

>>> netcat: /etc/resolv.conf:
Error in configuration: context was not found for specified context: cilium-045754

>>> host: /etc/nsswitch.conf:
* Profile "cilium-045754" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-045754"

>>> host: /etc/hosts:
* Profile "cilium-045754" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-045754"

>>> host: /etc/resolv.conf:
* Profile "cilium-045754" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-045754"

>>> k8s: nodes, services, endpoints, daemon sets, deployments and pods, :
Error in configuration: context was not found for specified context: cilium-045754

>>> host: crictl pods:
* Profile "cilium-045754" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-045754"

>>> host: crictl containers:
* Profile "cilium-045754" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-045754"

>>> k8s: describe netcat deployment:
error: context "cilium-045754" does not exist

>>> k8s: describe netcat pod(s):
error: context "cilium-045754" does not exist

>>> k8s: netcat logs:
error: context "cilium-045754" does not exist

>>> k8s: describe coredns deployment:
error: context "cilium-045754" does not exist

>>> k8s: describe coredns pods:
error: context "cilium-045754" does not exist

>>> k8s: coredns logs:
error: context "cilium-045754" does not exist

>>> k8s: describe api server pod(s):
error: context "cilium-045754" does not exist

>>> k8s: api server logs:
error: context "cilium-045754" does not exist

>>> host: /etc/cni:
* Profile "cilium-045754" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-045754"

>>> host: ip a s:
* Profile "cilium-045754" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-045754"

>>> host: ip r s:
* Profile "cilium-045754" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-045754"

>>> host: iptables-save:
* Profile "cilium-045754" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-045754"

>>> host: iptables table nat:
* Profile "cilium-045754" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-045754"

>>> k8s: describe cilium daemon set:
Error in configuration: context was not found for specified context: cilium-045754

>>> k8s: describe cilium daemon set pod(s):
Error in configuration: context was not found for specified context: cilium-045754

>>> k8s: cilium daemon set container(s) logs (current):
error: context "cilium-045754" does not exist

>>> k8s: cilium daemon set container(s) logs (previous):
error: context "cilium-045754" does not exist

>>> k8s: describe cilium deployment:
Error in configuration: context was not found for specified context: cilium-045754

>>> k8s: describe cilium deployment pod(s):
Error in configuration: context was not found for specified context: cilium-045754

>>> k8s: cilium deployment container(s) logs (current):
error: context "cilium-045754" does not exist

>>> k8s: cilium deployment container(s) logs (previous):
error: context "cilium-045754" does not exist

>>> k8s: describe kube-proxy daemon set:
error: context "cilium-045754" does not exist

>>> k8s: describe kube-proxy pod(s):
error: context "cilium-045754" does not exist

>>> k8s: kube-proxy logs:
error: context "cilium-045754" does not exist

>>> host: kubelet daemon status:
* Profile "cilium-045754" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-045754"

>>> host: kubelet daemon config:
* Profile "cilium-045754" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-045754"

>>> k8s: kubelet logs:
* Profile "cilium-045754" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-045754"

>>> host: /etc/kubernetes/kubelet.conf:
* Profile "cilium-045754" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-045754"

>>> host: /var/lib/kubelet/config.yaml:
* Profile "cilium-045754" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-045754"

>>> k8s: kubectl config:
apiVersion: v1
clusters:
- cluster:
    certificate-authority: /home/jenkins/minikube-integration/19649-105988/.minikube/ca.crt
    extensions:
    - extension:
        last-update: Mon, 16 Sep 2024 18:00:37 UTC
        provider: minikube.sigs.k8s.io
        version: v1.34.0
      name: cluster_info
    server: https://192.168.85.2:8443
  name: cert-expiration-820655
- cluster:
    certificate-authority: /home/jenkins/minikube-integration/19649-105988/.minikube/ca.crt
    extensions:
    - extension:
        last-update: Mon, 16 Sep 2024 18:00:38 UTC
        provider: minikube.sigs.k8s.io
        version: v1.34.0
      name: cluster_info
    server: https://192.168.76.2:8443
  name: kubernetes-upgrade-878785
- cluster:
    certificate-authority: /home/jenkins/minikube-integration/19649-105988/.minikube/ca.crt
    extensions:
    - extension:
        last-update: Mon, 16 Sep 2024 18:01:34 UTC
        provider: minikube.sigs.k8s.io
        version: v1.26.0
      name: cluster_info
    server: https://192.168.94.2:8443
  name: missing-upgrade-508767
contexts:
- context:
    cluster: cert-expiration-820655
    extensions:
    - extension:
        last-update: Mon, 16 Sep 2024 18:00:37 UTC
        provider: minikube.sigs.k8s.io
        version: v1.34.0
      name: context_info
    namespace: default
    user: cert-expiration-820655
  name: cert-expiration-820655
- context:
    cluster: kubernetes-upgrade-878785
    user: kubernetes-upgrade-878785
  name: kubernetes-upgrade-878785
- context:
    cluster: missing-upgrade-508767
    extensions:
    - extension:
        last-update: Mon, 16 Sep 2024 18:01:34 UTC
        provider: minikube.sigs.k8s.io
        version: v1.26.0
      name: context_info
    namespace: default
    user: missing-upgrade-508767
  name: missing-upgrade-508767
current-context: ""
kind: Config
preferences: {}
users:
- name: cert-expiration-820655
  user:
    client-certificate: /home/jenkins/minikube-integration/19649-105988/.minikube/profiles/cert-expiration-820655/client.crt
    client-key: /home/jenkins/minikube-integration/19649-105988/.minikube/profiles/cert-expiration-820655/client.key
- name: kubernetes-upgrade-878785
  user:
    client-certificate: /home/jenkins/minikube-integration/19649-105988/.minikube/profiles/kubernetes-upgrade-878785/client.crt
    client-key: /home/jenkins/minikube-integration/19649-105988/.minikube/profiles/kubernetes-upgrade-878785/client.key
- name: missing-upgrade-508767
  user:
    client-certificate: /home/jenkins/minikube-integration/19649-105988/.minikube/profiles/missing-upgrade-508767/client.crt
    client-key: /home/jenkins/minikube-integration/19649-105988/.minikube/profiles/missing-upgrade-508767/client.key

>>> k8s: cms:
Error in configuration: context was not found for specified context: cilium-045754

>>> host: docker daemon status:
* Profile "cilium-045754" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-045754"

>>> host: docker daemon config:
* Profile "cilium-045754" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-045754"

>>> host: /etc/docker/daemon.json:
* Profile "cilium-045754" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-045754"

>>> host: docker system info:
* Profile "cilium-045754" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-045754"

>>> host: cri-docker daemon status:
* Profile "cilium-045754" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-045754"

>>> host: cri-docker daemon config:
* Profile "cilium-045754" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-045754"

>>> host: /etc/systemd/system/cri-docker.service.d/10-cni.conf:
* Profile "cilium-045754" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-045754"

>>> host: /usr/lib/systemd/system/cri-docker.service:
* Profile "cilium-045754" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-045754"

>>> host: cri-dockerd version:
* Profile "cilium-045754" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-045754"

>>> host: containerd daemon status:
* Profile "cilium-045754" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-045754"

>>> host: containerd daemon config:
* Profile "cilium-045754" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-045754"

>>> host: /lib/systemd/system/containerd.service:
* Profile "cilium-045754" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-045754"

>>> host: /etc/containerd/config.toml:
* Profile "cilium-045754" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-045754"

>>> host: containerd config dump:
* Profile "cilium-045754" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-045754"

>>> host: crio daemon status:
* Profile "cilium-045754" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-045754"

>>> host: crio daemon config:
* Profile "cilium-045754" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-045754"

>>> host: /etc/crio:
* Profile "cilium-045754" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-045754"

>>> host: crio config:
* Profile "cilium-045754" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-045754"

----------------------- debugLogs end: cilium-045754 [took: 3.407051622s] --------------------------------
helpers_test.go:175: Cleaning up "cilium-045754" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p cilium-045754
--- SKIP: TestNetworkPlugins/group/cilium (3.55s)

TestStartStop/group/disable-driver-mounts (0.18s)

=== RUN   TestStartStop/group/disable-driver-mounts
=== PAUSE TestStartStop/group/disable-driver-mounts

=== CONT  TestStartStop/group/disable-driver-mounts
start_stop_delete_test.go:103: skipping TestStartStop/group/disable-driver-mounts - only runs on virtualbox
helpers_test.go:175: Cleaning up "disable-driver-mounts-410031" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p disable-driver-mounts-410031
--- SKIP: TestStartStop/group/disable-driver-mounts (0.18s)
