Test Report: Docker_Linux 19584

9f2af3711cc698027f451721692d4ad7c6bf425f:2024-09-09:36138

Tests failed (2/343)

| Order | Failed test                     | Duration (s) |
|-------|---------------------------------|--------------|
| 33    | TestAddons/parallel/Registry    | 72.57        |
| 111   | TestFunctional/parallel/License | 0.22         |
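Either failing test can be re-run on its own rather than replaying all 343 tests. A minimal sketch, assuming a minikube source checkout with out/minikube-linux-amd64 already built; the integration package may need extra minikube-specific flags (driver and start-args selection) beyond the standard go test flags shown here:

	go test ./test/integration -run "TestAddons/parallel/Registry" -v -timeout 30m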
TestAddons/parallel/Registry (72.57s)

=== RUN   TestAddons/parallel/Registry
=== PAUSE TestAddons/parallel/Registry

=== CONT  TestAddons/parallel/Registry
addons_test.go:332: registry stabilized in 2.128345ms
addons_test.go:334: (dbg) TestAddons/parallel/Registry: waiting 6m0s for pods matching "actual-registry=true" in namespace "kube-system" ...
helpers_test.go:344: "registry-6fb4cdfc84-g5pxq" [19051672-048d-4f1c-8814-35c5fa1de42e] Running
addons_test.go:334: (dbg) TestAddons/parallel/Registry: actual-registry=true healthy within 5.00371888s
addons_test.go:337: (dbg) TestAddons/parallel/Registry: waiting 10m0s for pods matching "registry-proxy=true" in namespace "kube-system" ...
helpers_test.go:344: "registry-proxy-dsb8t" [ad21aca5-affd-4e4b-9d2e-487316ad11de] Running
addons_test.go:337: (dbg) TestAddons/parallel/Registry: registry-proxy=true healthy within 5.003386371s
addons_test.go:342: (dbg) Run:  kubectl --context addons-271785 delete po -l run=registry-test --now
addons_test.go:347: (dbg) Run:  kubectl --context addons-271785 run --rm registry-test --restart=Never --image=gcr.io/k8s-minikube/busybox -it -- sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local"
addons_test.go:347: (dbg) Non-zero exit: kubectl --context addons-271785 run --rm registry-test --restart=Never --image=gcr.io/k8s-minikube/busybox -it -- sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local": exit status 1 (1m0.077144424s)

-- stdout --
	pod "registry-test" deleted

-- /stdout --
** stderr ** 
	error: timed out waiting for the condition

** /stderr **
addons_test.go:349: failed to hit registry.kube-system.svc.cluster.local. args "kubectl --context addons-271785 run --rm registry-test --restart=Never --image=gcr.io/k8s-minikube/busybox -it -- sh -c \"wget --spider -S http://registry.kube-system.svc.cluster.local\"" failed: exit status 1
addons_test.go:353: expected curl response be "HTTP/1.1 200", but got *pod "registry-test" deleted
*
addons_test.go:361: (dbg) Run:  out/minikube-linux-amd64 -p addons-271785 ip
2024/09/09 10:57:38 [DEBUG] GET http://192.168.49.2:5000
addons_test.go:390: (dbg) Run:  out/minikube-linux-amd64 -p addons-271785 addons disable registry --alsologtostderr -v=1
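The failure above is a timeout, not an HTTP error: kubectl gives up after about a minute ("timed out waiting for the condition") without wget ever reporting a response from the registry Service DNS name, while the follow-up GET against the node IP at http://192.168.49.2:5000 is logged with no error. A minimal manual reproduction of the in-cluster check, assuming the addons-271785 profile and the registry addon are still running; the pod name registry-debug is made up for illustration, and the nslookup step (which the test itself does not run) separates DNS failures from connectivity failures:

	kubectl --context addons-271785 run registry-debug --rm --restart=Never -it \
	  --image=gcr.io/k8s-minikube/busybox -- \
	  sh -c "nslookup registry.kube-system.svc.cluster.local && \
	         wget --spider -S http://registry.kube-system.svc.cluster.local"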
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestAddons/parallel/Registry]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect addons-271785
helpers_test.go:235: (dbg) docker inspect addons-271785:

-- stdout --
	[
	    {
	        "Id": "b659c935a47ab9c53599c159c1feca0b7872c0f8abb12423a7213e56febf5056",
	        "Created": "2024-09-09T10:44:36.919665652Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 17527,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2024-09-09T10:44:37.044870401Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:aeed0e1d4642008f872cbedd0f6935323c1e533683c40e800e0b01d063d11a3c",
	        "ResolvConfPath": "/var/lib/docker/containers/b659c935a47ab9c53599c159c1feca0b7872c0f8abb12423a7213e56febf5056/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/b659c935a47ab9c53599c159c1feca0b7872c0f8abb12423a7213e56febf5056/hostname",
	        "HostsPath": "/var/lib/docker/containers/b659c935a47ab9c53599c159c1feca0b7872c0f8abb12423a7213e56febf5056/hosts",
	        "LogPath": "/var/lib/docker/containers/b659c935a47ab9c53599c159c1feca0b7872c0f8abb12423a7213e56febf5056/b659c935a47ab9c53599c159c1feca0b7872c0f8abb12423a7213e56febf5056-json.log",
	        "Name": "/addons-271785",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "addons-271785:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {
	                    "max-size": "100m"
	                }
	            },
	            "NetworkMode": "addons-271785",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 4194304000,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 8388608000,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "LowerDir": "/var/lib/docker/overlay2/217e89f4433d7afec33f368664053c79375ecf7507fa6b06996d74c1c2f6f18a-init/diff:/var/lib/docker/overlay2/f89feb9d9bf85ad5dca6b2eeccfb67947d4725a0c38e64ceddf079e267f149b3/diff",
	                "MergedDir": "/var/lib/docker/overlay2/217e89f4433d7afec33f368664053c79375ecf7507fa6b06996d74c1c2f6f18a/merged",
	                "UpperDir": "/var/lib/docker/overlay2/217e89f4433d7afec33f368664053c79375ecf7507fa6b06996d74c1c2f6f18a/diff",
	                "WorkDir": "/var/lib/docker/overlay2/217e89f4433d7afec33f368664053c79375ecf7507fa6b06996d74c1c2f6f18a/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "addons-271785",
	                "Source": "/var/lib/docker/volumes/addons-271785/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "addons-271785",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase:v0.0.45@sha256:81df288595202a317b1a4dc2506ca2e4ed5f22373c19a441b88cfbf4b9867c85",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "addons-271785",
	                "name.minikube.sigs.k8s.io": "addons-271785",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "a6027eb71cbd8578dbef8bcb98d2c45e8dbbf60d2f25a52e060bfe44c5641404",
	            "SandboxKey": "/var/run/docker/netns/a6027eb71cbd",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32768"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32769"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32772"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32770"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32771"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "addons-271785": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.49.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "02:42:c0:a8:31:02",
	                    "DriverOpts": null,
	                    "NetworkID": "d8e492975c4903b17a21a80a8562f31fedb32d6ca1ad1a358d25444bc4d69f7e",
	                    "EndpointID": "8b089c9e663cf0cf10f18cb325596a6b178ff14e0e599ffc197847e8aa4357b4",
	                    "Gateway": "192.168.49.1",
	                    "IPAddress": "192.168.49.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "addons-271785",
	                        "b659c935a47a"
	                    ]
	                }
	            }
	        }
	    }
	]

-- /stdout --
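Individual fields can be pulled out of the inspect output above with a Go template instead of scanning the full JSON; the harness itself uses the same mechanism later to read the 22/tcp host port. A sketch, assuming the addons-271785 container still exists; given the output above it should print 32770:

	docker inspect addons-271785 \
	  --format '{{ (index (index .NetworkSettings.Ports "5000/tcp") 0).HostPort }}'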
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p addons-271785 -n addons-271785
helpers_test.go:244: <<< TestAddons/parallel/Registry FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestAddons/parallel/Registry]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p addons-271785 logs -n 25
helpers_test.go:252: TestAddons/parallel/Registry logs: 
-- stdout --
	
	==> Audit <==
	|---------|---------------------------------------------------------------------------------------------|------------------------|---------|---------|---------------------|---------------------|
	| Command |                                            Args                                             |        Profile         |  User   | Version |     Start Time      |      End Time       |
	|---------|---------------------------------------------------------------------------------------------|------------------------|---------|---------|---------------------|---------------------|
	| delete  | -p download-docker-180363                                                                   | download-docker-180363 | jenkins | v1.34.0 | 09 Sep 24 10:44 UTC | 09 Sep 24 10:44 UTC |
	| start   | --download-only -p                                                                          | binary-mirror-876583   | jenkins | v1.34.0 | 09 Sep 24 10:44 UTC |                     |
	|         | binary-mirror-876583                                                                        |                        |         |         |                     |                     |
	|         | --alsologtostderr                                                                           |                        |         |         |                     |                     |
	|         | --binary-mirror                                                                             |                        |         |         |                     |                     |
	|         | http://127.0.0.1:43487                                                                      |                        |         |         |                     |                     |
	|         | --driver=docker                                                                             |                        |         |         |                     |                     |
	|         | --container-runtime=docker                                                                  |                        |         |         |                     |                     |
	| delete  | -p binary-mirror-876583                                                                     | binary-mirror-876583   | jenkins | v1.34.0 | 09 Sep 24 10:44 UTC | 09 Sep 24 10:44 UTC |
	| addons  | enable dashboard -p                                                                         | addons-271785          | jenkins | v1.34.0 | 09 Sep 24 10:44 UTC |                     |
	|         | addons-271785                                                                               |                        |         |         |                     |                     |
	| addons  | disable dashboard -p                                                                        | addons-271785          | jenkins | v1.34.0 | 09 Sep 24 10:44 UTC |                     |
	|         | addons-271785                                                                               |                        |         |         |                     |                     |
	| start   | -p addons-271785 --wait=true                                                                | addons-271785          | jenkins | v1.34.0 | 09 Sep 24 10:44 UTC | 09 Sep 24 10:47 UTC |
	|         | --memory=4000 --alsologtostderr                                                             |                        |         |         |                     |                     |
	|         | --addons=registry                                                                           |                        |         |         |                     |                     |
	|         | --addons=metrics-server                                                                     |                        |         |         |                     |                     |
	|         | --addons=volumesnapshots                                                                    |                        |         |         |                     |                     |
	|         | --addons=csi-hostpath-driver                                                                |                        |         |         |                     |                     |
	|         | --addons=gcp-auth                                                                           |                        |         |         |                     |                     |
	|         | --addons=cloud-spanner                                                                      |                        |         |         |                     |                     |
	|         | --addons=inspektor-gadget                                                                   |                        |         |         |                     |                     |
	|         | --addons=storage-provisioner-rancher                                                        |                        |         |         |                     |                     |
	|         | --addons=nvidia-device-plugin                                                               |                        |         |         |                     |                     |
	|         | --addons=yakd --addons=volcano                                                              |                        |         |         |                     |                     |
	|         | --driver=docker                                                                             |                        |         |         |                     |                     |
	|         | --container-runtime=docker                                                                  |                        |         |         |                     |                     |
	|         | --addons=ingress                                                                            |                        |         |         |                     |                     |
	|         | --addons=ingress-dns                                                                        |                        |         |         |                     |                     |
	|         | --addons=helm-tiller                                                                        |                        |         |         |                     |                     |
	| addons  | addons-271785 addons disable                                                                | addons-271785          | jenkins | v1.34.0 | 09 Sep 24 10:48 UTC | 09 Sep 24 10:48 UTC |
	|         | volcano --alsologtostderr -v=1                                                              |                        |         |         |                     |                     |
	| addons  | addons-271785 addons                                                                        | addons-271785          | jenkins | v1.34.0 | 09 Sep 24 10:56 UTC | 09 Sep 24 10:56 UTC |
	|         | disable metrics-server                                                                      |                        |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                        |         |         |                     |                     |
	| addons  | addons-271785 addons disable                                                                | addons-271785          | jenkins | v1.34.0 | 09 Sep 24 10:56 UTC | 09 Sep 24 10:56 UTC |
	|         | yakd --alsologtostderr -v=1                                                                 |                        |         |         |                     |                     |
	| addons  | addons-271785 addons disable                                                                | addons-271785          | jenkins | v1.34.0 | 09 Sep 24 10:56 UTC | 09 Sep 24 10:56 UTC |
	|         | helm-tiller --alsologtostderr                                                               |                        |         |         |                     |                     |
	|         | -v=1                                                                                        |                        |         |         |                     |                     |
	| addons  | disable nvidia-device-plugin                                                                | addons-271785          | jenkins | v1.34.0 | 09 Sep 24 10:56 UTC | 09 Sep 24 10:56 UTC |
	|         | -p addons-271785                                                                            |                        |         |         |                     |                     |
	| addons  | disable cloud-spanner -p                                                                    | addons-271785          | jenkins | v1.34.0 | 09 Sep 24 10:56 UTC | 09 Sep 24 10:56 UTC |
	|         | addons-271785                                                                               |                        |         |         |                     |                     |
	| addons  | enable headlamp                                                                             | addons-271785          | jenkins | v1.34.0 | 09 Sep 24 10:56 UTC | 09 Sep 24 10:56 UTC |
	|         | -p addons-271785                                                                            |                        |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                        |         |         |                     |                     |
	| ssh     | addons-271785 ssh cat                                                                       | addons-271785          | jenkins | v1.34.0 | 09 Sep 24 10:56 UTC | 09 Sep 24 10:56 UTC |
	|         | /opt/local-path-provisioner/pvc-88f0cab2-ac8e-4b40-842d-f0e3d852d155_default_test-pvc/file1 |                        |         |         |                     |                     |
	| addons  | addons-271785 addons disable                                                                | addons-271785          | jenkins | v1.34.0 | 09 Sep 24 10:56 UTC | 09 Sep 24 10:57 UTC |
	|         | storage-provisioner-rancher                                                                 |                        |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                        |         |         |                     |                     |
	| addons  | addons-271785 addons disable                                                                | addons-271785          | jenkins | v1.34.0 | 09 Sep 24 10:57 UTC | 09 Sep 24 10:57 UTC |
	|         | headlamp --alsologtostderr                                                                  |                        |         |         |                     |                     |
	|         | -v=1                                                                                        |                        |         |         |                     |                     |
	| addons  | addons-271785 addons                                                                        | addons-271785          | jenkins | v1.34.0 | 09 Sep 24 10:57 UTC | 09 Sep 24 10:57 UTC |
	|         | disable csi-hostpath-driver                                                                 |                        |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                        |         |         |                     |                     |
	| addons  | disable inspektor-gadget -p                                                                 | addons-271785          | jenkins | v1.34.0 | 09 Sep 24 10:57 UTC | 09 Sep 24 10:57 UTC |
	|         | addons-271785                                                                               |                        |         |         |                     |                     |
	| addons  | addons-271785 addons                                                                        | addons-271785          | jenkins | v1.34.0 | 09 Sep 24 10:57 UTC | 09 Sep 24 10:57 UTC |
	|         | disable volumesnapshots                                                                     |                        |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                        |         |         |                     |                     |
	| ssh     | addons-271785 ssh curl -s                                                                   | addons-271785          | jenkins | v1.34.0 | 09 Sep 24 10:57 UTC | 09 Sep 24 10:57 UTC |
	|         | http://127.0.0.1/ -H 'Host:                                                                 |                        |         |         |                     |                     |
	|         | nginx.example.com'                                                                          |                        |         |         |                     |                     |
	| ip      | addons-271785 ip                                                                            | addons-271785          | jenkins | v1.34.0 | 09 Sep 24 10:57 UTC | 09 Sep 24 10:57 UTC |
	| addons  | addons-271785 addons disable                                                                | addons-271785          | jenkins | v1.34.0 | 09 Sep 24 10:57 UTC | 09 Sep 24 10:57 UTC |
	|         | ingress-dns --alsologtostderr                                                               |                        |         |         |                     |                     |
	|         | -v=1                                                                                        |                        |         |         |                     |                     |
	| addons  | addons-271785 addons disable                                                                | addons-271785          | jenkins | v1.34.0 | 09 Sep 24 10:57 UTC | 09 Sep 24 10:57 UTC |
	|         | ingress --alsologtostderr -v=1                                                              |                        |         |         |                     |                     |
	| ip      | addons-271785 ip                                                                            | addons-271785          | jenkins | v1.34.0 | 09 Sep 24 10:57 UTC | 09 Sep 24 10:57 UTC |
	| addons  | addons-271785 addons disable                                                                | addons-271785          | jenkins | v1.34.0 | 09 Sep 24 10:57 UTC | 09 Sep 24 10:57 UTC |
	|         | registry --alsologtostderr                                                                  |                        |         |         |                     |                     |
	|         | -v=1                                                                                        |                        |         |         |                     |                     |
	|---------|---------------------------------------------------------------------------------------------|------------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/09/09 10:44:15
	Running on machine: ubuntu-20-agent
	Binary: Built with gc go1.22.5 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0909 10:44:15.522778   16785 out.go:345] Setting OutFile to fd 1 ...
	I0909 10:44:15.522928   16785 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0909 10:44:15.522939   16785 out.go:358] Setting ErrFile to fd 2...
	I0909 10:44:15.522945   16785 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0909 10:44:15.523168   16785 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19584-8635/.minikube/bin
	I0909 10:44:15.523819   16785 out.go:352] Setting JSON to false
	I0909 10:44:15.524700   16785 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent","uptime":1588,"bootTime":1725877067,"procs":169,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1067-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0909 10:44:15.524761   16785 start.go:139] virtualization: kvm guest
	I0909 10:44:15.528000   16785 out.go:177] * [addons-271785] minikube v1.34.0 on Ubuntu 20.04 (kvm/amd64)
	I0909 10:44:15.529352   16785 notify.go:220] Checking for updates...
	I0909 10:44:15.529377   16785 out.go:177]   - MINIKUBE_LOCATION=19584
	I0909 10:44:15.530712   16785 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0909 10:44:15.532070   16785 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19584-8635/kubeconfig
	I0909 10:44:15.533468   16785 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19584-8635/.minikube
	I0909 10:44:15.534965   16785 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0909 10:44:15.536049   16785 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0909 10:44:15.537224   16785 driver.go:394] Setting default libvirt URI to qemu:///system
	I0909 10:44:15.557700   16785 docker.go:123] docker version: linux-27.2.1:Docker Engine - Community
	I0909 10:44:15.557802   16785 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0909 10:44:15.600757   16785 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:26 OomKillDisable:true NGoroutines:45 SystemTime:2024-09-09 10:44:15.591919187 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1067-gcp OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33647943680 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent Labels:[] ExperimentalBuild:false ServerVersion:27.2.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:472731909fa34bd7bc9c087e4c27943f9835f111 Expected:472731909fa34bd7bc9c087e4c27943f9835f111} RuncCommit:{ID:v1.1.13-0-g58aa920 Expected:v1.1.13-0-g58aa920} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.16.2] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.29.2] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I0909 10:44:15.600857   16785 docker.go:307] overlay module found
	I0909 10:44:15.602465   16785 out.go:177] * Using the docker driver based on user configuration
	I0909 10:44:15.603516   16785 start.go:297] selected driver: docker
	I0909 10:44:15.603526   16785 start.go:901] validating driver "docker" against <nil>
	I0909 10:44:15.603535   16785 start.go:912] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0909 10:44:15.604258   16785 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0909 10:44:15.651961   16785 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:26 OomKillDisable:true NGoroutines:45 SystemTime:2024-09-09 10:44:15.643582469 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1067-gcp OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33647943680 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent Labels:[] ExperimentalBuild:false ServerVersion:27.2.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:472731909fa34bd7bc9c087e4c27943f9835f111 Expected:472731909fa34bd7bc9c087e4c27943f9835f111} RuncCommit:{ID:v1.1.13-0-g58aa920 Expected:v1.1.13-0-g58aa920} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.16.2] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.29.2] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I0909 10:44:15.652122   16785 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0909 10:44:15.652331   16785 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0909 10:44:15.653953   16785 out.go:177] * Using Docker driver with root privileges
	I0909 10:44:15.655232   16785 cni.go:84] Creating CNI manager for ""
	I0909 10:44:15.655255   16785 cni.go:158] "docker" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0909 10:44:15.655268   16785 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0909 10:44:15.655330   16785 start.go:340] cluster config:
	{Name:addons-271785 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.45@sha256:81df288595202a317b1a4dc2506ca2e4ed5f22373c19a441b88cfbf4b9867c85 Memory:4000 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:addons-271785 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0909 10:44:15.656699   16785 out.go:177] * Starting "addons-271785" primary control-plane node in "addons-271785" cluster
	I0909 10:44:15.657859   16785 cache.go:121] Beginning downloading kic base image for docker with docker
	I0909 10:44:15.658998   16785 out.go:177] * Pulling base image v0.0.45 ...
	I0909 10:44:15.660066   16785 preload.go:131] Checking if preload exists for k8s version v1.31.0 and runtime docker
	I0909 10:44:15.660092   16785 preload.go:146] Found local preload: /home/jenkins/minikube-integration/19584-8635/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-docker-overlay2-amd64.tar.lz4
	I0909 10:44:15.660103   16785 cache.go:56] Caching tarball of preloaded images
	I0909 10:44:15.660158   16785 image.go:79] Checking for gcr.io/k8s-minikube/kicbase:v0.0.45@sha256:81df288595202a317b1a4dc2506ca2e4ed5f22373c19a441b88cfbf4b9867c85 in local docker daemon
	I0909 10:44:15.660221   16785 preload.go:172] Found /home/jenkins/minikube-integration/19584-8635/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I0909 10:44:15.660235   16785 cache.go:59] Finished verifying existence of preloaded tar for v1.31.0 on docker
	I0909 10:44:15.660563   16785 profile.go:143] Saving config to /home/jenkins/minikube-integration/19584-8635/.minikube/profiles/addons-271785/config.json ...
	I0909 10:44:15.660622   16785 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19584-8635/.minikube/profiles/addons-271785/config.json: {Name:mkb74be3b59a4fdd9e36d3b7a352daaaec2eb359 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0909 10:44:15.676210   16785 cache.go:149] Downloading gcr.io/k8s-minikube/kicbase:v0.0.45@sha256:81df288595202a317b1a4dc2506ca2e4ed5f22373c19a441b88cfbf4b9867c85 to local cache
	I0909 10:44:15.676302   16785 image.go:63] Checking for gcr.io/k8s-minikube/kicbase:v0.0.45@sha256:81df288595202a317b1a4dc2506ca2e4ed5f22373c19a441b88cfbf4b9867c85 in local cache directory
	I0909 10:44:15.676321   16785 image.go:66] Found gcr.io/k8s-minikube/kicbase:v0.0.45@sha256:81df288595202a317b1a4dc2506ca2e4ed5f22373c19a441b88cfbf4b9867c85 in local cache directory, skipping pull
	I0909 10:44:15.676330   16785 image.go:135] gcr.io/k8s-minikube/kicbase:v0.0.45@sha256:81df288595202a317b1a4dc2506ca2e4ed5f22373c19a441b88cfbf4b9867c85 exists in cache, skipping pull
	I0909 10:44:15.676337   16785 cache.go:152] successfully saved gcr.io/k8s-minikube/kicbase:v0.0.45@sha256:81df288595202a317b1a4dc2506ca2e4ed5f22373c19a441b88cfbf4b9867c85 as a tarball
	I0909 10:44:15.676344   16785 cache.go:162] Loading gcr.io/k8s-minikube/kicbase:v0.0.45@sha256:81df288595202a317b1a4dc2506ca2e4ed5f22373c19a441b88cfbf4b9867c85 from local cache
	I0909 10:44:27.607353   16785 cache.go:164] successfully loaded and using gcr.io/k8s-minikube/kicbase:v0.0.45@sha256:81df288595202a317b1a4dc2506ca2e4ed5f22373c19a441b88cfbf4b9867c85 from cached tarball
	I0909 10:44:27.607391   16785 cache.go:194] Successfully downloaded all kic artifacts
	I0909 10:44:27.607429   16785 start.go:360] acquireMachinesLock for addons-271785: {Name:mk4a4f218a8b2d1a95515c42d18e18c0b87b1ced Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0909 10:44:27.607521   16785 start.go:364] duration metric: took 73.784µs to acquireMachinesLock for "addons-271785"
	I0909 10:44:27.607542   16785 start.go:93] Provisioning new machine with config: &{Name:addons-271785 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.45@sha256:81df288595202a317b1a4dc2506ca2e4ed5f22373c19a441b88cfbf4b9867c85 Memory:4000 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:addons-271785 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0909 10:44:27.607631   16785 start.go:125] createHost starting for "" (driver="docker")
	I0909 10:44:27.609149   16785 out.go:235] * Creating docker container (CPUs=2, Memory=4000MB) ...
	I0909 10:44:27.609346   16785 start.go:159] libmachine.API.Create for "addons-271785" (driver="docker")
	I0909 10:44:27.609379   16785 client.go:168] LocalClient.Create starting
	I0909 10:44:27.609481   16785 main.go:141] libmachine: Creating CA: /home/jenkins/minikube-integration/19584-8635/.minikube/certs/ca.pem
	I0909 10:44:27.804430   16785 main.go:141] libmachine: Creating client certificate: /home/jenkins/minikube-integration/19584-8635/.minikube/certs/cert.pem
	I0909 10:44:27.953718   16785 cli_runner.go:164] Run: docker network inspect addons-271785 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W0909 10:44:27.969011   16785 cli_runner.go:211] docker network inspect addons-271785 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I0909 10:44:27.969087   16785 network_create.go:284] running [docker network inspect addons-271785] to gather additional debugging logs...
	I0909 10:44:27.969103   16785 cli_runner.go:164] Run: docker network inspect addons-271785
	W0909 10:44:27.983687   16785 cli_runner.go:211] docker network inspect addons-271785 returned with exit code 1
	I0909 10:44:27.983713   16785 network_create.go:287] error running [docker network inspect addons-271785]: docker network inspect addons-271785: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network addons-271785 not found
	I0909 10:44:27.983723   16785 network_create.go:289] output of [docker network inspect addons-271785]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network addons-271785 not found
	
	** /stderr **
	I0909 10:44:27.983810   16785 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0909 10:44:27.998813   16785 network.go:206] using free private subnet 192.168.49.0/24: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc001b0a910}
	I0909 10:44:27.998851   16785 network_create.go:124] attempt to create docker network addons-271785 192.168.49.0/24 with gateway 192.168.49.1 and MTU of 1500 ...
	I0909 10:44:27.998887   16785 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.49.0/24 --gateway=192.168.49.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=addons-271785 addons-271785
	I0909 10:44:28.055451   16785 network_create.go:108] docker network addons-271785 192.168.49.0/24 created
	I0909 10:44:28.055481   16785 kic.go:121] calculated static IP "192.168.49.2" for the "addons-271785" container
	I0909 10:44:28.055535   16785 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I0909 10:44:28.069959   16785 cli_runner.go:164] Run: docker volume create addons-271785 --label name.minikube.sigs.k8s.io=addons-271785 --label created_by.minikube.sigs.k8s.io=true
	I0909 10:44:28.085741   16785 oci.go:103] Successfully created a docker volume addons-271785
	I0909 10:44:28.085804   16785 cli_runner.go:164] Run: docker run --rm --name addons-271785-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=addons-271785 --entrypoint /usr/bin/test -v addons-271785:/var gcr.io/k8s-minikube/kicbase:v0.0.45@sha256:81df288595202a317b1a4dc2506ca2e4ed5f22373c19a441b88cfbf4b9867c85 -d /var/lib
	I0909 10:44:32.989476   16785 cli_runner.go:217] Completed: docker run --rm --name addons-271785-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=addons-271785 --entrypoint /usr/bin/test -v addons-271785:/var gcr.io/k8s-minikube/kicbase:v0.0.45@sha256:81df288595202a317b1a4dc2506ca2e4ed5f22373c19a441b88cfbf4b9867c85 -d /var/lib: (4.903633286s)
	I0909 10:44:32.989506   16785 oci.go:107] Successfully prepared a docker volume addons-271785
	I0909 10:44:32.989522   16785 preload.go:131] Checking if preload exists for k8s version v1.31.0 and runtime docker
	I0909 10:44:32.989539   16785 kic.go:194] Starting extracting preloaded images to volume ...
	I0909 10:44:32.989583   16785 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/19584-8635/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-docker-overlay2-amd64.tar.lz4:/preloaded.tar:ro -v addons-271785:/extractDir gcr.io/k8s-minikube/kicbase:v0.0.45@sha256:81df288595202a317b1a4dc2506ca2e4ed5f22373c19a441b88cfbf4b9867c85 -I lz4 -xf /preloaded.tar -C /extractDir
	I0909 10:44:36.856019   16785 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/19584-8635/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-docker-overlay2-amd64.tar.lz4:/preloaded.tar:ro -v addons-271785:/extractDir gcr.io/k8s-minikube/kicbase:v0.0.45@sha256:81df288595202a317b1a4dc2506ca2e4ed5f22373c19a441b88cfbf4b9867c85 -I lz4 -xf /preloaded.tar -C /extractDir: (3.866398431s)
	I0909 10:44:36.856050   16785 kic.go:203] duration metric: took 3.866508055s to extract preloaded images to volume ...
	W0909 10:44:36.856177   16785 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	I0909 10:44:36.856287   16785 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I0909 10:44:36.905606   16785 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname addons-271785 --name addons-271785 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=addons-271785 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=addons-271785 --network addons-271785 --ip 192.168.49.2 --volume addons-271785:/var --security-opt apparmor=unconfined --memory=4000mb --cpus=2 -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase:v0.0.45@sha256:81df288595202a317b1a4dc2506ca2e4ed5f22373c19a441b88cfbf4b9867c85
	I0909 10:44:37.214346   16785 cli_runner.go:164] Run: docker container inspect addons-271785 --format={{.State.Running}}
	I0909 10:44:37.233687   16785 cli_runner.go:164] Run: docker container inspect addons-271785 --format={{.State.Status}}
	I0909 10:44:37.251040   16785 cli_runner.go:164] Run: docker exec addons-271785 stat /var/lib/dpkg/alternatives/iptables
	I0909 10:44:37.291640   16785 oci.go:144] the created container "addons-271785" has a running status.
	I0909 10:44:37.291674   16785 kic.go:225] Creating ssh key for kic: /home/jenkins/minikube-integration/19584-8635/.minikube/machines/addons-271785/id_rsa...
	I0909 10:44:37.464178   16785 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/19584-8635/.minikube/machines/addons-271785/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I0909 10:44:37.489507   16785 cli_runner.go:164] Run: docker container inspect addons-271785 --format={{.State.Status}}
	I0909 10:44:37.508271   16785 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I0909 10:44:37.508296   16785 kic_runner.go:114] Args: [docker exec --privileged addons-271785 chown docker:docker /home/docker/.ssh/authorized_keys]
	I0909 10:44:37.564428   16785 cli_runner.go:164] Run: docker container inspect addons-271785 --format={{.State.Status}}
	I0909 10:44:37.585511   16785 machine.go:93] provisionDockerMachine start ...
	I0909 10:44:37.585611   16785 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-271785
	I0909 10:44:37.604185   16785 main.go:141] libmachine: Using SSH client type: native
	I0909 10:44:37.604399   16785 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 127.0.0.1 32768 <nil> <nil>}
	I0909 10:44:37.604417   16785 main.go:141] libmachine: About to run SSH command:
	hostname
	I0909 10:44:37.795598   16785 main.go:141] libmachine: SSH cmd err, output: <nil>: addons-271785
	
	I0909 10:44:37.795626   16785 ubuntu.go:169] provisioning hostname "addons-271785"
	I0909 10:44:37.795677   16785 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-271785
	I0909 10:44:37.814070   16785 main.go:141] libmachine: Using SSH client type: native
	I0909 10:44:37.814238   16785 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 127.0.0.1 32768 <nil> <nil>}
	I0909 10:44:37.814252   16785 main.go:141] libmachine: About to run SSH command:
	sudo hostname addons-271785 && echo "addons-271785" | sudo tee /etc/hostname
	I0909 10:44:37.950451   16785 main.go:141] libmachine: SSH cmd err, output: <nil>: addons-271785
	
	I0909 10:44:37.950517   16785 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-271785
	I0909 10:44:37.967658   16785 main.go:141] libmachine: Using SSH client type: native
	I0909 10:44:37.967822   16785 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 127.0.0.1 32768 <nil> <nil>}
	I0909 10:44:37.967839   16785 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\saddons-271785' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 addons-271785/g' /etc/hosts;
				else 
					echo '127.0.1.1 addons-271785' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0909 10:44:38.092180   16785 main.go:141] libmachine: SSH cmd err, output: <nil>: 
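The hostname script above is deliberately idempotent: it exits early when an /etc/hosts line already ends in the hostname, rewrites an existing 127.0.1.1 entry in place, and only appends as a last resort. A sketch of the same check-then-edit logic in Go (ensureHostsEntry is an illustrative helper, not minikube's API):

package main

import (
	"fmt"
	"os"
	"strings"
)

// ensureHostsEntry mirrors the shell above: leave the file alone if a line
// already ends in the host, rewrite an existing 127.0.1.1 line, or append.
func ensureHostsEntry(path, host string) error {
	data, err := os.ReadFile(path)
	if err != nil {
		return err
	}
	lines := strings.Split(strings.TrimRight(string(data), "\n"), "\n")
	for _, l := range lines {
		f := strings.Fields(l)
		if len(f) >= 2 && f[len(f)-1] == host {
			return nil // entry already present, leave the file alone
		}
	}
	replaced := false
	for i, l := range lines {
		if strings.HasPrefix(l, "127.0.1.1") {
			lines[i] = "127.0.1.1 " + host // rewrite the existing entry
			replaced = true
			break
		}
	}
	if !replaced {
		lines = append(lines, "127.0.1.1 "+host) // or append a new one
	}
	return os.WriteFile(path, []byte(strings.Join(lines, "\n")+"\n"), 0644)
}

func main() {
	if err := ensureHostsEntry("/etc/hosts", "addons-271785"); err != nil {
		fmt.Println(err)
	}
}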
	I0909 10:44:38.092208   16785 ubuntu.go:175] set auth options {CertDir:/home/jenkins/minikube-integration/19584-8635/.minikube CaCertPath:/home/jenkins/minikube-integration/19584-8635/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19584-8635/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19584-8635/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19584-8635/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19584-8635/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19584-8635/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19584-8635/.minikube}
	I0909 10:44:38.092235   16785 ubuntu.go:177] setting up certificates
	I0909 10:44:38.092246   16785 provision.go:84] configureAuth start
	I0909 10:44:38.092288   16785 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" addons-271785
	I0909 10:44:38.108128   16785 provision.go:143] copyHostCerts
	I0909 10:44:38.108207   16785 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19584-8635/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19584-8635/.minikube/ca.pem (1078 bytes)
	I0909 10:44:38.108326   16785 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19584-8635/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19584-8635/.minikube/cert.pem (1123 bytes)
	I0909 10:44:38.108406   16785 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19584-8635/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19584-8635/.minikube/key.pem (1679 bytes)
	I0909 10:44:38.108480   16785 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19584-8635/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19584-8635/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19584-8635/.minikube/certs/ca-key.pem org=jenkins.addons-271785 san=[127.0.0.1 192.168.49.2 addons-271785 localhost minikube]
	I0909 10:44:38.211639   16785 provision.go:177] copyRemoteCerts
	I0909 10:44:38.211704   16785 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0909 10:44:38.211751   16785 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-271785
	I0909 10:44:38.227730   16785 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/19584-8635/.minikube/machines/addons-271785/id_rsa Username:docker}
	I0909 10:44:38.316590   16785 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19584-8635/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0909 10:44:38.336464   16785 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19584-8635/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I0909 10:44:38.356267   16785 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19584-8635/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0909 10:44:38.375936   16785 provision.go:87] duration metric: took 283.678399ms to configureAuth
	I0909 10:44:38.375964   16785 ubuntu.go:193] setting minikube options for container-runtime
	I0909 10:44:38.376135   16785 config.go:182] Loaded profile config "addons-271785": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.31.0
	I0909 10:44:38.376188   16785 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-271785
	I0909 10:44:38.392440   16785 main.go:141] libmachine: Using SSH client type: native
	I0909 10:44:38.392638   16785 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 127.0.0.1 32768 <nil> <nil>}
	I0909 10:44:38.392654   16785 main.go:141] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I0909 10:44:38.512665   16785 main.go:141] libmachine: SSH cmd err, output: <nil>: overlay
	
	I0909 10:44:38.512687   16785 ubuntu.go:71] root file system type: overlay
	I0909 10:44:38.512807   16785 provision.go:314] Updating docker unit: /lib/systemd/system/docker.service ...
	I0909 10:44:38.512856   16785 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-271785
	I0909 10:44:38.529320   16785 main.go:141] libmachine: Using SSH client type: native
	I0909 10:44:38.529492   16785 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 127.0.0.1 32768 <nil> <nil>}
	I0909 10:44:38.529548   16785 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %s "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	BindsTo=containerd.service
	After=network-online.target firewalld.service containerd.service
	Wants=network-online.target
	Requires=docker.socket
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this option.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I0909 10:44:38.658095   16785 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	BindsTo=containerd.service
	After=network-online.target firewalld.service containerd.service
	Wants=network-online.target
	Requires=docker.socket
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this option.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I0909 10:44:38.658189   16785 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-271785
	I0909 10:44:38.674112   16785 main.go:141] libmachine: Using SSH client type: native
	I0909 10:44:38.674276   16785 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 127.0.0.1 32768 <nil> <nil>}
	I0909 10:44:38.674292   16785 main.go:141] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I0909 10:44:39.320407   16785 main.go:141] libmachine: SSH cmd err, output: <nil>: --- /lib/systemd/system/docker.service	2024-08-27 14:13:43.000000000 +0000
	+++ /lib/systemd/system/docker.service.new	2024-09-09 10:44:38.656005256 +0000
	@@ -1,46 +1,49 @@
	 [Unit]
	 Description=Docker Application Container Engine
	 Documentation=https://docs.docker.com
	-After=network-online.target docker.socket firewalld.service containerd.service time-set.target
	-Wants=network-online.target containerd.service
	+BindsTo=containerd.service
	+After=network-online.target firewalld.service containerd.service
	+Wants=network-online.target
	 Requires=docker.socket
	+StartLimitBurst=3
	+StartLimitIntervalSec=60
	 
	 [Service]
	 Type=notify
	-# the default is not to use systemd for cgroups because the delegate issues still
	-# exists and systemd currently does not support the cgroup feature set required
	-# for containers run by docker
	-ExecStart=/usr/bin/dockerd -H fd:// --containerd=/run/containerd/containerd.sock
	-ExecReload=/bin/kill -s HUP $MAINPID
	-TimeoutStartSec=0
	-RestartSec=2
	-Restart=always
	+Restart=on-failure
	 
	-# Note that StartLimit* options were moved from "Service" to "Unit" in systemd 229.
	-# Both the old, and new location are accepted by systemd 229 and up, so using the old location
	-# to make them work for either version of systemd.
	-StartLimitBurst=3
	 
	-# Note that StartLimitInterval was renamed to StartLimitIntervalSec in systemd 230.
	-# Both the old, and new name are accepted by systemd 230 and up, so using the old name to make
	-# this option work for either version of systemd.
	-StartLimitInterval=60s
	+
	+# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	+# The base configuration already specifies an 'ExecStart=...' command. The first directive
	+# here is to clear out that command inherited from the base configuration. Without this,
	+# the command from the base configuration and the command specified here are treated as
	+# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	+# will catch this invalid input and refuse to start the service with an error like:
	+#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	+
	+# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	+# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	+ExecStart=
	+ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	+ExecReload=/bin/kill -s HUP $MAINPID
	 
	 # Having non-zero Limit*s causes performance problems due to accounting overhead
	 # in the kernel. We recommend using cgroups to do container-local accounting.
	+LimitNOFILE=infinity
	 LimitNPROC=infinity
	 LimitCORE=infinity
	 
	-# Comment TasksMax if your systemd version does not support it.
	-# Only systemd 226 and above support this option.
	+# Uncomment TasksMax if your systemd version supports it.
	+# Only systemd 226 and above support this option.
	 TasksMax=infinity
	+TimeoutStartSec=0
	 
	 # set delegate yes so that systemd does not reset the cgroups of docker containers
	 Delegate=yes
	 
	 # kill only the docker process, not all processes in the cgroup
	 KillMode=process
	-OOMScoreAdjust=-500
	 
	 [Install]
	 WantedBy=multi-user.target
	Synchronizing state of docker.service with SysV service script with /lib/systemd/systemd-sysv-install.
	Executing: /lib/systemd/systemd-sysv-install enable docker
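The unit install above follows a write-then-diff pattern: the desired unit is staged as docker.service.new, and only when diff reports a difference is it moved into place and the daemon reloaded, enabled, and restarted, so an unchanged config never bounces Docker. A rough Go equivalent of that idempotent swap (installIfChanged is an illustrative name; it assumes systemctl on PATH and root privileges):

package main

import (
	"bytes"
	"fmt"
	"os"
	"os/exec"
)

// installIfChanged compares the staged unit with the live one and only
// moves it into place and restarts the service when they differ.
func installIfChanged(live, staged, service string) error {
	current, _ := os.ReadFile(live) // a missing live unit reads as empty
	desired, err := os.ReadFile(staged)
	if err != nil {
		return err
	}
	if bytes.Equal(current, desired) {
		return nil // unit unchanged: no move, no restart
	}
	if err := os.Rename(staged, live); err != nil {
		return err
	}
	for _, args := range [][]string{{"daemon-reload"}, {"enable", service}, {"restart", service}} {
		if out, err := exec.Command("systemctl", args...).CombinedOutput(); err != nil {
			return fmt.Errorf("systemctl %v: %v: %s", args, err, out)
		}
	}
	return nil
}

func main() {
	fmt.Println(installIfChanged(
		"/lib/systemd/system/docker.service",
		"/lib/systemd/system/docker.service.new",
		"docker"))
}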
	
	I0909 10:44:39.320441   16785 machine.go:96] duration metric: took 1.734902837s to provisionDockerMachine
	I0909 10:44:39.320452   16785 client.go:171] duration metric: took 11.711065768s to LocalClient.Create
	I0909 10:44:39.320466   16785 start.go:167] duration metric: took 11.711120564s to libmachine.API.Create "addons-271785"
	I0909 10:44:39.320473   16785 start.go:293] postStartSetup for "addons-271785" (driver="docker")
	I0909 10:44:39.320483   16785 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0909 10:44:39.320532   16785 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0909 10:44:39.320588   16785 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-271785
	I0909 10:44:39.336514   16785 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/19584-8635/.minikube/machines/addons-271785/id_rsa Username:docker}
	I0909 10:44:39.424662   16785 ssh_runner.go:195] Run: cat /etc/os-release
	I0909 10:44:39.427344   16785 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I0909 10:44:39.427373   16785 main.go:141] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
	I0909 10:44:39.427392   16785 main.go:141] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
	I0909 10:44:39.427406   16785 info.go:137] Remote host: Ubuntu 22.04.4 LTS
	I0909 10:44:39.427422   16785 filesync.go:126] Scanning /home/jenkins/minikube-integration/19584-8635/.minikube/addons for local assets ...
	I0909 10:44:39.427475   16785 filesync.go:126] Scanning /home/jenkins/minikube-integration/19584-8635/.minikube/files for local assets ...
	I0909 10:44:39.427505   16785 start.go:296] duration metric: took 107.02607ms for postStartSetup
	I0909 10:44:39.427818   16785 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" addons-271785
	I0909 10:44:39.443689   16785 profile.go:143] Saving config to /home/jenkins/minikube-integration/19584-8635/.minikube/profiles/addons-271785/config.json ...
	I0909 10:44:39.443913   16785 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0909 10:44:39.443950   16785 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-271785
	I0909 10:44:39.459207   16785 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/19584-8635/.minikube/machines/addons-271785/id_rsa Username:docker}
	I0909 10:44:39.545012   16785 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0909 10:44:39.548798   16785 start.go:128] duration metric: took 11.941155404s to createHost
	I0909 10:44:39.548821   16785 start.go:83] releasing machines lock for "addons-271785", held for 11.941289683s
	I0909 10:44:39.548880   16785 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" addons-271785
	I0909 10:44:39.564556   16785 ssh_runner.go:195] Run: cat /version.json
	I0909 10:44:39.564628   16785 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-271785
	I0909 10:44:39.564637   16785 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0909 10:44:39.564702   16785 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-271785
	I0909 10:44:39.580503   16785 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/19584-8635/.minikube/machines/addons-271785/id_rsa Username:docker}
	I0909 10:44:39.581907   16785 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/19584-8635/.minikube/machines/addons-271785/id_rsa Username:docker}
	I0909 10:44:39.737240   16785 ssh_runner.go:195] Run: systemctl --version
	I0909 10:44:39.741033   16785 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I0909 10:44:39.744673   16785 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *loopback.conf* -not -name *.mk_disabled -exec sh -c "grep -q loopback {} && ( grep -q name {} || sudo sed -i '/"type": "loopback"/i \ \ \ \ "name": "loopback",' {} ) && sudo sed -i 's|"cniVersion": ".*"|"cniVersion": "1.0.0"|g' {}" ;
	I0909 10:44:39.764905   16785 cni.go:230] loopback cni configuration patched: "/etc/cni/net.d/*loopback.conf*" found
	I0909 10:44:39.764961   16785 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0909 10:44:39.787509   16785 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist, /etc/cni/net.d/100-crio-bridge.conf] bridge cni config(s)
	I0909 10:44:39.787530   16785 start.go:495] detecting cgroup driver to use...
	I0909 10:44:39.787561   16785 detect.go:187] detected "cgroupfs" cgroup driver on host os
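detect.go settles on the "cgroupfs" driver here. A related probe such tooling often needs, shown only as a sketch and not necessarily how minikube does it, is distinguishing cgroup v1 from the unified v2 hierarchy, which can be read off the filesystem magic of /sys/fs/cgroup (CGROUP2_SUPER_MAGIC = 0x63677270 in linux/magic.h; Linux-only):

package main

import (
	"fmt"
	"syscall"
)

// cgroupVersion reports whether /sys/fs/cgroup is a cgroup2 (unified)
// mount by its filesystem magic; anything else is treated as v1.
func cgroupVersion() (int, error) {
	var st syscall.Statfs_t
	if err := syscall.Statfs("/sys/fs/cgroup", &st); err != nil {
		return 0, err
	}
	if st.Type == 0x63677270 { // CGROUP2_SUPER_MAGIC
		return 2, nil
	}
	return 1, nil
}

func main() {
	v, err := cgroupVersion()
	if err != nil {
		fmt.Println(err)
		return
	}
	fmt.Println("cgroup v", v)
}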
	I0909 10:44:39.787677   16785 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0909 10:44:39.800966   16785 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.10"|' /etc/containerd/config.toml"
	I0909 10:44:39.809035   16785 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0909 10:44:39.817030   16785 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I0909 10:44:39.817101   16785 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I0909 10:44:39.825116   16785 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0909 10:44:39.832936   16785 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0909 10:44:39.840510   16785 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0909 10:44:39.848273   16785 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0909 10:44:39.856093   16785 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I0909 10:44:39.863763   16785 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I0909 10:44:39.871650   16785 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
	I0909 10:44:39.879305   16785 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0909 10:44:39.885942   16785 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0909 10:44:39.892624   16785 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0909 10:44:39.970045   16785 ssh_runner.go:195] Run: sudo systemctl restart containerd
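The run of sed commands above edits /etc/containerd/config.toml in place: pin sandbox_image, force SystemdCgroup = false to match the cgroupfs driver, migrate runtime names to io.containerd.runc.v2, and point conf_dir at /etc/cni/net.d. Each edit has the same shape, a whole-line key rewrite that preserves indentation. A Go sketch of one such rewrite (setTOMLKey is an illustrative name, and this is a line-level substitution like the sed calls, not a real TOML parser):

package main

import (
	"fmt"
	"os"
	"regexp"
)

// setTOMLKey rewrites every `key = ...` line in place, keeping the
// original indentation, the same way the sed invocations above do.
func setTOMLKey(path, key, value string) error {
	data, err := os.ReadFile(path)
	if err != nil {
		return err
	}
	re := regexp.MustCompile(`(?m)^([ \t]*)` + regexp.QuoteMeta(key) + ` = .*$`)
	out := re.ReplaceAll(data, []byte("${1}"+key+" = "+value))
	return os.WriteFile(path, out, 0644)
}

func main() {
	if err := setTOMLKey("/etc/containerd/config.toml", "SystemdCgroup", "false"); err != nil {
		fmt.Println(err)
	}
}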
	I0909 10:44:40.057535   16785 start.go:495] detecting cgroup driver to use...
	I0909 10:44:40.057583   16785 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I0909 10:44:40.057629   16785 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I0909 10:44:40.068071   16785 cruntime.go:279] skipping containerd shutdown because we are bound to it
	I0909 10:44:40.068139   16785 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0909 10:44:40.078368   16785 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0909 10:44:40.093454   16785 ssh_runner.go:195] Run: which cri-dockerd
	I0909 10:44:40.096492   16785 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I0909 10:44:40.104595   16785 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (190 bytes)
	I0909 10:44:40.119752   16785 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I0909 10:44:40.196549   16785 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I0909 10:44:40.288130   16785 docker.go:574] configuring docker to use "cgroupfs" as cgroup driver...
	I0909 10:44:40.288238   16785 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (130 bytes)
	I0909 10:44:40.303509   16785 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0909 10:44:40.381331   16785 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0909 10:44:40.618626   16785 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.socket
	I0909 10:44:40.628678   16785 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0909 10:44:40.638151   16785 ssh_runner.go:195] Run: sudo systemctl unmask cri-docker.socket
	I0909 10:44:40.709246   16785 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I0909 10:44:40.782286   16785 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0909 10:44:40.854128   16785 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.socket
	I0909 10:44:40.865333   16785 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0909 10:44:40.874355   16785 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0909 10:44:40.954372   16785 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.service
	I0909 10:44:41.009242   16785 start.go:542] Will wait 60s for socket path /var/run/cri-dockerd.sock
	I0909 10:44:41.009313   16785 ssh_runner.go:195] Run: stat /var/run/cri-dockerd.sock
	I0909 10:44:41.012671   16785 start.go:563] Will wait 60s for crictl version
	I0909 10:44:41.012722   16785 ssh_runner.go:195] Run: which crictl
	I0909 10:44:41.015492   16785 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0909 10:44:41.044526   16785 start.go:579] Version:  0.1.0
	RuntimeName:  docker
	RuntimeVersion:  27.2.0
	RuntimeApiVersion:  v1
	I0909 10:44:41.044598   16785 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0909 10:44:41.066358   16785 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0909 10:44:41.090602   16785 out.go:235] * Preparing Kubernetes v1.31.0 on Docker 27.2.0 ...
	I0909 10:44:41.090671   16785 cli_runner.go:164] Run: docker network inspect addons-271785 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0909 10:44:41.105742   16785 ssh_runner.go:195] Run: grep 192.168.49.1	host.minikube.internal$ /etc/hosts
	I0909 10:44:41.108999   16785 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.49.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0909 10:44:41.118405   16785 kubeadm.go:883] updating cluster {Name:addons-271785 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.45@sha256:81df288595202a317b1a4dc2506ca2e4ed5f22373c19a441b88cfbf4b9867c85 Memory:4000 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:addons-271785 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0909 10:44:41.118501   16785 preload.go:131] Checking if preload exists for k8s version v1.31.0 and runtime docker
	I0909 10:44:41.118537   16785 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0909 10:44:41.135979   16785 docker.go:685] Got preloaded images: -- stdout --
	registry.k8s.io/kube-controller-manager:v1.31.0
	registry.k8s.io/kube-scheduler:v1.31.0
	registry.k8s.io/kube-apiserver:v1.31.0
	registry.k8s.io/kube-proxy:v1.31.0
	registry.k8s.io/etcd:3.5.15-0
	registry.k8s.io/pause:3.10
	registry.k8s.io/coredns/coredns:v1.11.1
	gcr.io/k8s-minikube/storage-provisioner:v5
	
	-- /stdout --
	I0909 10:44:41.135999   16785 docker.go:615] Images already preloaded, skipping extraction
	I0909 10:44:41.136049   16785 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0909 10:44:41.152249   16785 docker.go:685] Got preloaded images: -- stdout --
	registry.k8s.io/kube-controller-manager:v1.31.0
	registry.k8s.io/kube-scheduler:v1.31.0
	registry.k8s.io/kube-apiserver:v1.31.0
	registry.k8s.io/kube-proxy:v1.31.0
	registry.k8s.io/etcd:3.5.15-0
	registry.k8s.io/pause:3.10
	registry.k8s.io/coredns/coredns:v1.11.1
	gcr.io/k8s-minikube/storage-provisioner:v5
	
	-- /stdout --
	I0909 10:44:41.152269   16785 cache_images.go:84] Images are preloaded, skipping loading
	I0909 10:44:41.152289   16785 kubeadm.go:934] updating node { 192.168.49.2 8443 v1.31.0 docker true true} ...
	I0909 10:44:41.152393   16785 kubeadm.go:946] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=addons-271785 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.49.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.0 ClusterName:addons-271785 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0909 10:44:41.152438   16785 ssh_runner.go:195] Run: docker info --format {{.CgroupDriver}}
	I0909 10:44:41.193174   16785 cni.go:84] Creating CNI manager for ""
	I0909 10:44:41.193196   16785 cni.go:158] "docker" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0909 10:44:41.193217   16785 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0909 10:44:41.193235   16785 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.49.2 APIServerPort:8443 KubernetesVersion:v1.31.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:addons-271785 NodeName:addons-271785 DNSDomain:cluster.local CRISocket:/var/run/cri-dockerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.49.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.49.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/cri-dockerd.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0909 10:44:41.193352   16785 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.49.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/cri-dockerd.sock
	  name: "addons-271785"
	  kubeletExtraArgs:
	    node-ip: 192.168.49.2
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.49.2"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.31.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/cri-dockerd.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0909 10:44:41.193427   16785 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.0
	I0909 10:44:41.200967   16785 binaries.go:44] Found k8s binaries, skipping transfer
	I0909 10:44:41.201021   16785 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0909 10:44:41.208160   16785 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (312 bytes)
	I0909 10:44:41.222628   16785 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0909 10:44:41.236674   16785 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2155 bytes)
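The kubeadm.yaml staged above is a multi-document YAML stream: InitConfiguration, ClusterConfiguration, KubeletConfiguration, and KubeProxyConfiguration separated by ---. A small sketch that walks the documents and reports each kind, assuming the gopkg.in/yaml.v3 module is available (the path is the staging destination from this log):

package main

import (
	"fmt"
	"io"
	"os"

	"gopkg.in/yaml.v3"
)

func main() {
	f, err := os.Open("/var/tmp/minikube/kubeadm.yaml.new")
	if err != nil {
		fmt.Println(err)
		return
	}
	defer f.Close()

	// A yaml.Decoder yields one document per Decode call until io.EOF.
	dec := yaml.NewDecoder(f)
	for {
		var doc struct {
			APIVersion string `yaml:"apiVersion"`
			Kind       string `yaml:"kind"`
		}
		if err := dec.Decode(&doc); err == io.EOF {
			break
		} else if err != nil {
			fmt.Println("parse error:", err)
			return
		}
		fmt.Printf("%s (%s)\n", doc.Kind, doc.APIVersion)
	}
}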
	I0909 10:44:41.251378   16785 ssh_runner.go:195] Run: grep 192.168.49.2	control-plane.minikube.internal$ /etc/hosts
	I0909 10:44:41.254225   16785 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.49.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0909 10:44:41.263243   16785 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0909 10:44:41.335521   16785 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0909 10:44:41.347198   16785 certs.go:68] Setting up /home/jenkins/minikube-integration/19584-8635/.minikube/profiles/addons-271785 for IP: 192.168.49.2
	I0909 10:44:41.347217   16785 certs.go:194] generating shared ca certs ...
	I0909 10:44:41.347231   16785 certs.go:226] acquiring lock for ca certs: {Name:mk2360cf7fa1bb5fb294939d08b9d4b496d4efcc Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0909 10:44:41.347348   16785 certs.go:240] generating "minikubeCA" ca cert: /home/jenkins/minikube-integration/19584-8635/.minikube/ca.key
	I0909 10:44:41.512972   16785 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19584-8635/.minikube/ca.crt ...
	I0909 10:44:41.512998   16785 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19584-8635/.minikube/ca.crt: {Name:mk88d0f9b2cb901083f6a448fea938fdbaa0d8cf Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0909 10:44:41.513150   16785 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19584-8635/.minikube/ca.key ...
	I0909 10:44:41.513160   16785 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19584-8635/.minikube/ca.key: {Name:mk875f3e16cd09eaea01032711ec3f531dc6b622 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0909 10:44:41.513227   16785 certs.go:240] generating "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19584-8635/.minikube/proxy-client-ca.key
	I0909 10:44:41.811128   16785 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19584-8635/.minikube/proxy-client-ca.crt ...
	I0909 10:44:41.811160   16785 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19584-8635/.minikube/proxy-client-ca.crt: {Name:mk737528587ea1f5caaab01eb78f2719c027692b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0909 10:44:41.811340   16785 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19584-8635/.minikube/proxy-client-ca.key ...
	I0909 10:44:41.811356   16785 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19584-8635/.minikube/proxy-client-ca.key: {Name:mkd183719606887aad007d2a4b2c2ccae5de918f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0909 10:44:41.811459   16785 certs.go:256] generating profile certs ...
	I0909 10:44:41.811530   16785 certs.go:363] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/19584-8635/.minikube/profiles/addons-271785/client.key
	I0909 10:44:41.811548   16785 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19584-8635/.minikube/profiles/addons-271785/client.crt with IP's: []
	I0909 10:44:41.926408   16785 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19584-8635/.minikube/profiles/addons-271785/client.crt ...
	I0909 10:44:41.926437   16785 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19584-8635/.minikube/profiles/addons-271785/client.crt: {Name:mk1c994c64ac3cf5e7745b66bb18fee9265eca18 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0909 10:44:41.926602   16785 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19584-8635/.minikube/profiles/addons-271785/client.key ...
	I0909 10:44:41.926617   16785 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19584-8635/.minikube/profiles/addons-271785/client.key: {Name:mk9196b6a7a2572d0448feb6e65f57c8f27d6c98 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0909 10:44:41.926734   16785 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/19584-8635/.minikube/profiles/addons-271785/apiserver.key.3e96fd18
	I0909 10:44:41.926758   16785 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19584-8635/.minikube/profiles/addons-271785/apiserver.crt.3e96fd18 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.49.2]
	I0909 10:44:42.120731   16785 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19584-8635/.minikube/profiles/addons-271785/apiserver.crt.3e96fd18 ...
	I0909 10:44:42.120762   16785 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19584-8635/.minikube/profiles/addons-271785/apiserver.crt.3e96fd18: {Name:mk9ab0260d612b029bcb2ac113e83dab71c3cdd6 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0909 10:44:42.120933   16785 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19584-8635/.minikube/profiles/addons-271785/apiserver.key.3e96fd18 ...
	I0909 10:44:42.120952   16785 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19584-8635/.minikube/profiles/addons-271785/apiserver.key.3e96fd18: {Name:mk27eb5d25c718ddbd0ab42f77eef30523fca64b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0909 10:44:42.121049   16785 certs.go:381] copying /home/jenkins/minikube-integration/19584-8635/.minikube/profiles/addons-271785/apiserver.crt.3e96fd18 -> /home/jenkins/minikube-integration/19584-8635/.minikube/profiles/addons-271785/apiserver.crt
	I0909 10:44:42.121140   16785 certs.go:385] copying /home/jenkins/minikube-integration/19584-8635/.minikube/profiles/addons-271785/apiserver.key.3e96fd18 -> /home/jenkins/minikube-integration/19584-8635/.minikube/profiles/addons-271785/apiserver.key
	I0909 10:44:42.121212   16785 certs.go:363] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/19584-8635/.minikube/profiles/addons-271785/proxy-client.key
	I0909 10:44:42.121240   16785 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19584-8635/.minikube/profiles/addons-271785/proxy-client.crt with IP's: []
	I0909 10:44:42.455357   16785 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19584-8635/.minikube/profiles/addons-271785/proxy-client.crt ...
	I0909 10:44:42.455391   16785 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19584-8635/.minikube/profiles/addons-271785/proxy-client.crt: {Name:mk0a021f98b1c8f3e4e3f39e1d772537089a22cf Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0909 10:44:42.455581   16785 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19584-8635/.minikube/profiles/addons-271785/proxy-client.key ...
	I0909 10:44:42.455597   16785 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19584-8635/.minikube/profiles/addons-271785/proxy-client.key: {Name:mka71d079ae74c19ce1899466631684ab3eafe05 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0909 10:44:42.455796   16785 certs.go:484] found cert: /home/jenkins/minikube-integration/19584-8635/.minikube/certs/ca-key.pem (1675 bytes)
	I0909 10:44:42.455840   16785 certs.go:484] found cert: /home/jenkins/minikube-integration/19584-8635/.minikube/certs/ca.pem (1078 bytes)
	I0909 10:44:42.455873   16785 certs.go:484] found cert: /home/jenkins/minikube-integration/19584-8635/.minikube/certs/cert.pem (1123 bytes)
	I0909 10:44:42.455909   16785 certs.go:484] found cert: /home/jenkins/minikube-integration/19584-8635/.minikube/certs/key.pem (1679 bytes)
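Everything in this block is the usual two-tier PKI: a self-signed minikubeCA, then leaf certificates (client, apiserver with IP SANs, aggregator proxy-client) signed by it. A condensed standard-library sketch of that flow, with illustrative names and the SANs copied from the log; it shows the same shape, not minikube's crypto.go:

package main

import (
	"crypto/rand"
	"crypto/rsa"
	"crypto/x509"
	"crypto/x509/pkix"
	"encoding/pem"
	"fmt"
	"math/big"
	"net"
	"os"
	"time"
)

func must[T any](v T, err error) T {
	if err != nil {
		panic(err)
	}
	return v
}

func main() {
	// Self-signed CA: template signs itself.
	caKey := must(rsa.GenerateKey(rand.Reader, 2048))
	caTmpl := &x509.Certificate{
		SerialNumber:          big.NewInt(1),
		Subject:               pkix.Name{CommonName: "minikubeCA"},
		NotBefore:             time.Now(),
		NotAfter:              time.Now().Add(3 * 365 * 24 * time.Hour),
		IsCA:                  true,
		KeyUsage:              x509.KeyUsageCertSign | x509.KeyUsageDigitalSignature,
		BasicConstraintsValid: true,
	}
	caDER := must(x509.CreateCertificate(rand.Reader, caTmpl, caTmpl, &caKey.PublicKey, caKey))
	caCert := must(x509.ParseCertificate(caDER))

	// Apiserver-style leaf with IP SANs, signed by the CA.
	leafKey := must(rsa.GenerateKey(rand.Reader, 2048))
	leafTmpl := &x509.Certificate{
		SerialNumber: big.NewInt(2),
		Subject:      pkix.Name{CommonName: "minikube"},
		NotBefore:    time.Now(),
		NotAfter:     time.Now().Add(3 * 365 * 24 * time.Hour),
		KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
		ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
		DNSNames:     []string{"addons-271785", "localhost"},
		IPAddresses: []net.IP{net.ParseIP("10.96.0.1"), net.ParseIP("127.0.0.1"),
			net.ParseIP("192.168.49.2")},
	}
	leafDER := must(x509.CreateCertificate(rand.Reader, leafTmpl, caCert, &leafKey.PublicKey, caKey))
	_ = pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: leafDER})
	fmt.Println("leaf signed by", caCert.Subject.CommonName)
}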
	I0909 10:44:42.456526   16785 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19584-8635/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0909 10:44:42.477301   16785 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19584-8635/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0909 10:44:42.496876   16785 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19584-8635/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0909 10:44:42.516814   16785 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19584-8635/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0909 10:44:42.536766   16785 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19584-8635/.minikube/profiles/addons-271785/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1419 bytes)
	I0909 10:44:42.556378   16785 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19584-8635/.minikube/profiles/addons-271785/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0909 10:44:42.575787   16785 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19584-8635/.minikube/profiles/addons-271785/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0909 10:44:42.595252   16785 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19584-8635/.minikube/profiles/addons-271785/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0909 10:44:42.614331   16785 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19584-8635/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0909 10:44:42.633911   16785 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0909 10:44:42.648089   16785 ssh_runner.go:195] Run: openssl version
	I0909 10:44:42.652658   16785 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0909 10:44:42.660254   16785 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0909 10:44:42.663120   16785 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Sep  9 10:44 /usr/share/ca-certificates/minikubeCA.pem
	I0909 10:44:42.663159   16785 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0909 10:44:42.668989   16785 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
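The b5213941.0 link created above is OpenSSL's subject-hash lookup name: openssl x509 -hash -noout prints the hash of the CA's subject, and a <hash>.0 symlink in /etc/ssl/certs makes the CA discoverable through the system trust directory. The same two steps from Go (assumes the openssl binary on PATH; paths copied from the log):

package main

import (
	"fmt"
	"os"
	"os/exec"
	"strings"
)

func main() {
	pemPath := "/usr/share/ca-certificates/minikubeCA.pem"
	// Ask openssl for the subject hash, e.g. "b5213941".
	out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", pemPath).Output()
	if err != nil {
		fmt.Println(err)
		return
	}
	hash := strings.TrimSpace(string(out))
	link := "/etc/ssl/certs/" + hash + ".0"
	_ = os.Remove(link) // replace any stale symlink
	if err := os.Symlink(pemPath, link); err != nil {
		fmt.Println(err)
		return
	}
	fmt.Println("linked", link, "->", pemPath)
}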
	I0909 10:44:42.676459   16785 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0909 10:44:42.679060   16785 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0909 10:44:42.679107   16785 kubeadm.go:392] StartCluster: {Name:addons-271785 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.45@sha256:81df288595202a317b1a4dc2506ca2e4ed5f22373c19a441b88cfbf4b9867c85 Memory:4000 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:addons-271785 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0909 10:44:42.679194   16785 ssh_runner.go:195] Run: docker ps --filter status=paused --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I0909 10:44:42.694624   16785 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0909 10:44:42.701981   16785 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0909 10:44:42.709294   16785 kubeadm.go:214] ignoring SystemVerification for kubeadm because of docker driver
	I0909 10:44:42.709340   16785 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0909 10:44:42.716623   16785 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0909 10:44:42.716653   16785 kubeadm.go:157] found existing configuration files:
	
	I0909 10:44:42.716689   16785 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0909 10:44:42.723771   16785 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0909 10:44:42.723827   16785 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0909 10:44:42.730916   16785 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0909 10:44:42.738197   16785 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0909 10:44:42.738254   16785 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0909 10:44:42.745097   16785 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0909 10:44:42.752100   16785 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0909 10:44:42.752142   16785 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0909 10:44:42.758963   16785 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0909 10:44:42.765731   16785 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0909 10:44:42.765780   16785 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0909 10:44:42.772442   16785 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I0909 10:44:42.805808   16785 kubeadm.go:310] [init] Using Kubernetes version: v1.31.0
	I0909 10:44:42.805867   16785 kubeadm.go:310] [preflight] Running pre-flight checks
	I0909 10:44:42.827012   16785 kubeadm.go:310] [preflight] The system verification failed. Printing the output from the verification:
	I0909 10:44:42.827119   16785 kubeadm.go:310] KERNEL_VERSION: 5.15.0-1067-gcp
	I0909 10:44:42.827177   16785 kubeadm.go:310] OS: Linux
	I0909 10:44:42.827258   16785 kubeadm.go:310] CGROUPS_CPU: enabled
	I0909 10:44:42.827307   16785 kubeadm.go:310] CGROUPS_CPUACCT: enabled
	I0909 10:44:42.827366   16785 kubeadm.go:310] CGROUPS_CPUSET: enabled
	I0909 10:44:42.827435   16785 kubeadm.go:310] CGROUPS_DEVICES: enabled
	I0909 10:44:42.827503   16785 kubeadm.go:310] CGROUPS_FREEZER: enabled
	I0909 10:44:42.827573   16785 kubeadm.go:310] CGROUPS_MEMORY: enabled
	I0909 10:44:42.827639   16785 kubeadm.go:310] CGROUPS_PIDS: enabled
	I0909 10:44:42.827735   16785 kubeadm.go:310] CGROUPS_HUGETLB: enabled
	I0909 10:44:42.827805   16785 kubeadm.go:310] CGROUPS_BLKIO: enabled
	I0909 10:44:42.876643   16785 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0909 10:44:42.876820   16785 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0909 10:44:42.876951   16785 kubeadm.go:310] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I0909 10:44:42.886858   16785 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0909 10:44:42.889705   16785 out.go:235]   - Generating certificates and keys ...
	I0909 10:44:42.889819   16785 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0909 10:44:42.889902   16785 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0909 10:44:43.020429   16785 kubeadm.go:310] [certs] Generating "apiserver-kubelet-client" certificate and key
	I0909 10:44:43.072088   16785 kubeadm.go:310] [certs] Generating "front-proxy-ca" certificate and key
	I0909 10:44:43.175075   16785 kubeadm.go:310] [certs] Generating "front-proxy-client" certificate and key
	I0909 10:44:43.259521   16785 kubeadm.go:310] [certs] Generating "etcd/ca" certificate and key
	I0909 10:44:43.307625   16785 kubeadm.go:310] [certs] Generating "etcd/server" certificate and key
	I0909 10:44:43.307761   16785 kubeadm.go:310] [certs] etcd/server serving cert is signed for DNS names [addons-271785 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
	I0909 10:44:43.397470   16785 kubeadm.go:310] [certs] Generating "etcd/peer" certificate and key
	I0909 10:44:43.397650   16785 kubeadm.go:310] [certs] etcd/peer serving cert is signed for DNS names [addons-271785 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
	I0909 10:44:43.483485   16785 kubeadm.go:310] [certs] Generating "etcd/healthcheck-client" certificate and key
	I0909 10:44:43.643546   16785 kubeadm.go:310] [certs] Generating "apiserver-etcd-client" certificate and key
	I0909 10:44:43.729533   16785 kubeadm.go:310] [certs] Generating "sa" key and public key
	I0909 10:44:43.729601   16785 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0909 10:44:44.073002   16785 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0909 10:44:44.209564   16785 kubeadm.go:310] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I0909 10:44:44.297501   16785 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0909 10:44:44.805737   16785 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0909 10:44:44.903940   16785 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0909 10:44:44.904554   16785 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0909 10:44:44.906921   16785 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0909 10:44:44.909032   16785 out.go:235]   - Booting up control plane ...
	I0909 10:44:44.909167   16785 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0909 10:44:44.909277   16785 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0909 10:44:44.909376   16785 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0909 10:44:44.917482   16785 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0909 10:44:44.922146   16785 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0909 10:44:44.922202   16785 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0909 10:44:45.004711   16785 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I0909 10:44:45.004812   16785 kubeadm.go:310] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I0909 10:44:46.005916   16785 kubeadm.go:310] [kubelet-check] The kubelet is healthy after 1.00128921s
	I0909 10:44:46.006064   16785 kubeadm.go:310] [api-check] Waiting for a healthy API server. This can take up to 4m0s
	I0909 10:44:50.007585   16785 kubeadm.go:310] [api-check] The API server is healthy after 4.001671855s
	I0909 10:44:50.018887   16785 kubeadm.go:310] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0909 10:44:50.028984   16785 kubeadm.go:310] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0909 10:44:50.044054   16785 kubeadm.go:310] [upload-certs] Skipping phase. Please see --upload-certs
	I0909 10:44:50.044222   16785 kubeadm.go:310] [mark-control-plane] Marking the node addons-271785 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0909 10:44:50.050781   16785 kubeadm.go:310] [bootstrap-token] Using token: 1fluin.246gyo3umj6g529d
	I0909 10:44:50.051977   16785 out.go:235]   - Configuring RBAC rules ...
	I0909 10:44:50.052097   16785 kubeadm.go:310] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0909 10:44:50.054765   16785 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0909 10:44:50.059534   16785 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0909 10:44:50.061634   16785 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0909 10:44:50.063678   16785 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0909 10:44:50.066465   16785 kubeadm.go:310] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0909 10:44:50.413907   16785 kubeadm.go:310] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0909 10:44:50.831506   16785 kubeadm.go:310] [addons] Applied essential addon: CoreDNS
	I0909 10:44:51.413488   16785 kubeadm.go:310] [addons] Applied essential addon: kube-proxy
	I0909 10:44:51.414299   16785 kubeadm.go:310] 
	I0909 10:44:51.414380   16785 kubeadm.go:310] Your Kubernetes control-plane has initialized successfully!
	I0909 10:44:51.414391   16785 kubeadm.go:310] 
	I0909 10:44:51.414478   16785 kubeadm.go:310] To start using your cluster, you need to run the following as a regular user:
	I0909 10:44:51.414486   16785 kubeadm.go:310] 
	I0909 10:44:51.414520   16785 kubeadm.go:310]   mkdir -p $HOME/.kube
	I0909 10:44:51.414592   16785 kubeadm.go:310]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0909 10:44:51.414660   16785 kubeadm.go:310]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0909 10:44:51.414670   16785 kubeadm.go:310] 
	I0909 10:44:51.414768   16785 kubeadm.go:310] Alternatively, if you are the root user, you can run:
	I0909 10:44:51.414783   16785 kubeadm.go:310] 
	I0909 10:44:51.414852   16785 kubeadm.go:310]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0909 10:44:51.414862   16785 kubeadm.go:310] 
	I0909 10:44:51.414930   16785 kubeadm.go:310] You should now deploy a pod network to the cluster.
	I0909 10:44:51.415034   16785 kubeadm.go:310] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0909 10:44:51.415130   16785 kubeadm.go:310]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0909 10:44:51.415138   16785 kubeadm.go:310] 
	I0909 10:44:51.415269   16785 kubeadm.go:310] You can now join any number of control-plane nodes by copying certificate authorities
	I0909 10:44:51.415400   16785 kubeadm.go:310] and service account keys on each node and then running the following as root:
	I0909 10:44:51.415415   16785 kubeadm.go:310] 
	I0909 10:44:51.415553   16785 kubeadm.go:310]   kubeadm join control-plane.minikube.internal:8443 --token 1fluin.246gyo3umj6g529d \
	I0909 10:44:51.415703   16785 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:40c376e8c23af1435b6d54a5ad8ad6332034ab7c8b00f2ebf940ca94a37535b9 \
	I0909 10:44:51.415733   16785 kubeadm.go:310] 	--control-plane 
	I0909 10:44:51.415747   16785 kubeadm.go:310] 
	I0909 10:44:51.415867   16785 kubeadm.go:310] Then you can join any number of worker nodes by running the following on each as root:
	I0909 10:44:51.415876   16785 kubeadm.go:310] 
	I0909 10:44:51.415990   16785 kubeadm.go:310] kubeadm join control-plane.minikube.internal:8443 --token 1fluin.246gyo3umj6g529d \
	I0909 10:44:51.416141   16785 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:40c376e8c23af1435b6d54a5ad8ad6332034ab7c8b00f2ebf940ca94a37535b9 
	I0909 10:44:51.417590   16785 kubeadm.go:310] W0909 10:44:42.803306    1922 common.go:101] your configuration file uses a deprecated API spec: "kubeadm.k8s.io/v1beta3" (kind: "ClusterConfiguration"). Please use 'kubeadm config migrate --old-config old.yaml --new-config new.yaml', which will write the new, similar spec using a newer API version.
	I0909 10:44:51.417856   16785 kubeadm.go:310] W0909 10:44:42.803945    1922 common.go:101] your configuration file uses a deprecated API spec: "kubeadm.k8s.io/v1beta3" (kind: "InitConfiguration"). Please use 'kubeadm config migrate --old-config old.yaml --new-config new.yaml', which will write the new, similar spec using a newer API version.
	I0909 10:44:51.418041   16785 kubeadm.go:310] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1067-gcp\n", err: exit status 1
	I0909 10:44:51.418145   16785 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
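Note on the join commands printed above: --discovery-token-ca-cert-hash lets a joining node authenticate the cluster CA out of band. The hash is the SHA-256 of the CA certificate's DER-encoded Subject Public Key Info, printed as "sha256:<hex>". A minimal Go sketch that recomputes it from the certs directory this run uses (path taken from the [certs] lines above; illustrative, not part of minikube):

package main

import (
	"crypto/sha256"
	"crypto/x509"
	"encoding/pem"
	"fmt"
	"os"
)

func main() {
	// certificateDir from the [certs] preflight lines above.
	pemBytes, err := os.ReadFile("/var/lib/minikube/certs/ca.crt")
	if err != nil {
		panic(err)
	}
	block, _ := pem.Decode(pemBytes)
	if block == nil {
		panic("ca.crt contains no PEM block")
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		panic(err)
	}
	// kubeadm hashes the DER-encoded SubjectPublicKeyInfo of the CA cert.
	fmt.Printf("sha256:%x\n", sha256.Sum256(cert.RawSubjectPublicKeyInfo))
}

Run against this cluster's CA, the output should match the sha256:40c376e8... value in the join commands above.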
	I0909 10:44:51.418172   16785 cni.go:84] Creating CNI manager for ""
	I0909 10:44:51.418187   16785 cni.go:158] "docker" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0909 10:44:51.419721   16785 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0909 10:44:51.421060   16785 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0909 10:44:51.429203   16785 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
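The 496-byte file copied above is minikube's bridge CNI config. A Go sketch that writes a conflist of the same general shape (the exact payload minikube ships is not reproduced here, and the host-local subnet below is an assumption based on kubeadm's default pod CIDR):

package main

import "os"

// Illustrative bridge conflist; field values are assumptions, not the
// byte-for-byte /etc/cni/net.d/1-k8s.conflist from this run.
const conflist = `{
  "cniVersion": "0.3.1",
  "name": "bridge",
  "plugins": [
    {
      "type": "bridge",
      "bridge": "bridge",
      "isDefaultGateway": true,
      "ipMasq": true,
      "hairpinMode": true,
      "ipam": {
        "type": "host-local",
        "subnet": "10.244.0.0/16"
      }
    },
    {
      "type": "portmap",
      "capabilities": {"portMappings": true}
    }
  ]
}
`

func main() {
	if err := os.MkdirAll("/etc/cni/net.d", 0o755); err != nil {
		panic(err)
	}
	if err := os.WriteFile("/etc/cni/net.d/1-k8s.conflist", []byte(conflist), 0o644); err != nil {
		panic(err)
	}
}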
	I0909 10:44:51.443946   16785 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0909 10:44:51.444003   16785 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0909 10:44:51.444021   16785 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes addons-271785 minikube.k8s.io/updated_at=2024_09_09T10_44_51_0700 minikube.k8s.io/version=v1.34.0 minikube.k8s.io/commit=cf17d6b4040a54caaa170f92a048a513bb2a2b0d minikube.k8s.io/name=addons-271785 minikube.k8s.io/primary=true
	I0909 10:44:51.522187   16785 ops.go:34] apiserver oom_adj: -16
	I0909 10:44:51.522307   16785 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0909 10:44:52.022942   16785 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0909 10:44:52.522527   16785 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0909 10:44:53.022685   16785 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0909 10:44:53.522784   16785 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0909 10:44:54.022509   16785 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0909 10:44:54.522562   16785 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0909 10:44:55.023055   16785 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0909 10:44:55.522537   16785 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0909 10:44:56.022392   16785 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0909 10:44:56.096687   16785 kubeadm.go:1113] duration metric: took 4.652731851s to wait for elevateKubeSystemPrivileges
	I0909 10:44:56.096723   16785 kubeadm.go:394] duration metric: took 13.417618087s to StartCluster
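The run of "kubectl get sa default" lines above is a simple poll: minikube re-runs the command roughly every 500ms until the default service account exists, because the RBAC binding created just before it cannot take effect until then. A hedged Go sketch of that pattern (command path, kubeconfig, and timeout copied from or assumed for this run):

package main

import (
	"fmt"
	"os/exec"
	"time"
)

func waitForDefaultServiceAccount(timeout time.Duration) error {
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		cmd := exec.Command("sudo", "/var/lib/minikube/binaries/v1.31.0/kubectl",
			"get", "sa", "default", "--kubeconfig=/var/lib/minikube/kubeconfig")
		if err := cmd.Run(); err == nil {
			return nil // service account exists; privileged workloads can start
		}
		time.Sleep(500 * time.Millisecond)
	}
	return fmt.Errorf("default service account not ready after %s", timeout)
}

func main() {
	if err := waitForDefaultServiceAccount(2 * time.Minute); err != nil {
		panic(err)
	}
	fmt.Println("default service account is ready")
}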
	I0909 10:44:56.096740   16785 settings.go:142] acquiring lock: {Name:mk36f011397e0c600653f6927921ea8dbea2b461 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0909 10:44:56.096843   16785 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/19584-8635/kubeconfig
	I0909 10:44:56.097243   16785 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19584-8635/kubeconfig: {Name:mk440c5bd831c615cd310d0b32ed59bfbea69096 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0909 10:44:56.097402   16785 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0909 10:44:56.097425   16785 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0909 10:44:56.097498   16785 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:true csi-hostpath-driver:true dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:true gvisor:false headlamp:false helm-tiller:true inaccel:false ingress:true ingress-dns:true inspektor-gadget:true istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:true nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:true registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:true volcano:true volumesnapshots:true yakd:true]
	I0909 10:44:56.097570   16785 config.go:182] Loaded profile config "addons-271785": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.31.0
	I0909 10:44:56.097597   16785 addons.go:69] Setting yakd=true in profile "addons-271785"
	I0909 10:44:56.097608   16785 addons.go:69] Setting default-storageclass=true in profile "addons-271785"
	I0909 10:44:56.097624   16785 addons.go:69] Setting metrics-server=true in profile "addons-271785"
	I0909 10:44:56.097630   16785 addons.go:234] Setting addon yakd=true in "addons-271785"
	I0909 10:44:56.097608   16785 addons.go:69] Setting cloud-spanner=true in profile "addons-271785"
	I0909 10:44:56.097631   16785 addons.go:69] Setting csi-hostpath-driver=true in profile "addons-271785"
	I0909 10:44:56.097656   16785 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "addons-271785"
	I0909 10:44:56.097665   16785 addons.go:69] Setting ingress-dns=true in profile "addons-271785"
	I0909 10:44:56.097671   16785 addons.go:69] Setting inspektor-gadget=true in profile "addons-271785"
	I0909 10:44:56.097684   16785 addons.go:234] Setting addon ingress-dns=true in "addons-271785"
	I0909 10:44:56.097689   16785 addons.go:69] Setting storage-provisioner-rancher=true in profile "addons-271785"
	I0909 10:44:56.097696   16785 addons.go:234] Setting addon inspektor-gadget=true in "addons-271785"
	I0909 10:44:56.097700   16785 addons.go:69] Setting volcano=true in profile "addons-271785"
	I0909 10:44:56.097712   16785 addons.go:69] Setting helm-tiller=true in profile "addons-271785"
	I0909 10:44:56.097717   16785 host.go:66] Checking if "addons-271785" exists ...
	I0909 10:44:56.097717   16785 addons_storage_classes.go:33] enableOrDisableStorageClasses storage-provisioner-rancher=true on "addons-271785"
	I0909 10:44:56.097725   16785 addons.go:69] Setting volumesnapshots=true in profile "addons-271785"
	I0909 10:44:56.097672   16785 addons.go:69] Setting nvidia-device-plugin=true in profile "addons-271785"
	I0909 10:44:56.097736   16785 addons.go:69] Setting storage-provisioner=true in profile "addons-271785"
	I0909 10:44:56.097742   16785 addons.go:234] Setting addon volumesnapshots=true in "addons-271785"
	I0909 10:44:56.097747   16785 addons.go:234] Setting addon nvidia-device-plugin=true in "addons-271785"
	I0909 10:44:56.097749   16785 addons.go:234] Setting addon helm-tiller=true in "addons-271785"
	I0909 10:44:56.097759   16785 addons.go:234] Setting addon storage-provisioner=true in "addons-271785"
	I0909 10:44:56.097765   16785 host.go:66] Checking if "addons-271785" exists ...
	I0909 10:44:56.097777   16785 host.go:66] Checking if "addons-271785" exists ...
	I0909 10:44:56.097690   16785 addons.go:69] Setting registry=true in profile "addons-271785"
	I0909 10:44:56.097787   16785 host.go:66] Checking if "addons-271785" exists ...
	I0909 10:44:56.097800   16785 addons.go:234] Setting addon registry=true in "addons-271785"
	I0909 10:44:56.097819   16785 host.go:66] Checking if "addons-271785" exists ...
	I0909 10:44:56.098037   16785 cli_runner.go:164] Run: docker container inspect addons-271785 --format={{.State.Status}}
	I0909 10:44:56.098045   16785 cli_runner.go:164] Run: docker container inspect addons-271785 --format={{.State.Status}}
	I0909 10:44:56.098193   16785 cli_runner.go:164] Run: docker container inspect addons-271785 --format={{.State.Status}}
	I0909 10:44:56.098207   16785 cli_runner.go:164] Run: docker container inspect addons-271785 --format={{.State.Status}}
	I0909 10:44:56.098217   16785 cli_runner.go:164] Run: docker container inspect addons-271785 --format={{.State.Status}}
	I0909 10:44:56.098236   16785 cli_runner.go:164] Run: docker container inspect addons-271785 --format={{.State.Status}}
	I0909 10:44:56.098247   16785 cli_runner.go:164] Run: docker container inspect addons-271785 --format={{.State.Status}}
	I0909 10:44:56.097779   16785 host.go:66] Checking if "addons-271785" exists ...
	I0909 10:44:56.097662   16785 host.go:66] Checking if "addons-271785" exists ...
	I0909 10:44:56.097647   16785 addons.go:234] Setting addon metrics-server=true in "addons-271785"
	I0909 10:44:56.099118   16785 host.go:66] Checking if "addons-271785" exists ...
	I0909 10:44:56.097675   16785 addons.go:234] Setting addon cloud-spanner=true in "addons-271785"
	I0909 10:44:56.099351   16785 host.go:66] Checking if "addons-271785" exists ...
	I0909 10:44:56.097717   16785 host.go:66] Checking if "addons-271785" exists ...
	I0909 10:44:56.099604   16785 cli_runner.go:164] Run: docker container inspect addons-271785 --format={{.State.Status}}
	I0909 10:44:56.099641   16785 cli_runner.go:164] Run: docker container inspect addons-271785 --format={{.State.Status}}
	I0909 10:44:56.099921   16785 cli_runner.go:164] Run: docker container inspect addons-271785 --format={{.State.Status}}
	I0909 10:44:56.099946   16785 cli_runner.go:164] Run: docker container inspect addons-271785 --format={{.State.Status}}
	I0909 10:44:56.100466   16785 out.go:177] * Verifying Kubernetes components...
	I0909 10:44:56.097723   16785 addons.go:234] Setting addon volcano=true in "addons-271785"
	I0909 10:44:56.100796   16785 host.go:66] Checking if "addons-271785" exists ...
	I0909 10:44:56.097702   16785 addons.go:69] Setting gcp-auth=true in profile "addons-271785"
	I0909 10:44:56.097715   16785 addons.go:234] Setting addon csi-hostpath-driver=true in "addons-271785"
	I0909 10:44:56.097675   16785 addons.go:69] Setting ingress=true in profile "addons-271785"
	I0909 10:44:56.099260   16785 cli_runner.go:164] Run: docker container inspect addons-271785 --format={{.State.Status}}
	I0909 10:44:56.102107   16785 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0909 10:44:56.105048   16785 mustload.go:65] Loading cluster: addons-271785
	I0909 10:44:56.105305   16785 host.go:66] Checking if "addons-271785" exists ...
	I0909 10:44:56.105491   16785 addons.go:234] Setting addon ingress=true in "addons-271785"
	I0909 10:44:56.105561   16785 host.go:66] Checking if "addons-271785" exists ...
	I0909 10:44:56.106133   16785 config.go:182] Loaded profile config "addons-271785": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.31.0
	I0909 10:44:56.107318   16785 cli_runner.go:164] Run: docker container inspect addons-271785 --format={{.State.Status}}
	I0909 10:44:56.107566   16785 cli_runner.go:164] Run: docker container inspect addons-271785 --format={{.State.Status}}
	I0909 10:44:56.107679   16785 cli_runner.go:164] Run: docker container inspect addons-271785 --format={{.State.Status}}
	I0909 10:44:56.119337   16785 cli_runner.go:164] Run: docker container inspect addons-271785 --format={{.State.Status}}
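Each addon goroutine above first probes the node container's state with "docker container inspect --format={{.State.Status}}", which prints a single word such as "running" or "exited". A minimal Go wrapper around that check (a sketch; the container name is this run's profile):

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

func containerStatus(name string) (string, error) {
	out, err := exec.Command("docker", "container", "inspect", name,
		"--format", "{{.State.Status}}").Output()
	if err != nil {
		return "", fmt.Errorf("inspect %s: %w", name, err)
	}
	return strings.TrimSpace(string(out)), nil
}

func main() {
	status, err := containerStatus("addons-271785")
	if err != nil {
		panic(err)
	}
	fmt.Println(status) // expected "running" while the cluster is up
}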
	I0909 10:44:56.135532   16785 out.go:177]   - Using image docker.io/marcnuri/yakd:0.0.5
	I0909 10:44:56.137780   16785 addons.go:431] installing /etc/kubernetes/addons/yakd-ns.yaml
	I0909 10:44:56.137823   16785 ssh_runner.go:362] scp yakd/yakd-ns.yaml --> /etc/kubernetes/addons/yakd-ns.yaml (171 bytes)
	I0909 10:44:56.137913   16785 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-271785
	I0909 10:44:56.148174   16785 out.go:177]   - Using image docker.io/registry:2.8.3
	I0909 10:44:56.149442   16785 out.go:177]   - Using image registry.k8s.io/metrics-server/metrics-server:v0.7.2
	I0909 10:44:56.149572   16785 out.go:177]   - Using image gcr.io/k8s-minikube/kube-registry-proxy:0.0.6
	I0909 10:44:56.150053   16785 out.go:177]   - Using image gcr.io/cloud-spanner-emulator/emulator:1.5.23
	I0909 10:44:56.150516   16785 addons.go:431] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0909 10:44:56.150535   16785 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0909 10:44:56.150599   16785 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-271785
	I0909 10:44:56.151266   16785 addons.go:431] installing /etc/kubernetes/addons/registry-rc.yaml
	I0909 10:44:56.151330   16785 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-rc.yaml (860 bytes)
	I0909 10:44:56.151432   16785 addons.go:431] installing /etc/kubernetes/addons/deployment.yaml
	I0909 10:44:56.151442   16785 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/deployment.yaml (1004 bytes)
	I0909 10:44:56.151489   16785 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-271785
	I0909 10:44:56.151716   16785 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-271785
	I0909 10:44:56.155046   16785 out.go:177]   - Using image registry.k8s.io/sig-storage/snapshot-controller:v6.1.0
	I0909 10:44:56.156307   16785 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml
	I0909 10:44:56.156328   16785 ssh_runner.go:362] scp volumesnapshots/csi-hostpath-snapshotclass.yaml --> /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml (934 bytes)
	I0909 10:44:56.156375   16785 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-271785
	I0909 10:44:56.159008   16785 out.go:177]   - Using image docker.io/volcanosh/vc-scheduler:v1.9.0
	I0909 10:44:56.159975   16785 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-resizer:v1.6.0
	I0909 10:44:56.161063   16785 out.go:177]   - Using image docker.io/volcanosh/vc-webhook-manager:v1.9.0
	I0909 10:44:56.161067   16785 out.go:177]   - Using image gcr.io/k8s-minikube/minikube-ingress-dns:0.0.3
	I0909 10:44:56.162356   16785 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-snapshotter:v6.1.0
	I0909 10:44:56.162444   16785 out.go:177]   - Using image docker.io/volcanosh/vc-controller-manager:v1.9.0
	I0909 10:44:56.162567   16785 addons.go:431] installing /etc/kubernetes/addons/ingress-dns-pod.yaml
	I0909 10:44:56.162595   16785 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-dns-pod.yaml (2442 bytes)
	I0909 10:44:56.162666   16785 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-271785
	I0909 10:44:56.165086   16785 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-provisioner:v3.3.0
	I0909 10:44:56.165144   16785 out.go:177]   - Using image ghcr.io/helm/tiller:v2.17.0
	I0909 10:44:56.165261   16785 addons.go:431] installing /etc/kubernetes/addons/volcano-deployment.yaml
	I0909 10:44:56.165277   16785 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/volcano-deployment.yaml (434001 bytes)
	I0909 10:44:56.165327   16785 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-271785
	I0909 10:44:56.166303   16785 addons.go:431] installing /etc/kubernetes/addons/helm-tiller-dp.yaml
	I0909 10:44:56.166322   16785 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/helm-tiller-dp.yaml (2422 bytes)
	I0909 10:44:56.166377   16785 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-271785
	I0909 10:44:56.167296   16785 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0909 10:44:56.167362   16785 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-attacher:v4.0.0
	I0909 10:44:56.168406   16785 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-external-health-monitor-controller:v0.7.0
	I0909 10:44:56.168427   16785 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0909 10:44:56.168442   16785 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0909 10:44:56.168491   16785 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-271785
	I0909 10:44:56.171309   16785 addons.go:234] Setting addon storage-provisioner-rancher=true in "addons-271785"
	I0909 10:44:56.171350   16785 host.go:66] Checking if "addons-271785" exists ...
	I0909 10:44:56.171647   16785 host.go:66] Checking if "addons-271785" exists ...
	I0909 10:44:56.171864   16785 cli_runner.go:164] Run: docker container inspect addons-271785 --format={{.State.Status}}
	I0909 10:44:56.173452   16785 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-node-driver-registrar:v2.6.0
	I0909 10:44:56.174550   16785 out.go:177]   - Using image registry.k8s.io/sig-storage/hostpathplugin:v1.9.0
	I0909 10:44:56.175639   16785 out.go:177]   - Using image registry.k8s.io/sig-storage/livenessprobe:v2.8.0
	I0909 10:44:56.176657   16785 addons.go:431] installing /etc/kubernetes/addons/rbac-external-attacher.yaml
	I0909 10:44:56.176679   16785 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-attacher.yaml --> /etc/kubernetes/addons/rbac-external-attacher.yaml (3073 bytes)
	I0909 10:44:56.176732   16785 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-271785
	I0909 10:44:56.187969   16785 out.go:177]   - Using image nvcr.io/nvidia/k8s-device-plugin:v0.16.2
	I0909 10:44:56.194320   16785 addons.go:431] installing /etc/kubernetes/addons/nvidia-device-plugin.yaml
	I0909 10:44:56.194342   16785 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/nvidia-device-plugin.yaml (1966 bytes)
	I0909 10:44:56.194390   16785 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-271785
	I0909 10:44:56.194514   16785 addons.go:234] Setting addon default-storageclass=true in "addons-271785"
	I0909 10:44:56.194551   16785 host.go:66] Checking if "addons-271785" exists ...
	I0909 10:44:56.194979   16785 cli_runner.go:164] Run: docker container inspect addons-271785 --format={{.State.Status}}
	I0909 10:44:56.196717   16785 out.go:177]   - Using image ghcr.io/inspektor-gadget/inspektor-gadget:v0.32.0
	I0909 10:44:56.197939   16785 addons.go:431] installing /etc/kubernetes/addons/ig-namespace.yaml
	I0909 10:44:56.197969   16785 ssh_runner.go:362] scp inspektor-gadget/ig-namespace.yaml --> /etc/kubernetes/addons/ig-namespace.yaml (55 bytes)
	I0909 10:44:56.198021   16785 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-271785
	I0909 10:44:56.217135   16785 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/19584-8635/.minikube/machines/addons-271785/id_rsa Username:docker}
	I0909 10:44:56.219642   16785 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/19584-8635/.minikube/machines/addons-271785/id_rsa Username:docker}
	I0909 10:44:56.219632   16785 out.go:177]   - Using image registry.k8s.io/ingress-nginx/controller:v1.11.2
	I0909 10:44:56.220451   16785 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/19584-8635/.minikube/machines/addons-271785/id_rsa Username:docker}
	I0909 10:44:56.222927   16785 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/19584-8635/.minikube/machines/addons-271785/id_rsa Username:docker}
	I0909 10:44:56.224431   16785 out.go:177]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.4.3
	I0909 10:44:56.225548   16785 out.go:177]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.4.3
	I0909 10:44:56.226842   16785 addons.go:431] installing /etc/kubernetes/addons/ingress-deploy.yaml
	I0909 10:44:56.226859   16785 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-deploy.yaml (16078 bytes)
	I0909 10:44:56.226908   16785 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-271785
	I0909 10:44:56.231541   16785 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/19584-8635/.minikube/machines/addons-271785/id_rsa Username:docker}
	I0909 10:44:56.232465   16785 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/19584-8635/.minikube/machines/addons-271785/id_rsa Username:docker}
	I0909 10:44:56.236275   16785 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/19584-8635/.minikube/machines/addons-271785/id_rsa Username:docker}
	I0909 10:44:56.244425   16785 out.go:177]   - Using image docker.io/busybox:stable
	I0909 10:44:56.247212   16785 out.go:177]   - Using image docker.io/rancher/local-path-provisioner:v0.0.22
	I0909 10:44:56.248442   16785 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner-rancher.yaml
	I0909 10:44:56.248457   16785 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner-rancher.yaml (3113 bytes)
	I0909 10:44:56.248497   16785 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-271785
	I0909 10:44:56.248963   16785 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/19584-8635/.minikube/machines/addons-271785/id_rsa Username:docker}
	I0909 10:44:56.257239   16785 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/19584-8635/.minikube/machines/addons-271785/id_rsa Username:docker}
	I0909 10:44:56.261881   16785 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I0909 10:44:56.261902   16785 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0909 10:44:56.261952   16785 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-271785
	I0909 10:44:56.263778   16785 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/19584-8635/.minikube/machines/addons-271785/id_rsa Username:docker}
	I0909 10:44:56.265366   16785 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/19584-8635/.minikube/machines/addons-271785/id_rsa Username:docker}
	I0909 10:44:56.270457   16785 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/19584-8635/.minikube/machines/addons-271785/id_rsa Username:docker}
	W0909 10:44:56.272769   16785 sshutil.go:64] dial failure (will retry): ssh: handshake failed: EOF
	I0909 10:44:56.272802   16785 retry.go:31] will retry after 193.188709ms: ssh: handshake failed: EOF
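The handshake EOF above is transient (many SSH clients are dialing the node at once while sshd is still warming up), so the code retries after a short randomized delay rather than failing. A hedged sketch of that retry-with-jitter pattern; the attempt count and delay bounds are assumptions for illustration:

package main

import (
	"fmt"
	"math/rand"
	"time"
)

func dialWithRetry(dial func() error, attempts int) error {
	var err error
	for i := 0; i < attempts; i++ {
		if err = dial(); err == nil {
			return nil
		}
		// Randomized 100-400ms delay smooths out transient EOFs.
		delay := time.Duration(100+rand.Intn(300)) * time.Millisecond
		fmt.Printf("will retry after %s: %v\n", delay, err)
		time.Sleep(delay)
	}
	return err
}

func main() {
	calls := 0
	err := dialWithRetry(func() error {
		calls++
		if calls < 3 {
			return fmt.Errorf("ssh: handshake failed: EOF")
		}
		return nil
	}, 5)
	fmt.Println("result:", err)
}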
	I0909 10:44:56.279432   16785 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/19584-8635/.minikube/machines/addons-271785/id_rsa Username:docker}
	I0909 10:44:56.294769   16785 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/19584-8635/.minikube/machines/addons-271785/id_rsa Username:docker}
	I0909 10:44:56.297572   16785 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/19584-8635/.minikube/machines/addons-271785/id_rsa Username:docker}
	I0909 10:44:56.464902   16785 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.49.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.31.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I0909 10:44:56.465003   16785 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0909 10:44:56.569471   16785 addons.go:431] installing /etc/kubernetes/addons/registry-svc.yaml
	I0909 10:44:56.569560   16785 ssh_runner.go:362] scp registry/registry-svc.yaml --> /etc/kubernetes/addons/registry-svc.yaml (398 bytes)
	I0909 10:44:56.650919   16785 addons.go:431] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml
	I0909 10:44:56.650946   16785 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshotclasses.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml (6471 bytes)
	I0909 10:44:56.658728   16785 addons.go:431] installing /etc/kubernetes/addons/yakd-sa.yaml
	I0909 10:44:56.658751   16785 ssh_runner.go:362] scp yakd/yakd-sa.yaml --> /etc/kubernetes/addons/yakd-sa.yaml (247 bytes)
	I0909 10:44:56.664305   16785 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/deployment.yaml
	I0909 10:44:56.673877   16785 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/volcano-deployment.yaml
	I0909 10:44:56.772543   16785 addons.go:431] installing /etc/kubernetes/addons/yakd-crb.yaml
	I0909 10:44:56.772581   16785 ssh_runner.go:362] scp yakd/yakd-crb.yaml --> /etc/kubernetes/addons/yakd-crb.yaml (422 bytes)
	I0909 10:44:56.853673   16785 addons.go:431] installing /etc/kubernetes/addons/ig-serviceaccount.yaml
	I0909 10:44:56.853708   16785 ssh_runner.go:362] scp inspektor-gadget/ig-serviceaccount.yaml --> /etc/kubernetes/addons/ig-serviceaccount.yaml (80 bytes)
	I0909 10:44:56.863202   16785 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner-rancher.yaml
	I0909 10:44:56.868435   16785 addons.go:431] installing /etc/kubernetes/addons/registry-proxy.yaml
	I0909 10:44:56.868463   16785 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-proxy.yaml (947 bytes)
	I0909 10:44:56.873626   16785 addons.go:431] installing /etc/kubernetes/addons/helm-tiller-rbac.yaml
	I0909 10:44:56.873648   16785 ssh_runner.go:362] scp helm-tiller/helm-tiller-rbac.yaml --> /etc/kubernetes/addons/helm-tiller-rbac.yaml (1188 bytes)
	I0909 10:44:56.955532   16785 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0909 10:44:56.972552   16785 addons.go:431] installing /etc/kubernetes/addons/rbac-hostpath.yaml
	I0909 10:44:56.972589   16785 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-hostpath.yaml --> /etc/kubernetes/addons/rbac-hostpath.yaml (4266 bytes)
	I0909 10:44:57.049486   16785 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/nvidia-device-plugin.yaml
	I0909 10:44:57.049650   16785 addons.go:431] installing /etc/kubernetes/addons/ig-role.yaml
	I0909 10:44:57.049699   16785 ssh_runner.go:362] scp inspektor-gadget/ig-role.yaml --> /etc/kubernetes/addons/ig-role.yaml (210 bytes)
	I0909 10:44:57.050762   16785 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0909 10:44:57.062050   16785 addons.go:431] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml
	I0909 10:44:57.062123   16785 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshotcontents.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml (23126 bytes)
	I0909 10:44:57.064221   16785 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml
	I0909 10:44:57.070389   16785 addons.go:431] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0909 10:44:57.070457   16785 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1907 bytes)
	I0909 10:44:57.071549   16785 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml
	I0909 10:44:57.153719   16785 addons.go:431] installing /etc/kubernetes/addons/yakd-svc.yaml
	I0909 10:44:57.153807   16785 ssh_runner.go:362] scp yakd/yakd-svc.yaml --> /etc/kubernetes/addons/yakd-svc.yaml (412 bytes)
	I0909 10:44:57.169339   16785 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml
	I0909 10:44:57.264148   16785 addons.go:431] installing /etc/kubernetes/addons/ig-rolebinding.yaml
	I0909 10:44:57.264177   16785 ssh_runner.go:362] scp inspektor-gadget/ig-rolebinding.yaml --> /etc/kubernetes/addons/ig-rolebinding.yaml (244 bytes)
	I0909 10:44:57.352039   16785 addons.go:431] installing /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml
	I0909 10:44:57.352124   16785 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-health-monitor-controller.yaml --> /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml (3038 bytes)
	I0909 10:44:57.369475   16785 addons.go:431] installing /etc/kubernetes/addons/yakd-dp.yaml
	I0909 10:44:57.369561   16785 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/yakd-dp.yaml (2017 bytes)
	I0909 10:44:57.370644   16785 addons.go:431] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml
	I0909 10:44:57.370727   16785 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshots.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml (19582 bytes)
	I0909 10:44:57.459488   16785 addons.go:431] installing /etc/kubernetes/addons/helm-tiller-svc.yaml
	I0909 10:44:57.459575   16785 ssh_runner.go:362] scp helm-tiller/helm-tiller-svc.yaml --> /etc/kubernetes/addons/helm-tiller-svc.yaml (951 bytes)
	I0909 10:44:57.459864   16785 addons.go:431] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0909 10:44:57.459913   16785 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0909 10:44:57.857162   16785 addons.go:431] installing /etc/kubernetes/addons/ig-clusterrole.yaml
	I0909 10:44:57.857266   16785 ssh_runner.go:362] scp inspektor-gadget/ig-clusterrole.yaml --> /etc/kubernetes/addons/ig-clusterrole.yaml (1485 bytes)
	I0909 10:44:57.863080   16785 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/helm-tiller-dp.yaml -f /etc/kubernetes/addons/helm-tiller-rbac.yaml -f /etc/kubernetes/addons/helm-tiller-svc.yaml
	I0909 10:44:57.964482   16785 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/yakd-ns.yaml -f /etc/kubernetes/addons/yakd-sa.yaml -f /etc/kubernetes/addons/yakd-crb.yaml -f /etc/kubernetes/addons/yakd-svc.yaml -f /etc/kubernetes/addons/yakd-dp.yaml
	I0909 10:44:57.968810   16785 addons.go:431] installing /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml
	I0909 10:44:57.968888   16785 ssh_runner.go:362] scp volumesnapshots/rbac-volume-snapshot-controller.yaml --> /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml (3545 bytes)
	I0909 10:44:58.068368   16785 addons.go:431] installing /etc/kubernetes/addons/rbac-external-provisioner.yaml
	I0909 10:44:58.068461   16785 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-provisioner.yaml --> /etc/kubernetes/addons/rbac-external-provisioner.yaml (4442 bytes)
	I0909 10:44:58.268715   16785 addons.go:431] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0909 10:44:58.268767   16785 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0909 10:44:58.350129   16785 addons.go:431] installing /etc/kubernetes/addons/ig-clusterrolebinding.yaml
	I0909 10:44:58.350187   16785 ssh_runner.go:362] scp inspektor-gadget/ig-clusterrolebinding.yaml --> /etc/kubernetes/addons/ig-clusterrolebinding.yaml (274 bytes)
	I0909 10:44:58.363269   16785 addons.go:431] installing /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I0909 10:44:58.363297   16785 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml (1475 bytes)
	I0909 10:44:58.451284   16785 ssh_runner.go:235] Completed: sudo systemctl start kubelet: (1.986255838s)
	I0909 10:44:58.452243   16785 node_ready.go:35] waiting up to 6m0s for node "addons-271785" to be "Ready" ...
	I0909 10:44:58.452360   16785 ssh_runner.go:235] Completed: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.49.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.31.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -": (1.987426079s)
	I0909 10:44:58.452482   16785 start.go:971] {"host.minikube.internal": 192.168.49.1} host record injected into CoreDNS's ConfigMap
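Reading the sed pipeline that just completed: it splices a "log" directive before "errors" and a hosts stanza before the "forward . /etc/resolv.conf" line, then replaces the coredns ConfigMap. Per the sed expressions in the command above, the Corefile gains:

        hosts {
           192.168.49.1 host.minikube.internal
           fallthrough
        }

so pods can resolve host.minikube.internal to the Docker network gateway, with all other names falling through to the usual forwarders.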
	I0909 10:44:58.458791   16785 node_ready.go:49] node "addons-271785" has status "Ready":"True"
	I0909 10:44:58.458859   16785 node_ready.go:38] duration metric: took 6.47437ms for node "addons-271785" to be "Ready" ...
	I0909 10:44:58.458884   16785 pod_ready.go:36] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0909 10:44:58.469292   16785 pod_ready.go:79] waiting up to 6m0s for pod "coredns-6f6b679f8f-dffc2" in "kube-system" namespace to be "Ready" ...
	I0909 10:44:58.656111   16785 addons.go:431] installing /etc/kubernetes/addons/rbac-external-resizer.yaml
	I0909 10:44:58.656199   16785 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-resizer.yaml --> /etc/kubernetes/addons/rbac-external-resizer.yaml (2943 bytes)
	I0909 10:44:58.850324   16785 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/deployment.yaml: (2.185978948s)
	I0909 10:44:58.853736   16785 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0909 10:44:58.958762   16785 kapi.go:214] "coredns" deployment in "kube-system" namespace and "addons-271785" context rescaled to 1 replicas
	I0909 10:44:58.975251   16785 addons.go:431] installing /etc/kubernetes/addons/ig-crd.yaml
	I0909 10:44:58.975312   16785 ssh_runner.go:362] scp inspektor-gadget/ig-crd.yaml --> /etc/kubernetes/addons/ig-crd.yaml (5216 bytes)
	I0909 10:44:59.257094   16785 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I0909 10:44:59.767300   16785 addons.go:431] installing /etc/kubernetes/addons/ig-daemonset.yaml
	I0909 10:44:59.767356   16785 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-daemonset.yaml (7735 bytes)
	I0909 10:44:59.967959   16785 addons.go:431] installing /etc/kubernetes/addons/rbac-external-snapshotter.yaml
	I0909 10:44:59.967989   16785 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-snapshotter.yaml --> /etc/kubernetes/addons/rbac-external-snapshotter.yaml (3149 bytes)
	I0909 10:45:00.353016   16785 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-attacher.yaml
	I0909 10:45:00.353044   16785 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-attacher.yaml (2143 bytes)
	I0909 10:45:00.354625   16785 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/ig-namespace.yaml -f /etc/kubernetes/addons/ig-serviceaccount.yaml -f /etc/kubernetes/addons/ig-role.yaml -f /etc/kubernetes/addons/ig-rolebinding.yaml -f /etc/kubernetes/addons/ig-clusterrole.yaml -f /etc/kubernetes/addons/ig-clusterrolebinding.yaml -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-daemonset.yaml
	I0909 10:45:00.551818   16785 pod_ready.go:103] pod "coredns-6f6b679f8f-dffc2" in "kube-system" namespace has status "Ready":"False"
	I0909 10:45:00.957500   16785 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml
	I0909 10:45:00.957533   16785 ssh_runner.go:362] scp csi-hostpath-driver/deploy/csi-hostpath-driverinfo.yaml --> /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml (1274 bytes)
	I0909 10:45:01.668817   16785 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-plugin.yaml
	I0909 10:45:01.668847   16785 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-plugin.yaml (8201 bytes)
	I0909 10:45:02.067015   16785 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-resizer.yaml
	I0909 10:45:02.067092   16785 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-resizer.yaml (2191 bytes)
	I0909 10:45:02.561886   16785 pod_ready.go:103] pod "coredns-6f6b679f8f-dffc2" in "kube-system" namespace has status "Ready":"False"
	I0909 10:45:02.565848   16785 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
	I0909 10:45:02.565929   16785 ssh_runner.go:362] scp csi-hostpath-driver/deploy/csi-hostpath-storageclass.yaml --> /etc/kubernetes/addons/csi-hostpath-storageclass.yaml (846 bytes)
	I0909 10:45:02.764624   16785 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
	I0909 10:45:03.257879   16785 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_application_credentials.json (162 bytes)
	I0909 10:45:03.258021   16785 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-271785
	I0909 10:45:03.281844   16785 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/19584-8635/.minikube/machines/addons-271785/id_rsa Username:docker}
	I0909 10:45:03.949364   16785 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_cloud_project (12 bytes)
	I0909 10:45:04.168004   16785 addons.go:234] Setting addon gcp-auth=true in "addons-271785"
	I0909 10:45:04.168075   16785 host.go:66] Checking if "addons-271785" exists ...
	I0909 10:45:04.168621   16785 cli_runner.go:164] Run: docker container inspect addons-271785 --format={{.State.Status}}
	I0909 10:45:04.187135   16785 ssh_runner.go:195] Run: cat /var/lib/minikube/google_application_credentials.json
	I0909 10:45:04.187177   16785 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-271785
	I0909 10:45:04.203022   16785 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/19584-8635/.minikube/machines/addons-271785/id_rsa Username:docker}
	I0909 10:45:05.054295   16785 pod_ready.go:103] pod "coredns-6f6b679f8f-dffc2" in "kube-system" namespace has status "Ready":"False"
	I0909 10:45:07.151649   16785 pod_ready.go:103] pod "coredns-6f6b679f8f-dffc2" in "kube-system" namespace has status "Ready":"False"
	I0909 10:45:08.058890   16785 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/volcano-deployment.yaml: (11.384979829s)
	I0909 10:45:08.059119   16785 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (11.103496825s)
	I0909 10:45:08.059156   16785 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (11.008345189s)
	I0909 10:45:08.059185   16785 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/nvidia-device-plugin.yaml: (11.009627596s)
	I0909 10:45:08.059292   16785 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml: (10.995007178s)
	I0909 10:45:08.060134   16785 addons.go:475] Verifying addon ingress=true in "addons-271785"
	I0909 10:45:08.059337   16785 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml: (10.987734542s)
	I0909 10:45:08.059372   16785 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml: (10.889946368s)
	I0909 10:45:08.060244   16785 addons.go:475] Verifying addon registry=true in "addons-271785"
	I0909 10:45:08.059412   16785 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/helm-tiller-dp.yaml -f /etc/kubernetes/addons/helm-tiller-rbac.yaml -f /etc/kubernetes/addons/helm-tiller-svc.yaml: (10.19624253s)
	I0909 10:45:08.059450   16785 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/yakd-ns.yaml -f /etc/kubernetes/addons/yakd-sa.yaml -f /etc/kubernetes/addons/yakd-crb.yaml -f /etc/kubernetes/addons/yakd-svc.yaml -f /etc/kubernetes/addons/yakd-dp.yaml: (10.094939165s)
	I0909 10:45:08.059510   16785 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (9.205742943s)
	I0909 10:45:08.059638   16785 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (8.802493605s)
	I0909 10:45:08.060508   16785 addons.go:475] Verifying addon metrics-server=true in "addons-271785"
	W0909 10:45:08.060525   16785 addons.go:457] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
	stdout:
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
	serviceaccount/snapshot-controller created
	clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
	clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
	role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	deployment.apps/snapshot-controller created
	
	stderr:
	error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
	ensure CRDs are installed first
	I0909 10:45:08.059711   16785 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/ig-namespace.yaml -f /etc/kubernetes/addons/ig-serviceaccount.yaml -f /etc/kubernetes/addons/ig-role.yaml -f /etc/kubernetes/addons/ig-rolebinding.yaml -f /etc/kubernetes/addons/ig-clusterrole.yaml -f /etc/kubernetes/addons/ig-clusterrolebinding.yaml -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-daemonset.yaml: (7.705045657s)
	I0909 10:45:08.060546   16785 retry.go:31] will retry after 276.723913ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
	stdout:
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
	serviceaccount/snapshot-controller created
	clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
	clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
	role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	deployment.apps/snapshot-controller created
	
	stderr:
	error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
	ensure CRDs are installed first
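The failure above is the classic CRD establishment race: a single kubectl apply creates the VolumeSnapshot CRDs and a VolumeSnapshotClass together, but the API server has not yet marked the new kinds Established, so the class fails to map and minikube falls back to retrying. A hedged Go sketch of sequencing this by hand, waiting for the Established condition between the two applies (the timeout and the assumption that kubectl is on PATH are illustrative):

package main

import (
	"os"
	"os/exec"
)

func main() {
	// Block until the new kind is servable before applying resources that use it.
	wait := exec.Command("kubectl", "wait", "--for=condition=established",
		"--timeout=60s", "crd/volumesnapshotclasses.snapshot.storage.k8s.io")
	wait.Stdout, wait.Stderr = os.Stdout, os.Stderr
	if err := wait.Run(); err != nil {
		panic(err)
	}
	apply := exec.Command("kubectl", "apply", "-f",
		"/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml")
	apply.Stdout, apply.Stderr = os.Stdout, os.Stderr
	if err := apply.Run(); err != nil {
		panic(err)
	}
}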
	I0909 10:45:08.059746   16785 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner-rancher.yaml: (11.195799082s)
	I0909 10:45:08.062009   16785 out.go:177] * Verifying registry addon...
	I0909 10:45:08.062029   16785 out.go:177] * To access YAKD - Kubernetes Dashboard, wait for Pod to be ready and run the following command:
	
		minikube -p addons-271785 service yakd-dashboard -n yakd-dashboard
	
	I0909 10:45:08.062169   16785 out.go:177] * Verifying ingress addon...
	I0909 10:45:08.064880   16785 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=registry" in ns "kube-system" ...
	I0909 10:45:08.064906   16785 kapi.go:75] Waiting for pod with label "app.kubernetes.io/name=ingress-nginx" in ns "ingress-nginx" ...
	I0909 10:45:08.069577   16785 kapi.go:86] Found 2 Pods for label selector kubernetes.io/minikube-addons=registry
	I0909 10:45:08.069599   16785 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0909 10:45:08.069977   16785 kapi.go:86] Found 3 Pods for label selector app.kubernetes.io/name=ingress-nginx
	I0909 10:45:08.069992   16785 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
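The kapi.go:96 lines that fill the remainder of this log are one poll loop per addon: list the pods behind a label selector and report their state until every match is Running and Ready. A condensed sketch of that pattern, assuming client-go (an illustration of the loop, not minikube's exact code):

	package main

	import (
		"context"
		"time"

		corev1 "k8s.io/api/core/v1"
		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/apimachinery/pkg/util/wait"
		"k8s.io/client-go/kubernetes"
	)

	// waitForLabeledPods polls until every pod matching selector in ns is
	// Running with the Ready condition True, or the timeout expires.
	func waitForLabeledPods(ctx context.Context, c kubernetes.Interface, ns, selector string, timeout time.Duration) error {
		return wait.PollUntilContextTimeout(ctx, 500*time.Millisecond, timeout, true,
			func(ctx context.Context) (bool, error) {
				pods, err := c.CoreV1().Pods(ns).List(ctx, metav1.ListOptions{LabelSelector: selector})
				if err != nil || len(pods.Items) == 0 {
					return false, nil // transient errors and empty lists: keep waiting
				}
				for i := range pods.Items {
					p := &pods.Items[i]
					if p.Status.Phase != corev1.PodRunning || !podReady(p) {
						return false, nil
					}
				}
				return true, nil
			})
	}

	func podReady(p *corev1.Pod) bool {
		for _, cond := range p.Status.Conditions {
			if cond.Type == corev1.PodReady {
				return cond.Status == corev1.ConditionTrue
			}
		}
		return false
	}

The registry, ingress-nginx, and (shortly) csi-hostpath-driver selectors each run this loop concurrently, which is why the lines below repeat in lockstep roughly every 500ms.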
	W0909 10:45:08.149372   16785 out.go:270] ! Enabling 'default-storageclass' returned an error: running callbacks: [Error making standard the default storage class: Error while marking storage class local-path as non-default: Operation cannot be fulfilled on storageclasses.storage.k8s.io "local-path": the object has been modified; please apply your changes to the latest version and try again]
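The warning above is a routine optimistic-concurrency conflict: the addon code read the local-path StorageClass, another writer updated it first, and the write went in with a stale resourceVersion. client-go ships a helper for exactly this case; a minimal sketch that re-reads the object on each attempt so the version is fresh (clientset wiring assumed; the annotation key is the standard default-class marker):

	package main

	import (
		"context"

		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/client-go/kubernetes"
		"k8s.io/client-go/util/retry"
	)

	// markNonDefault clears the default-class annotation, retrying the
	// read-modify-write whenever the apiserver reports a conflict.
	func markNonDefault(ctx context.Context, c kubernetes.Interface, name string) error {
		return retry.RetryOnConflict(retry.DefaultRetry, func() error {
			sc, err := c.StorageV1().StorageClasses().Get(ctx, name, metav1.GetOptions{})
			if err != nil {
				return err
			}
			if sc.Annotations == nil {
				sc.Annotations = map[string]string{}
			}
			sc.Annotations["storageclass.kubernetes.io/is-default-class"] = "false"
			_, err = c.StorageV1().StorageClasses().Update(ctx, sc, metav1.UpdateOptions{})
			return err // a Conflict error here triggers another attempt
		})
	}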
	I0909 10:45:08.337782   16785 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I0909 10:45:08.570022   16785 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0909 10:45:08.571124   16785 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0909 10:45:09.070194   16785 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0909 10:45:09.070531   16785 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0909 10:45:09.500087   16785 pod_ready.go:103] pod "coredns-6f6b679f8f-dffc2" in "kube-system" namespace has status "Ready":"False"
	I0909 10:45:09.553067   16785 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml: (6.788391139s)
	I0909 10:45:09.553109   16785 ssh_runner.go:235] Completed: cat /var/lib/minikube/google_application_credentials.json: (5.365949977s)
	I0909 10:45:09.553104   16785 addons.go:475] Verifying addon csi-hostpath-driver=true in "addons-271785"
	I0909 10:45:09.587309   16785 out.go:177]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.4.3
	I0909 10:45:09.588405   16785 out.go:177] * Verifying csi-hostpath-driver addon...
	I0909 10:45:09.591320   16785 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0909 10:45:09.591908   16785 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0909 10:45:09.593919   16785 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=csi-hostpath-driver" in ns "kube-system" ...
	I0909 10:45:09.594165   16785 out.go:177]   - Using image gcr.io/k8s-minikube/gcp-auth-webhook:v0.1.2
	I0909 10:45:09.596017   16785 addons.go:431] installing /etc/kubernetes/addons/gcp-auth-ns.yaml
	I0909 10:45:09.596071   16785 ssh_runner.go:362] scp gcp-auth/gcp-auth-ns.yaml --> /etc/kubernetes/addons/gcp-auth-ns.yaml (700 bytes)
	I0909 10:45:09.598729   16785 kapi.go:86] Found 3 Pods for label selector kubernetes.io/minikube-addons=csi-hostpath-driver
	I0909 10:45:09.598757   16785 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0909 10:45:09.673389   16785 addons.go:431] installing /etc/kubernetes/addons/gcp-auth-service.yaml
	I0909 10:45:09.673424   16785 ssh_runner.go:362] scp gcp-auth/gcp-auth-service.yaml --> /etc/kubernetes/addons/gcp-auth-service.yaml (788 bytes)
	I0909 10:45:09.866459   16785 addons.go:431] installing /etc/kubernetes/addons/gcp-auth-webhook.yaml
	I0909 10:45:09.866485   16785 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/gcp-auth-webhook.yaml (5421 bytes)
	I0909 10:45:09.887782   16785 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/gcp-auth-ns.yaml -f /etc/kubernetes/addons/gcp-auth-service.yaml -f /etc/kubernetes/addons/gcp-auth-webhook.yaml
	I0909 10:45:10.069658   16785 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0909 10:45:10.071768   16785 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0909 10:45:10.153043   16785 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0909 10:45:10.475226   16785 pod_ready.go:98] pod "coredns-6f6b679f8f-dffc2" in "kube-system" namespace has status phase "Succeeded" (skipping!): {Phase:Succeeded Conditions:[{Type:PodReadyToStartContainers Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2024-09-09 10:45:10 +0000 UTC Reason: Message:} {Type:Initialized Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2024-09-09 10:44:56 +0000 UTC Reason:PodCompleted Message:} {Type:Ready Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2024-09-09 10:44:56 +0000 UTC Reason:PodCompleted Message:} {Type:ContainersReady Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2024-09-09 10:44:56 +0000 UTC Reason:PodCompleted Message:} {Type:PodScheduled Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2024-09-09 10:44:56 +0000 UTC Reason: Message:}] Message: Reason: NominatedNodeName: HostIP:192.168.49.2 HostIPs:[{IP:192.168.49.2}] PodIP:10.244.0.3 PodIPs:[{IP:10.244.0.3}] StartTime:2024-09-09 10:44:56 +0000 UTC InitContainerStatuses:[] ContainerStatuses:[{Name:coredns State:{Waiting:nil Running:nil Terminated:&ContainerStateTerminated{ExitCode:0,Signal:0,Reason:Completed,Message:,StartedAt:2024-09-09 10:44:58 +0000 UTC,FinishedAt:2024-09-09 10:45:09 +0000 UTC,ContainerID:docker://c95c82b4e7e4c3b4dd355c99b8fcb5924f5cf81d7836e8fc817ddfc75475fdcc,}} LastTerminationState:{Waiting:nil Running:nil Terminated:nil} Ready:false RestartCount:0 Image:registry.k8s.io/coredns/coredns:v1.11.1 ImageID:docker-pullable://registry.k8s.io/coredns/coredns@sha256:1eeb4c7316bacb1d4c8ead65571cd92dd21e27359f0d4917f1a5822a73b75db1 ContainerID:docker://c95c82b4e7e4c3b4dd355c99b8fcb5924f5cf81d7836e8fc817ddfc75475fdcc Started:0xc001aa4ac0 AllocatedResources:map[] Resources:nil VolumeMounts:[{Name:config-volume MountPath:/etc/coredns ReadOnly:true RecursiveReadOnly:0xc0008c5270} {Name:kube-api-access-4tpwt MountPath:/var/run/secrets/kubernetes.io/serviceaccount ReadOnly:true RecursiveReadOnly:0xc0008c5280}] User:nil AllocatedResourcesStatus:[]}] QOSClass:Burstable EphemeralContainerStatuses:[] Resize: ResourceClaimStatuses:[]}
	I0909 10:45:10.475259   16785 pod_ready.go:82] duration metric: took 12.005887334s for pod "coredns-6f6b679f8f-dffc2" in "kube-system" namespace to be "Ready" ...
	E0909 10:45:10.475274   16785 pod_ready.go:67] WaitExtra: waitPodCondition: pod "coredns-6f6b679f8f-dffc2" in "kube-system" namespace has status phase "Succeeded" (skipping!): {Phase:Succeeded Conditions:[{Type:PodReadyToStartContainers Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2024-09-09 10:45:10 +0000 UTC Reason: Message:} {Type:Initialized Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2024-09-09 10:44:56 +0000 UTC Reason:PodCompleted Message:} {Type:Ready Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2024-09-09 10:44:56 +0000 UTC Reason:PodCompleted Message:} {Type:ContainersReady Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2024-09-09 10:44:56 +0000 UTC Reason:PodCompleted Message:} {Type:PodScheduled Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2024-09-09 10:44:56 +0000 UTC Reason: Message:}] Message: Reason: NominatedNodeName: HostIP:192.168.49.2 HostIPs:[{IP:192.168.49.2}] PodIP:10.244.0.3 PodIPs:[{IP:10.244.0.3}] StartTime:2024-09-09 10:44:56 +0000 UTC InitContainerStatuses:[] ContainerStatuses:[{Name:coredns State:{Waiting:nil Running:nil Terminated:&ContainerStateTerminated{ExitCode:0,Signal:0,Reason:Completed,Message:,StartedAt:2024-09-09 10:44:58 +0000 UTC,FinishedAt:2024-09-09 10:45:09 +0000 UTC,ContainerID:docker://c95c82b4e7e4c3b4dd355c99b8fcb5924f5cf81d7836e8fc817ddfc75475fdcc,}} LastTerminationState:{Waiting:nil Running:nil Terminated:nil} Ready:false RestartCount:0 Image:registry.k8s.io/coredns/coredns:v1.11.1 ImageID:docker-pullable://registry.k8s.io/coredns/coredns@sha256:1eeb4c7316bacb1d4c8ead65571cd92dd21e27359f0d4917f1a5822a73b75db1 ContainerID:docker://c95c82b4e7e4c3b4dd355c99b8fcb5924f5cf81d7836e8fc817ddfc75475fdcc Started:0xc001aa4ac0 AllocatedResources:map[] Resources:nil VolumeMounts:[{Name:config-volume MountPath:/etc/coredns ReadOnly:true RecursiveReadOnly:0xc0008c5270} {Name:kube-api-access-4tpwt MountPath:/var/run/secrets/kubernetes.io/serviceaccount ReadOnly:true RecursiveReadOnly:0xc0008c5280}] User:nil AllocatedResourcesStatus:[]}] QOSClass:Burstable EphemeralContainerStatuses:[] Resize: ResourceClaimStatuses:[]}
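The skip above is the readiness waiter's terminal-phase guard: the old coredns replica completed (phase Succeeded) while the Deployment scaled down, and a Succeeded pod can never become Ready, so the waiter errors out for that pod and moves on to coredns-6f6b679f8f-xc4cc. A sketch of the check, assuming client-go types:

	package main

	import (
		"fmt"

		corev1 "k8s.io/api/core/v1"
	)

	// checkReady reports whether a pod is Ready, failing fast for pods in
	// a terminal phase (the "skipping!" branch logged above).
	func checkReady(pod *corev1.Pod) (bool, error) {
		switch pod.Status.Phase {
		case corev1.PodSucceeded, corev1.PodFailed:
			return false, fmt.Errorf("pod %s reached terminal phase %s", pod.Name, pod.Status.Phase)
		}
		for _, c := range pod.Status.Conditions {
			if c.Type == corev1.PodReady && c.Status == corev1.ConditionTrue {
				return true, nil
			}
		}
		return false, nil
	}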
	I0909 10:45:10.475283   16785 pod_ready.go:79] waiting up to 6m0s for pod "coredns-6f6b679f8f-xc4cc" in "kube-system" namespace to be "Ready" ...
	I0909 10:45:10.570120   16785 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0909 10:45:10.570816   16785 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0909 10:45:10.652947   16785 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0909 10:45:10.872712   16785 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (2.534881175s)
	I0909 10:45:11.069593   16785 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0909 10:45:11.070556   16785 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0909 10:45:11.170447   16785 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0909 10:45:11.255517   16785 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/gcp-auth-ns.yaml -f /etc/kubernetes/addons/gcp-auth-service.yaml -f /etc/kubernetes/addons/gcp-auth-webhook.yaml: (1.367690263s)
	I0909 10:45:11.257583   16785 addons.go:475] Verifying addon gcp-auth=true in "addons-271785"
	I0909 10:45:11.259250   16785 out.go:177] * Verifying gcp-auth addon...
	I0909 10:45:11.261237   16785 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=gcp-auth" in ns "gcp-auth" ...
	I0909 10:45:11.269048   16785 kapi.go:86] Found 0 Pods for label selector kubernetes.io/minikube-addons=gcp-auth
	I0909 10:45:11.568216   16785 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0909 10:45:11.569226   16785 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0909 10:45:11.598062   16785 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0909 10:45:12.068954   16785 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0909 10:45:12.069304   16785 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0909 10:45:12.098181   16785 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0909 10:45:12.481144   16785 pod_ready.go:103] pod "coredns-6f6b679f8f-xc4cc" in "kube-system" namespace has status "Ready":"False"
	I0909 10:45:12.569449   16785 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0909 10:45:12.569772   16785 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0909 10:45:12.597732   16785 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0909 10:45:13.068546   16785 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0909 10:45:13.069281   16785 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0909 10:45:13.098615   16785 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0909 10:45:13.569138   16785 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0909 10:45:13.569576   16785 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0909 10:45:13.598415   16785 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0909 10:45:14.068907   16785 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0909 10:45:14.069285   16785 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0909 10:45:14.097672   16785 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0909 10:45:14.568747   16785 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0909 10:45:14.569058   16785 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0909 10:45:14.677426   16785 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0909 10:45:15.067250   16785 pod_ready.go:103] pod "coredns-6f6b679f8f-xc4cc" in "kube-system" namespace has status "Ready":"False"
	I0909 10:45:15.067870   16785 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0909 10:45:15.068159   16785 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0909 10:45:15.097979   16785 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0909 10:45:15.568461   16785 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0909 10:45:15.568908   16785 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0909 10:45:15.597471   16785 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0909 10:45:16.068318   16785 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0909 10:45:16.068868   16785 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0909 10:45:16.097380   16785 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0909 10:45:16.568872   16785 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0909 10:45:16.569385   16785 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0909 10:45:16.597998   16785 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0909 10:45:17.068381   16785 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0909 10:45:17.069040   16785 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0909 10:45:17.097521   16785 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0909 10:45:17.480319   16785 pod_ready.go:103] pod "coredns-6f6b679f8f-xc4cc" in "kube-system" namespace has status "Ready":"False"
	I0909 10:45:17.568488   16785 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0909 10:45:17.569057   16785 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0909 10:45:17.597621   16785 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0909 10:45:18.068319   16785 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0909 10:45:18.068713   16785 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0909 10:45:18.097677   16785 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0909 10:45:18.568657   16785 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0909 10:45:18.569184   16785 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0909 10:45:18.598661   16785 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0909 10:45:19.068899   16785 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0909 10:45:19.069162   16785 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0909 10:45:19.098098   16785 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0909 10:45:19.480555   16785 pod_ready.go:103] pod "coredns-6f6b679f8f-xc4cc" in "kube-system" namespace has status "Ready":"False"
	I0909 10:45:19.568315   16785 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0909 10:45:19.568752   16785 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0909 10:45:19.599198   16785 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0909 10:45:20.068439   16785 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0909 10:45:20.068916   16785 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0909 10:45:20.097566   16785 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0909 10:45:20.568421   16785 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0909 10:45:20.569016   16785 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0909 10:45:20.597468   16785 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0909 10:45:21.069238   16785 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0909 10:45:21.069462   16785 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0909 10:45:21.098398   16785 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0909 10:45:21.569484   16785 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0909 10:45:21.570462   16785 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0909 10:45:21.597994   16785 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0909 10:45:21.981327   16785 pod_ready.go:103] pod "coredns-6f6b679f8f-xc4cc" in "kube-system" namespace has status "Ready":"False"
	I0909 10:45:22.068879   16785 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0909 10:45:22.069348   16785 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0909 10:45:22.098390   16785 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0909 10:45:22.568374   16785 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0909 10:45:22.568826   16785 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0909 10:45:22.599180   16785 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0909 10:45:23.069048   16785 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0909 10:45:23.069175   16785 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0909 10:45:23.098405   16785 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0909 10:45:23.569492   16785 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0909 10:45:23.570007   16785 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0909 10:45:23.598416   16785 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0909 10:45:23.981356   16785 pod_ready.go:103] pod "coredns-6f6b679f8f-xc4cc" in "kube-system" namespace has status "Ready":"False"
	I0909 10:45:24.068373   16785 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0909 10:45:24.068890   16785 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0909 10:45:24.098476   16785 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0909 10:45:24.569101   16785 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0909 10:45:24.569593   16785 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0909 10:45:24.598757   16785 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0909 10:45:25.068106   16785 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0909 10:45:25.069113   16785 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0909 10:45:25.098067   16785 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0909 10:45:25.569023   16785 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0909 10:45:25.569569   16785 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0909 10:45:25.598522   16785 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0909 10:45:26.140822   16785 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0909 10:45:26.141092   16785 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0909 10:45:26.141247   16785 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0909 10:45:26.480628   16785 pod_ready.go:103] pod "coredns-6f6b679f8f-xc4cc" in "kube-system" namespace has status "Ready":"False"
	I0909 10:45:26.569004   16785 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0909 10:45:26.569075   16785 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0909 10:45:26.598248   16785 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0909 10:45:27.068346   16785 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0909 10:45:27.068687   16785 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0909 10:45:27.098428   16785 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0909 10:45:27.568935   16785 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0909 10:45:27.569196   16785 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0909 10:45:27.599510   16785 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0909 10:45:28.069081   16785 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0909 10:45:28.069260   16785 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0909 10:45:28.170449   16785 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0909 10:45:28.567983   16785 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0909 10:45:28.568397   16785 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0909 10:45:28.598119   16785 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0909 10:45:28.980886   16785 pod_ready.go:103] pod "coredns-6f6b679f8f-xc4cc" in "kube-system" namespace has status "Ready":"False"
	I0909 10:45:29.069007   16785 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0909 10:45:29.069201   16785 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0909 10:45:29.098397   16785 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0909 10:45:29.568666   16785 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0909 10:45:29.568925   16785 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0909 10:45:29.597581   16785 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0909 10:45:30.068463   16785 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0909 10:45:30.068756   16785 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0909 10:45:30.098225   16785 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0909 10:45:30.568932   16785 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0909 10:45:30.569134   16785 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0909 10:45:30.597670   16785 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0909 10:45:30.981655   16785 pod_ready.go:93] pod "coredns-6f6b679f8f-xc4cc" in "kube-system" namespace has status "Ready":"True"
	I0909 10:45:30.981681   16785 pod_ready.go:82] duration metric: took 20.506385633s for pod "coredns-6f6b679f8f-xc4cc" in "kube-system" namespace to be "Ready" ...
	I0909 10:45:30.981699   16785 pod_ready.go:79] waiting up to 6m0s for pod "etcd-addons-271785" in "kube-system" namespace to be "Ready" ...
	I0909 10:45:30.986343   16785 pod_ready.go:93] pod "etcd-addons-271785" in "kube-system" namespace has status "Ready":"True"
	I0909 10:45:30.986364   16785 pod_ready.go:82] duration metric: took 4.657734ms for pod "etcd-addons-271785" in "kube-system" namespace to be "Ready" ...
	I0909 10:45:30.986375   16785 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-addons-271785" in "kube-system" namespace to be "Ready" ...
	I0909 10:45:30.990565   16785 pod_ready.go:93] pod "kube-apiserver-addons-271785" in "kube-system" namespace has status "Ready":"True"
	I0909 10:45:30.990587   16785 pod_ready.go:82] duration metric: took 4.204614ms for pod "kube-apiserver-addons-271785" in "kube-system" namespace to be "Ready" ...
	I0909 10:45:30.990597   16785 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-addons-271785" in "kube-system" namespace to be "Ready" ...
	I0909 10:45:30.994770   16785 pod_ready.go:93] pod "kube-controller-manager-addons-271785" in "kube-system" namespace has status "Ready":"True"
	I0909 10:45:30.994787   16785 pod_ready.go:82] duration metric: took 4.184349ms for pod "kube-controller-manager-addons-271785" in "kube-system" namespace to be "Ready" ...
	I0909 10:45:30.994795   16785 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-2qw8w" in "kube-system" namespace to be "Ready" ...
	I0909 10:45:30.998855   16785 pod_ready.go:93] pod "kube-proxy-2qw8w" in "kube-system" namespace has status "Ready":"True"
	I0909 10:45:30.998876   16785 pod_ready.go:82] duration metric: took 4.074243ms for pod "kube-proxy-2qw8w" in "kube-system" namespace to be "Ready" ...
	I0909 10:45:30.998887   16785 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-addons-271785" in "kube-system" namespace to be "Ready" ...
	I0909 10:45:31.069763   16785 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0909 10:45:31.070165   16785 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0909 10:45:31.098482   16785 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0909 10:45:31.379119   16785 pod_ready.go:93] pod "kube-scheduler-addons-271785" in "kube-system" namespace has status "Ready":"True"
	I0909 10:45:31.379146   16785 pod_ready.go:82] duration metric: took 380.250233ms for pod "kube-scheduler-addons-271785" in "kube-system" namespace to be "Ready" ...
	I0909 10:45:31.379159   16785 pod_ready.go:39] duration metric: took 32.92025059s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0909 10:45:31.379186   16785 api_server.go:52] waiting for apiserver process to appear ...
	I0909 10:45:31.379244   16785 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0909 10:45:31.395676   16785 api_server.go:72] duration metric: took 35.298217523s to wait for apiserver process to appear ...
	I0909 10:45:31.395708   16785 api_server.go:88] waiting for apiserver healthz status ...
	I0909 10:45:31.395728   16785 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I0909 10:45:31.399984   16785 api_server.go:279] https://192.168.49.2:8443/healthz returned 200:
	ok
	I0909 10:45:31.400942   16785 api_server.go:141] control plane version: v1.31.0
	I0909 10:45:31.400965   16785 api_server.go:131] duration metric: took 5.250142ms to wait for apiserver health ...
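The healthz probe above is a plain HTTPS GET against the apiserver's secure port, passing when it returns 200 with body "ok". A self-contained sketch (the apiserver cert is signed by the cluster CA, which minikube trusts via the kubeconfig; skipping verification here is for illustration only):

	package main

	import (
		"crypto/tls"
		"fmt"
		"io"
		"net/http"
		"time"
	)

	func main() {
		client := &http.Client{
			Timeout: 5 * time.Second,
			Transport: &http.Transport{
				TLSClientConfig: &tls.Config{InsecureSkipVerify: true}, // illustration only
			},
		}
		resp, err := client.Get("https://192.168.49.2:8443/healthz")
		if err != nil {
			panic(err)
		}
		defer resp.Body.Close()
		body, _ := io.ReadAll(resp.Body)
		fmt.Println(resp.StatusCode, string(body)) // expect: 200 ok
	}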
	I0909 10:45:31.400975   16785 system_pods.go:43] waiting for kube-system pods to appear ...
	I0909 10:45:31.568794   16785 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0909 10:45:31.569391   16785 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0909 10:45:31.585775   16785 system_pods.go:59] 18 kube-system pods found
	I0909 10:45:31.585805   16785 system_pods.go:61] "coredns-6f6b679f8f-xc4cc" [4a149e0a-43a5-44e7-b91e-439700db0ec3] Running
	I0909 10:45:31.585817   16785 system_pods.go:61] "csi-hostpath-attacher-0" [c667af28-28dd-41e6-86a9-6311794cfe78] Pending / Ready:ContainersNotReady (containers with unready status: [csi-attacher]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-attacher])
	I0909 10:45:31.585826   16785 system_pods.go:61] "csi-hostpath-resizer-0" [a3991825-0f50-4d74-8f14-07c871058034] Pending / Ready:ContainersNotReady (containers with unready status: [csi-resizer]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-resizer])
	I0909 10:45:31.585840   16785 system_pods.go:61] "csi-hostpathplugin-kn528" [9bf84cba-aca0-46f3-827f-73bd8b182cf1] Pending / Ready:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter])
	I0909 10:45:31.585853   16785 system_pods.go:61] "etcd-addons-271785" [b71718eb-054a-4734-93f9-b77e30d49ec5] Running
	I0909 10:45:31.585860   16785 system_pods.go:61] "kube-apiserver-addons-271785" [6c1744ba-36d7-4393-80e0-88725d0386ef] Running
	I0909 10:45:31.585866   16785 system_pods.go:61] "kube-controller-manager-addons-271785" [6e95f0ee-dfdd-489d-9662-404dafe6b803] Running
	I0909 10:45:31.585878   16785 system_pods.go:61] "kube-ingress-dns-minikube" [cadce7d8-dd7a-44de-a164-c2fba6ede595] Running
	I0909 10:45:31.585883   16785 system_pods.go:61] "kube-proxy-2qw8w" [98ba3c1f-cf11-47aa-9e7d-393934752a66] Running
	I0909 10:45:31.585890   16785 system_pods.go:61] "kube-scheduler-addons-271785" [8265d138-c7ec-44cc-907f-5fab14c8d119] Running
	I0909 10:45:31.585901   16785 system_pods.go:61] "metrics-server-84c5f94fbc-jmhrl" [b5e8b788-3dd6-4f12-ad84-911eedfe943d] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0909 10:45:31.585909   16785 system_pods.go:61] "nvidia-device-plugin-daemonset-tdngv" [e84a8fea-2be6-41b4-a429-3b434f6fcb8a] Running
	I0909 10:45:31.585915   16785 system_pods.go:61] "registry-6fb4cdfc84-g5pxq" [19051672-048d-4f1c-8814-35c5fa1de42e] Running
	I0909 10:45:31.585926   16785 system_pods.go:61] "registry-proxy-dsb8t" [ad21aca5-affd-4e4b-9d2e-487316ad11de] Pending / Ready:ContainersNotReady (containers with unready status: [registry-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-proxy])
	I0909 10:45:31.585934   16785 system_pods.go:61] "snapshot-controller-56fcc65765-jktkk" [db496d37-be6c-49d8-9f2c-71c3766e673f] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I0909 10:45:31.585945   16785 system_pods.go:61] "snapshot-controller-56fcc65765-sgwq2" [e148e085-17d3-492b-81a7-a24dc305ea28] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I0909 10:45:31.585951   16785 system_pods.go:61] "storage-provisioner" [e0811adb-b379-4e54-86e2-91d05060bc58] Running
	I0909 10:45:31.585962   16785 system_pods.go:61] "tiller-deploy-b48cc5f79-jsllm" [e269fd50-e1f6-4fe0-a330-8302e46d81af] Pending / Ready:ContainersNotReady (containers with unready status: [tiller]) / ContainersReady:ContainersNotReady (containers with unready status: [tiller])
	I0909 10:45:31.585970   16785 system_pods.go:74] duration metric: took 184.988588ms to wait for pod list to return data ...
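Each entry in the listing above is the pod phase followed by its Ready and ContainersReady conditions, with the bracketed names taken from the container statuses that are not yet Ready. A sketch of how that unready-container detail can be derived, assuming client-go types:

	package main

	import corev1 "k8s.io/api/core/v1"

	// unreadyContainers collects the container names behind the
	// "(containers with unready status: [...])" detail above.
	func unreadyContainers(pod *corev1.Pod) []string {
		var names []string
		for _, cs := range pod.Status.ContainerStatuses {
			if !cs.Ready {
				names = append(names, cs.Name)
			}
		}
		return names
	}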
	I0909 10:45:31.585982   16785 default_sa.go:34] waiting for default service account to be created ...
	I0909 10:45:31.598130   16785 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0909 10:45:31.778967   16785 default_sa.go:45] found service account: "default"
	I0909 10:45:31.778991   16785 default_sa.go:55] duration metric: took 193.00035ms for default service account to be created ...
	I0909 10:45:31.779002   16785 system_pods.go:116] waiting for k8s-apps to be running ...
	I0909 10:45:31.986306   16785 system_pods.go:86] 18 kube-system pods found
	I0909 10:45:31.986337   16785 system_pods.go:89] "coredns-6f6b679f8f-xc4cc" [4a149e0a-43a5-44e7-b91e-439700db0ec3] Running
	I0909 10:45:31.986351   16785 system_pods.go:89] "csi-hostpath-attacher-0" [c667af28-28dd-41e6-86a9-6311794cfe78] Pending / Ready:ContainersNotReady (containers with unready status: [csi-attacher]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-attacher])
	I0909 10:45:31.986360   16785 system_pods.go:89] "csi-hostpath-resizer-0" [a3991825-0f50-4d74-8f14-07c871058034] Pending / Ready:ContainersNotReady (containers with unready status: [csi-resizer]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-resizer])
	I0909 10:45:31.986370   16785 system_pods.go:89] "csi-hostpathplugin-kn528" [9bf84cba-aca0-46f3-827f-73bd8b182cf1] Pending / Ready:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter])
	I0909 10:45:31.986377   16785 system_pods.go:89] "etcd-addons-271785" [b71718eb-054a-4734-93f9-b77e30d49ec5] Running
	I0909 10:45:31.986383   16785 system_pods.go:89] "kube-apiserver-addons-271785" [6c1744ba-36d7-4393-80e0-88725d0386ef] Running
	I0909 10:45:31.986389   16785 system_pods.go:89] "kube-controller-manager-addons-271785" [6e95f0ee-dfdd-489d-9662-404dafe6b803] Running
	I0909 10:45:31.986397   16785 system_pods.go:89] "kube-ingress-dns-minikube" [cadce7d8-dd7a-44de-a164-c2fba6ede595] Running
	I0909 10:45:31.986402   16785 system_pods.go:89] "kube-proxy-2qw8w" [98ba3c1f-cf11-47aa-9e7d-393934752a66] Running
	I0909 10:45:31.986412   16785 system_pods.go:89] "kube-scheduler-addons-271785" [8265d138-c7ec-44cc-907f-5fab14c8d119] Running
	I0909 10:45:31.986420   16785 system_pods.go:89] "metrics-server-84c5f94fbc-jmhrl" [b5e8b788-3dd6-4f12-ad84-911eedfe943d] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0909 10:45:31.986431   16785 system_pods.go:89] "nvidia-device-plugin-daemonset-tdngv" [e84a8fea-2be6-41b4-a429-3b434f6fcb8a] Running
	I0909 10:45:31.986437   16785 system_pods.go:89] "registry-6fb4cdfc84-g5pxq" [19051672-048d-4f1c-8814-35c5fa1de42e] Running
	I0909 10:45:31.986450   16785 system_pods.go:89] "registry-proxy-dsb8t" [ad21aca5-affd-4e4b-9d2e-487316ad11de] Pending / Ready:ContainersNotReady (containers with unready status: [registry-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-proxy])
	I0909 10:45:31.986462   16785 system_pods.go:89] "snapshot-controller-56fcc65765-jktkk" [db496d37-be6c-49d8-9f2c-71c3766e673f] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I0909 10:45:31.986474   16785 system_pods.go:89] "snapshot-controller-56fcc65765-sgwq2" [e148e085-17d3-492b-81a7-a24dc305ea28] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I0909 10:45:31.986485   16785 system_pods.go:89] "storage-provisioner" [e0811adb-b379-4e54-86e2-91d05060bc58] Running
	I0909 10:45:31.986493   16785 system_pods.go:89] "tiller-deploy-b48cc5f79-jsllm" [e269fd50-e1f6-4fe0-a330-8302e46d81af] Pending / Ready:ContainersNotReady (containers with unready status: [tiller]) / ContainersReady:ContainersNotReady (containers with unready status: [tiller])
	I0909 10:45:31.986504   16785 system_pods.go:126] duration metric: took 207.495896ms to wait for k8s-apps to be running ...
	I0909 10:45:31.986517   16785 system_svc.go:44] waiting for kubelet service to be running ....
	I0909 10:45:31.986565   16785 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0909 10:45:32.000045   16785 system_svc.go:56] duration metric: took 13.513404ms WaitForService to wait for kubelet
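The kubelet check is an exit-code test: with --quiet, systemctl prints nothing and exits 0 only when the unit is active; minikube runs it over SSH inside the node container. A local-equivalent sketch:

	package main

	import (
		"fmt"
		"os/exec"
	)

	func main() {
		// Mirrors the command in the log; exits non-zero unless kubelet is active.
		err := exec.Command("sudo", "systemctl", "is-active", "--quiet", "service", "kubelet").Run()
		fmt.Println("kubelet active:", err == nil)
	}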
	I0909 10:45:32.000078   16785 kubeadm.go:582] duration metric: took 35.902621946s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0909 10:45:32.000102   16785 node_conditions.go:102] verifying NodePressure condition ...
	I0909 10:45:32.068837   16785 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0909 10:45:32.069122   16785 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0909 10:45:32.097653   16785 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0909 10:45:32.180108   16785 node_conditions.go:122] node storage ephemeral capacity is 304681132Ki
	I0909 10:45:32.180137   16785 node_conditions.go:123] node cpu capacity is 8
	I0909 10:45:32.180152   16785 node_conditions.go:105] duration metric: took 180.043957ms to run NodePressure ...
	I0909 10:45:32.180166   16785 start.go:241] waiting for startup goroutines ...
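The NodePressure step reads the node's capacity (the 304681132Ki ephemeral storage and 8 CPUs printed above) and confirms none of the pressure conditions are True. A client-go sketch of the same verification (clientset wiring assumed; an illustration, not minikube's exact code):

	package main

	import (
		"context"
		"fmt"

		corev1 "k8s.io/api/core/v1"
		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/client-go/kubernetes"
	)

	// checkNodePressure prints each node's capacity and fails if the node
	// reports memory, disk, or PID pressure.
	func checkNodePressure(ctx context.Context, c kubernetes.Interface) error {
		nodes, err := c.CoreV1().Nodes().List(ctx, metav1.ListOptions{})
		if err != nil {
			return err
		}
		for _, n := range nodes.Items {
			cpu := n.Status.Capacity[corev1.ResourceCPU]
			eph := n.Status.Capacity[corev1.ResourceEphemeralStorage]
			fmt.Printf("%s: cpu=%s ephemeral-storage=%s\n", n.Name, cpu.String(), eph.String())
			for _, cond := range n.Status.Conditions {
				switch cond.Type {
				case corev1.NodeMemoryPressure, corev1.NodeDiskPressure, corev1.NodePIDPressure:
					if cond.Status == corev1.ConditionTrue {
						return fmt.Errorf("node %s under pressure: %s", n.Name, cond.Type)
					}
				}
			}
		}
		return nil
	}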
	I0909 10:45:32.569106   16785 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0909 10:45:32.569212   16785 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0909 10:45:32.598799   16785 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0909 10:45:33.068861   16785 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0909 10:45:33.069345   16785 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0909 10:45:33.098542   16785 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0909 10:45:33.568860   16785 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0909 10:45:33.569216   16785 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0909 10:45:33.597991   16785 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0909 10:45:34.068196   16785 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0909 10:45:34.069237   16785 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0909 10:45:34.098415   16785 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0909 10:45:34.570243   16785 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0909 10:45:34.570429   16785 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0909 10:45:34.598413   16785 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0909 10:45:35.068293   16785 kapi.go:107] duration metric: took 27.003409174s to wait for kubernetes.io/minikube-addons=registry ...
	I0909 10:45:35.068955   16785 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0909 10:45:35.098143   16785 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0909 10:45:35.569935   16785 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0909 10:45:35.598094   16785 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0909 10:45:36.069210   16785 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0909 10:45:36.098102   16785 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0909 10:45:36.568874   16785 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0909 10:45:36.597882   16785 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0909 10:45:37.069042   16785 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0909 10:45:37.098907   16785 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0909 10:45:37.570098   16785 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0909 10:45:37.598444   16785 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0909 10:45:38.070137   16785 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0909 10:45:38.171353   16785 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0909 10:45:38.569058   16785 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0909 10:45:38.598485   16785 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0909 10:45:39.069114   16785 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0909 10:45:39.098333   16785 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0909 10:45:39.568745   16785 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0909 10:45:39.597755   16785 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0909 10:45:40.069265   16785 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0909 10:45:40.098530   16785 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0909 10:45:40.569480   16785 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0909 10:45:40.599113   16785 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0909 10:45:41.069216   16785 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0909 10:45:41.098320   16785 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0909 10:45:41.570016   16785 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0909 10:45:41.598337   16785 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0909 10:45:42.070078   16785 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0909 10:45:42.098298   16785 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0909 10:45:42.569743   16785 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0909 10:45:42.599051   16785 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0909 10:45:43.068988   16785 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0909 10:45:43.097654   16785 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0909 10:45:43.570006   16785 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0909 10:45:43.597823   16785 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0909 10:45:44.068831   16785 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0909 10:45:44.098641   16785 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0909 10:45:44.569562   16785 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0909 10:45:44.598300   16785 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0909 10:45:45.069579   16785 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0909 10:45:45.098795   16785 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0909 10:45:45.570419   16785 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0909 10:45:45.598665   16785 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0909 10:45:46.070110   16785 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0909 10:45:46.098378   16785 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0909 10:45:46.569338   16785 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0909 10:45:46.598265   16785 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0909 10:45:47.069168   16785 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0909 10:45:47.098544   16785 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0909 10:45:47.569848   16785 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0909 10:45:47.597792   16785 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0909 10:45:48.069550   16785 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0909 10:45:48.153266   16785 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	[... 71 similar "waiting for pod" poll lines elided: both selectors polled every ~500 ms from 10:45:48 to 10:46:06, state Pending throughout ...]
	I0909 10:46:06.098115   16785 kapi.go:107] duration metric: took 56.504195187s to wait for kubernetes.io/minikube-addons=csi-hostpath-driver ...
	I0909 10:46:06.569028   16785 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	[... 16 similar poll lines elided: app.kubernetes.io/name=ingress-nginx polled every ~500 ms from 10:46:07 to 10:46:14, state Pending throughout ...]
	I0909 10:46:15.068722   16785 kapi.go:107] duration metric: took 1m7.003811979s to wait for app.kubernetes.io/name=ingress-nginx ...
	I0909 10:46:34.764715   16785 kapi.go:86] Found 1 Pods for label selector kubernetes.io/minikube-addons=gcp-auth
	I0909 10:46:34.764737   16785 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	[... 139 similar poll lines elided: kubernetes.io/minikube-addons=gcp-auth polled every ~500 ms from 10:46:35 to 10:47:44, state Pending throughout ...]
	I0909 10:47:44.764106   16785 kapi.go:107] duration metric: took 2m33.502869802s to wait for kubernetes.io/minikube-addons=gcp-auth ...
	I0909 10:47:44.765684   16785 out.go:177] * Your GCP credentials will now be mounted into every pod created in the addons-271785 cluster.
	I0909 10:47:44.766841   16785 out.go:177] * If you don't want your credentials mounted into a specific pod, add a label with the `gcp-auth-skip-secret` key to your pod configuration.
	I0909 10:47:44.767966   16785 out.go:177] * If you want existing pods to be mounted with credentials, either recreate them or rerun addons enable with --refresh.
	I0909 10:47:44.769336   16785 out.go:177] * Enabled addons: cloud-spanner, volcano, storage-provisioner, nvidia-device-plugin, ingress-dns, helm-tiller, metrics-server, inspektor-gadget, yakd, storage-provisioner-rancher, volumesnapshots, registry, csi-hostpath-driver, ingress, gcp-auth
	I0909 10:47:44.770607   16785 addons.go:510] duration metric: took 2m48.673107629s for enable addons: enabled=[cloud-spanner volcano storage-provisioner nvidia-device-plugin ingress-dns helm-tiller metrics-server inspektor-gadget yakd storage-provisioner-rancher volumesnapshots registry csi-hostpath-driver ingress gcp-auth]
	I0909 10:47:44.770654   16785 start.go:246] waiting for cluster config update ...
	I0909 10:47:44.770678   16785 start.go:255] writing updated cluster config ...
	I0909 10:47:44.770936   16785 ssh_runner.go:195] Run: rm -f paused
	I0909 10:47:44.818524   16785 start.go:600] kubectl: 1.31.0, cluster: 1.31.0 (minor skew: 0)
	I0909 10:47:44.820215   16785 out.go:177] * Done! kubectl is now configured to use "addons-271785" cluster and "default" namespace by default
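	
	The two gcp-auth hints printed at 10:47:44 can be acted on directly. A minimal sketch, assuming the standard kubectl and minikube CLIs and a hypothetical pod name "no-creds"; the gcp-auth-skip-secret label has to be present when the pod is created, since the webhook mutates pods at admission time and, per the message above, existing pods need a recreate or --refresh:
	
	    # Create a pod that opts out of GCP credential injection (label set at creation)
	    kubectl --context addons-271785 run no-creds --image=nginx --labels="gcp-auth-skip-secret=true"
	
	    # Re-mount credentials into pods created before the addon was ready
	    minikube -p addons-271785 addons enable gcp-auth --refresh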
	
	
	==> Docker <==
	Sep 09 10:57:18 addons-271785 dockerd[1341]: time="2024-09-09T10:57:18.266392114Z" level=info msg="ignoring event" container=0ed45814ba1d2db508cb0e68be7078b2b52bf43533472832aa41e6e63c82fa9e module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Sep 09 10:57:18 addons-271785 dockerd[1341]: time="2024-09-09T10:57:18.269424386Z" level=info msg="ignoring event" container=4a3262d776fe3850b5b316d80bba59e1cb87b600e4ad18bf17da4238c927daae module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Sep 09 10:57:18 addons-271785 dockerd[1341]: time="2024-09-09T10:57:18.439076732Z" level=info msg="ignoring event" container=5594a069000a201b004e0f9223cc815dd07988c20dd3af407d5963a34c7f2f0e module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Sep 09 10:57:18 addons-271785 dockerd[1341]: time="2024-09-09T10:57:18.478857068Z" level=info msg="ignoring event" container=22ce97c7ff82343c85276c9d775dbbe640e8eb1157083d15cc24dc80db1f63a7 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Sep 09 10:57:19 addons-271785 cri-dockerd[1606]: time="2024-09-09T10:57:19Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/1cf044f9074b2bc33e433dccd76cd0343d0e4ea9b824e48236f5ecc1bdfd3617/resolv.conf as [nameserver 10.96.0.10 search default.svc.cluster.local svc.cluster.local cluster.local europe-west1-b.c.k8s-minikube.internal c.k8s-minikube.internal google.internal options ndots:5]"
	Sep 09 10:57:22 addons-271785 cri-dockerd[1606]: time="2024-09-09T10:57:22Z" level=info msg="Stop pulling image docker.io/nginx:alpine: Status: Downloaded newer image for nginx:alpine"
	Sep 09 10:57:22 addons-271785 dockerd[1341]: time="2024-09-09T10:57:22.789176529Z" level=info msg="Attempting next endpoint for pull after error: Head \"https://gcr.io/v2/k8s-minikube/busybox/manifests/latest\": unauthorized: authentication failed"
	Sep 09 10:57:22 addons-271785 dockerd[1341]: time="2024-09-09T10:57:22.791165480Z" level=error msg="Handler for POST /v1.43/images/create returned error: Head \"https://gcr.io/v2/k8s-minikube/busybox/manifests/latest\": unauthorized: authentication failed"
	Sep 09 10:57:25 addons-271785 dockerd[1341]: time="2024-09-09T10:57:25.582067653Z" level=info msg="Container failed to exit within 30s of signal 15 - using the force" container=d2900ca675659c048969b5583fa937c074208babd6254c61f7fde5d2acf2e61a
	Sep 09 10:57:25 addons-271785 dockerd[1341]: time="2024-09-09T10:57:25.603905680Z" level=info msg="ignoring event" container=d2900ca675659c048969b5583fa937c074208babd6254c61f7fde5d2acf2e61a module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Sep 09 10:57:25 addons-271785 dockerd[1341]: time="2024-09-09T10:57:25.725879392Z" level=info msg="ignoring event" container=72bf5273b84495c140c234db2fedaad8d856b9063907f62e100f0a6113fb11da module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Sep 09 10:57:29 addons-271785 cri-dockerd[1606]: time="2024-09-09T10:57:29Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/b6bb3e4018f8c08351b178e5ee0762195d5fc804bd9d53055557367216d9ddd4/resolv.conf as [nameserver 10.96.0.10 search default.svc.cluster.local svc.cluster.local cluster.local europe-west1-b.c.k8s-minikube.internal c.k8s-minikube.internal google.internal options ndots:5]"
	Sep 09 10:57:30 addons-271785 dockerd[1341]: time="2024-09-09T10:57:30.079371025Z" level=info msg="ignoring event" container=acef1a4f7f0225f5c9023625762d4b94e4f2573b56e82404b54438395a899e5c module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Sep 09 10:57:30 addons-271785 dockerd[1341]: time="2024-09-09T10:57:30.122610369Z" level=info msg="ignoring event" container=ec9d5179df8546a830110aa12470f6fb2eaa3f556bbdef37921e965154bf3fc6 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Sep 09 10:57:31 addons-271785 cri-dockerd[1606]: time="2024-09-09T10:57:31Z" level=info msg="Stop pulling image docker.io/kicbase/echo-server:1.0: Status: Downloaded newer image for kicbase/echo-server:1.0"
	Sep 09 10:57:31 addons-271785 cri-dockerd[1606]: time="2024-09-09T10:57:31Z" level=error msg="error getting RW layer size for container ID 'acef1a4f7f0225f5c9023625762d4b94e4f2573b56e82404b54438395a899e5c': Error response from daemon: No such container: acef1a4f7f0225f5c9023625762d4b94e4f2573b56e82404b54438395a899e5c"
	Sep 09 10:57:31 addons-271785 cri-dockerd[1606]: time="2024-09-09T10:57:31Z" level=error msg="Set backoffDuration to : 1m0s for container ID 'acef1a4f7f0225f5c9023625762d4b94e4f2573b56e82404b54438395a899e5c'"
	Sep 09 10:57:33 addons-271785 dockerd[1341]: time="2024-09-09T10:57:33.874197751Z" level=info msg="Container failed to exit within 2s of signal 15 - using the force" container=339378f2559d5c1e7c29dbd4eacc9e35065c3a0af1a748743bfedb7040d5798f
	Sep 09 10:57:33 addons-271785 dockerd[1341]: time="2024-09-09T10:57:33.938507690Z" level=info msg="ignoring event" container=339378f2559d5c1e7c29dbd4eacc9e35065c3a0af1a748743bfedb7040d5798f module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Sep 09 10:57:34 addons-271785 dockerd[1341]: time="2024-09-09T10:57:34.074672709Z" level=info msg="ignoring event" container=3e3df20fb697e4ff3329d775d38727b31f39058cf85f4976bf9a1a7ef424e755 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Sep 09 10:57:38 addons-271785 dockerd[1341]: time="2024-09-09T10:57:38.089542468Z" level=info msg="ignoring event" container=4e0e0fb3807cb35ec5139095e8044e948fca346a498e444472c5387f8694746e module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Sep 09 10:57:38 addons-271785 dockerd[1341]: time="2024-09-09T10:57:38.673580269Z" level=info msg="ignoring event" container=eb3666654387c563813b8183daaf520231a6e980722f06dbffe97190cf3d17ec module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Sep 09 10:57:38 addons-271785 dockerd[1341]: time="2024-09-09T10:57:38.759994151Z" level=info msg="ignoring event" container=c21bf18dbff2c4c0727e1617a3d4f6f6e7152448111e048ab49811a590c3001d module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Sep 09 10:57:38 addons-271785 dockerd[1341]: time="2024-09-09T10:57:38.839685302Z" level=info msg="ignoring event" container=6500ac5b8a7f6fa13baf598dc8a351a8113f1d4704e6252ea1e316fcb5fe2044 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Sep 09 10:57:38 addons-271785 dockerd[1341]: time="2024-09-09T10:57:38.904705557Z" level=info msg="ignoring event" container=406dcf66bc4785517829a439955bc27e1d3c2c66510a0f339f9f3d63454815ab module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
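	
	The two dockerd entries at 10:57:22 (unauthorized Head on gcr.io/v2/k8s-minikube/busybox/manifests/latest) are the most likely cause of the registry test's wget pod timing out: the test image was never pulled. A quick check to separate a node-side pull problem from a registry-side one (a sketch, assuming the docker CLI is reachable through minikube ssh):
	
	    # Retry the exact pull that failed inside the cluster
	    minikube -p addons-271785 ssh -- docker pull gcr.io/k8s-minikube/busybox:latest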
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                                        CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	896bb364882b3       kicbase/echo-server@sha256:127ac38a2bb9537b7f252addff209ea6801edcac8a92c8b1104dacd66a583ed6                                  8 seconds ago       Running             hello-world-app           0                   b6bb3e4018f8c       hello-world-app-55bf9c44b4-hrz4s
	a8a5e1c2e8d3d       nginx@sha256:a5127daff3d6f4606be3100a252419bfa84fd6ee5cd74d0feaca1a5068f97dcf                                                17 seconds ago      Running             nginx                     0                   1cf044f9074b2       nginx
	4099d2673ec9d       a416a98b71e22                                                                                                                44 seconds ago      Exited              helper-pod                0                   e8b268ca19228       helper-pod-delete-pvc-88f0cab2-ac8e-4b40-842d-f0e3d852d155
	246ab1459c224       busybox@sha256:34b191d63fbc93e25e275bfccf1b5365664e5ac28f06d974e8d50090fbb49f41                                              48 seconds ago      Exited              busybox                   0                   0d0a53e91d710       test-local-path
	289cac7194941       gcr.io/k8s-minikube/gcp-auth-webhook@sha256:e6c5b3bc32072ea370d34c27836efd11b3519d25bd444c2a8efc339cff0e20fb                 9 minutes ago       Running             gcp-auth                  0                   dbaaf980f78e5       gcp-auth-89d5ffd79-62gbp
	0947080312b3d       registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:a320a50cc91bd15fd2d6fa6de58bd98c1bd64b9a6f926ce23a600d87043455a3   11 minutes ago      Exited              patch                     0                   d066a550f31f6       ingress-nginx-admission-patch-bw7kt
	16c89e1623913       registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:a320a50cc91bd15fd2d6fa6de58bd98c1bd64b9a6f926ce23a600d87043455a3   11 minutes ago      Exited              create                    0                   4b419f266e6c1       ingress-nginx-admission-create-xd2cr
	2874786bcc709       6e38f40d628db                                                                                                                12 minutes ago      Running             storage-provisioner       0                   a80172553100e       storage-provisioner
	226a97d5587a0       cbb01a7bd410d                                                                                                                12 minutes ago      Running             coredns                   0                   139674d0fde52       coredns-6f6b679f8f-xc4cc
	05c87dd74bb34       ad83b2ca7b09e                                                                                                                12 minutes ago      Running             kube-proxy                0                   e12e79d589e5f       kube-proxy-2qw8w
	7cb656dbc9404       1766f54c897f0                                                                                                                12 minutes ago      Running             kube-scheduler            0                   877ce324fcfac       kube-scheduler-addons-271785
	529f303634a5d       045733566833c                                                                                                                12 minutes ago      Running             kube-controller-manager   0                   a940bba3a8707       kube-controller-manager-addons-271785
	acb1801eea7f4       2e96e5913fc06                                                                                                                12 minutes ago      Running             etcd                      0                   209cda1845a20       etcd-addons-271785
	e3d6d9406589c       604f5db92eaa8                                                                                                                12 minutes ago      Running             kube-apiserver            0                   fd98540cc3bfe       kube-apiserver-addons-271785
	
	
	==> coredns [226a97d5587a] <==
	[INFO] 10.244.0.7:47898 - 19112 "A IN registry.kube-system.svc.cluster.local.svc.cluster.local. udp 74 false 512" NXDOMAIN qr,aa,rd 167 0.000101777s
	[INFO] 10.244.0.7:49714 - 38829 "AAAA IN registry.kube-system.svc.cluster.local.cluster.local. udp 70 false 512" NXDOMAIN qr,aa,rd 163 0.000052929s
	[INFO] 10.244.0.7:49714 - 40111 "A IN registry.kube-system.svc.cluster.local.cluster.local. udp 70 false 512" NXDOMAIN qr,aa,rd 163 0.000072918s
	[INFO] 10.244.0.7:56501 - 27099 "A IN registry.kube-system.svc.cluster.local.europe-west1-b.c.k8s-minikube.internal. udp 95 false 512" NXDOMAIN qr,rd,ra 95 0.004472368s
	[INFO] 10.244.0.7:56501 - 34271 "AAAA IN registry.kube-system.svc.cluster.local.europe-west1-b.c.k8s-minikube.internal. udp 95 false 512" NXDOMAIN qr,rd,ra 95 0.008068028s
	[INFO] 10.244.0.7:51555 - 22310 "AAAA IN registry.kube-system.svc.cluster.local.c.k8s-minikube.internal. udp 80 false 512" NXDOMAIN qr,rd,ra 80 0.004923762s
	[INFO] 10.244.0.7:51555 - 58393 "A IN registry.kube-system.svc.cluster.local.c.k8s-minikube.internal. udp 80 false 512" NXDOMAIN qr,rd,ra 80 0.00940149s
	[INFO] 10.244.0.7:49232 - 527 "A IN registry.kube-system.svc.cluster.local.google.internal. udp 72 false 512" NXDOMAIN qr,rd,ra 72 0.004244518s
	[INFO] 10.244.0.7:49232 - 25353 "AAAA IN registry.kube-system.svc.cluster.local.google.internal. udp 72 false 512" NXDOMAIN qr,rd,ra 72 0.014181837s
	[INFO] 10.244.0.7:53833 - 30679 "AAAA IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 149 0.000068931s
	[INFO] 10.244.0.7:53833 - 28881 "A IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 110 0.000109867s
	[INFO] 10.244.0.26:47993 - 43472 "A IN storage.googleapis.com.gcp-auth.svc.cluster.local. udp 78 false 1232" NXDOMAIN qr,aa,rd 160 0.000256472s
	[INFO] 10.244.0.26:39436 - 11154 "AAAA IN storage.googleapis.com.gcp-auth.svc.cluster.local. udp 78 false 1232" NXDOMAIN qr,aa,rd 160 0.000336047s
	[INFO] 10.244.0.26:53627 - 63794 "A IN storage.googleapis.com.svc.cluster.local. udp 69 false 1232" NXDOMAIN qr,aa,rd 151 0.000118597s
	[INFO] 10.244.0.26:53410 - 61496 "AAAA IN storage.googleapis.com.svc.cluster.local. udp 69 false 1232" NXDOMAIN qr,aa,rd 151 0.00018251s
	[INFO] 10.244.0.26:40694 - 30586 "A IN storage.googleapis.com.cluster.local. udp 65 false 1232" NXDOMAIN qr,aa,rd 147 0.000103864s
	[INFO] 10.244.0.26:60823 - 57652 "AAAA IN storage.googleapis.com.cluster.local. udp 65 false 1232" NXDOMAIN qr,aa,rd 147 0.00014825s
	[INFO] 10.244.0.26:33363 - 25628 "AAAA IN storage.googleapis.com.europe-west1-b.c.k8s-minikube.internal. udp 90 false 1232" NXDOMAIN qr,rd,ra 79 0.006692325s
	[INFO] 10.244.0.26:54746 - 45991 "A IN storage.googleapis.com.europe-west1-b.c.k8s-minikube.internal. udp 90 false 1232" NXDOMAIN qr,rd,ra 79 0.007446308s
	[INFO] 10.244.0.26:40126 - 32577 "A IN storage.googleapis.com.c.k8s-minikube.internal. udp 75 false 1232" NXDOMAIN qr,rd,ra 64 0.008136972s
	[INFO] 10.244.0.26:50872 - 13500 "AAAA IN storage.googleapis.com.c.k8s-minikube.internal. udp 75 false 1232" NXDOMAIN qr,rd,ra 64 0.009135357s
	[INFO] 10.244.0.26:42237 - 41837 "AAAA IN storage.googleapis.com.google.internal. udp 67 false 1232" NXDOMAIN qr,rd,ra 56 0.005136499s
	[INFO] 10.244.0.26:37057 - 43691 "A IN storage.googleapis.com.google.internal. udp 67 false 1232" NXDOMAIN qr,rd,ra 56 0.009384324s
	[INFO] 10.244.0.26:50890 - 40458 "AAAA IN storage.googleapis.com. udp 51 false 1232" NOERROR qr,rd,ra 140 0.001921982s
	[INFO] 10.244.0.26:42884 - 53622 "A IN storage.googleapis.com. udp 51 false 1232" NOERROR qr,rd,ra 116 0.002026719s
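	
	The NXDOMAIN bursts above are the expected effect of the resolv.conf cri-dockerd wrote for these pods (ndots:5 plus the cluster and GCE search domains): a name with fewer than five dots is tried against every search suffix before being resolved as-is, so each lookup fans out into several failing queries first. A trailing dot marks a name fully qualified and skips the expansion (a sketch, runnable from any pod with busybox's nslookup):
	
	    # Fully qualified (trailing dot): resolved as-is, one query
	    nslookup registry.kube-system.svc.cluster.local.
	
	    # Unqualified (4 dots < ndots:5): walks the search list first, producing NXDOMAINs like those above
	    nslookup registry.kube-system.svc.cluster.local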
	
	
	==> describe nodes <==
	Name:               addons-271785
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=addons-271785
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=cf17d6b4040a54caaa170f92a048a513bb2a2b0d
	                    minikube.k8s.io/name=addons-271785
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_09_09T10_44_51_0700
	                    minikube.k8s.io/version=v1.34.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	                    topology.hostpath.csi/node=addons-271785
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/cri-dockerd.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 09 Sep 2024 10:44:48 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  addons-271785
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 09 Sep 2024 10:57:36 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 09 Sep 2024 10:57:27 +0000   Mon, 09 Sep 2024 10:44:47 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 09 Sep 2024 10:57:27 +0000   Mon, 09 Sep 2024 10:44:47 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 09 Sep 2024 10:57:27 +0000   Mon, 09 Sep 2024 10:44:47 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 09 Sep 2024 10:57:27 +0000   Mon, 09 Sep 2024 10:44:48 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.49.2
	  Hostname:    addons-271785
	Capacity:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32859320Ki
	  pods:               110
	Allocatable:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32859320Ki
	  pods:               110
	System Info:
	  Machine ID:                 e5c0832c58ff4e57a0457258493163ed
	  System UUID:                05d5e461-ab29-4115-89a4-f7fb02cb90b0
	  Boot ID:                    51edb45c-9c14-46a0-b4bd-bdee90b8f8a3
	  Kernel Version:             5.15.0-1067-gcp
	  OS Image:                   Ubuntu 22.04.4 LTS
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  docker://27.2.0
	  Kubelet Version:            v1.31.0
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (11 in total)
	  Namespace                   Name                                     CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                     ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         9m14s
	  default                     hello-world-app-55bf9c44b4-hrz4s         0 (0%)        0 (0%)      0 (0%)           0 (0%)         10s
	  default                     nginx                                    0 (0%)        0 (0%)      0 (0%)           0 (0%)         21s
	  gcp-auth                    gcp-auth-89d5ffd79-62gbp                 0 (0%)        0 (0%)      0 (0%)           0 (0%)         11m
	  kube-system                 coredns-6f6b679f8f-xc4cc                 100m (1%)     0 (0%)      70Mi (0%)        170Mi (0%)     12m
	  kube-system                 etcd-addons-271785                       100m (1%)     0 (0%)      100Mi (0%)       0 (0%)         12m
	  kube-system                 kube-apiserver-addons-271785             250m (3%)     0 (0%)      0 (0%)           0 (0%)         12m
	  kube-system                 kube-controller-manager-addons-271785    200m (2%)     0 (0%)      0 (0%)           0 (0%)         12m
	  kube-system                 kube-proxy-2qw8w                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         12m
	  kube-system                 kube-scheduler-addons-271785             100m (1%)     0 (0%)      0 (0%)           0 (0%)         12m
	  kube-system                 storage-provisioner                      0 (0%)        0 (0%)      0 (0%)           0 (0%)         12m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (9%)   0 (0%)
	  memory             170Mi (0%)  170Mi (0%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type     Reason                   Age                From             Message
	  ----     ------                   ----               ----             -------
	  Normal   Starting                 12m                kube-proxy       
	  Normal   NodeHasSufficientMemory  12m (x8 over 12m)  kubelet          Node addons-271785 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    12m (x7 over 12m)  kubelet          Node addons-271785 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     12m (x7 over 12m)  kubelet          Node addons-271785 status is now: NodeHasSufficientPID
	  Normal   NodeAllocatableEnforced  12m                kubelet          Updated Node Allocatable limit across pods
	  Normal   Starting                 12m                kubelet          Starting kubelet.
	  Warning  CgroupV1                 12m                kubelet          Cgroup v1 support is in maintenance mode, please migrate to Cgroup v2.
	  Normal   NodeAllocatableEnforced  12m                kubelet          Updated Node Allocatable limit across pods
	  Normal   NodeHasSufficientMemory  12m                kubelet          Node addons-271785 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    12m                kubelet          Node addons-271785 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     12m                kubelet          Node addons-271785 status is now: NodeHasSufficientPID
	  Normal   RegisteredNode           12m                node-controller  Node addons-271785 event: Registered Node addons-271785 in Controller
	
	
	==> dmesg <==
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff 56 62 c7 cf 70 b8 08 06
	[Sep 9 10:46] IPv4: martian source 10.244.0.1 from 10.244.0.19, on dev eth0
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff f6 37 25 22 c7 71 08 06
	[  +0.333759] IPv4: martian source 10.244.0.1 from 10.244.0.21, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff 5e c9 5a cd 5d 31 08 06
	[  +0.039911] IPv4: martian source 10.244.0.1 from 10.244.0.20, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff 46 b7 4f b0 f1 95 08 06
	[ +10.766152] IPv4: martian source 10.244.0.1 from 10.244.0.22, on dev eth0
	[  +0.000005] ll header: 00000000: ff ff ff ff ff ff 36 a3 16 d3 b6 1f 08 06
	[  +1.026280] IPv4: martian source 10.244.0.1 from 10.244.0.23, on dev eth0
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff 0e a7 e0 5a 81 46 08 06
	[Sep 9 10:47] IPv4: martian source 10.244.0.1 from 10.244.0.24, on dev eth0
	[  +0.000015] ll header: 00000000: ff ff ff ff ff ff 7a 0b e5 40 23 d9 08 06
	[  +0.109009] IPv4: martian source 10.244.0.1 from 10.244.0.25, on dev eth0
	[  +0.000005] ll header: 00000000: ff ff ff ff ff ff 1a 44 87 05 2c 6b 08 06
	[ +28.562989] IPv4: martian source 10.244.0.1 from 10.244.0.26, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff 12 35 ce af 6a 6f 08 06
	[  +0.000445] IPv4: martian source 10.244.0.26 from 10.244.0.2, on dev eth0
	[  +0.000004] ll header: 00000000: ff ff ff ff ff ff ee 37 0d 65 3f 4f 08 06
	[Sep 9 10:56] IPv4: martian source 10.244.0.1 from 10.244.0.29, on dev eth0
	[  +0.000009] ll header: 00000000: ff ff ff ff ff ff 32 bb 6d 43 67 4c 08 06
	[  +3.493563] IPv4: martian source 10.244.0.1 from 10.244.0.31, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff 42 2e 1e a8 f0 8f 08 06
	[Sep 9 10:57] IPv4: martian source 10.244.0.38 from 10.244.0.22, on dev eth0
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff 36 a3 16 d3 b6 1f 08 06
	
	
	==> etcd [acb1801eea7f] <==
	{"level":"info","ts":"2024-09-09T10:44:46.459982Z","caller":"embed/etcd.go:571","msg":"cmux::serve","address":"192.168.49.2:2380"}
	{"level":"info","ts":"2024-09-09T10:44:47.191754Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aec36adc501070cc is starting a new election at term 1"}
	{"level":"info","ts":"2024-09-09T10:44:47.191791Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aec36adc501070cc became pre-candidate at term 1"}
	{"level":"info","ts":"2024-09-09T10:44:47.191829Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aec36adc501070cc received MsgPreVoteResp from aec36adc501070cc at term 1"}
	{"level":"info","ts":"2024-09-09T10:44:47.191854Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aec36adc501070cc became candidate at term 2"}
	{"level":"info","ts":"2024-09-09T10:44:47.191864Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aec36adc501070cc received MsgVoteResp from aec36adc501070cc at term 2"}
	{"level":"info","ts":"2024-09-09T10:44:47.191871Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aec36adc501070cc became leader at term 2"}
	{"level":"info","ts":"2024-09-09T10:44:47.191881Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: aec36adc501070cc elected leader aec36adc501070cc at term 2"}
	{"level":"info","ts":"2024-09-09T10:44:47.192818Z","caller":"etcdserver/server.go:2118","msg":"published local member to cluster through raft","local-member-id":"aec36adc501070cc","local-member-attributes":"{Name:addons-271785 ClientURLs:[https://192.168.49.2:2379]}","request-path":"/0/members/aec36adc501070cc/attributes","cluster-id":"fa54960ea34d58be","publish-timeout":"7s"}
	{"level":"info","ts":"2024-09-09T10:44:47.192899Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-09-09T10:44:47.192969Z","caller":"etcdserver/server.go:2629","msg":"setting up initial cluster version using v2 API","cluster-version":"3.5"}
	{"level":"info","ts":"2024-09-09T10:44:47.193009Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-09-09T10:44:47.193057Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2024-09-09T10:44:47.193082Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2024-09-09T10:44:47.193560Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"fa54960ea34d58be","local-member-id":"aec36adc501070cc","cluster-version":"3.5"}
	{"level":"info","ts":"2024-09-09T10:44:47.193640Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2024-09-09T10:44:47.193671Z","caller":"etcdserver/server.go:2653","msg":"cluster version is updated","cluster-version":"3.5"}
	{"level":"info","ts":"2024-09-09T10:44:47.194180Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2024-09-09T10:44:47.194385Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2024-09-09T10:44:47.194891Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2024-09-09T10:44:47.195157Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.49.2:2379"}
	{"level":"info","ts":"2024-09-09T10:46:13.678895Z","caller":"traceutil/trace.go:171","msg":"trace[1524646620] transaction","detail":"{read_only:false; response_revision:1276; number_of_response:1; }","duration":"103.77411ms","start":"2024-09-09T10:46:13.575101Z","end":"2024-09-09T10:46:13.678875Z","steps":["trace[1524646620] 'process raft request'  (duration: 45.238888ms)","trace[1524646620] 'compare'  (duration: 58.442505ms)"],"step_count":2}
	{"level":"info","ts":"2024-09-09T10:54:47.210330Z","caller":"mvcc/index.go:214","msg":"compact tree index","revision":1905}
	{"level":"info","ts":"2024-09-09T10:54:47.235294Z","caller":"mvcc/kvstore_compaction.go:69","msg":"finished scheduled compaction","compact-revision":1905,"took":"24.37667ms","hash":4020332034,"current-db-size-bytes":9084928,"current-db-size":"9.1 MB","current-db-size-in-use-bytes":5066752,"current-db-size-in-use":"5.1 MB"}
	{"level":"info","ts":"2024-09-09T10:54:47.235344Z","caller":"mvcc/hash.go:137","msg":"storing new hash","hash":4020332034,"revision":1905,"compact-revision":-1}
	
	
	==> gcp-auth [289cac719494] <==
	2024/09/09 10:56:32 Ready to write response ...
	2024/09/09 10:56:37 Ready to marshal response ...
	2024/09/09 10:56:37 Ready to write response ...
	2024/09/09 10:56:38 Ready to marshal response ...
	2024/09/09 10:56:38 Ready to write response ...
	2024/09/09 10:56:40 Ready to marshal response ...
	2024/09/09 10:56:40 Ready to write response ...
	2024/09/09 10:56:44 Ready to marshal response ...
	2024/09/09 10:56:44 Ready to write response ...
	2024/09/09 10:56:44 Ready to marshal response ...
	2024/09/09 10:56:44 Ready to write response ...
	2024/09/09 10:56:53 Ready to marshal response ...
	2024/09/09 10:56:53 Ready to write response ...
	2024/09/09 10:56:53 Ready to marshal response ...
	2024/09/09 10:56:53 Ready to write response ...
	2024/09/09 10:56:53 Ready to marshal response ...
	2024/09/09 10:56:53 Ready to write response ...
	2024/09/09 10:56:54 Ready to marshal response ...
	2024/09/09 10:56:54 Ready to write response ...
	2024/09/09 10:57:01 Ready to marshal response ...
	2024/09/09 10:57:01 Ready to write response ...
	2024/09/09 10:57:18 Ready to marshal response ...
	2024/09/09 10:57:18 Ready to write response ...
	2024/09/09 10:57:29 Ready to marshal response ...
	2024/09/09 10:57:29 Ready to write response ...
	
	
	==> kernel <==
	 10:57:39 up 39 min,  0 users,  load average: 1.25, 0.56, 0.33
	Linux addons-271785 5.15.0-1067-gcp #75~20.04.1-Ubuntu SMP Wed Aug 7 20:43:22 UTC 2024 x86_64 x86_64 x86_64 GNU/Linux
	PRETTY_NAME="Ubuntu 22.04.4 LTS"
	
	
	==> kube-apiserver [e3d6d9406589] <==
	W0909 10:48:18.068121       1 cacher.go:171] Terminating all watchers from cacher jobtemplates.flow.volcano.sh
	E0909 10:56:35.904225       1 conn.go:339] Error on socket receive: read tcp 192.168.49.2:8443->192.168.49.1:58092: use of closed network connection
	E0909 10:56:39.468976       1 upgradeaware.go:427] Error proxying data from client to backend: read tcp 192.168.49.2:8443->10.244.0.31:39238: read: connection reset by peer
	I0909 10:56:42.212038       1 controller.go:129] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Nothing (removed from the queue).
	I0909 10:56:47.206485       1 controller.go:615] quota admission added evaluator for: volumesnapshots.snapshot.storage.k8s.io
	I0909 10:56:53.145825       1 alloc.go:330] "allocated clusterIPs" service="headlamp/headlamp" clusterIPs={"IPv4":"10.109.198.163"}
	E0909 10:57:10.881533       1 authentication.go:73] "Unable to authenticate the request" err="[invalid bearer token, serviceaccounts \"local-path-provisioner-service-account\" not found]"
	I0909 10:57:17.126426       1 handler.go:286] Adding GroupVersion gadget.kinvolk.io v1alpha1 to ResourceManager
	I0909 10:57:18.113733       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0909 10:57:18.113779       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I0909 10:57:18.150298       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0909 10:57:18.150346       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I0909 10:57:18.163632       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0909 10:57:18.163695       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	W0909 10:57:18.174606       1 cacher.go:171] Terminating all watchers from cacher traces.gadget.kinvolk.io
	I0909 10:57:18.175218       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0909 10:57:18.175251       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I0909 10:57:18.181406       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0909 10:57:18.181439       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I0909 10:57:18.711000       1 controller.go:615] quota admission added evaluator for: ingresses.networking.k8s.io
	I0909 10:57:18.910522       1 alloc.go:330] "allocated clusterIPs" service="default/nginx" clusterIPs={"IPv4":"10.97.112.151"}
	W0909 10:57:19.175261       1 cacher.go:171] Terminating all watchers from cacher volumesnapshotcontents.snapshot.storage.k8s.io
	W0909 10:57:19.181984       1 cacher.go:171] Terminating all watchers from cacher volumesnapshots.snapshot.storage.k8s.io
	W0909 10:57:19.253066       1 cacher.go:171] Terminating all watchers from cacher volumesnapshotclasses.snapshot.storage.k8s.io
	I0909 10:57:29.457531       1 alloc.go:330] "allocated clusterIPs" service="default/hello-world-app" clusterIPs={"IPv4":"10.105.227.107"}
	
	
	==> kube-controller-manager [529f303634a5] <==
	W0909 10:57:27.641632       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0909 10:57:27.641684       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	I0909 10:57:27.917658       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="addons-271785"
	W0909 10:57:28.668328       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0909 10:57:28.668373       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	I0909 10:57:29.269029       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/hello-world-app-55bf9c44b4" duration="11.373584ms"
	I0909 10:57:29.272496       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/hello-world-app-55bf9c44b4" duration="3.417967ms"
	I0909 10:57:29.272605       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/hello-world-app-55bf9c44b4" duration="70.492µs"
	I0909 10:57:29.278334       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/hello-world-app-55bf9c44b4" duration="38.731µs"
	I0909 10:57:30.852905       1 job_controller.go:568] "enqueueing job" logger="job-controller" key="ingress-nginx/ingress-nginx-admission-create" delay="0s"
	I0909 10:57:30.853286       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="ingress-nginx/ingress-nginx-controller-bc57996ff" duration="5.58µs"
	I0909 10:57:30.856776       1 job_controller.go:568] "enqueueing job" logger="job-controller" key="ingress-nginx/ingress-nginx-admission-patch" delay="0s"
	I0909 10:57:32.424776       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/hello-world-app-55bf9c44b4" duration="4.636277ms"
	I0909 10:57:32.424871       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/hello-world-app-55bf9c44b4" duration="56.757µs"
	W0909 10:57:33.638582       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0909 10:57:33.638622       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W0909 10:57:35.945943       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0909 10:57:35.945978       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	I0909 10:57:38.520723       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/registry-6fb4cdfc84" duration="5.059µs"
	W0909 10:57:38.863916       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0909 10:57:38.863951       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W0909 10:57:39.418127       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0909 10:57:39.418170       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W0909 10:57:39.580307       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0909 10:57:39.580348       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	
	
	==> kube-proxy [05c87dd74bb3] <==
	I0909 10:44:58.267657       1 server_linux.go:66] "Using iptables proxy"
	I0909 10:44:58.950692       1 server.go:677] "Successfully retrieved node IP(s)" IPs=["192.168.49.2"]
	E0909 10:44:58.950787       1 server.go:234] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I0909 10:44:59.261857       1 server.go:243] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I0909 10:44:59.261924       1 server_linux.go:169] "Using iptables Proxier"
	I0909 10:44:59.266405       1 proxier.go:255] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I0909 10:44:59.266824       1 server.go:483] "Version info" version="v1.31.0"
	I0909 10:44:59.266844       1 server.go:485] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0909 10:44:59.269418       1 config.go:197] "Starting service config controller"
	I0909 10:44:59.269434       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0909 10:44:59.269455       1 config.go:104] "Starting endpoint slice config controller"
	I0909 10:44:59.269460       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0909 10:44:59.270014       1 config.go:326] "Starting node config controller"
	I0909 10:44:59.270023       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0909 10:44:59.370626       1 shared_informer.go:320] Caches are synced for node config
	I0909 10:44:59.370669       1 shared_informer.go:320] Caches are synced for service config
	I0909 10:44:59.370714       1 shared_informer.go:320] Caches are synced for endpoint slice config
	
	
	==> kube-scheduler [7cb656dbc940] <==
	W0909 10:44:48.263539       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E0909 10:44:48.264240       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W0909 10:44:48.263385       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E0909 10:44:48.264527       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0909 10:44:48.263329       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E0909 10:44:48.264708       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0909 10:44:48.265364       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	W0909 10:44:48.265403       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	E0909 10:44:48.265412       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError"
	E0909 10:44:48.265429       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0909 10:44:48.265439       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E0909 10:44:48.265480       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0909 10:44:48.265528       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E0909 10:44:48.265550       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0909 10:44:48.265631       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E0909 10:44:48.265685       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError"
	W0909 10:44:48.265819       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E0909 10:44:48.265851       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W0909 10:44:49.121541       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E0909 10:44:49.121580       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0909 10:44:49.199053       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E0909 10:44:49.199092       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W0909 10:44:49.449607       1 reflector.go:561] runtime/asm_amd64.s:1695: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E0909 10:44:49.449649       1 reflector.go:158] "Unhandled Error" err="runtime/asm_amd64.s:1695: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError"
	I0909 10:44:52.659681       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kubelet <==
	Sep 09 10:57:34 addons-271785 kubelet[2438]: I0909 10:57:34.446928    2438 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"docker","ID":"339378f2559d5c1e7c29dbd4eacc9e35065c3a0af1a748743bfedb7040d5798f"} err="failed to get container status \"339378f2559d5c1e7c29dbd4eacc9e35065c3a0af1a748743bfedb7040d5798f\": rpc error: code = Unknown desc = Error response from daemon: No such container: 339378f2559d5c1e7c29dbd4eacc9e35065c3a0af1a748743bfedb7040d5798f"
	Sep 09 10:57:34 addons-271785 kubelet[2438]: E0909 10:57:34.663626    2438 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"busybox\" with ImagePullBackOff: \"Back-off pulling image \\\"gcr.io/k8s-minikube/busybox:1.28.4-glibc\\\"\"" pod="default/busybox" podUID="28571f1b-06fe-4073-b6ac-9fe7c2f086c3"
	Sep 09 10:57:34 addons-271785 kubelet[2438]: I0909 10:57:34.669013    2438 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="89539b6c-c702-464d-a8db-4259f5518958" path="/var/lib/kubelet/pods/89539b6c-c702-464d-a8db-4259f5518958/volumes"
	Sep 09 10:57:36 addons-271785 kubelet[2438]: E0909 10:57:36.663454    2438 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"registry-test\" with ImagePullBackOff: \"Back-off pulling image \\\"gcr.io/k8s-minikube/busybox\\\"\"" pod="default/registry-test" podUID="7b2b4c60-0540-4fa9-9b2c-1af08e001fcb"
	Sep 09 10:57:38 addons-271785 kubelet[2438]: I0909 10:57:38.302592    2438 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-nxz5l\" (UniqueName: \"kubernetes.io/projected/7b2b4c60-0540-4fa9-9b2c-1af08e001fcb-kube-api-access-nxz5l\") pod \"7b2b4c60-0540-4fa9-9b2c-1af08e001fcb\" (UID: \"7b2b4c60-0540-4fa9-9b2c-1af08e001fcb\") "
	Sep 09 10:57:38 addons-271785 kubelet[2438]: I0909 10:57:38.302639    2438 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"gcp-creds\" (UniqueName: \"kubernetes.io/host-path/7b2b4c60-0540-4fa9-9b2c-1af08e001fcb-gcp-creds\") pod \"7b2b4c60-0540-4fa9-9b2c-1af08e001fcb\" (UID: \"7b2b4c60-0540-4fa9-9b2c-1af08e001fcb\") "
	Sep 09 10:57:38 addons-271785 kubelet[2438]: I0909 10:57:38.302708    2438 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/7b2b4c60-0540-4fa9-9b2c-1af08e001fcb-gcp-creds" (OuterVolumeSpecName: "gcp-creds") pod "7b2b4c60-0540-4fa9-9b2c-1af08e001fcb" (UID: "7b2b4c60-0540-4fa9-9b2c-1af08e001fcb"). InnerVolumeSpecName "gcp-creds". PluginName "kubernetes.io/host-path", VolumeGidValue ""
	Sep 09 10:57:38 addons-271785 kubelet[2438]: I0909 10:57:38.304366    2438 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7b2b4c60-0540-4fa9-9b2c-1af08e001fcb-kube-api-access-nxz5l" (OuterVolumeSpecName: "kube-api-access-nxz5l") pod "7b2b4c60-0540-4fa9-9b2c-1af08e001fcb" (UID: "7b2b4c60-0540-4fa9-9b2c-1af08e001fcb"). InnerVolumeSpecName "kube-api-access-nxz5l". PluginName "kubernetes.io/projected", VolumeGidValue ""
	Sep 09 10:57:38 addons-271785 kubelet[2438]: I0909 10:57:38.403579    2438 reconciler_common.go:288] "Volume detached for volume \"kube-api-access-nxz5l\" (UniqueName: \"kubernetes.io/projected/7b2b4c60-0540-4fa9-9b2c-1af08e001fcb-kube-api-access-nxz5l\") on node \"addons-271785\" DevicePath \"\""
	Sep 09 10:57:38 addons-271785 kubelet[2438]: I0909 10:57:38.403612    2438 reconciler_common.go:288] "Volume detached for volume \"gcp-creds\" (UniqueName: \"kubernetes.io/host-path/7b2b4c60-0540-4fa9-9b2c-1af08e001fcb-gcp-creds\") on node \"addons-271785\" DevicePath \"\""
	Sep 09 10:57:38 addons-271785 kubelet[2438]: I0909 10:57:38.671782    2438 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7b2b4c60-0540-4fa9-9b2c-1af08e001fcb" path="/var/lib/kubelet/pods/7b2b4c60-0540-4fa9-9b2c-1af08e001fcb/volumes"
	Sep 09 10:57:39 addons-271785 kubelet[2438]: I0909 10:57:39.051486    2438 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-hcwjc\" (UniqueName: \"kubernetes.io/projected/19051672-048d-4f1c-8814-35c5fa1de42e-kube-api-access-hcwjc\") pod \"19051672-048d-4f1c-8814-35c5fa1de42e\" (UID: \"19051672-048d-4f1c-8814-35c5fa1de42e\") "
	Sep 09 10:57:39 addons-271785 kubelet[2438]: I0909 10:57:39.051549    2438 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-skz2j\" (UniqueName: \"kubernetes.io/projected/ad21aca5-affd-4e4b-9d2e-487316ad11de-kube-api-access-skz2j\") pod \"ad21aca5-affd-4e4b-9d2e-487316ad11de\" (UID: \"ad21aca5-affd-4e4b-9d2e-487316ad11de\") "
	Sep 09 10:57:39 addons-271785 kubelet[2438]: I0909 10:57:39.053618    2438 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/19051672-048d-4f1c-8814-35c5fa1de42e-kube-api-access-hcwjc" (OuterVolumeSpecName: "kube-api-access-hcwjc") pod "19051672-048d-4f1c-8814-35c5fa1de42e" (UID: "19051672-048d-4f1c-8814-35c5fa1de42e"). InnerVolumeSpecName "kube-api-access-hcwjc". PluginName "kubernetes.io/projected", VolumeGidValue ""
	Sep 09 10:57:39 addons-271785 kubelet[2438]: I0909 10:57:39.053732    2438 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ad21aca5-affd-4e4b-9d2e-487316ad11de-kube-api-access-skz2j" (OuterVolumeSpecName: "kube-api-access-skz2j") pod "ad21aca5-affd-4e4b-9d2e-487316ad11de" (UID: "ad21aca5-affd-4e4b-9d2e-487316ad11de"). InnerVolumeSpecName "kube-api-access-skz2j". PluginName "kubernetes.io/projected", VolumeGidValue ""
	Sep 09 10:57:39 addons-271785 kubelet[2438]: I0909 10:57:39.152109    2438 reconciler_common.go:288] "Volume detached for volume \"kube-api-access-skz2j\" (UniqueName: \"kubernetes.io/projected/ad21aca5-affd-4e4b-9d2e-487316ad11de-kube-api-access-skz2j\") on node \"addons-271785\" DevicePath \"\""
	Sep 09 10:57:39 addons-271785 kubelet[2438]: I0909 10:57:39.152147    2438 reconciler_common.go:288] "Volume detached for volume \"kube-api-access-hcwjc\" (UniqueName: \"kubernetes.io/projected/19051672-048d-4f1c-8814-35c5fa1de42e-kube-api-access-hcwjc\") on node \"addons-271785\" DevicePath \"\""
	Sep 09 10:57:39 addons-271785 kubelet[2438]: I0909 10:57:39.493816    2438 scope.go:117] "RemoveContainer" containerID="c21bf18dbff2c4c0727e1617a3d4f6f6e7152448111e048ab49811a590c3001d"
	Sep 09 10:57:39 addons-271785 kubelet[2438]: I0909 10:57:39.508637    2438 scope.go:117] "RemoveContainer" containerID="c21bf18dbff2c4c0727e1617a3d4f6f6e7152448111e048ab49811a590c3001d"
	Sep 09 10:57:39 addons-271785 kubelet[2438]: E0909 10:57:39.509423    2438 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = Unknown desc = Error response from daemon: No such container: c21bf18dbff2c4c0727e1617a3d4f6f6e7152448111e048ab49811a590c3001d" containerID="c21bf18dbff2c4c0727e1617a3d4f6f6e7152448111e048ab49811a590c3001d"
	Sep 09 10:57:39 addons-271785 kubelet[2438]: I0909 10:57:39.509478    2438 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"docker","ID":"c21bf18dbff2c4c0727e1617a3d4f6f6e7152448111e048ab49811a590c3001d"} err="failed to get container status \"c21bf18dbff2c4c0727e1617a3d4f6f6e7152448111e048ab49811a590c3001d\": rpc error: code = Unknown desc = Error response from daemon: No such container: c21bf18dbff2c4c0727e1617a3d4f6f6e7152448111e048ab49811a590c3001d"
	Sep 09 10:57:39 addons-271785 kubelet[2438]: I0909 10:57:39.509506    2438 scope.go:117] "RemoveContainer" containerID="eb3666654387c563813b8183daaf520231a6e980722f06dbffe97190cf3d17ec"
	Sep 09 10:57:39 addons-271785 kubelet[2438]: I0909 10:57:39.525696    2438 scope.go:117] "RemoveContainer" containerID="eb3666654387c563813b8183daaf520231a6e980722f06dbffe97190cf3d17ec"
	Sep 09 10:57:39 addons-271785 kubelet[2438]: E0909 10:57:39.526388    2438 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = Unknown desc = Error response from daemon: No such container: eb3666654387c563813b8183daaf520231a6e980722f06dbffe97190cf3d17ec" containerID="eb3666654387c563813b8183daaf520231a6e980722f06dbffe97190cf3d17ec"
	Sep 09 10:57:39 addons-271785 kubelet[2438]: I0909 10:57:39.526422    2438 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"docker","ID":"eb3666654387c563813b8183daaf520231a6e980722f06dbffe97190cf3d17ec"} err="failed to get container status \"eb3666654387c563813b8183daaf520231a6e980722f06dbffe97190cf3d17ec\": rpc error: code = Unknown desc = Error response from daemon: No such container: eb3666654387c563813b8183daaf520231a6e980722f06dbffe97190cf3d17ec"
	
	
	==> storage-provisioner [2874786bcc70] <==
	I0909 10:45:04.860372       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I0909 10:45:04.871764       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I0909 10:45:04.871806       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I0909 10:45:04.956675       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I0909 10:45:04.957143       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_addons-271785_a967797c-77a3-4845-9911-406176895cc0!
	I0909 10:45:04.957137       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"82b53c78-dea4-46b6-be27-0396c43ccd5d", APIVersion:"v1", ResourceVersion:"673", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' addons-271785_a967797c-77a3-4845-9911-406176895cc0 became leader
	I0909 10:45:05.058110       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_addons-271785_a967797c-77a3-4845-9911-406176895cc0!
	

-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p addons-271785 -n addons-271785
helpers_test.go:261: (dbg) Run:  kubectl --context addons-271785 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:272: non-running pods: busybox
helpers_test.go:274: ======> post-mortem[TestAddons/parallel/Registry]: describe non-running pods <======
helpers_test.go:277: (dbg) Run:  kubectl --context addons-271785 describe pod busybox
helpers_test.go:282: (dbg) kubectl --context addons-271785 describe pod busybox:

-- stdout --
	Name:             busybox
	Namespace:        default
	Priority:         0
	Service Account:  default
	Node:             addons-271785/192.168.49.2
	Start Time:       Mon, 09 Sep 2024 10:48:25 +0000
	Labels:           integration-test=busybox
	Annotations:      <none>
	Status:           Pending
	IP:               10.244.0.28
	IPs:
	  IP:  10.244.0.28
	Containers:
	  busybox:
	    Container ID:  
	    Image:         gcr.io/k8s-minikube/busybox:1.28.4-glibc
	    Image ID:      
	    Port:          <none>
	    Host Port:     <none>
	    Command:
	      sleep
	      3600
	    State:          Waiting
	      Reason:       ImagePullBackOff
	    Ready:          False
	    Restart Count:  0
	    Environment:
	      GOOGLE_APPLICATION_CREDENTIALS:  /google-app-creds.json
	      PROJECT_ID:                      this_is_fake
	      GCP_PROJECT:                     this_is_fake
	      GCLOUD_PROJECT:                  this_is_fake
	      GOOGLE_CLOUD_PROJECT:            this_is_fake
	      CLOUDSDK_CORE_PROJECT:           this_is_fake
	    Mounts:
	      /google-app-creds.json from gcp-creds (ro)
	      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-pr9fv (ro)
	Conditions:
	  Type                        Status
	  PodReadyToStartContainers   True 
	  Initialized                 True 
	  Ready                       False 
	  ContainersReady             False 
	  PodScheduled                True 
	Volumes:
	  kube-api-access-pr9fv:
	    Type:                    Projected (a volume that contains injected data from multiple sources)
	    TokenExpirationSeconds:  3607
	    ConfigMapName:           kube-root-ca.crt
	    ConfigMapOptional:       <nil>
	    DownwardAPI:             true
	  gcp-creds:
	    Type:          HostPath (bare host directory volume)
	    Path:          /var/lib/minikube/google_application_credentials.json
	    HostPathType:  File
	QoS Class:         BestEffort
	Node-Selectors:    <none>
	Tolerations:       node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
	                   node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
	Events:
	  Type     Reason     Age                     From               Message
	  ----     ------     ----                    ----               -------
	  Normal   Scheduled  9m15s                   default-scheduler  Successfully assigned default/busybox to addons-271785
	  Normal   Pulling    7m44s (x4 over 9m14s)   kubelet            Pulling image "gcr.io/k8s-minikube/busybox:1.28.4-glibc"
	  Warning  Failed     7m44s (x4 over 9m14s)   kubelet            Failed to pull image "gcr.io/k8s-minikube/busybox:1.28.4-glibc": Error response from daemon: Head "https://gcr.io/v2/k8s-minikube/busybox/manifests/1.28.4-glibc": unauthorized: authentication failed
	  Warning  Failed     7m44s (x4 over 9m14s)   kubelet            Error: ErrImagePull
	  Warning  Failed     7m30s (x6 over 9m14s)   kubelet            Error: ImagePullBackOff
	  Normal   BackOff    4m11s (x20 over 9m14s)  kubelet            Back-off pulling image "gcr.io/k8s-minikube/busybox:1.28.4-glibc"

-- /stdout --
helpers_test.go:285: <<< TestAddons/parallel/Registry FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
--- FAIL: TestAddons/parallel/Registry (72.57s)
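The kubelet and describe output above point at the actual root cause: pulls of gcr.io/k8s-minikube/busybox from gcr.io failed with "unauthorized: authentication failed", so both the busybox pod and the registry-test pod sat in ImagePullBackOff and the registry check could never run. A minimal way to confirm this by hand, sketched on the assumption that the addons-271785 profile from this run is still up (both commands are standard minikube/kubectl invocations):

	# Attempt the same pull directly on the minikube node; this should reproduce the gcr.io auth failure
	out/minikube-linux-amd64 -p addons-271785 ssh -- docker pull gcr.io/k8s-minikube/busybox:1.28.4-glibc
	# Re-check the pod events after the manual pull attempt
	kubectl --context addons-271785 describe pod busybox | tail -n 10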

x
+
TestFunctional/parallel/License (0.22s)

=== RUN   TestFunctional/parallel/License
=== PAUSE TestFunctional/parallel/License

=== CONT  TestFunctional/parallel/License
functional_test.go:2288: (dbg) Run:  out/minikube-linux-amd64 license
functional_test.go:2288: (dbg) Non-zero exit: out/minikube-linux-amd64 license: exit status 40 (222.399825ms)

-- stdout --
	
	

-- /stdout --
** stderr ** 
	X Exiting due to INET_LICENSES: Failed to download licenses: download request did not return a 200, received: 404
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_license_42713f820c0ac68901ecf7b12bfdf24c2cafe65d_0.log                 │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
functional_test.go:2289: command "\n\n" failed: exit status 40
--- FAIL: TestFunctional/parallel/License (0.22s)
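This failure is purely a download problem: out/minikube-linux-amd64 license fetches a licenses bundle over the network, and the request returned 404 instead of 200 (INET_LICENSES). Two quick follow-ups, sketched under the assumption that the binary and the referenced log file are still present on the CI host; --alsologtostderr and -v are minikube's standard global logging flags:

	# Re-run with verbose logging to surface the URL that returned the 404
	out/minikube-linux-amd64 license --alsologtostderr -v=1
	# Inspect the dedicated failure log written by the CLI (path taken from the stderr box above)
	cat /tmp/minikube_license_42713f820c0ac68901ecf7b12bfdf24c2cafe65d_0.log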


Test pass (321/343)

Order passed test Duration
3 TestDownloadOnly/v1.20.0/json-events 17.04
4 TestDownloadOnly/v1.20.0/preload-exists 0
8 TestDownloadOnly/v1.20.0/LogsDuration 0.06
9 TestDownloadOnly/v1.20.0/DeleteAll 0.2
10 TestDownloadOnly/v1.20.0/DeleteAlwaysSucceeds 0.12
12 TestDownloadOnly/v1.31.0/json-events 11.66
13 TestDownloadOnly/v1.31.0/preload-exists 0
17 TestDownloadOnly/v1.31.0/LogsDuration 0.06
18 TestDownloadOnly/v1.31.0/DeleteAll 0.18
19 TestDownloadOnly/v1.31.0/DeleteAlwaysSucceeds 0.12
20 TestDownloadOnlyKic 0.96
21 TestBinaryMirror 0.73
22 TestOffline 74.23
25 TestAddons/PreSetup/EnablingAddonOnNonExistingCluster 0.05
26 TestAddons/PreSetup/DisablingAddonOnNonExistingCluster 0.05
27 TestAddons/Setup 209.34
29 TestAddons/serial/Volcano 40.51
31 TestAddons/serial/GCPAuth/Namespaces 0.11
34 TestAddons/parallel/Ingress 19.51
35 TestAddons/parallel/InspektorGadget 12.01
36 TestAddons/parallel/MetricsServer 5.68
37 TestAddons/parallel/HelmTiller 13.87
39 TestAddons/parallel/CSI 50.61
40 TestAddons/parallel/Headlamp 18.23
41 TestAddons/parallel/CloudSpanner 5.41
42 TestAddons/parallel/LocalPath 54.04
43 TestAddons/parallel/NvidiaDevicePlugin 5.39
44 TestAddons/parallel/Yakd 10.57
45 TestAddons/StoppedEnableDisable 11.06
46 TestCertOptions 32.38
47 TestCertExpiration 231.58
48 TestDockerFlags 32.31
49 TestForceSystemdFlag 35.12
50 TestForceSystemdEnv 37.67
52 TestKVMDriverInstallOrUpdate 5.08
56 TestErrorSpam/setup 20.1
57 TestErrorSpam/start 0.54
58 TestErrorSpam/status 0.83
59 TestErrorSpam/pause 1.1
60 TestErrorSpam/unpause 1.45
61 TestErrorSpam/stop 10.85
64 TestFunctional/serial/CopySyncFile 0
65 TestFunctional/serial/StartWithProxy 59.82
66 TestFunctional/serial/AuditLog 0
67 TestFunctional/serial/SoftStart 37.37
68 TestFunctional/serial/KubeContext 0.04
69 TestFunctional/serial/KubectlGetPods 0.07
72 TestFunctional/serial/CacheCmd/cache/add_remote 2.38
73 TestFunctional/serial/CacheCmd/cache/add_local 1.41
74 TestFunctional/serial/CacheCmd/cache/CacheDelete 0.05
75 TestFunctional/serial/CacheCmd/cache/list 0.05
76 TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node 0.27
77 TestFunctional/serial/CacheCmd/cache/cache_reload 1.22
78 TestFunctional/serial/CacheCmd/cache/delete 0.09
79 TestFunctional/serial/MinikubeKubectlCmd 0.1
80 TestFunctional/serial/MinikubeKubectlCmdDirectly 0.1
81 TestFunctional/serial/ExtraConfig 39.55
82 TestFunctional/serial/ComponentHealth 0.06
83 TestFunctional/serial/LogsCmd 0.96
84 TestFunctional/serial/LogsFileCmd 0.99
85 TestFunctional/serial/InvalidService 4.02
87 TestFunctional/parallel/ConfigCmd 0.35
88 TestFunctional/parallel/DashboardCmd 13.73
89 TestFunctional/parallel/DryRun 0.38
90 TestFunctional/parallel/InternationalLanguage 0.17
91 TestFunctional/parallel/StatusCmd 1.01
95 TestFunctional/parallel/ServiceCmdConnect 9.65
96 TestFunctional/parallel/AddonsCmd 0.14
97 TestFunctional/parallel/PersistentVolumeClaim 38.61
99 TestFunctional/parallel/SSHCmd 0.57
100 TestFunctional/parallel/CpCmd 1.88
101 TestFunctional/parallel/MySQL 27.21
102 TestFunctional/parallel/FileSync 0.33
103 TestFunctional/parallel/CertSync 1.5
107 TestFunctional/parallel/NodeLabels 0.06
109 TestFunctional/parallel/NonActiveRuntimeDisabled 0.29
112 TestFunctional/parallel/Version/short 0.07
113 TestFunctional/parallel/Version/components 0.92
115 TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel 0.53
116 TestFunctional/parallel/TunnelCmd/serial/StartTunnel 0
118 TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup 22.27
119 TestFunctional/parallel/ServiceCmd/DeployApp 8.16
120 TestFunctional/parallel/ServiceCmd/List 0.87
121 TestFunctional/parallel/ServiceCmd/JSONOutput 0.93
122 TestFunctional/parallel/ServiceCmd/HTTPS 0.55
123 TestFunctional/parallel/ServiceCmd/Format 0.6
124 TestFunctional/parallel/ServiceCmd/URL 0.79
125 TestFunctional/parallel/ProfileCmd/profile_not_create 0.43
126 TestFunctional/parallel/ProfileCmd/profile_list 0.46
127 TestFunctional/parallel/ProfileCmd/profile_json_output 0.41
128 TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP 0.09
129 TestFunctional/parallel/TunnelCmd/serial/AccessDirect 0
133 TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel 0.11
134 TestFunctional/parallel/MountCmd/any-port 8.21
135 TestFunctional/parallel/UpdateContextCmd/no_changes 0.14
136 TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster 0.12
137 TestFunctional/parallel/UpdateContextCmd/no_clusters 0.13
138 TestFunctional/parallel/ImageCommands/ImageListShort 0.19
139 TestFunctional/parallel/ImageCommands/ImageListTable 0.21
140 TestFunctional/parallel/ImageCommands/ImageListJson 0.24
141 TestFunctional/parallel/ImageCommands/ImageListYaml 0.21
142 TestFunctional/parallel/ImageCommands/ImageBuild 4.13
143 TestFunctional/parallel/ImageCommands/Setup 1.91
144 TestFunctional/parallel/ImageCommands/ImageLoadDaemon 0.87
145 TestFunctional/parallel/ImageCommands/ImageReloadDaemon 0.75
146 TestFunctional/parallel/DockerEnv/bash 0.88
147 TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon 1.64
148 TestFunctional/parallel/ImageCommands/ImageSaveToFile 0.3
149 TestFunctional/parallel/ImageCommands/ImageRemove 0.4
150 TestFunctional/parallel/ImageCommands/ImageLoadFromFile 0.64
151 TestFunctional/parallel/ImageCommands/ImageSaveDaemon 0.55
152 TestFunctional/parallel/MountCmd/specific-port 1.98
153 TestFunctional/parallel/MountCmd/VerifyCleanup 1.4
154 TestFunctional/delete_echo-server_images 0.04
155 TestFunctional/delete_my-image_image 0.02
156 TestFunctional/delete_minikube_cached_images 0.02
160 TestMultiControlPlane/serial/StartCluster 100.36
161 TestMultiControlPlane/serial/DeployApp 6.35
162 TestMultiControlPlane/serial/PingHostFromPods 1.03
163 TestMultiControlPlane/serial/AddWorkerNode 20.16
164 TestMultiControlPlane/serial/NodeLabels 0.06
165 TestMultiControlPlane/serial/HAppyAfterClusterStart 0.62
166 TestMultiControlPlane/serial/CopyFile 15.29
167 TestMultiControlPlane/serial/StopSecondaryNode 11.36
168 TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop 0.46
169 TestMultiControlPlane/serial/RestartSecondaryNode 39.48
170 TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart 1.8
171 TestMultiControlPlane/serial/RestartClusterKeepsNodes 218.16
172 TestMultiControlPlane/serial/DeleteSecondaryNode 9.26
173 TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete 0.45
174 TestMultiControlPlane/serial/StopCluster 32.55
175 TestMultiControlPlane/serial/RestartCluster 48.13
176 TestMultiControlPlane/serial/DegradedAfterClusterRestart 0.44
177 TestMultiControlPlane/serial/AddSecondaryNode 54.97
178 TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd 0.64
181 TestImageBuild/serial/Setup 24.51
182 TestImageBuild/serial/NormalBuild 2.39
183 TestImageBuild/serial/BuildWithBuildArg 0.93
184 TestImageBuild/serial/BuildWithDockerIgnore 0.73
185 TestImageBuild/serial/BuildWithSpecifiedDockerfile 0.74
189 TestJSONOutput/start/Command 59.63
190 TestJSONOutput/start/Audit 0
192 TestJSONOutput/start/parallel/DistinctCurrentSteps 0
193 TestJSONOutput/start/parallel/IncreasingCurrentSteps 0
195 TestJSONOutput/pause/Command 0.52
196 TestJSONOutput/pause/Audit 0
198 TestJSONOutput/pause/parallel/DistinctCurrentSteps 0
199 TestJSONOutput/pause/parallel/IncreasingCurrentSteps 0
201 TestJSONOutput/unpause/Command 0.43
202 TestJSONOutput/unpause/Audit 0
204 TestJSONOutput/unpause/parallel/DistinctCurrentSteps 0
205 TestJSONOutput/unpause/parallel/IncreasingCurrentSteps 0
207 TestJSONOutput/stop/Command 10.73
208 TestJSONOutput/stop/Audit 0
210 TestJSONOutput/stop/parallel/DistinctCurrentSteps 0
211 TestJSONOutput/stop/parallel/IncreasingCurrentSteps 0
212 TestErrorJSONOutput 0.19
214 TestKicCustomNetwork/create_custom_network 25.27
215 TestKicCustomNetwork/use_default_bridge_network 22.36
216 TestKicExistingNetwork 22.01
217 TestKicCustomSubnet 22.41
218 TestKicStaticIP 22.2
219 TestMainNoArgs 0.04
220 TestMinikubeProfile 47.74
223 TestMountStart/serial/StartWithMountFirst 6.87
224 TestMountStart/serial/VerifyMountFirst 0.23
225 TestMountStart/serial/StartWithMountSecond 6.92
226 TestMountStart/serial/VerifyMountSecond 0.23
227 TestMountStart/serial/DeleteFirst 1.44
228 TestMountStart/serial/VerifyMountPostDelete 0.23
229 TestMountStart/serial/Stop 1.16
230 TestMountStart/serial/RestartStopped 8.39
231 TestMountStart/serial/VerifyMountPostStop 0.23
234 TestMultiNode/serial/FreshStart2Nodes 56.9
235 TestMultiNode/serial/DeployApp2Nodes 49.32
236 TestMultiNode/serial/PingHostFrom2Pods 0.76
237 TestMultiNode/serial/AddNode 15.56
238 TestMultiNode/serial/MultiNodeLabels 0.06
239 TestMultiNode/serial/ProfileList 0.29
240 TestMultiNode/serial/CopyFile 8.54
241 TestMultiNode/serial/StopNode 2.08
242 TestMultiNode/serial/StartAfterStop 9.56
243 TestMultiNode/serial/RestartKeepsNodes 103.11
244 TestMultiNode/serial/DeleteNode 5.11
245 TestMultiNode/serial/StopMultiNode 21.32
246 TestMultiNode/serial/RestartMultiNode 52.74
247 TestMultiNode/serial/ValidateNameConflict 22.59
252 TestPreload 118.11
254 TestScheduledStopUnix 96.38
255 TestSkaffold 101.72
257 TestInsufficientStorage 9.84
258 TestRunningBinaryUpgrade 153.22
260 TestKubernetesUpgrade 173.56
261 TestMissingContainerUpgrade 107.41
263 TestStoppedBinaryUpgrade/Setup 2.5
264 TestNoKubernetes/serial/StartNoK8sWithVersion 0.06
272 TestNoKubernetes/serial/StartWithK8s 26.75
273 TestStoppedBinaryUpgrade/Upgrade 152.12
274 TestNoKubernetes/serial/StartWithStopK8s 6.95
275 TestNoKubernetes/serial/Start 6.31
276 TestNoKubernetes/serial/VerifyK8sNotRunning 0.24
277 TestNoKubernetes/serial/ProfileList 0.84
278 TestNoKubernetes/serial/Stop 1.16
279 TestNoKubernetes/serial/StartNoArgs 12.14
280 TestNoKubernetes/serial/VerifyK8sNotRunningSecond 0.31
282 TestPause/serial/Start 34.62
283 TestPause/serial/SecondStartNoReconfiguration 33
284 TestPause/serial/Pause 0.53
285 TestPause/serial/VerifyStatus 0.27
286 TestPause/serial/Unpause 0.45
287 TestPause/serial/PauseAgain 0.58
288 TestPause/serial/DeletePaused 2.1
289 TestPause/serial/VerifyDeletedResources 18.21
301 TestStoppedBinaryUpgrade/MinikubeLogs 1.25
303 TestStartStop/group/old-k8s-version/serial/FirstStart 109.86
305 TestStartStop/group/no-preload/serial/FirstStart 66.59
306 TestStartStop/group/no-preload/serial/DeployApp 9.24
307 TestStartStop/group/no-preload/serial/EnableAddonWhileActive 0.77
308 TestStartStop/group/no-preload/serial/Stop 10.7
309 TestStartStop/group/no-preload/serial/EnableAddonAfterStop 0.2
310 TestStartStop/group/no-preload/serial/SecondStart 262.88
311 TestStartStop/group/old-k8s-version/serial/DeployApp 9.46
312 TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive 0.87
313 TestStartStop/group/old-k8s-version/serial/Stop 10.87
315 TestStartStop/group/embed-certs/serial/FirstStart 72.12
316 TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop 0.17
317 TestStartStop/group/old-k8s-version/serial/SecondStart 126.44
318 TestStartStop/group/embed-certs/serial/DeployApp 9.29
320 TestStartStop/group/default-k8s-diff-port/serial/FirstStart 37.49
321 TestStartStop/group/embed-certs/serial/EnableAddonWhileActive 2.03
322 TestStartStop/group/embed-certs/serial/Stop 10.84
323 TestStartStop/group/embed-certs/serial/EnableAddonAfterStop 0.18
324 TestStartStop/group/embed-certs/serial/SecondStart 264.02
325 TestStartStop/group/default-k8s-diff-port/serial/DeployApp 9.26
326 TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive 0.82
327 TestStartStop/group/default-k8s-diff-port/serial/Stop 10.69
328 TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop 0.22
329 TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop 6.01
330 TestStartStop/group/default-k8s-diff-port/serial/SecondStart 306.8
331 TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop 5.08
332 TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages 0.22
333 TestStartStop/group/old-k8s-version/serial/Pause 2.7
335 TestStartStop/group/newest-cni/serial/FirstStart 28.94
336 TestStartStop/group/newest-cni/serial/DeployApp 0
337 TestStartStop/group/newest-cni/serial/EnableAddonWhileActive 1.05
338 TestStartStop/group/newest-cni/serial/Stop 10.74
339 TestStartStop/group/newest-cni/serial/EnableAddonAfterStop 0.19
340 TestStartStop/group/newest-cni/serial/SecondStart 13.95
341 TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop 0
342 TestStartStop/group/newest-cni/serial/AddonExistsAfterStop 0
343 TestStartStop/group/newest-cni/serial/VerifyKubernetesImages 0.21
344 TestStartStop/group/newest-cni/serial/Pause 2.39
345 TestNetworkPlugins/group/auto/Start 67.19
346 TestStartStop/group/no-preload/serial/UserAppExistsAfterStop 6.01
347 TestStartStop/group/no-preload/serial/AddonExistsAfterStop 5.07
348 TestStartStop/group/no-preload/serial/VerifyKubernetesImages 0.2
349 TestStartStop/group/no-preload/serial/Pause 2.28
350 TestNetworkPlugins/group/kindnet/Start 56.07
351 TestNetworkPlugins/group/auto/KubeletFlags 0.28
352 TestNetworkPlugins/group/auto/NetCatPod 10.21
353 TestNetworkPlugins/group/auto/DNS 0.15
354 TestNetworkPlugins/group/auto/Localhost 0.12
355 TestNetworkPlugins/group/auto/HairPin 0.11
356 TestNetworkPlugins/group/calico/Start 35.74
357 TestNetworkPlugins/group/kindnet/ControllerPod 6.01
358 TestNetworkPlugins/group/kindnet/KubeletFlags 0.27
359 TestNetworkPlugins/group/kindnet/NetCatPod 10.18
360 TestNetworkPlugins/group/kindnet/DNS 0.14
361 TestNetworkPlugins/group/kindnet/Localhost 0.14
362 TestNetworkPlugins/group/kindnet/HairPin 0.11
363 TestNetworkPlugins/group/calico/ControllerPod 19.01
364 TestNetworkPlugins/group/custom-flannel/Start 43.65
365 TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop 6
366 TestNetworkPlugins/group/calico/KubeletFlags 0.28
367 TestNetworkPlugins/group/calico/NetCatPod 10.18
368 TestStartStop/group/embed-certs/serial/AddonExistsAfterStop 5.09
369 TestStartStop/group/embed-certs/serial/VerifyKubernetesImages 0.21
370 TestStartStop/group/embed-certs/serial/Pause 2.49
371 TestNetworkPlugins/group/calico/DNS 0.15
372 TestNetworkPlugins/group/calico/Localhost 0.14
373 TestNetworkPlugins/group/calico/HairPin 0.13
374 TestNetworkPlugins/group/false/Start 71.38
375 TestNetworkPlugins/group/enable-default-cni/Start 64.85
376 TestNetworkPlugins/group/custom-flannel/KubeletFlags 0.33
377 TestNetworkPlugins/group/custom-flannel/NetCatPod 11.69
378 TestNetworkPlugins/group/custom-flannel/DNS 0.13
379 TestNetworkPlugins/group/custom-flannel/Localhost 0.14
380 TestNetworkPlugins/group/custom-flannel/HairPin 0.13
381 TestNetworkPlugins/group/flannel/Start 43.74
382 TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop 6.01
383 TestNetworkPlugins/group/false/KubeletFlags 0.28
384 TestNetworkPlugins/group/false/NetCatPod 9.19
385 TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop 5.08
386 TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages 0.2
387 TestStartStop/group/default-k8s-diff-port/serial/Pause 2.41
388 TestNetworkPlugins/group/false/DNS 0.23
389 TestNetworkPlugins/group/false/Localhost 0.2
390 TestNetworkPlugins/group/false/HairPin 0.14
391 TestNetworkPlugins/group/bridge/Start 65.51
392 TestNetworkPlugins/group/enable-default-cni/KubeletFlags 0.27
393 TestNetworkPlugins/group/enable-default-cni/NetCatPod 12.19
394 TestNetworkPlugins/group/enable-default-cni/DNS 0.15
395 TestNetworkPlugins/group/enable-default-cni/Localhost 0.14
396 TestNetworkPlugins/group/enable-default-cni/HairPin 0.12
397 TestNetworkPlugins/group/flannel/ControllerPod 6.01
398 TestNetworkPlugins/group/kubenet/Start 40.35
399 TestNetworkPlugins/group/flannel/KubeletFlags 0.33
400 TestNetworkPlugins/group/flannel/NetCatPod 12.22
401 TestNetworkPlugins/group/flannel/DNS 0.14
402 TestNetworkPlugins/group/flannel/Localhost 0.11
403 TestNetworkPlugins/group/flannel/HairPin 0.13
404 TestNetworkPlugins/group/kubenet/KubeletFlags 0.25
405 TestNetworkPlugins/group/kubenet/NetCatPod 9.16
406 TestNetworkPlugins/group/bridge/KubeletFlags 0.25
407 TestNetworkPlugins/group/bridge/NetCatPod 9.19
408 TestNetworkPlugins/group/kubenet/DNS 0.14
409 TestNetworkPlugins/group/kubenet/Localhost 0.13
410 TestNetworkPlugins/group/kubenet/HairPin 0.11
411 TestNetworkPlugins/group/bridge/DNS 0.14
412 TestNetworkPlugins/group/bridge/Localhost 0.11
413 TestNetworkPlugins/group/bridge/HairPin 0.11
x
+
TestDownloadOnly/v1.20.0/json-events (17.04s)

=== RUN   TestDownloadOnly/v1.20.0/json-events
aaa_download_only_test.go:81: (dbg) Run:  out/minikube-linux-amd64 start -o=json --download-only -p download-only-147133 --force --alsologtostderr --kubernetes-version=v1.20.0 --container-runtime=docker --driver=docker  --container-runtime=docker
aaa_download_only_test.go:81: (dbg) Done: out/minikube-linux-amd64 start -o=json --download-only -p download-only-147133 --force --alsologtostderr --kubernetes-version=v1.20.0 --container-runtime=docker --driver=docker  --container-runtime=docker: (17.035354329s)
--- PASS: TestDownloadOnly/v1.20.0/json-events (17.04s)

x
+
TestDownloadOnly/v1.20.0/preload-exists (0s)

=== RUN   TestDownloadOnly/v1.20.0/preload-exists
--- PASS: TestDownloadOnly/v1.20.0/preload-exists (0.00s)

x
+
TestDownloadOnly/v1.20.0/LogsDuration (0.06s)

=== RUN   TestDownloadOnly/v1.20.0/LogsDuration
aaa_download_only_test.go:184: (dbg) Run:  out/minikube-linux-amd64 logs -p download-only-147133
aaa_download_only_test.go:184: (dbg) Non-zero exit: out/minikube-linux-amd64 logs -p download-only-147133: exit status 85 (57.39114ms)

-- stdout --
	
	==> Audit <==
	|---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	| Command |              Args              |       Profile        |  User   | Version |     Start Time      | End Time |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	| start   | -o=json --download-only        | download-only-147133 | jenkins | v1.34.0 | 09 Sep 24 10:43 UTC |          |
	|         | -p download-only-147133        |                      |         |         |                     |          |
	|         | --force --alsologtostderr      |                      |         |         |                     |          |
	|         | --kubernetes-version=v1.20.0   |                      |         |         |                     |          |
	|         | --container-runtime=docker     |                      |         |         |                     |          |
	|         | --driver=docker                |                      |         |         |                     |          |
	|         | --container-runtime=docker     |                      |         |         |                     |          |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	
	
	==> Last Start <==
	Log file created at: 2024/09/09 10:43:44
	Running on machine: ubuntu-20-agent
	Binary: Built with gc go1.22.5 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0909 10:43:44.117452   15416 out.go:345] Setting OutFile to fd 1 ...
	I0909 10:43:44.117591   15416 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0909 10:43:44.117602   15416 out.go:358] Setting ErrFile to fd 2...
	I0909 10:43:44.117609   15416 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0909 10:43:44.117786   15416 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19584-8635/.minikube/bin
	W0909 10:43:44.117928   15416 root.go:314] Error reading config file at /home/jenkins/minikube-integration/19584-8635/.minikube/config/config.json: open /home/jenkins/minikube-integration/19584-8635/.minikube/config/config.json: no such file or directory
	I0909 10:43:44.118543   15416 out.go:352] Setting JSON to true
	I0909 10:43:44.119416   15416 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent","uptime":1557,"bootTime":1725877067,"procs":171,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1067-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0909 10:43:44.119495   15416 start.go:139] virtualization: kvm guest
	I0909 10:43:44.121783   15416 out.go:97] [download-only-147133] minikube v1.34.0 on Ubuntu 20.04 (kvm/amd64)
	W0909 10:43:44.121887   15416 preload.go:293] Failed to list preload files: open /home/jenkins/minikube-integration/19584-8635/.minikube/cache/preloaded-tarball: no such file or directory
	I0909 10:43:44.121974   15416 notify.go:220] Checking for updates...
	I0909 10:43:44.123166   15416 out.go:169] MINIKUBE_LOCATION=19584
	I0909 10:43:44.124631   15416 out.go:169] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0909 10:43:44.125969   15416 out.go:169] KUBECONFIG=/home/jenkins/minikube-integration/19584-8635/kubeconfig
	I0909 10:43:44.127194   15416 out.go:169] MINIKUBE_HOME=/home/jenkins/minikube-integration/19584-8635/.minikube
	I0909 10:43:44.128280   15416 out.go:169] MINIKUBE_BIN=out/minikube-linux-amd64
	W0909 10:43:44.130520   15416 out.go:321] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I0909 10:43:44.130726   15416 driver.go:394] Setting default libvirt URI to qemu:///system
	I0909 10:43:44.151752   15416 docker.go:123] docker version: linux-27.2.1:Docker Engine - Community
	I0909 10:43:44.151858   15416 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0909 10:43:44.511000   15416 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:30 OomKillDisable:true NGoroutines:53 SystemTime:2024-09-09 10:43:44.50241437 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1067-gcp OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33647943680 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent Labels:[] ExperimentalBuild:false ServerVersion:27.2.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:472731909fa34bd7bc9c087e4c27943f9835f111 Expected:472731909fa34bd7bc9c087e4c27943f9835f111} RuncCommit:{ID:v1.1.13-0-g58aa920 Expected:v1.1.13-0-g58aa920} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.16.2] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.29.2] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I0909 10:43:44.511106   15416 docker.go:307] overlay module found
	I0909 10:43:44.512621   15416 out.go:97] Using the docker driver based on user configuration
	I0909 10:43:44.512648   15416 start.go:297] selected driver: docker
	I0909 10:43:44.512656   15416 start.go:901] validating driver "docker" against <nil>
	I0909 10:43:44.512729   15416 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0909 10:43:44.556512   15416 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:30 OomKillDisable:true NGoroutines:53 SystemTime:2024-09-09 10:43:44.548363052 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1067-gcp OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33647943680 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent Labels:[] ExperimentalBuild:false ServerVersion:27.2.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:472731909fa34bd7bc9c087e4c27943f9835f111 Expected:472731909fa34bd7bc9c087e4c27943f9835f111} RuncCommit:{ID:v1.1.13-0-g58aa920 Expected:v1.1.13-0-g58aa920} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.16.2] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.29.2] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I0909 10:43:44.556727   15416 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0909 10:43:44.557215   15416 start_flags.go:393] Using suggested 8000MB memory alloc based on sys=32089MB, container=32089MB
	I0909 10:43:44.557381   15416 start_flags.go:929] Wait components to verify : map[apiserver:true system_pods:true]
	I0909 10:43:44.559041   15416 out.go:169] Using Docker driver with root privileges
	I0909 10:43:44.560238   15416 cni.go:84] Creating CNI manager for ""
	I0909 10:43:44.560260   15416 cni.go:162] CNI unnecessary in this configuration, recommending no CNI
	I0909 10:43:44.560323   15416 start.go:340] cluster config:
	{Name:download-only-147133 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.45@sha256:81df288595202a317b1a4dc2506ca2e4ed5f22373c19a441b88cfbf4b9867c85 Memory:8000 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:download-only-147133 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0909 10:43:44.561713   15416 out.go:97] Starting "download-only-147133" primary control-plane node in "download-only-147133" cluster
	I0909 10:43:44.561733   15416 cache.go:121] Beginning downloading kic base image for docker with docker
	I0909 10:43:44.563455   15416 out.go:97] Pulling base image v0.0.45 ...
	I0909 10:43:44.563482   15416 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime docker
	I0909 10:43:44.563585   15416 image.go:79] Checking for gcr.io/k8s-minikube/kicbase:v0.0.45@sha256:81df288595202a317b1a4dc2506ca2e4ed5f22373c19a441b88cfbf4b9867c85 in local docker daemon
	I0909 10:43:44.579287   15416 cache.go:149] Downloading gcr.io/k8s-minikube/kicbase:v0.0.45@sha256:81df288595202a317b1a4dc2506ca2e4ed5f22373c19a441b88cfbf4b9867c85 to local cache
	I0909 10:43:44.579467   15416 image.go:63] Checking for gcr.io/k8s-minikube/kicbase:v0.0.45@sha256:81df288595202a317b1a4dc2506ca2e4ed5f22373c19a441b88cfbf4b9867c85 in local cache directory
	I0909 10:43:44.579566   15416 image.go:148] Writing gcr.io/k8s-minikube/kicbase:v0.0.45@sha256:81df288595202a317b1a4dc2506ca2e4ed5f22373c19a441b88cfbf4b9867c85 to local cache
	I0909 10:43:44.738494   15416 preload.go:118] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.20.0/preloaded-images-k8s-v18-v1.20.0-docker-overlay2-amd64.tar.lz4
	I0909 10:43:44.738517   15416 cache.go:56] Caching tarball of preloaded images
	I0909 10:43:44.738655   15416 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime docker
	I0909 10:43:44.740510   15416 out.go:97] Downloading Kubernetes v1.20.0 preload ...
	I0909 10:43:44.740523   15416 preload.go:236] getting checksum for preloaded-images-k8s-v18-v1.20.0-docker-overlay2-amd64.tar.lz4 ...
	I0909 10:43:44.847094   15416 download.go:107] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.20.0/preloaded-images-k8s-v18-v1.20.0-docker-overlay2-amd64.tar.lz4?checksum=md5:9a82241e9b8b4ad2b5cca73108f2c7a3 -> /home/jenkins/minikube-integration/19584-8635/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-docker-overlay2-amd64.tar.lz4
	
	
	* The control-plane node download-only-147133 host does not exist
	  To start a cluster, run: "minikube start -p download-only-147133"

-- /stdout --
aaa_download_only_test.go:185: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.20.0/LogsDuration (0.06s)
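
Note the asserted behavior above: "minikube logs" against a profile that was only downloaded (never started) exits with status 85, and the test counts that as a pass. A hedged sketch of checking for a specific exit code in Go — the command and expected code come from the log above; the helper itself is illustrative:

	package main

	import (
		"errors"
		"fmt"
		"os/exec"
	)

	// exitCode runs the command and returns its exit status, or -1 if none was reported.
	func exitCode(cmd *exec.Cmd) int {
		err := cmd.Run()
		if err == nil {
			return 0
		}
		var ee *exec.ExitError
		if errors.As(err, &ee) {
			return ee.ExitCode()
		}
		return -1
	}

	func main() {
		cmd := exec.Command("out/minikube-linux-amd64", "logs", "-p", "download-only-147133")
		if code := exitCode(cmd); code != 85 {
			fmt.Printf("expected exit status 85, got %d\n", code)
		}
	}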

TestDownloadOnly/v1.20.0/DeleteAll (0.2s)

=== RUN   TestDownloadOnly/v1.20.0/DeleteAll
aaa_download_only_test.go:197: (dbg) Run:  out/minikube-linux-amd64 delete --all
--- PASS: TestDownloadOnly/v1.20.0/DeleteAll (0.20s)

TestDownloadOnly/v1.20.0/DeleteAlwaysSucceeds (0.12s)

=== RUN   TestDownloadOnly/v1.20.0/DeleteAlwaysSucceeds
aaa_download_only_test.go:208: (dbg) Run:  out/minikube-linux-amd64 delete -p download-only-147133
--- PASS: TestDownloadOnly/v1.20.0/DeleteAlwaysSucceeds (0.12s)

TestDownloadOnly/v1.31.0/json-events (11.66s)

=== RUN   TestDownloadOnly/v1.31.0/json-events
aaa_download_only_test.go:81: (dbg) Run:  out/minikube-linux-amd64 start -o=json --download-only -p download-only-504937 --force --alsologtostderr --kubernetes-version=v1.31.0 --container-runtime=docker --driver=docker  --container-runtime=docker
aaa_download_only_test.go:81: (dbg) Done: out/minikube-linux-amd64 start -o=json --download-only -p download-only-504937 --force --alsologtostderr --kubernetes-version=v1.31.0 --container-runtime=docker --driver=docker  --container-runtime=docker: (11.65993413s)
--- PASS: TestDownloadOnly/v1.31.0/json-events (11.66s)

TestDownloadOnly/v1.31.0/preload-exists (0s)

=== RUN   TestDownloadOnly/v1.31.0/preload-exists
--- PASS: TestDownloadOnly/v1.31.0/preload-exists (0.00s)

TestDownloadOnly/v1.31.0/LogsDuration (0.06s)

=== RUN   TestDownloadOnly/v1.31.0/LogsDuration
aaa_download_only_test.go:184: (dbg) Run:  out/minikube-linux-amd64 logs -p download-only-504937
aaa_download_only_test.go:184: (dbg) Non-zero exit: out/minikube-linux-amd64 logs -p download-only-504937: exit status 85 (56.473031ms)

-- stdout --
	
	==> Audit <==
	|---------|--------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| Command |              Args              |       Profile        |  User   | Version |     Start Time      |      End Time       |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| start   | -o=json --download-only        | download-only-147133 | jenkins | v1.34.0 | 09 Sep 24 10:43 UTC |                     |
	|         | -p download-only-147133        |                      |         |         |                     |                     |
	|         | --force --alsologtostderr      |                      |         |         |                     |                     |
	|         | --kubernetes-version=v1.20.0   |                      |         |         |                     |                     |
	|         | --container-runtime=docker     |                      |         |         |                     |                     |
	|         | --driver=docker                |                      |         |         |                     |                     |
	|         | --container-runtime=docker     |                      |         |         |                     |                     |
	| delete  | --all                          | minikube             | jenkins | v1.34.0 | 09 Sep 24 10:44 UTC | 09 Sep 24 10:44 UTC |
	| delete  | -p download-only-147133        | download-only-147133 | jenkins | v1.34.0 | 09 Sep 24 10:44 UTC | 09 Sep 24 10:44 UTC |
	| start   | -o=json --download-only        | download-only-504937 | jenkins | v1.34.0 | 09 Sep 24 10:44 UTC |                     |
	|         | -p download-only-504937        |                      |         |         |                     |                     |
	|         | --force --alsologtostderr      |                      |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.0   |                      |         |         |                     |                     |
	|         | --container-runtime=docker     |                      |         |         |                     |                     |
	|         | --driver=docker                |                      |         |         |                     |                     |
	|         | --container-runtime=docker     |                      |         |         |                     |                     |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/09/09 10:44:01
	Running on machine: ubuntu-20-agent
	Binary: Built with gc go1.22.5 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0909 10:44:01.526458   15812 out.go:345] Setting OutFile to fd 1 ...
	I0909 10:44:01.526969   15812 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0909 10:44:01.526990   15812 out.go:358] Setting ErrFile to fd 2...
	I0909 10:44:01.526998   15812 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0909 10:44:01.527441   15812 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19584-8635/.minikube/bin
	I0909 10:44:01.528466   15812 out.go:352] Setting JSON to true
	I0909 10:44:01.529271   15812 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent","uptime":1574,"bootTime":1725877067,"procs":169,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1067-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0909 10:44:01.529330   15812 start.go:139] virtualization: kvm guest
	I0909 10:44:01.531841   15812 out.go:97] [download-only-504937] minikube v1.34.0 on Ubuntu 20.04 (kvm/amd64)
	I0909 10:44:01.532000   15812 notify.go:220] Checking for updates...
	I0909 10:44:01.533398   15812 out.go:169] MINIKUBE_LOCATION=19584
	I0909 10:44:01.534892   15812 out.go:169] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0909 10:44:01.536262   15812 out.go:169] KUBECONFIG=/home/jenkins/minikube-integration/19584-8635/kubeconfig
	I0909 10:44:01.537721   15812 out.go:169] MINIKUBE_HOME=/home/jenkins/minikube-integration/19584-8635/.minikube
	I0909 10:44:01.539094   15812 out.go:169] MINIKUBE_BIN=out/minikube-linux-amd64
	W0909 10:44:01.542345   15812 out.go:321] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I0909 10:44:01.542567   15812 driver.go:394] Setting default libvirt URI to qemu:///system
	I0909 10:44:01.564024   15812 docker.go:123] docker version: linux-27.2.1:Docker Engine - Community
	I0909 10:44:01.564098   15812 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0909 10:44:01.608999   15812 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:30 OomKillDisable:true NGoroutines:45 SystemTime:2024-09-09 10:44:01.599877456 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1067-gcp OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33647943680 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent Labels:[] ExperimentalBuild:false ServerVersion:27.2.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:472731909fa34bd7bc9c087e4c27943f9835f111 Expected:472731909fa34bd7bc9c087e4c27943f9835f111} RuncCommit:{ID:v1.1.13-0-g58aa920 Expected:v1.1.13-0-g58aa920} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.16.2] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.29.2] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I0909 10:44:01.609111   15812 docker.go:307] overlay module found
	I0909 10:44:01.610867   15812 out.go:97] Using the docker driver based on user configuration
	I0909 10:44:01.610898   15812 start.go:297] selected driver: docker
	I0909 10:44:01.610907   15812 start.go:901] validating driver "docker" against <nil>
	I0909 10:44:01.610991   15812 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0909 10:44:01.655172   15812 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:30 OomKillDisable:true NGoroutines:45 SystemTime:2024-09-09 10:44:01.647088689 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1067-gcp OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33647943680 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent Labels:[] ExperimentalBuild:false ServerVersion:27.2.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:472731909fa34bd7bc9c087e4c27943f9835f111 Expected:472731909fa34bd7bc9c087e4c27943f9835f111} RuncCommit:{ID:v1.1.13-0-g58aa920 Expected:v1.1.13-0-g58aa920} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.16.2] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.29.2] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I0909 10:44:01.655374   15812 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0909 10:44:01.655830   15812 start_flags.go:393] Using suggested 8000MB memory alloc based on sys=32089MB, container=32089MB
	I0909 10:44:01.655991   15812 start_flags.go:929] Wait components to verify : map[apiserver:true system_pods:true]
	I0909 10:44:01.657884   15812 out.go:169] Using Docker driver with root privileges
	I0909 10:44:01.659089   15812 cni.go:84] Creating CNI manager for ""
	I0909 10:44:01.659114   15812 cni.go:158] "docker" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0909 10:44:01.659133   15812 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0909 10:44:01.659200   15812 start.go:340] cluster config:
	{Name:download-only-504937 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.45@sha256:81df288595202a317b1a4dc2506ca2e4ed5f22373c19a441b88cfbf4b9867c85 Memory:8000 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:download-only-504937 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0909 10:44:01.660679   15812 out.go:97] Starting "download-only-504937" primary control-plane node in "download-only-504937" cluster
	I0909 10:44:01.660699   15812 cache.go:121] Beginning downloading kic base image for docker with docker
	I0909 10:44:01.662075   15812 out.go:97] Pulling base image v0.0.45 ...
	I0909 10:44:01.662096   15812 preload.go:131] Checking if preload exists for k8s version v1.31.0 and runtime docker
	I0909 10:44:01.662194   15812 image.go:79] Checking for gcr.io/k8s-minikube/kicbase:v0.0.45@sha256:81df288595202a317b1a4dc2506ca2e4ed5f22373c19a441b88cfbf4b9867c85 in local docker daemon
	I0909 10:44:01.678672   15812 cache.go:149] Downloading gcr.io/k8s-minikube/kicbase:v0.0.45@sha256:81df288595202a317b1a4dc2506ca2e4ed5f22373c19a441b88cfbf4b9867c85 to local cache
	I0909 10:44:01.678787   15812 image.go:63] Checking for gcr.io/k8s-minikube/kicbase:v0.0.45@sha256:81df288595202a317b1a4dc2506ca2e4ed5f22373c19a441b88cfbf4b9867c85 in local cache directory
	I0909 10:44:01.678805   15812 image.go:66] Found gcr.io/k8s-minikube/kicbase:v0.0.45@sha256:81df288595202a317b1a4dc2506ca2e4ed5f22373c19a441b88cfbf4b9867c85 in local cache directory, skipping pull
	I0909 10:44:01.678812   15812 image.go:135] gcr.io/k8s-minikube/kicbase:v0.0.45@sha256:81df288595202a317b1a4dc2506ca2e4ed5f22373c19a441b88cfbf4b9867c85 exists in cache, skipping pull
	I0909 10:44:01.678821   15812 cache.go:152] successfully saved gcr.io/k8s-minikube/kicbase:v0.0.45@sha256:81df288595202a317b1a4dc2506ca2e4ed5f22373c19a441b88cfbf4b9867c85 as a tarball
	I0909 10:44:02.141760   15812 preload.go:118] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.31.0/preloaded-images-k8s-v18-v1.31.0-docker-overlay2-amd64.tar.lz4
	I0909 10:44:02.141810   15812 cache.go:56] Caching tarball of preloaded images
	I0909 10:44:02.141975   15812 preload.go:131] Checking if preload exists for k8s version v1.31.0 and runtime docker
	I0909 10:44:02.143841   15812 out.go:97] Downloading Kubernetes v1.31.0 preload ...
	I0909 10:44:02.143861   15812 preload.go:236] getting checksum for preloaded-images-k8s-v18-v1.31.0-docker-overlay2-amd64.tar.lz4 ...
	I0909 10:44:02.247985   15812 download.go:107] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.31.0/preloaded-images-k8s-v18-v1.31.0-docker-overlay2-amd64.tar.lz4?checksum=md5:2dd98f97b896d7a4f012ee403b477cc8 -> /home/jenkins/minikube-integration/19584-8635/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-docker-overlay2-amd64.tar.lz4
	
	
	* The control-plane node download-only-504937 host does not exist
	  To start a cluster, run: "minikube start -p download-only-504937"

-- /stdout --
aaa_download_only_test.go:185: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.31.0/LogsDuration (0.06s)
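
The preload URL in the log above carries a "?checksum=md5:..." query, and the downloader validates the tarball against that digest. A minimal sketch of the same verification, assuming the tarball has already been downloaded to the working directory (file name and digest are the ones printed above):

	package main

	import (
		"crypto/md5"
		"encoding/hex"
		"fmt"
		"io"
		"os"
	)

	func main() {
		const want = "2dd98f97b896d7a4f012ee403b477cc8" // digest from the download URL above
		f, err := os.Open("preloaded-images-k8s-v18-v1.31.0-docker-overlay2-amd64.tar.lz4")
		if err != nil {
			fmt.Fprintln(os.Stderr, err)
			os.Exit(1)
		}
		defer f.Close()
		h := md5.New()
		if _, err := io.Copy(h, f); err != nil { // hash the file contents in streaming fashion
			fmt.Fprintln(os.Stderr, err)
			os.Exit(1)
		}
		if got := hex.EncodeToString(h.Sum(nil)); got != want {
			fmt.Printf("checksum mismatch: got %s, want %s\n", got, want)
		}
	}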

TestDownloadOnly/v1.31.0/DeleteAll (0.18s)

=== RUN   TestDownloadOnly/v1.31.0/DeleteAll
aaa_download_only_test.go:197: (dbg) Run:  out/minikube-linux-amd64 delete --all
--- PASS: TestDownloadOnly/v1.31.0/DeleteAll (0.18s)

TestDownloadOnly/v1.31.0/DeleteAlwaysSucceeds (0.12s)

=== RUN   TestDownloadOnly/v1.31.0/DeleteAlwaysSucceeds
aaa_download_only_test.go:208: (dbg) Run:  out/minikube-linux-amd64 delete -p download-only-504937
--- PASS: TestDownloadOnly/v1.31.0/DeleteAlwaysSucceeds (0.12s)

TestDownloadOnlyKic (0.96s)

=== RUN   TestDownloadOnlyKic
aaa_download_only_test.go:232: (dbg) Run:  out/minikube-linux-amd64 start --download-only -p download-docker-180363 --alsologtostderr --driver=docker  --container-runtime=docker
helpers_test.go:175: Cleaning up "download-docker-180363" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p download-docker-180363
--- PASS: TestDownloadOnlyKic (0.96s)
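
The "Checking for ... in local docker daemon" steps earlier in this report amount to probing the daemon for the kicbase image. A rough sketch of that probe, assuming docker is on PATH; the tag is the one pinned in the log, with the digest omitted here for brevity:

	package main

	import (
		"fmt"
		"os/exec"
	)

	func main() {
		ref := "gcr.io/k8s-minikube/kicbase:v0.0.45"
		// `docker image inspect` exits non-zero when the image is absent locally.
		if err := exec.Command("docker", "image", "inspect", ref).Run(); err != nil {
			fmt.Println("base image not in local daemon; it would be pulled or restored from cache")
			return
		}
		fmt.Println("base image already present in local daemon")
	}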

TestBinaryMirror (0.73s)

=== RUN   TestBinaryMirror
aaa_download_only_test.go:314: (dbg) Run:  out/minikube-linux-amd64 start --download-only -p binary-mirror-876583 --alsologtostderr --binary-mirror http://127.0.0.1:43487 --driver=docker  --container-runtime=docker
helpers_test.go:175: Cleaning up "binary-mirror-876583" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p binary-mirror-876583
--- PASS: TestBinaryMirror (0.73s)
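
--binary-mirror points minikube at an alternative HTTP origin for its Kubernetes binary downloads, which is why the test above only needs a listener on 127.0.0.1:43487. A sketch of a minimal mirror, assuming the binaries were pre-staged under ./mirror with the upstream path layout (that layout is an assumption, not something shown in this report):

	package main

	import (
		"log"
		"net/http"
	)

	func main() {
		// Serve pre-staged release binaries; minikube would then be started with
		// --binary-mirror http://127.0.0.1:43487 (port as in the test above).
		http.Handle("/", http.FileServer(http.Dir("./mirror")))
		log.Fatal(http.ListenAndServe("127.0.0.1:43487", nil))
	}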

TestOffline (74.23s)

=== RUN   TestOffline
=== PAUSE TestOffline

=== CONT  TestOffline
aab_offline_test.go:55: (dbg) Run:  out/minikube-linux-amd64 start -p offline-docker-955031 --alsologtostderr -v=1 --memory=2048 --wait=true --driver=docker  --container-runtime=docker
aab_offline_test.go:55: (dbg) Done: out/minikube-linux-amd64 start -p offline-docker-955031 --alsologtostderr -v=1 --memory=2048 --wait=true --driver=docker  --container-runtime=docker: (1m12.09031966s)
helpers_test.go:175: Cleaning up "offline-docker-955031" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p offline-docker-955031
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p offline-docker-955031: (2.135926627s)
--- PASS: TestOffline (74.23s)

TestAddons/PreSetup/EnablingAddonOnNonExistingCluster (0.05s)

=== RUN   TestAddons/PreSetup/EnablingAddonOnNonExistingCluster
=== PAUSE TestAddons/PreSetup/EnablingAddonOnNonExistingCluster

=== CONT  TestAddons/PreSetup/EnablingAddonOnNonExistingCluster
addons_test.go:1037: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p addons-271785
addons_test.go:1037: (dbg) Non-zero exit: out/minikube-linux-amd64 addons enable dashboard -p addons-271785: exit status 85 (45.211878ms)

-- stdout --
	* Profile "addons-271785" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p addons-271785"

-- /stdout --
--- PASS: TestAddons/PreSetup/EnablingAddonOnNonExistingCluster (0.05s)

TestAddons/PreSetup/DisablingAddonOnNonExistingCluster (0.05s)

=== RUN   TestAddons/PreSetup/DisablingAddonOnNonExistingCluster
=== PAUSE TestAddons/PreSetup/DisablingAddonOnNonExistingCluster

=== CONT  TestAddons/PreSetup/DisablingAddonOnNonExistingCluster
addons_test.go:1048: (dbg) Run:  out/minikube-linux-amd64 addons disable dashboard -p addons-271785
addons_test.go:1048: (dbg) Non-zero exit: out/minikube-linux-amd64 addons disable dashboard -p addons-271785: exit status 85 (47.197684ms)

-- stdout --
	* Profile "addons-271785" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p addons-271785"

-- /stdout --
--- PASS: TestAddons/PreSetup/DisablingAddonOnNonExistingCluster (0.05s)

TestAddons/Setup (209.34s)

=== RUN   TestAddons/Setup
addons_test.go:110: (dbg) Run:  out/minikube-linux-amd64 start -p addons-271785 --wait=true --memory=4000 --alsologtostderr --addons=registry --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=storage-provisioner-rancher --addons=nvidia-device-plugin --addons=yakd --addons=volcano --driver=docker  --container-runtime=docker --addons=ingress --addons=ingress-dns --addons=helm-tiller
addons_test.go:110: (dbg) Done: out/minikube-linux-amd64 start -p addons-271785 --wait=true --memory=4000 --alsologtostderr --addons=registry --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=storage-provisioner-rancher --addons=nvidia-device-plugin --addons=yakd --addons=volcano --driver=docker  --container-runtime=docker --addons=ingress --addons=ingress-dns --addons=helm-tiller: (3m29.341785115s)
--- PASS: TestAddons/Setup (209.34s)

TestAddons/serial/Volcano (40.51s)

=== RUN   TestAddons/serial/Volcano
addons_test.go:913: volcano-controller stabilized in 13.350745ms
addons_test.go:905: volcano-admission stabilized in 13.429308ms
addons_test.go:897: volcano-scheduler stabilized in 13.46217ms
addons_test.go:919: (dbg) TestAddons/serial/Volcano: waiting 6m0s for pods matching "app=volcano-scheduler" in namespace "volcano-system" ...
helpers_test.go:344: "volcano-scheduler-576bc46687-9qmt5" [9a884302-0c1d-4cce-ae20-52ef74e5103e] Running
addons_test.go:919: (dbg) TestAddons/serial/Volcano: app=volcano-scheduler healthy within 6.003667693s
addons_test.go:923: (dbg) TestAddons/serial/Volcano: waiting 6m0s for pods matching "app=volcano-admission" in namespace "volcano-system" ...
helpers_test.go:344: "volcano-admission-77d7d48b68-vzj2x" [b84b26c0-da2b-402b-b4ea-bfd23990486b] Running
addons_test.go:923: (dbg) TestAddons/serial/Volcano: app=volcano-admission healthy within 5.003500463s
addons_test.go:927: (dbg) TestAddons/serial/Volcano: waiting 6m0s for pods matching "app=volcano-controller" in namespace "volcano-system" ...
helpers_test.go:344: "volcano-controllers-56675bb4d5-pt7cc" [0edf6005-197c-4d65-bce9-fceb98b3f1b3] Running
addons_test.go:927: (dbg) TestAddons/serial/Volcano: app=volcano-controller healthy within 5.003026208s
addons_test.go:932: (dbg) Run:  kubectl --context addons-271785 delete -n volcano-system job volcano-admission-init
addons_test.go:938: (dbg) Run:  kubectl --context addons-271785 create -f testdata/vcjob.yaml
addons_test.go:946: (dbg) Run:  kubectl --context addons-271785 get vcjob -n my-volcano
addons_test.go:964: (dbg) TestAddons/serial/Volcano: waiting 3m0s for pods matching "volcano.sh/job-name=test-job" in namespace "my-volcano" ...
helpers_test.go:344: "test-job-nginx-0" [e09c3d43-de5b-45b6-8e08-f5d2366573d0] Pending
helpers_test.go:344: "test-job-nginx-0" [e09c3d43-de5b-45b6-8e08-f5d2366573d0] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
helpers_test.go:344: "test-job-nginx-0" [e09c3d43-de5b-45b6-8e08-f5d2366573d0] Running
addons_test.go:964: (dbg) TestAddons/serial/Volcano: volcano.sh/job-name=test-job healthy within 14.003667413s
addons_test.go:968: (dbg) Run:  out/minikube-linux-amd64 -p addons-271785 addons disable volcano --alsologtostderr -v=1
addons_test.go:968: (dbg) Done: out/minikube-linux-amd64 -p addons-271785 addons disable volcano --alsologtostderr -v=1: (10.160924459s)
--- PASS: TestAddons/serial/Volcano (40.51s)
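
The repeated "waiting 6m0s for pods matching ..." lines are a poll on a label selector until a pod reports Running. A rough equivalent built on kubectl and jsonpath — context, namespace, and selector are taken from the Volcano checks above, but the loop itself is a sketch, not the harness's actual implementation:

	package main

	import (
		"fmt"
		"os/exec"
		"strings"
		"time"
	)

	// waitForRunning polls until a pod matching selector reports phase Running.
	func waitForRunning(ctx, ns, selector string, timeout time.Duration) error {
		deadline := time.Now().Add(timeout)
		for time.Now().Before(deadline) {
			// Errors are ignored here: an empty result simply means "keep polling".
			out, _ := exec.Command("kubectl", "--context", ctx, "-n", ns,
				"get", "pods", "-l", selector,
				"-o", "jsonpath={.items[*].status.phase}").Output()
			if strings.Contains(string(out), "Running") {
				return nil
			}
			time.Sleep(2 * time.Second)
		}
		return fmt.Errorf("timed out waiting for %s in %s", selector, ns)
	}

	func main() {
		if err := waitForRunning("addons-271785", "volcano-system", "app=volcano-scheduler", 6*time.Minute); err != nil {
			fmt.Println(err)
		}
	}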

TestAddons/serial/GCPAuth/Namespaces (0.11s)

=== RUN   TestAddons/serial/GCPAuth/Namespaces
addons_test.go:656: (dbg) Run:  kubectl --context addons-271785 create ns new-namespace
addons_test.go:670: (dbg) Run:  kubectl --context addons-271785 get secret gcp-auth -n new-namespace
--- PASS: TestAddons/serial/GCPAuth/Namespaces (0.11s)

TestAddons/parallel/Ingress (19.51s)

=== RUN   TestAddons/parallel/Ingress
=== PAUSE TestAddons/parallel/Ingress

=== CONT  TestAddons/parallel/Ingress
addons_test.go:209: (dbg) Run:  kubectl --context addons-271785 wait --for=condition=ready --namespace=ingress-nginx pod --selector=app.kubernetes.io/component=controller --timeout=90s
addons_test.go:234: (dbg) Run:  kubectl --context addons-271785 replace --force -f testdata/nginx-ingress-v1.yaml
addons_test.go:247: (dbg) Run:  kubectl --context addons-271785 replace --force -f testdata/nginx-pod-svc.yaml
addons_test.go:252: (dbg) TestAddons/parallel/Ingress: waiting 8m0s for pods matching "run=nginx" in namespace "default" ...
helpers_test.go:344: "nginx" [3fcb828a-d7c2-4b70-8e2e-0877e80f6f92] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
helpers_test.go:344: "nginx" [3fcb828a-d7c2-4b70-8e2e-0877e80f6f92] Running
addons_test.go:252: (dbg) TestAddons/parallel/Ingress: run=nginx healthy within 10.00326195s
addons_test.go:264: (dbg) Run:  out/minikube-linux-amd64 -p addons-271785 ssh "curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'"
addons_test.go:288: (dbg) Run:  kubectl --context addons-271785 replace --force -f testdata/ingress-dns-example-v1.yaml
addons_test.go:293: (dbg) Run:  out/minikube-linux-amd64 -p addons-271785 ip
addons_test.go:299: (dbg) Run:  nslookup hello-john.test 192.168.49.2
addons_test.go:308: (dbg) Run:  out/minikube-linux-amd64 -p addons-271785 addons disable ingress-dns --alsologtostderr -v=1
addons_test.go:313: (dbg) Run:  out/minikube-linux-amd64 -p addons-271785 addons disable ingress --alsologtostderr -v=1
addons_test.go:313: (dbg) Done: out/minikube-linux-amd64 -p addons-271785 addons disable ingress --alsologtostderr -v=1: (7.542121268s)
--- PASS: TestAddons/parallel/Ingress (19.51s)
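
The ingress probe above curls the node with an explicit Host header so nginx can route by virtual host. The same request expressed in Go, noting that the override belongs on req.Host rather than the header map; this assumes the node IP printed in this report (192.168.49.2) is reachable from wherever the sketch runs:

	package main

	import (
		"fmt"
		"net/http"
	)

	func main() {
		req, err := http.NewRequest("GET", "http://192.168.49.2/", nil)
		if err != nil {
			panic(err)
		}
		req.Host = "nginx.example.com" // routes the request to the nginx Ingress rule
		resp, err := http.DefaultClient.Do(req)
		if err != nil {
			fmt.Println("request failed:", err)
			return
		}
		defer resp.Body.Close()
		fmt.Println("status:", resp.Status)
	}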

TestAddons/parallel/InspektorGadget (12.01s)

=== RUN   TestAddons/parallel/InspektorGadget
=== PAUSE TestAddons/parallel/InspektorGadget

=== CONT  TestAddons/parallel/InspektorGadget
addons_test.go:848: (dbg) TestAddons/parallel/InspektorGadget: waiting 8m0s for pods matching "k8s-app=gadget" in namespace "gadget" ...
helpers_test.go:344: "gadget-qzqq9" [227869d2-13c5-4b55-b371-aed16e3b734e] Running / Ready:ContainersNotReady (containers with unready status: [gadget]) / ContainersReady:ContainersNotReady (containers with unready status: [gadget])
addons_test.go:848: (dbg) TestAddons/parallel/InspektorGadget: k8s-app=gadget healthy within 6.003406682s
addons_test.go:851: (dbg) Run:  out/minikube-linux-amd64 addons disable inspektor-gadget -p addons-271785
addons_test.go:851: (dbg) Done: out/minikube-linux-amd64 addons disable inspektor-gadget -p addons-271785: (6.010304782s)
--- PASS: TestAddons/parallel/InspektorGadget (12.01s)

TestAddons/parallel/MetricsServer (5.68s)

=== RUN   TestAddons/parallel/MetricsServer
=== PAUSE TestAddons/parallel/MetricsServer

=== CONT  TestAddons/parallel/MetricsServer
addons_test.go:409: metrics-server stabilized in 4.237621ms
addons_test.go:411: (dbg) TestAddons/parallel/MetricsServer: waiting 6m0s for pods matching "k8s-app=metrics-server" in namespace "kube-system" ...
helpers_test.go:344: "metrics-server-84c5f94fbc-jmhrl" [b5e8b788-3dd6-4f12-ad84-911eedfe943d] Running
addons_test.go:411: (dbg) TestAddons/parallel/MetricsServer: k8s-app=metrics-server healthy within 5.003205407s
addons_test.go:417: (dbg) Run:  kubectl --context addons-271785 top pods -n kube-system
addons_test.go:434: (dbg) Run:  out/minikube-linux-amd64 -p addons-271785 addons disable metrics-server --alsologtostderr -v=1
--- PASS: TestAddons/parallel/MetricsServer (5.68s)

TestAddons/parallel/HelmTiller (13.87s)

=== RUN   TestAddons/parallel/HelmTiller
=== PAUSE TestAddons/parallel/HelmTiller

=== CONT  TestAddons/parallel/HelmTiller
addons_test.go:458: tiller-deploy stabilized in 2.5595ms
addons_test.go:460: (dbg) TestAddons/parallel/HelmTiller: waiting 6m0s for pods matching "app=helm" in namespace "kube-system" ...
helpers_test.go:344: "tiller-deploy-b48cc5f79-jsllm" [e269fd50-e1f6-4fe0-a330-8302e46d81af] Running
addons_test.go:460: (dbg) TestAddons/parallel/HelmTiller: app=helm healthy within 5.003231756s
addons_test.go:475: (dbg) Run:  kubectl --context addons-271785 run --rm helm-test --restart=Never --image=docker.io/alpine/helm:2.16.3 -it --namespace=kube-system -- version
addons_test.go:475: (dbg) Done: kubectl --context addons-271785 run --rm helm-test --restart=Never --image=docker.io/alpine/helm:2.16.3 -it --namespace=kube-system -- version: (5.230487115s)
addons_test.go:480: kubectl --context addons-271785 run --rm helm-test --restart=Never --image=docker.io/alpine/helm:2.16.3 -it --namespace=kube-system -- version: unexpected stderr: Unable to use a TTY - input is not a terminal or the right kind of file
If you don't see a command prompt, try pressing enter.
warning: couldn't attach to pod/helm-test, falling back to streaming logs: error stream protocol error: unknown error
addons_test.go:475: (dbg) Run:  kubectl --context addons-271785 run --rm helm-test --restart=Never --image=docker.io/alpine/helm:2.16.3 -it --namespace=kube-system -- version
addons_test.go:475: (dbg) Done: kubectl --context addons-271785 run --rm helm-test --restart=Never --image=docker.io/alpine/helm:2.16.3 -it --namespace=kube-system -- version: (2.635132168s)
addons_test.go:492: (dbg) Run:  out/minikube-linux-amd64 -p addons-271785 addons disable helm-tiller --alsologtostderr -v=1
--- PASS: TestAddons/parallel/HelmTiller (13.87s)

TestAddons/parallel/CSI (50.61s)

=== RUN   TestAddons/parallel/CSI
=== PAUSE TestAddons/parallel/CSI

=== CONT  TestAddons/parallel/CSI
addons_test.go:567: csi-hostpath-driver pods stabilized in 4.352046ms
addons_test.go:570: (dbg) Run:  kubectl --context addons-271785 create -f testdata/csi-hostpath-driver/pvc.yaml
addons_test.go:575: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pvc "hpvc" in namespace "default" ...
helpers_test.go:394: (dbg) Run:  kubectl --context addons-271785 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-271785 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-271785 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-271785 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-271785 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-271785 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-271785 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-271785 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-271785 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-271785 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-271785 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-271785 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-271785 get pvc hpvc -o jsonpath={.status.phase} -n default
addons_test.go:580: (dbg) Run:  kubectl --context addons-271785 create -f testdata/csi-hostpath-driver/pv-pod.yaml
addons_test.go:585: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pods matching "app=task-pv-pod" in namespace "default" ...
helpers_test.go:344: "task-pv-pod" [b9d18ad8-1f19-44ea-88d2-790b088c2b21] Pending
helpers_test.go:344: "task-pv-pod" [b9d18ad8-1f19-44ea-88d2-790b088c2b21] Pending / Ready:ContainersNotReady (containers with unready status: [task-pv-container]) / ContainersReady:ContainersNotReady (containers with unready status: [task-pv-container])
helpers_test.go:344: "task-pv-pod" [b9d18ad8-1f19-44ea-88d2-790b088c2b21] Running
addons_test.go:585: (dbg) TestAddons/parallel/CSI: app=task-pv-pod healthy within 7.003957078s
addons_test.go:590: (dbg) Run:  kubectl --context addons-271785 create -f testdata/csi-hostpath-driver/snapshot.yaml
addons_test.go:595: (dbg) TestAddons/parallel/CSI: waiting 6m0s for volume snapshot "new-snapshot-demo" in namespace "default" ...
helpers_test.go:419: (dbg) Run:  kubectl --context addons-271785 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
helpers_test.go:419: (dbg) Run:  kubectl --context addons-271785 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
addons_test.go:600: (dbg) Run:  kubectl --context addons-271785 delete pod task-pv-pod
addons_test.go:606: (dbg) Run:  kubectl --context addons-271785 delete pvc hpvc
addons_test.go:612: (dbg) Run:  kubectl --context addons-271785 create -f testdata/csi-hostpath-driver/pvc-restore.yaml
addons_test.go:617: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pvc "hpvc-restore" in namespace "default" ...
helpers_test.go:394: (dbg) Run:  kubectl --context addons-271785 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-271785 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-271785 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-271785 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-271785 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-271785 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-271785 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-271785 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-271785 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-271785 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-271785 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-271785 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-271785 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
addons_test.go:622: (dbg) Run:  kubectl --context addons-271785 create -f testdata/csi-hostpath-driver/pv-pod-restore.yaml
addons_test.go:627: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pods matching "app=task-pv-pod-restore" in namespace "default" ...
helpers_test.go:344: "task-pv-pod-restore" [77518749-2426-4a6a-9856-ec5aea836613] Pending
helpers_test.go:344: "task-pv-pod-restore" [77518749-2426-4a6a-9856-ec5aea836613] Pending / Ready:ContainersNotReady (containers with unready status: [task-pv-container]) / ContainersReady:ContainersNotReady (containers with unready status: [task-pv-container])
helpers_test.go:344: "task-pv-pod-restore" [77518749-2426-4a6a-9856-ec5aea836613] Running
addons_test.go:627: (dbg) TestAddons/parallel/CSI: app=task-pv-pod-restore healthy within 8.00390008s
addons_test.go:632: (dbg) Run:  kubectl --context addons-271785 delete pod task-pv-pod-restore
addons_test.go:632: (dbg) Done: kubectl --context addons-271785 delete pod task-pv-pod-restore: (1.287421591s)
addons_test.go:636: (dbg) Run:  kubectl --context addons-271785 delete pvc hpvc-restore
addons_test.go:640: (dbg) Run:  kubectl --context addons-271785 delete volumesnapshot new-snapshot-demo
addons_test.go:644: (dbg) Run:  out/minikube-linux-amd64 -p addons-271785 addons disable csi-hostpath-driver --alsologtostderr -v=1
addons_test.go:644: (dbg) Done: out/minikube-linux-amd64 -p addons-271785 addons disable csi-hostpath-driver --alsologtostderr -v=1: (6.474194289s)
addons_test.go:648: (dbg) Run:  out/minikube-linux-amd64 -p addons-271785 addons disable volumesnapshots --alsologtostderr -v=1
--- PASS: TestAddons/parallel/CSI (50.61s)
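
The long runs of "get pvc ... -o jsonpath={.status.phase}" above are a poll until the claim leaves Pending. A compact version of that loop — context, claim name, and jsonpath expression are exactly the ones in the log; the retry cadence is an assumption:

	package main

	import (
		"fmt"
		"os/exec"
		"strings"
		"time"
	)

	func main() {
		deadline := time.Now().Add(6 * time.Minute)
		for time.Now().Before(deadline) {
			// Errors are ignored: an empty phase just means the claim is not ready yet.
			out, _ := exec.Command("kubectl", "--context", "addons-271785",
				"get", "pvc", "hpvc", "-n", "default",
				"-o", "jsonpath={.status.phase}").Output()
			if strings.TrimSpace(string(out)) == "Bound" {
				fmt.Println("pvc hpvc is Bound")
				return
			}
			time.Sleep(2 * time.Second)
		}
		fmt.Println("timed out waiting for pvc hpvc to bind")
	}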

                                                
                                    
TestAddons/parallel/Headlamp (18.23s)
=== RUN   TestAddons/parallel/Headlamp
=== PAUSE TestAddons/parallel/Headlamp
=== CONT  TestAddons/parallel/Headlamp
addons_test.go:830: (dbg) Run:  out/minikube-linux-amd64 addons enable headlamp -p addons-271785 --alsologtostderr -v=1
addons_test.go:835: (dbg) TestAddons/parallel/Headlamp: waiting 8m0s for pods matching "app.kubernetes.io/name=headlamp" in namespace "headlamp" ...
helpers_test.go:344: "headlamp-57fb76fcdb-7dl58" [0970e43a-4430-4f14-8051-91e6901072c1] Pending / Ready:ContainersNotReady (containers with unready status: [headlamp]) / ContainersReady:ContainersNotReady (containers with unready status: [headlamp])
helpers_test.go:344: "headlamp-57fb76fcdb-7dl58" [0970e43a-4430-4f14-8051-91e6901072c1] Running
addons_test.go:835: (dbg) TestAddons/parallel/Headlamp: app.kubernetes.io/name=headlamp healthy within 12.003289473s
addons_test.go:839: (dbg) Run:  out/minikube-linux-amd64 -p addons-271785 addons disable headlamp --alsologtostderr -v=1
addons_test.go:839: (dbg) Done: out/minikube-linux-amd64 -p addons-271785 addons disable headlamp --alsologtostderr -v=1: (5.546409108s)
--- PASS: TestAddons/parallel/Headlamp (18.23s)
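Note: the readiness polling done by helpers_test.go:344 can be reproduced by hand with kubectl wait, using the same label selector and the test's 8m budget:

kubectl --context addons-271785 -n headlamp wait pod \
  -l app.kubernetes.io/name=headlamp --for=condition=Ready --timeout=8m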

                                                
                                    
TestAddons/parallel/CloudSpanner (5.41s)
=== RUN   TestAddons/parallel/CloudSpanner
=== PAUSE TestAddons/parallel/CloudSpanner
=== CONT  TestAddons/parallel/CloudSpanner
addons_test.go:867: (dbg) TestAddons/parallel/CloudSpanner: waiting 6m0s for pods matching "app=cloud-spanner-emulator" in namespace "default" ...
helpers_test.go:344: "cloud-spanner-emulator-769b77f747-c2rnh" [af834e1e-0cb0-4b7d-a17e-7bdfa914ff02] Running
addons_test.go:867: (dbg) TestAddons/parallel/CloudSpanner: app=cloud-spanner-emulator healthy within 5.003025064s
addons_test.go:870: (dbg) Run:  out/minikube-linux-amd64 addons disable cloud-spanner -p addons-271785
--- PASS: TestAddons/parallel/CloudSpanner (5.41s)

                                                
                                    
TestAddons/parallel/LocalPath (54.04s)
=== RUN   TestAddons/parallel/LocalPath
=== PAUSE TestAddons/parallel/LocalPath
=== CONT  TestAddons/parallel/LocalPath
addons_test.go:982: (dbg) Run:  kubectl --context addons-271785 apply -f testdata/storage-provisioner-rancher/pvc.yaml
addons_test.go:988: (dbg) Run:  kubectl --context addons-271785 apply -f testdata/storage-provisioner-rancher/pod.yaml
addons_test.go:992: (dbg) TestAddons/parallel/LocalPath: waiting 5m0s for pvc "test-pvc" in namespace "default" ...
helpers_test.go:394: (dbg) Run:  kubectl --context addons-271785 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-271785 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-271785 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-271785 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-271785 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-271785 get pvc test-pvc -o jsonpath={.status.phase} -n default
addons_test.go:995: (dbg) TestAddons/parallel/LocalPath: waiting 3m0s for pods matching "run=test-local-path" in namespace "default" ...
helpers_test.go:344: "test-local-path" [53fc8626-3b64-4944-9ff0-3636b46dde94] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:344: "test-local-path" [53fc8626-3b64-4944-9ff0-3636b46dde94] Pending / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
helpers_test.go:344: "test-local-path" [53fc8626-3b64-4944-9ff0-3636b46dde94] Succeeded / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
addons_test.go:995: (dbg) TestAddons/parallel/LocalPath: run=test-local-path healthy within 5.003647276s
addons_test.go:1000: (dbg) Run:  kubectl --context addons-271785 get pvc test-pvc -o=json
addons_test.go:1009: (dbg) Run:  out/minikube-linux-amd64 -p addons-271785 ssh "cat /opt/local-path-provisioner/pvc-88f0cab2-ac8e-4b40-842d-f0e3d852d155_default_test-pvc/file1"
addons_test.go:1021: (dbg) Run:  kubectl --context addons-271785 delete pod test-local-path
addons_test.go:1025: (dbg) Run:  kubectl --context addons-271785 delete pvc test-pvc
addons_test.go:1029: (dbg) Run:  out/minikube-linux-amd64 -p addons-271785 addons disable storage-provisioner-rancher --alsologtostderr -v=1
addons_test.go:1029: (dbg) Done: out/minikube-linux-amd64 -p addons-271785 addons disable storage-provisioner-rancher --alsologtostderr -v=1: (43.165555469s)
--- PASS: TestAddons/parallel/LocalPath (54.04s)
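Note: the storage-provisioner-rancher testdata is not inlined here. An equivalent PVC/pod pair would bind a claim on the local-path storage class and write the file1 that the ssh step above cats back (image, size and file content are assumptions; the names and the run=test-local-path label come from the log):

kubectl --context addons-271785 apply -f - <<'EOF'
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: test-pvc
spec:
  storageClassName: local-path
  accessModes: ["ReadWriteOnce"]
  resources:
    requests:
      storage: 64Mi                                          # assumption
---
apiVersion: v1
kind: Pod
metadata:
  name: test-local-path
  labels:
    run: test-local-path
spec:
  restartPolicy: Never
  containers:
  - name: busybox
    image: busybox
    command: ["sh", "-c", "echo local-path > /test/file1"]   # assumption: any content works
    volumeMounts:
    - name: data
      mountPath: /test
  volumes:
  - name: data
    persistentVolumeClaim:
      claimName: test-pvc
EOF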

                                                
                                    
TestAddons/parallel/NvidiaDevicePlugin (5.39s)
=== RUN   TestAddons/parallel/NvidiaDevicePlugin
=== PAUSE TestAddons/parallel/NvidiaDevicePlugin
=== CONT  TestAddons/parallel/NvidiaDevicePlugin
addons_test.go:1061: (dbg) TestAddons/parallel/NvidiaDevicePlugin: waiting 6m0s for pods matching "name=nvidia-device-plugin-ds" in namespace "kube-system" ...
helpers_test.go:344: "nvidia-device-plugin-daemonset-tdngv" [e84a8fea-2be6-41b4-a429-3b434f6fcb8a] Running
addons_test.go:1061: (dbg) TestAddons/parallel/NvidiaDevicePlugin: name=nvidia-device-plugin-ds healthy within 5.004210591s
addons_test.go:1064: (dbg) Run:  out/minikube-linux-amd64 addons disable nvidia-device-plugin -p addons-271785
--- PASS: TestAddons/parallel/NvidiaDevicePlugin (5.39s)
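Note: the plugin advertises GPUs as the extended resource nvidia.com/gpu; the allocatable count can be read back with a jsonpath query (on this CPU-only runner the field would simply be absent):

kubectl --context addons-271785 get nodes \
  -o jsonpath='{.items[0].status.allocatable.nvidia\.com/gpu}'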

                                                
                                    
TestAddons/parallel/Yakd (10.57s)
=== RUN   TestAddons/parallel/Yakd
=== PAUSE TestAddons/parallel/Yakd
=== CONT  TestAddons/parallel/Yakd
addons_test.go:1072: (dbg) TestAddons/parallel/Yakd: waiting 2m0s for pods matching "app.kubernetes.io/name=yakd-dashboard" in namespace "yakd-dashboard" ...
helpers_test.go:344: "yakd-dashboard-67d98fc6b-h4wt6" [22782243-f9d4-46a7-9d91-dfdfc00352a8] Running
addons_test.go:1072: (dbg) TestAddons/parallel/Yakd: app.kubernetes.io/name=yakd-dashboard healthy within 5.004205037s
addons_test.go:1076: (dbg) Run:  out/minikube-linux-amd64 -p addons-271785 addons disable yakd --alsologtostderr -v=1
addons_test.go:1076: (dbg) Done: out/minikube-linux-amd64 -p addons-271785 addons disable yakd --alsologtostderr -v=1: (5.564896803s)
--- PASS: TestAddons/parallel/Yakd (10.57s)

                                                
                                    
TestAddons/StoppedEnableDisable (11.06s)
=== RUN   TestAddons/StoppedEnableDisable
addons_test.go:174: (dbg) Run:  out/minikube-linux-amd64 stop -p addons-271785
addons_test.go:174: (dbg) Done: out/minikube-linux-amd64 stop -p addons-271785: (10.833797662s)
addons_test.go:178: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p addons-271785
addons_test.go:182: (dbg) Run:  out/minikube-linux-amd64 addons disable dashboard -p addons-271785
addons_test.go:187: (dbg) Run:  out/minikube-linux-amd64 addons disable gvisor -p addons-271785
--- PASS: TestAddons/StoppedEnableDisable (11.06s)

                                                
                                    
TestCertOptions (32.38s)
=== RUN   TestCertOptions
=== PAUSE TestCertOptions
=== CONT  TestCertOptions
cert_options_test.go:49: (dbg) Run:  out/minikube-linux-amd64 start -p cert-options-609276 --memory=2048 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=docker  --container-runtime=docker
cert_options_test.go:49: (dbg) Done: out/minikube-linux-amd64 start -p cert-options-609276 --memory=2048 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=docker  --container-runtime=docker: (27.279628901s)
cert_options_test.go:60: (dbg) Run:  out/minikube-linux-amd64 -p cert-options-609276 ssh "openssl x509 -text -noout -in /var/lib/minikube/certs/apiserver.crt"
cert_options_test.go:88: (dbg) Run:  kubectl --context cert-options-609276 config view
cert_options_test.go:100: (dbg) Run:  out/minikube-linux-amd64 ssh -p cert-options-609276 -- "sudo cat /etc/kubernetes/admin.conf"
helpers_test.go:175: Cleaning up "cert-options-609276" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p cert-options-609276
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p cert-options-609276: (4.483046193s)
--- PASS: TestCertOptions (32.38s)
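Note: the openssl step above is where the custom --apiserver-ips/--apiserver-names are actually asserted; run by hand, the Subject Alternative Name block of the dumped certificate is the part to inspect:

out/minikube-linux-amd64 -p cert-options-609276 ssh \
  "openssl x509 -text -noout -in /var/lib/minikube/certs/apiserver.crt" \
  | grep -A1 'Subject Alternative Name'
# expect 192.168.15.15 and www.google.com among the IP/DNS entries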

                                                
                                    
TestCertExpiration (231.58s)
=== RUN   TestCertExpiration
=== PAUSE TestCertExpiration
=== CONT  TestCertExpiration
cert_options_test.go:123: (dbg) Run:  out/minikube-linux-amd64 start -p cert-expiration-222088 --memory=2048 --cert-expiration=3m --driver=docker  --container-runtime=docker
cert_options_test.go:123: (dbg) Done: out/minikube-linux-amd64 start -p cert-expiration-222088 --memory=2048 --cert-expiration=3m --driver=docker  --container-runtime=docker: (29.301742097s)
cert_options_test.go:131: (dbg) Run:  out/minikube-linux-amd64 start -p cert-expiration-222088 --memory=2048 --cert-expiration=8760h --driver=docker  --container-runtime=docker
cert_options_test.go:131: (dbg) Done: out/minikube-linux-amd64 start -p cert-expiration-222088 --memory=2048 --cert-expiration=8760h --driver=docker  --container-runtime=docker: (20.107204653s)
helpers_test.go:175: Cleaning up "cert-expiration-222088" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p cert-expiration-222088
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p cert-expiration-222088: (2.165457426s)
--- PASS: TestCertExpiration (231.58s)
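Note: the second start re-issues certificates with --cert-expiration=8760h (one year); the new lifetime can be checked directly on the node:

out/minikube-linux-amd64 -p cert-expiration-222088 ssh \
  "openssl x509 -noout -enddate -in /var/lib/minikube/certs/apiserver.crt"
# notAfter should now sit roughly one year out instead of 3m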

                                                
                                    
TestDockerFlags (32.31s)
=== RUN   TestDockerFlags
=== PAUSE TestDockerFlags
=== CONT  TestDockerFlags
docker_test.go:51: (dbg) Run:  out/minikube-linux-amd64 start -p docker-flags-195308 --cache-images=false --memory=2048 --install-addons=false --wait=false --docker-env=FOO=BAR --docker-env=BAZ=BAT --docker-opt=debug --docker-opt=icc=true --alsologtostderr -v=5 --driver=docker  --container-runtime=docker
docker_test.go:51: (dbg) Done: out/minikube-linux-amd64 start -p docker-flags-195308 --cache-images=false --memory=2048 --install-addons=false --wait=false --docker-env=FOO=BAR --docker-env=BAZ=BAT --docker-opt=debug --docker-opt=icc=true --alsologtostderr -v=5 --driver=docker  --container-runtime=docker: (29.57143385s)
docker_test.go:56: (dbg) Run:  out/minikube-linux-amd64 -p docker-flags-195308 ssh "sudo systemctl show docker --property=Environment --no-pager"
docker_test.go:67: (dbg) Run:  out/minikube-linux-amd64 -p docker-flags-195308 ssh "sudo systemctl show docker --property=ExecStart --no-pager"
helpers_test.go:175: Cleaning up "docker-flags-195308" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p docker-flags-195308
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p docker-flags-195308: (2.135249083s)
--- PASS: TestDockerFlags (32.31s)
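Note: the two systemctl show calls are the assertions here; they verify that --docker-env and --docker-opt were wired through to the dockerd unit inside the node. Run manually, the expected shape of the output is roughly as follows (an assumption, since it is not captured in this log):

out/minikube-linux-amd64 -p docker-flags-195308 ssh \
  "sudo systemctl show docker --property=Environment --no-pager"
# Environment=FOO=BAR BAZ=BAT   <- from the two --docker-env flags
out/minikube-linux-amd64 -p docker-flags-195308 ssh \
  "sudo systemctl show docker --property=ExecStart --no-pager"
# ExecStart should carry --debug and --icc=true from the --docker-opt flags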

                                                
                                    
TestForceSystemdFlag (35.12s)
=== RUN   TestForceSystemdFlag
=== PAUSE TestForceSystemdFlag
=== CONT  TestForceSystemdFlag
docker_test.go:91: (dbg) Run:  out/minikube-linux-amd64 start -p force-systemd-flag-383629 --memory=2048 --force-systemd --alsologtostderr -v=5 --driver=docker  --container-runtime=docker
docker_test.go:91: (dbg) Done: out/minikube-linux-amd64 start -p force-systemd-flag-383629 --memory=2048 --force-systemd --alsologtostderr -v=5 --driver=docker  --container-runtime=docker: (31.103909383s)
docker_test.go:110: (dbg) Run:  out/minikube-linux-amd64 -p force-systemd-flag-383629 ssh "docker info --format {{.CgroupDriver}}"
helpers_test.go:175: Cleaning up "force-systemd-flag-383629" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p force-systemd-flag-383629
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p force-systemd-flag-383629: (3.685481406s)
--- PASS: TestForceSystemdFlag (35.12s)
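Note: the pass condition is simply that Docker inside the node reports the systemd cgroup driver:

out/minikube-linux-amd64 -p force-systemd-flag-383629 ssh \
  "docker info --format {{.CgroupDriver}}"
# expected: systemd (cgroupfs would mean --force-systemd was not honored)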

                                                
                                    
TestForceSystemdEnv (37.67s)
=== RUN   TestForceSystemdEnv
=== PAUSE TestForceSystemdEnv
=== CONT  TestForceSystemdEnv
docker_test.go:155: (dbg) Run:  out/minikube-linux-amd64 start -p force-systemd-env-174896 --memory=2048 --alsologtostderr -v=5 --driver=docker  --container-runtime=docker
docker_test.go:155: (dbg) Done: out/minikube-linux-amd64 start -p force-systemd-env-174896 --memory=2048 --alsologtostderr -v=5 --driver=docker  --container-runtime=docker: (35.066336119s)
docker_test.go:110: (dbg) Run:  out/minikube-linux-amd64 -p force-systemd-env-174896 ssh "docker info --format {{.CgroupDriver}}"
helpers_test.go:175: Cleaning up "force-systemd-env-174896" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p force-systemd-env-174896
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p force-systemd-env-174896: (2.260504583s)
--- PASS: TestForceSystemdEnv (37.67s)
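Note: this variant drives the same cgroup-driver check through the environment rather than a flag, presumably the equivalent of (the variable name is inferred from the MINIKUBE_FORCE_SYSTEMD= entry printed elsewhere in this log):

MINIKUBE_FORCE_SYSTEMD=true out/minikube-linux-amd64 start \
  -p force-systemd-env-174896 --memory=2048 --driver=docker --container-runtime=docker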

                                                
                                    
TestKVMDriverInstallOrUpdate (5.08s)
=== RUN   TestKVMDriverInstallOrUpdate
=== PAUSE TestKVMDriverInstallOrUpdate
=== CONT  TestKVMDriverInstallOrUpdate
--- PASS: TestKVMDriverInstallOrUpdate (5.08s)

                                                
                                    
TestErrorSpam/setup (20.1s)
=== RUN   TestErrorSpam/setup
error_spam_test.go:81: (dbg) Run:  out/minikube-linux-amd64 start -p nospam-026652 -n=1 --memory=2250 --wait=false --log_dir=/tmp/nospam-026652 --driver=docker  --container-runtime=docker
error_spam_test.go:81: (dbg) Done: out/minikube-linux-amd64 start -p nospam-026652 -n=1 --memory=2250 --wait=false --log_dir=/tmp/nospam-026652 --driver=docker  --container-runtime=docker: (20.101746838s)
--- PASS: TestErrorSpam/setup (20.10s)

                                                
                                    
TestErrorSpam/start (0.54s)
=== RUN   TestErrorSpam/start
error_spam_test.go:216: Cleaning up 1 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-026652 --log_dir /tmp/nospam-026652 start --dry-run
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-026652 --log_dir /tmp/nospam-026652 start --dry-run
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-amd64 -p nospam-026652 --log_dir /tmp/nospam-026652 start --dry-run
--- PASS: TestErrorSpam/start (0.54s)

                                                
                                    
TestErrorSpam/status (0.83s)
=== RUN   TestErrorSpam/status
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-026652 --log_dir /tmp/nospam-026652 status
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-026652 --log_dir /tmp/nospam-026652 status
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-amd64 -p nospam-026652 --log_dir /tmp/nospam-026652 status
--- PASS: TestErrorSpam/status (0.83s)

                                                
                                    
TestErrorSpam/pause (1.1s)
=== RUN   TestErrorSpam/pause
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-026652 --log_dir /tmp/nospam-026652 pause
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-026652 --log_dir /tmp/nospam-026652 pause
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-amd64 -p nospam-026652 --log_dir /tmp/nospam-026652 pause
--- PASS: TestErrorSpam/pause (1.10s)

                                                
                                    
TestErrorSpam/unpause (1.45s)
=== RUN   TestErrorSpam/unpause
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-026652 --log_dir /tmp/nospam-026652 unpause
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-026652 --log_dir /tmp/nospam-026652 unpause
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-amd64 -p nospam-026652 --log_dir /tmp/nospam-026652 unpause
--- PASS: TestErrorSpam/unpause (1.45s)

                                                
                                    
TestErrorSpam/stop (10.85s)
=== RUN   TestErrorSpam/stop
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-026652 --log_dir /tmp/nospam-026652 stop
error_spam_test.go:159: (dbg) Done: out/minikube-linux-amd64 -p nospam-026652 --log_dir /tmp/nospam-026652 stop: (10.678780231s)
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-026652 --log_dir /tmp/nospam-026652 stop
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-amd64 -p nospam-026652 --log_dir /tmp/nospam-026652 stop
--- PASS: TestErrorSpam/stop (10.85s)

                                                
                                    
TestFunctional/serial/CopySyncFile (0s)
=== RUN   TestFunctional/serial/CopySyncFile
functional_test.go:1855: local sync path: /home/jenkins/minikube-integration/19584-8635/.minikube/files/etc/test/nested/copy/15404/hosts
--- PASS: TestFunctional/serial/CopySyncFile (0.00s)

                                                
                                    
TestFunctional/serial/StartWithProxy (59.82s)
=== RUN   TestFunctional/serial/StartWithProxy
functional_test.go:2234: (dbg) Run:  out/minikube-linux-amd64 start -p functional-598740 --memory=4000 --apiserver-port=8441 --wait=all --driver=docker  --container-runtime=docker
functional_test.go:2234: (dbg) Done: out/minikube-linux-amd64 start -p functional-598740 --memory=4000 --apiserver-port=8441 --wait=all --driver=docker  --container-runtime=docker: (59.814685599s)
--- PASS: TestFunctional/serial/StartWithProxy (59.82s)

                                                
                                    
TestFunctional/serial/AuditLog (0s)
=== RUN   TestFunctional/serial/AuditLog
--- PASS: TestFunctional/serial/AuditLog (0.00s)

                                                
                                    
TestFunctional/serial/SoftStart (37.37s)
=== RUN   TestFunctional/serial/SoftStart
functional_test.go:659: (dbg) Run:  out/minikube-linux-amd64 start -p functional-598740 --alsologtostderr -v=8
functional_test.go:659: (dbg) Done: out/minikube-linux-amd64 start -p functional-598740 --alsologtostderr -v=8: (37.364440383s)
functional_test.go:663: soft start took 37.365022766s for "functional-598740" cluster.
--- PASS: TestFunctional/serial/SoftStart (37.37s)

                                                
                                    
TestFunctional/serial/KubeContext (0.04s)
=== RUN   TestFunctional/serial/KubeContext
functional_test.go:681: (dbg) Run:  kubectl config current-context
--- PASS: TestFunctional/serial/KubeContext (0.04s)

                                                
                                    
TestFunctional/serial/KubectlGetPods (0.07s)
=== RUN   TestFunctional/serial/KubectlGetPods
functional_test.go:696: (dbg) Run:  kubectl --context functional-598740 get po -A
--- PASS: TestFunctional/serial/KubectlGetPods (0.07s)

                                                
                                    
TestFunctional/serial/CacheCmd/cache/add_remote (2.38s)
=== RUN   TestFunctional/serial/CacheCmd/cache/add_remote
functional_test.go:1049: (dbg) Run:  out/minikube-linux-amd64 -p functional-598740 cache add registry.k8s.io/pause:3.1
functional_test.go:1049: (dbg) Run:  out/minikube-linux-amd64 -p functional-598740 cache add registry.k8s.io/pause:3.3
functional_test.go:1049: (dbg) Run:  out/minikube-linux-amd64 -p functional-598740 cache add registry.k8s.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/add_remote (2.38s)

                                                
                                    
TestFunctional/serial/CacheCmd/cache/add_local (1.41s)
=== RUN   TestFunctional/serial/CacheCmd/cache/add_local
functional_test.go:1077: (dbg) Run:  docker build -t minikube-local-cache-test:functional-598740 /tmp/TestFunctionalserialCacheCmdcacheadd_local2729962631/001
functional_test.go:1089: (dbg) Run:  out/minikube-linux-amd64 -p functional-598740 cache add minikube-local-cache-test:functional-598740
functional_test.go:1089: (dbg) Done: out/minikube-linux-amd64 -p functional-598740 cache add minikube-local-cache-test:functional-598740: (1.09020069s)
functional_test.go:1094: (dbg) Run:  out/minikube-linux-amd64 -p functional-598740 cache delete minikube-local-cache-test:functional-598740
functional_test.go:1083: (dbg) Run:  docker rmi minikube-local-cache-test:functional-598740
--- PASS: TestFunctional/serial/CacheCmd/cache/add_local (1.41s)

                                                
                                    
TestFunctional/serial/CacheCmd/cache/CacheDelete (0.05s)
=== RUN   TestFunctional/serial/CacheCmd/cache/CacheDelete
functional_test.go:1102: (dbg) Run:  out/minikube-linux-amd64 cache delete registry.k8s.io/pause:3.3
--- PASS: TestFunctional/serial/CacheCmd/cache/CacheDelete (0.05s)

                                                
                                    
TestFunctional/serial/CacheCmd/cache/list (0.05s)
=== RUN   TestFunctional/serial/CacheCmd/cache/list
functional_test.go:1110: (dbg) Run:  out/minikube-linux-amd64 cache list
--- PASS: TestFunctional/serial/CacheCmd/cache/list (0.05s)

                                                
                                    
TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node (0.27s)
=== RUN   TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node
functional_test.go:1124: (dbg) Run:  out/minikube-linux-amd64 -p functional-598740 ssh sudo crictl images
--- PASS: TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node (0.27s)

                                                
                                    
TestFunctional/serial/CacheCmd/cache/cache_reload (1.22s)
=== RUN   TestFunctional/serial/CacheCmd/cache/cache_reload
functional_test.go:1147: (dbg) Run:  out/minikube-linux-amd64 -p functional-598740 ssh sudo docker rmi registry.k8s.io/pause:latest
functional_test.go:1153: (dbg) Run:  out/minikube-linux-amd64 -p functional-598740 ssh sudo crictl inspecti registry.k8s.io/pause:latest
functional_test.go:1153: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-598740 ssh sudo crictl inspecti registry.k8s.io/pause:latest: exit status 1 (264.684843ms)

                                                
                                                
-- stdout --
	FATA[0000] no such image "registry.k8s.io/pause:latest" present 

                                                
                                                
-- /stdout --
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
functional_test.go:1158: (dbg) Run:  out/minikube-linux-amd64 -p functional-598740 cache reload
functional_test.go:1163: (dbg) Run:  out/minikube-linux-amd64 -p functional-598740 ssh sudo crictl inspecti registry.k8s.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/cache_reload (1.22s)
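Note: the reload sequence above is the full round trip worth remembering: remove the image inside the node, confirm crictl no longer sees it, then repopulate from minikube's local cache:

out/minikube-linux-amd64 -p functional-598740 ssh sudo docker rmi registry.k8s.io/pause:latest
out/minikube-linux-amd64 -p functional-598740 ssh sudo crictl inspecti registry.k8s.io/pause:latest   # fails: image gone
out/minikube-linux-amd64 -p functional-598740 cache reload
out/minikube-linux-amd64 -p functional-598740 ssh sudo crictl inspecti registry.k8s.io/pause:latest   # succeeds again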

                                                
                                    
TestFunctional/serial/CacheCmd/cache/delete (0.09s)
=== RUN   TestFunctional/serial/CacheCmd/cache/delete
functional_test.go:1172: (dbg) Run:  out/minikube-linux-amd64 cache delete registry.k8s.io/pause:3.1
functional_test.go:1172: (dbg) Run:  out/minikube-linux-amd64 cache delete registry.k8s.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/delete (0.09s)

                                                
                                    
TestFunctional/serial/MinikubeKubectlCmd (0.1s)
=== RUN   TestFunctional/serial/MinikubeKubectlCmd
functional_test.go:716: (dbg) Run:  out/minikube-linux-amd64 -p functional-598740 kubectl -- --context functional-598740 get pods
--- PASS: TestFunctional/serial/MinikubeKubectlCmd (0.10s)

                                                
                                    
TestFunctional/serial/MinikubeKubectlCmdDirectly (0.1s)
=== RUN   TestFunctional/serial/MinikubeKubectlCmdDirectly
functional_test.go:741: (dbg) Run:  out/kubectl --context functional-598740 get pods
--- PASS: TestFunctional/serial/MinikubeKubectlCmdDirectly (0.10s)

                                                
                                    
TestFunctional/serial/ExtraConfig (39.55s)
=== RUN   TestFunctional/serial/ExtraConfig
functional_test.go:757: (dbg) Run:  out/minikube-linux-amd64 start -p functional-598740 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all
functional_test.go:757: (dbg) Done: out/minikube-linux-amd64 start -p functional-598740 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all: (39.554123148s)
functional_test.go:761: restart took 39.554351034s for "functional-598740" cluster.
--- PASS: TestFunctional/serial/ExtraConfig (39.55s)
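Note: --extra-config takes component.key=value and should land on the kube-apiserver command line; one way to confirm (the static pod name is an assumption, following the usual kube-apiserver-<node> pattern):

kubectl --context functional-598740 -n kube-system get pod \
  kube-apiserver-functional-598740 \
  -o jsonpath='{.spec.containers[0].command}' | tr ',' '\n' | grep enable-admission-plugins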

                                                
                                    
TestFunctional/serial/ComponentHealth (0.06s)
=== RUN   TestFunctional/serial/ComponentHealth
functional_test.go:810: (dbg) Run:  kubectl --context functional-598740 get po -l tier=control-plane -n kube-system -o=json
functional_test.go:825: etcd phase: Running
functional_test.go:835: etcd status: Ready
functional_test.go:825: kube-apiserver phase: Running
functional_test.go:835: kube-apiserver status: Ready
functional_test.go:825: kube-controller-manager phase: Running
functional_test.go:835: kube-controller-manager status: Ready
functional_test.go:825: kube-scheduler phase: Running
functional_test.go:835: kube-scheduler status: Ready
--- PASS: TestFunctional/serial/ComponentHealth (0.06s)

                                                
                                    
TestFunctional/serial/LogsCmd (0.96s)
=== RUN   TestFunctional/serial/LogsCmd
functional_test.go:1236: (dbg) Run:  out/minikube-linux-amd64 -p functional-598740 logs
--- PASS: TestFunctional/serial/LogsCmd (0.96s)

                                                
                                    
TestFunctional/serial/LogsFileCmd (0.99s)
=== RUN   TestFunctional/serial/LogsFileCmd
functional_test.go:1250: (dbg) Run:  out/minikube-linux-amd64 -p functional-598740 logs --file /tmp/TestFunctionalserialLogsFileCmd3095467113/001/logs.txt
--- PASS: TestFunctional/serial/LogsFileCmd (0.99s)

                                                
                                    
TestFunctional/serial/InvalidService (4.02s)
=== RUN   TestFunctional/serial/InvalidService
functional_test.go:2321: (dbg) Run:  kubectl --context functional-598740 apply -f testdata/invalidsvc.yaml
functional_test.go:2335: (dbg) Run:  out/minikube-linux-amd64 service invalid-svc -p functional-598740
functional_test.go:2335: (dbg) Non-zero exit: out/minikube-linux-amd64 service invalid-svc -p functional-598740: exit status 115 (319.400244ms)

                                                
                                                
-- stdout --
	|-----------|-------------|-------------|---------------------------|
	| NAMESPACE |    NAME     | TARGET PORT |            URL            |
	|-----------|-------------|-------------|---------------------------|
	| default   | invalid-svc |          80 | http://192.168.49.2:32397 |
	|-----------|-------------|-------------|---------------------------|
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to SVC_UNREACHABLE: service not available: no running pod for service invalid-svc found
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_service_96b204199e3191fa1740d4430b018a3c8028d52d_0.log                 │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
functional_test.go:2327: (dbg) Run:  kubectl --context functional-598740 delete -f testdata/invalidsvc.yaml
--- PASS: TestFunctional/serial/InvalidService (4.02s)
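Note: testdata/invalidsvc.yaml is not inlined in the log; any Service whose selector matches no running pod reproduces the SVC_UNREACHABLE exit above. A hypothetical minimal version:

kubectl --context functional-598740 apply -f - <<'EOF'
apiVersion: v1
kind: Service
metadata:
  name: invalid-svc
spec:
  type: NodePort
  selector:
    app: no-such-pod    # assumption: matches nothing, so the service has no endpoints
  ports:
  - port: 80
EOF
out/minikube-linux-amd64 service invalid-svc -p functional-598740   # exits 115 as above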

                                                
                                    
TestFunctional/parallel/ConfigCmd (0.35s)
=== RUN   TestFunctional/parallel/ConfigCmd
=== PAUSE TestFunctional/parallel/ConfigCmd
=== CONT  TestFunctional/parallel/ConfigCmd
functional_test.go:1199: (dbg) Run:  out/minikube-linux-amd64 -p functional-598740 config unset cpus
functional_test.go:1199: (dbg) Run:  out/minikube-linux-amd64 -p functional-598740 config get cpus
functional_test.go:1199: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-598740 config get cpus: exit status 14 (70.349385ms)

                                                
                                                
** stderr ** 
	Error: specified key could not be found in config

                                                
                                                
** /stderr **
functional_test.go:1199: (dbg) Run:  out/minikube-linux-amd64 -p functional-598740 config set cpus 2
functional_test.go:1199: (dbg) Run:  out/minikube-linux-amd64 -p functional-598740 config get cpus
functional_test.go:1199: (dbg) Run:  out/minikube-linux-amd64 -p functional-598740 config unset cpus
functional_test.go:1199: (dbg) Run:  out/minikube-linux-amd64 -p functional-598740 config get cpus
functional_test.go:1199: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-598740 config get cpus: exit status 14 (47.099503ms)

                                                
                                                
** stderr ** 
	Error: specified key could not be found in config

                                                
                                                
** /stderr **
--- PASS: TestFunctional/parallel/ConfigCmd (0.35s)

                                                
                                    
TestFunctional/parallel/DashboardCmd (13.73s)
=== RUN   TestFunctional/parallel/DashboardCmd
=== PAUSE TestFunctional/parallel/DashboardCmd
=== CONT  TestFunctional/parallel/DashboardCmd
functional_test.go:905: (dbg) daemon: [out/minikube-linux-amd64 dashboard --url --port 36195 -p functional-598740 --alsologtostderr -v=1]
functional_test.go:910: (dbg) stopping [out/minikube-linux-amd64 dashboard --url --port 36195 -p functional-598740 --alsologtostderr -v=1] ...
helpers_test.go:508: unable to kill pid 72359: os: process already finished
--- PASS: TestFunctional/parallel/DashboardCmd (13.73s)

                                                
                                    
TestFunctional/parallel/DryRun (0.38s)
=== RUN   TestFunctional/parallel/DryRun
=== PAUSE TestFunctional/parallel/DryRun
=== CONT  TestFunctional/parallel/DryRun
functional_test.go:974: (dbg) Run:  out/minikube-linux-amd64 start -p functional-598740 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=docker
functional_test.go:974: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p functional-598740 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=docker: exit status 23 (151.803965ms)

                                                
                                                
-- stdout --
	* [functional-598740] minikube v1.34.0 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=19584
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/19584-8635/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/19584-8635/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the docker driver based on existing profile
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0909 11:01:27.563737   70277 out.go:345] Setting OutFile to fd 1 ...
	I0909 11:01:27.563859   70277 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0909 11:01:27.563868   70277 out.go:358] Setting ErrFile to fd 2...
	I0909 11:01:27.563872   70277 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0909 11:01:27.564034   70277 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19584-8635/.minikube/bin
	I0909 11:01:27.564609   70277 out.go:352] Setting JSON to false
	I0909 11:01:27.565767   70277 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent","uptime":2621,"bootTime":1725877067,"procs":274,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1067-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0909 11:01:27.565862   70277 start.go:139] virtualization: kvm guest
	I0909 11:01:27.568070   70277 out.go:177] * [functional-598740] minikube v1.34.0 on Ubuntu 20.04 (kvm/amd64)
	I0909 11:01:27.569855   70277 out.go:177]   - MINIKUBE_LOCATION=19584
	I0909 11:01:27.569853   70277 notify.go:220] Checking for updates...
	I0909 11:01:27.572689   70277 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0909 11:01:27.573958   70277 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19584-8635/kubeconfig
	I0909 11:01:27.575125   70277 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19584-8635/.minikube
	I0909 11:01:27.576380   70277 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0909 11:01:27.577695   70277 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0909 11:01:27.579248   70277 config.go:182] Loaded profile config "functional-598740": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.31.0
	I0909 11:01:27.579759   70277 driver.go:394] Setting default libvirt URI to qemu:///system
	I0909 11:01:27.602221   70277 docker.go:123] docker version: linux-27.2.1:Docker Engine - Community
	I0909 11:01:27.602391   70277 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0909 11:01:27.660907   70277 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:35 OomKillDisable:true NGoroutines:58 SystemTime:2024-09-09 11:01:27.649307842 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1067-gcp OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33647943680 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent Labels:[] ExperimentalBuild:false ServerVersion:27.2.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:472731909fa34bd7bc9c087e4c27943f9835f111 Expected:472731909fa34bd7bc9c087e4c27943f9835f111} RuncCommit:{ID:v1.1.13-0-g58aa920 Expected:v1.1.13-0-g58aa920} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.16.2] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.29.2] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I0909 11:01:27.661040   70277 docker.go:307] overlay module found
	I0909 11:01:27.663846   70277 out.go:177] * Using the docker driver based on existing profile
	I0909 11:01:27.665345   70277 start.go:297] selected driver: docker
	I0909 11:01:27.665366   70277 start.go:901] validating driver "docker" against &{Name:functional-598740 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.45@sha256:81df288595202a317b1a4dc2506ca2e4ed5f22373c19a441b88cfbf4b9867c85 Memory:4000 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:functional-598740 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0909 11:01:27.665498   70277 start.go:912] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0909 11:01:27.667991   70277 out.go:201] 
	W0909 11:01:27.669415   70277 out.go:270] X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	I0909 11:01:27.671116   70277 out.go:201] 

                                                
                                                
** /stderr **
functional_test.go:991: (dbg) Run:  out/minikube-linux-amd64 start -p functional-598740 --dry-run --alsologtostderr -v=1 --driver=docker  --container-runtime=docker
--- PASS: TestFunctional/parallel/DryRun (0.38s)

                                                
                                    
TestFunctional/parallel/InternationalLanguage (0.17s)
=== RUN   TestFunctional/parallel/InternationalLanguage
=== PAUSE TestFunctional/parallel/InternationalLanguage
=== CONT  TestFunctional/parallel/InternationalLanguage
functional_test.go:1020: (dbg) Run:  out/minikube-linux-amd64 start -p functional-598740 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=docker
functional_test.go:1020: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p functional-598740 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=docker: exit status 23 (164.943779ms)

                                                
                                                
-- stdout --
	* [functional-598740] minikube v1.34.0 sur Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=19584
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/19584-8635/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/19584-8635/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Utilisation du pilote docker basé sur le profil existant
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0909 11:01:30.558388   72089 out.go:345] Setting OutFile to fd 1 ...
	I0909 11:01:30.558524   72089 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0909 11:01:30.558533   72089 out.go:358] Setting ErrFile to fd 2...
	I0909 11:01:30.558537   72089 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0909 11:01:30.558829   72089 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19584-8635/.minikube/bin
	I0909 11:01:30.559386   72089 out.go:352] Setting JSON to false
	I0909 11:01:30.560536   72089 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent","uptime":2624,"bootTime":1725877067,"procs":270,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1067-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0909 11:01:30.560634   72089 start.go:139] virtualization: kvm guest
	I0909 11:01:30.563304   72089 out.go:177] * [functional-598740] minikube v1.34.0 sur Ubuntu 20.04 (kvm/amd64)
	I0909 11:01:30.565137   72089 out.go:177]   - MINIKUBE_LOCATION=19584
	I0909 11:01:30.565166   72089 notify.go:220] Checking for updates...
	I0909 11:01:30.568500   72089 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0909 11:01:30.569736   72089 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19584-8635/kubeconfig
	I0909 11:01:30.571023   72089 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19584-8635/.minikube
	I0909 11:01:30.572359   72089 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0909 11:01:30.573612   72089 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0909 11:01:30.575313   72089 config.go:182] Loaded profile config "functional-598740": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.31.0
	I0909 11:01:30.575771   72089 driver.go:394] Setting default libvirt URI to qemu:///system
	I0909 11:01:30.599745   72089 docker.go:123] docker version: linux-27.2.1:Docker Engine - Community
	I0909 11:01:30.599832   72089 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0909 11:01:30.662773   72089 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:33 OomKillDisable:true NGoroutines:53 SystemTime:2024-09-09 11:01:30.648783903 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1067-gcp OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33647943680 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent Labels:[] ExperimentalBuild:false ServerVersion:27.2.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:472731909fa34bd7bc9c087e4c27943f9835f111 Expected:472731909fa34bd7bc9c087e4c27943f9835f111} RuncCommit:{ID:v1.1.13-0-g58aa920 Expected:v1.1.13-0-g58aa920} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.16.2] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.29.2] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I0909 11:01:30.662897   72089 docker.go:307] overlay module found
	I0909 11:01:30.664561   72089 out.go:177] * Utilisation du pilote docker basé sur le profil existant
	I0909 11:01:30.665736   72089 start.go:297] selected driver: docker
	I0909 11:01:30.665754   72089 start.go:901] validating driver "docker" against &{Name:functional-598740 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.45@sha256:81df288595202a317b1a4dc2506ca2e4ed5f22373c19a441b88cfbf4b9867c85 Memory:4000 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:functional-598740 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0909 11:01:30.665882   72089 start.go:912] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0909 11:01:30.668158   72089 out.go:201] 
	W0909 11:01:30.669128   72089 out.go:270] X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	I0909 11:01:30.670328   72089 out.go:201] 

** /stderr **
--- PASS: TestFunctional/parallel/InternationalLanguage (0.17s)

TestFunctional/parallel/StatusCmd (1.01s)

=== RUN   TestFunctional/parallel/StatusCmd
=== PAUSE TestFunctional/parallel/StatusCmd

=== CONT  TestFunctional/parallel/StatusCmd
functional_test.go:854: (dbg) Run:  out/minikube-linux-amd64 -p functional-598740 status
functional_test.go:860: (dbg) Run:  out/minikube-linux-amd64 -p functional-598740 status -f host:{{.Host}},kublet:{{.Kubelet}},apiserver:{{.APIServer}},kubeconfig:{{.Kubeconfig}}
functional_test.go:872: (dbg) Run:  out/minikube-linux-amd64 -p functional-598740 status -o json
--- PASS: TestFunctional/parallel/StatusCmd (1.01s)
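
The custom format passed to `status -f` above is a Go text/template; the literal "kublet" in it is just label text in the test's format string, and only the `{{.Kubelet}}` field lookup matters. A minimal sketch of how such a template renders, using a stand-in struct rather than minikube's actual types:

	package main

	import (
		"os"
		"text/template"
	)

	// Status is a stand-in for illustration; field names must match the
	// {{.Field}} lookups in the template, while surrounding text is literal.
	type Status struct {
		Host, Kubelet, APIServer, Kubeconfig string
	}

	func main() {
		tmpl := template.Must(template.New("status").Parse(
			"host:{{.Host}},kublet:{{.Kubelet}},apiserver:{{.APIServer}},kubeconfig:{{.Kubeconfig}}\n"))
		_ = tmpl.Execute(os.Stdout, Status{"Running", "Running", "Running", "Configured"})
	}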

TestFunctional/parallel/ServiceCmdConnect (9.65s)

=== RUN   TestFunctional/parallel/ServiceCmdConnect
=== PAUSE TestFunctional/parallel/ServiceCmdConnect

=== CONT  TestFunctional/parallel/ServiceCmdConnect
functional_test.go:1629: (dbg) Run:  kubectl --context functional-598740 create deployment hello-node-connect --image=registry.k8s.io/echoserver:1.8
functional_test.go:1635: (dbg) Run:  kubectl --context functional-598740 expose deployment hello-node-connect --type=NodePort --port=8080
functional_test.go:1640: (dbg) TestFunctional/parallel/ServiceCmdConnect: waiting 10m0s for pods matching "app=hello-node-connect" in namespace "default" ...
helpers_test.go:344: "hello-node-connect-67bdd5bbb4-25mm8" [086beefe-bbeb-4883-9a4b-316c0af5342c] Pending / Ready:ContainersNotReady (containers with unready status: [echoserver]) / ContainersReady:ContainersNotReady (containers with unready status: [echoserver])
helpers_test.go:344: "hello-node-connect-67bdd5bbb4-25mm8" [086beefe-bbeb-4883-9a4b-316c0af5342c] Running
functional_test.go:1640: (dbg) TestFunctional/parallel/ServiceCmdConnect: app=hello-node-connect healthy within 9.011679962s
functional_test.go:1649: (dbg) Run:  out/minikube-linux-amd64 -p functional-598740 service hello-node-connect --url
functional_test.go:1655: found endpoint for hello-node-connect: http://192.168.49.2:31733
functional_test.go:1675: http://192.168.49.2:31733: success! body:

Hostname: hello-node-connect-67bdd5bbb4-25mm8

Pod Information:
	-no pod information available-

Server values:
	server_version=nginx: 1.13.3 - lua: 10008

Request Information:
	client_address=10.244.0.1
	method=GET
	real path=/
	query=
	request_version=1.1
	request_uri=http://192.168.49.2:8080/

Request Headers:
	accept-encoding=gzip
	host=192.168.49.2:31733
	user-agent=Go-http-client/1.1

Request Body:
	-no body in request-

--- PASS: TestFunctional/parallel/ServiceCmdConnect (9.65s)
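
The connectivity check boils down to fetching the NodePort URL that `service --url` printed. A minimal sketch of that probe, using the URL from this run (the timeout value is an assumption, not the test's):

	package main

	import (
		"fmt"
		"io"
		"net/http"
		"time"
	)

	func main() {
		client := &http.Client{Timeout: 10 * time.Second}
		resp, err := client.Get("http://192.168.49.2:31733")
		if err != nil {
			panic(err)
		}
		defer resp.Body.Close()
		body, _ := io.ReadAll(resp.Body)
		// echoserver reflects the request back, which is the body shown above.
		fmt.Printf("status=%s\n%s", resp.Status, body)
	}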

TestFunctional/parallel/AddonsCmd (0.14s)

=== RUN   TestFunctional/parallel/AddonsCmd
=== PAUSE TestFunctional/parallel/AddonsCmd

=== CONT  TestFunctional/parallel/AddonsCmd
functional_test.go:1690: (dbg) Run:  out/minikube-linux-amd64 -p functional-598740 addons list
functional_test.go:1702: (dbg) Run:  out/minikube-linux-amd64 -p functional-598740 addons list -o json
--- PASS: TestFunctional/parallel/AddonsCmd (0.14s)

TestFunctional/parallel/PersistentVolumeClaim (38.61s)

=== RUN   TestFunctional/parallel/PersistentVolumeClaim
=== PAUSE TestFunctional/parallel/PersistentVolumeClaim

=== CONT  TestFunctional/parallel/PersistentVolumeClaim
functional_test_pvc_test.go:44: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 4m0s for pods matching "integration-test=storage-provisioner" in namespace "kube-system" ...
helpers_test.go:344: "storage-provisioner" [6dc0f838-cd66-4c74-b835-d5dc142df1b8] Running
functional_test_pvc_test.go:44: (dbg) TestFunctional/parallel/PersistentVolumeClaim: integration-test=storage-provisioner healthy within 5.00433847s
functional_test_pvc_test.go:49: (dbg) Run:  kubectl --context functional-598740 get storageclass -o=json
functional_test_pvc_test.go:69: (dbg) Run:  kubectl --context functional-598740 apply -f testdata/storage-provisioner/pvc.yaml
functional_test_pvc_test.go:76: (dbg) Run:  kubectl --context functional-598740 get pvc myclaim -o=json
functional_test_pvc_test.go:125: (dbg) Run:  kubectl --context functional-598740 apply -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 3m0s for pods matching "test=storage-provisioner" in namespace "default" ...
helpers_test.go:344: "sp-pod" [d5a61d7e-df81-4036-80d2-fafbd4bf9802] Pending
helpers_test.go:344: "sp-pod" [d5a61d7e-df81-4036-80d2-fafbd4bf9802] Pending / Ready:ContainersNotReady (containers with unready status: [myfrontend]) / ContainersReady:ContainersNotReady (containers with unready status: [myfrontend])
helpers_test.go:344: "sp-pod" [d5a61d7e-df81-4036-80d2-fafbd4bf9802] Running
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: test=storage-provisioner healthy within 23.003527508s
functional_test_pvc_test.go:100: (dbg) Run:  kubectl --context functional-598740 exec sp-pod -- touch /tmp/mount/foo
functional_test_pvc_test.go:106: (dbg) Run:  kubectl --context functional-598740 delete -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:125: (dbg) Run:  kubectl --context functional-598740 apply -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 3m0s for pods matching "test=storage-provisioner" in namespace "default" ...
helpers_test.go:344: "sp-pod" [980fabd5-6d3d-41c7-8551-45beb163fb07] Pending
helpers_test.go:344: "sp-pod" [980fabd5-6d3d-41c7-8551-45beb163fb07] Pending / Ready:ContainersNotReady (containers with unready status: [myfrontend]) / ContainersReady:ContainersNotReady (containers with unready status: [myfrontend])
helpers_test.go:344: "sp-pod" [980fabd5-6d3d-41c7-8551-45beb163fb07] Running
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: test=storage-provisioner healthy within 9.003903692s
functional_test_pvc_test.go:114: (dbg) Run:  kubectl --context functional-598740 exec sp-pod -- ls /tmp/mount
2024/09/09 11:01:42 [DEBUG] GET http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/
--- PASS: TestFunctional/parallel/PersistentVolumeClaim (38.61s)
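
The key assertion here is persistence: a file written through the claim survives deleting and recreating the pod. A rough shell-out sketch of that sequence, assuming kubectl on PATH and the same testdata manifests (readiness waits between steps are elided); this is not the test's actual helper code:

	package main

	import "os/exec"

	func run(args ...string) {
		out, err := exec.Command("kubectl", args...).CombinedOutput()
		if err != nil {
			panic(string(out))
		}
	}

	func main() {
		ctx := "--context=functional-598740"
		run(ctx, "exec", "sp-pod", "--", "touch", "/tmp/mount/foo") // write through the PVC
		run(ctx, "delete", "-f", "testdata/storage-provisioner/pod.yaml")
		run(ctx, "apply", "-f", "testdata/storage-provisioner/pod.yaml") // new pod, same claim
		run(ctx, "exec", "sp-pod", "--", "ls", "/tmp/mount")             // foo must still be there
	}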

TestFunctional/parallel/SSHCmd (0.57s)

=== RUN   TestFunctional/parallel/SSHCmd
=== PAUSE TestFunctional/parallel/SSHCmd

=== CONT  TestFunctional/parallel/SSHCmd
functional_test.go:1725: (dbg) Run:  out/minikube-linux-amd64 -p functional-598740 ssh "echo hello"
functional_test.go:1742: (dbg) Run:  out/minikube-linux-amd64 -p functional-598740 ssh "cat /etc/hostname"
--- PASS: TestFunctional/parallel/SSHCmd (0.57s)

TestFunctional/parallel/CpCmd (1.88s)

=== RUN   TestFunctional/parallel/CpCmd
=== PAUSE TestFunctional/parallel/CpCmd

=== CONT  TestFunctional/parallel/CpCmd
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p functional-598740 cp testdata/cp-test.txt /home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p functional-598740 ssh -n functional-598740 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p functional-598740 cp functional-598740:/home/docker/cp-test.txt /tmp/TestFunctionalparallelCpCmd3938142950/001/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p functional-598740 ssh -n functional-598740 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p functional-598740 cp testdata/cp-test.txt /tmp/does/not/exist/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p functional-598740 ssh -n functional-598740 "sudo cat /tmp/does/not/exist/cp-test.txt"
--- PASS: TestFunctional/parallel/CpCmd (1.88s)
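
A sketch of the cp round-trip this test exercises: copy a file into the node, read it back over SSH, and compare. It assumes the minikube binary on PATH and the profile name from this run; it is not the test's helper code:

	package main

	import (
		"bytes"
		"os"
		"os/exec"
	)

	func main() {
		want, err := os.ReadFile("testdata/cp-test.txt")
		if err != nil {
			panic(err)
		}
		_ = exec.Command("minikube", "-p", "functional-598740", "cp",
			"testdata/cp-test.txt", "/home/docker/cp-test.txt").Run()
		got, _ := exec.Command("minikube", "-p", "functional-598740", "ssh",
			"-n", "functional-598740", "sudo cat /home/docker/cp-test.txt").Output()
		if !bytes.Equal(want, got) {
			panic("copied file does not match the source")
		}
	}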

TestFunctional/parallel/MySQL (27.21s)

=== RUN   TestFunctional/parallel/MySQL
=== PAUSE TestFunctional/parallel/MySQL

=== CONT  TestFunctional/parallel/MySQL
functional_test.go:1793: (dbg) Run:  kubectl --context functional-598740 replace --force -f testdata/mysql.yaml
functional_test.go:1799: (dbg) TestFunctional/parallel/MySQL: waiting 10m0s for pods matching "app=mysql" in namespace "default" ...
helpers_test.go:344: "mysql-6cdb49bbb-gzsrg" [f8d2951a-8ecf-47c1-86a8-14dc8526756a] Pending
helpers_test.go:344: "mysql-6cdb49bbb-gzsrg" [f8d2951a-8ecf-47c1-86a8-14dc8526756a] Pending / Ready:ContainersNotReady (containers with unready status: [mysql]) / ContainersReady:ContainersNotReady (containers with unready status: [mysql])
helpers_test.go:344: "mysql-6cdb49bbb-gzsrg" [f8d2951a-8ecf-47c1-86a8-14dc8526756a] Running
functional_test.go:1799: (dbg) TestFunctional/parallel/MySQL: app=mysql healthy within 22.003650394s
functional_test.go:1807: (dbg) Run:  kubectl --context functional-598740 exec mysql-6cdb49bbb-gzsrg -- mysql -ppassword -e "show databases;"
functional_test.go:1807: (dbg) Non-zero exit: kubectl --context functional-598740 exec mysql-6cdb49bbb-gzsrg -- mysql -ppassword -e "show databases;": exit status 1 (223.024782ms)

** stderr **
	mysql: [Warning] Using a password on the command line interface can be insecure.
	ERROR 1045 (28000): Access denied for user 'root'@'localhost' (using password: YES)
	command terminated with exit code 1

** /stderr **
functional_test.go:1807: (dbg) Run:  kubectl --context functional-598740 exec mysql-6cdb49bbb-gzsrg -- mysql -ppassword -e "show databases;"
functional_test.go:1807: (dbg) Non-zero exit: kubectl --context functional-598740 exec mysql-6cdb49bbb-gzsrg -- mysql -ppassword -e "show databases;": exit status 1 (197.497161ms)

** stderr **
	mysql: [Warning] Using a password on the command line interface can be insecure.
	ERROR 2002 (HY000): Can't connect to local MySQL server through socket '/var/run/mysqld/mysqld.sock' (2)
	command terminated with exit code 1

** /stderr **
functional_test.go:1807: (dbg) Run:  kubectl --context functional-598740 exec mysql-6cdb49bbb-gzsrg -- mysql -ppassword -e "show databases;"
functional_test.go:1807: (dbg) Non-zero exit: kubectl --context functional-598740 exec mysql-6cdb49bbb-gzsrg -- mysql -ppassword -e "show databases;": exit status 1 (108.571446ms)

** stderr **
	mysql: [Warning] Using a password on the command line interface can be insecure.
	ERROR 2002 (HY000): Can't connect to local MySQL server through socket '/var/run/mysqld/mysqld.sock' (2)
	command terminated with exit code 1

** /stderr **
functional_test.go:1807: (dbg) Run:  kubectl --context functional-598740 exec mysql-6cdb49bbb-gzsrg -- mysql -ppassword -e "show databases;"
--- PASS: TestFunctional/parallel/MySQL (27.21s)
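
The non-zero exits above are expected: the pod reports Running before mysqld has finished initializing and accepts connections, so the test keeps retrying until the query succeeds. A minimal sketch of that retry pattern, assuming kubectl on PATH and this run's pod name:

	package main

	import (
		"fmt"
		"os/exec"
		"time"
	)

	func main() {
		for i := 0; i < 10; i++ {
			out, err := exec.Command("kubectl", "--context", "functional-598740",
				"exec", "mysql-6cdb49bbb-gzsrg", "--",
				"mysql", "-ppassword", "-e", "show databases;").CombinedOutput()
			if err == nil {
				fmt.Print(string(out))
				return
			}
			// ERROR 1045/2002 while mysqld is still starting up; back off and retry.
			time.Sleep(2 * time.Second)
		}
		panic("mysql never became ready")
	}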

TestFunctional/parallel/FileSync (0.33s)

=== RUN   TestFunctional/parallel/FileSync
=== PAUSE TestFunctional/parallel/FileSync

=== CONT  TestFunctional/parallel/FileSync
functional_test.go:1929: Checking for existence of /etc/test/nested/copy/15404/hosts within VM
functional_test.go:1931: (dbg) Run:  out/minikube-linux-amd64 -p functional-598740 ssh "sudo cat /etc/test/nested/copy/15404/hosts"
functional_test.go:1936: file sync test content: Test file for checking file sync process
--- PASS: TestFunctional/parallel/FileSync (0.33s)

TestFunctional/parallel/CertSync (1.5s)

=== RUN   TestFunctional/parallel/CertSync
=== PAUSE TestFunctional/parallel/CertSync

=== CONT  TestFunctional/parallel/CertSync
functional_test.go:1972: Checking for existence of /etc/ssl/certs/15404.pem within VM
functional_test.go:1973: (dbg) Run:  out/minikube-linux-amd64 -p functional-598740 ssh "sudo cat /etc/ssl/certs/15404.pem"
functional_test.go:1972: Checking for existence of /usr/share/ca-certificates/15404.pem within VM
functional_test.go:1973: (dbg) Run:  out/minikube-linux-amd64 -p functional-598740 ssh "sudo cat /usr/share/ca-certificates/15404.pem"
functional_test.go:1972: Checking for existence of /etc/ssl/certs/51391683.0 within VM
functional_test.go:1973: (dbg) Run:  out/minikube-linux-amd64 -p functional-598740 ssh "sudo cat /etc/ssl/certs/51391683.0"
functional_test.go:1999: Checking for existence of /etc/ssl/certs/154042.pem within VM
functional_test.go:2000: (dbg) Run:  out/minikube-linux-amd64 -p functional-598740 ssh "sudo cat /etc/ssl/certs/154042.pem"
functional_test.go:1999: Checking for existence of /usr/share/ca-certificates/154042.pem within VM
functional_test.go:2000: (dbg) Run:  out/minikube-linux-amd64 -p functional-598740 ssh "sudo cat /usr/share/ca-certificates/154042.pem"
functional_test.go:1999: Checking for existence of /etc/ssl/certs/3ec20f2e.0 within VM
functional_test.go:2000: (dbg) Run:  out/minikube-linux-amd64 -p functional-598740 ssh "sudo cat /etc/ssl/certs/3ec20f2e.0"
--- PASS: TestFunctional/parallel/CertSync (1.50s)
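
The hashed filenames checked above (51391683.0, 3ec20f2e.0) follow OpenSSL's subject-hash convention for certificate directories, which is how TLS libraries locate a CA cert in /etc/ssl/certs. A sketch that prints the expected filename for a PEM cert, assuming the openssl CLI is installed (the input path is just this run's example):

	package main

	import (
		"fmt"
		"os/exec"
		"strings"
	)

	func main() {
		// `openssl x509 -subject_hash` prints the hash used as the link name.
		out, err := exec.Command("openssl", "x509", "-noout", "-subject_hash",
			"-in", "/etc/ssl/certs/15404.pem").Output()
		if err != nil {
			panic(err)
		}
		fmt.Printf("%s.0\n", strings.TrimSpace(string(out))) // e.g. 51391683.0
	}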

TestFunctional/parallel/NodeLabels (0.06s)

=== RUN   TestFunctional/parallel/NodeLabels
=== PAUSE TestFunctional/parallel/NodeLabels

=== CONT  TestFunctional/parallel/NodeLabels
functional_test.go:219: (dbg) Run:  kubectl --context functional-598740 get nodes --output=go-template "--template='{{range $k, $v := (index .items 0).metadata.labels}}{{$k}} {{end}}'"
--- PASS: TestFunctional/parallel/NodeLabels (0.06s)

TestFunctional/parallel/NonActiveRuntimeDisabled (0.29s)

=== RUN   TestFunctional/parallel/NonActiveRuntimeDisabled
=== PAUSE TestFunctional/parallel/NonActiveRuntimeDisabled

=== CONT  TestFunctional/parallel/NonActiveRuntimeDisabled
functional_test.go:2027: (dbg) Run:  out/minikube-linux-amd64 -p functional-598740 ssh "sudo systemctl is-active crio"
functional_test.go:2027: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-598740 ssh "sudo systemctl is-active crio": exit status 1 (293.991008ms)

-- stdout --
	inactive

-- /stdout --
** stderr **
	ssh: Process exited with status 3

** /stderr **
--- PASS: TestFunctional/parallel/NonActiveRuntimeDisabled (0.29s)
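
The non-zero exit here is the success condition: `systemctl is-active` exits 0 only when the unit is active, and an "inactive" unit comes back with a non-zero code (3 in the output above), which is exactly what the test wants for crio on a dockerd cluster. A sketch of reading that exit code in Go:

	package main

	import (
		"fmt"
		"os/exec"
	)

	func main() {
		out, err := exec.Command("systemctl", "is-active", "crio").CombinedOutput()
		code := 0
		if exitErr, ok := err.(*exec.ExitError); ok {
			code = exitErr.ExitCode() // non-zero means not active
		} else if err != nil {
			panic(err) // e.g. systemctl missing entirely
		}
		fmt.Printf("state=%q exit=%d\n", string(out), code) // expect "inactive", 3
	}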

TestFunctional/parallel/Version/short (0.07s)

=== RUN   TestFunctional/parallel/Version/short
=== PAUSE TestFunctional/parallel/Version/short

=== CONT  TestFunctional/parallel/Version/short
functional_test.go:2256: (dbg) Run:  out/minikube-linux-amd64 -p functional-598740 version --short
--- PASS: TestFunctional/parallel/Version/short (0.07s)

TestFunctional/parallel/Version/components (0.92s)

=== RUN   TestFunctional/parallel/Version/components
=== PAUSE TestFunctional/parallel/Version/components

=== CONT  TestFunctional/parallel/Version/components
functional_test.go:2270: (dbg) Run:  out/minikube-linux-amd64 -p functional-598740 version -o=json --components
--- PASS: TestFunctional/parallel/Version/components (0.92s)

TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel (0.53s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel
functional_test_tunnel_test.go:154: (dbg) daemon: [out/minikube-linux-amd64 -p functional-598740 tunnel --alsologtostderr]
functional_test_tunnel_test.go:154: (dbg) daemon: [out/minikube-linux-amd64 -p functional-598740 tunnel --alsologtostderr]
functional_test_tunnel_test.go:194: (dbg) stopping [out/minikube-linux-amd64 -p functional-598740 tunnel --alsologtostderr] ...
helpers_test.go:508: unable to kill pid 65911: os: process already finished
helpers_test.go:502: unable to terminate pid 65639: os: process already finished
functional_test_tunnel_test.go:194: (dbg) stopping [out/minikube-linux-amd64 -p functional-598740 tunnel --alsologtostderr] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
--- PASS: TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel (0.53s)

TestFunctional/parallel/TunnelCmd/serial/StartTunnel (0s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/StartTunnel
functional_test_tunnel_test.go:129: (dbg) daemon: [out/minikube-linux-amd64 -p functional-598740 tunnel --alsologtostderr]
--- PASS: TestFunctional/parallel/TunnelCmd/serial/StartTunnel (0.00s)

TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup (22.27s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup
functional_test_tunnel_test.go:212: (dbg) Run:  kubectl --context functional-598740 apply -f testdata/testsvc.yaml
functional_test_tunnel_test.go:216: (dbg) TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup: waiting 4m0s for pods matching "run=nginx-svc" in namespace "default" ...
helpers_test.go:344: "nginx-svc" [626d5950-1418-4cdd-8949-316eef9c7124] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
helpers_test.go:344: "nginx-svc" [626d5950-1418-4cdd-8949-316eef9c7124] Running
functional_test_tunnel_test.go:216: (dbg) TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup: run=nginx-svc healthy within 22.003824453s
--- PASS: TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup (22.27s)

TestFunctional/parallel/ServiceCmd/DeployApp (8.16s)

=== RUN   TestFunctional/parallel/ServiceCmd/DeployApp
functional_test.go:1439: (dbg) Run:  kubectl --context functional-598740 create deployment hello-node --image=registry.k8s.io/echoserver:1.8
functional_test.go:1445: (dbg) Run:  kubectl --context functional-598740 expose deployment hello-node --type=NodePort --port=8080
functional_test.go:1450: (dbg) TestFunctional/parallel/ServiceCmd/DeployApp: waiting 10m0s for pods matching "app=hello-node" in namespace "default" ...
helpers_test.go:344: "hello-node-6b9f76b5c7-5bqtj" [ed2a2b4c-f65f-44d1-97e3-ad28ade92dc7] Pending / Ready:ContainersNotReady (containers with unready status: [echoserver]) / ContainersReady:ContainersNotReady (containers with unready status: [echoserver])
helpers_test.go:344: "hello-node-6b9f76b5c7-5bqtj" [ed2a2b4c-f65f-44d1-97e3-ad28ade92dc7] Running
functional_test.go:1450: (dbg) TestFunctional/parallel/ServiceCmd/DeployApp: app=hello-node healthy within 8.003858948s
--- PASS: TestFunctional/parallel/ServiceCmd/DeployApp (8.16s)

TestFunctional/parallel/ServiceCmd/List (0.87s)

=== RUN   TestFunctional/parallel/ServiceCmd/List
functional_test.go:1459: (dbg) Run:  out/minikube-linux-amd64 -p functional-598740 service list
--- PASS: TestFunctional/parallel/ServiceCmd/List (0.87s)

TestFunctional/parallel/ServiceCmd/JSONOutput (0.93s)

=== RUN   TestFunctional/parallel/ServiceCmd/JSONOutput
functional_test.go:1489: (dbg) Run:  out/minikube-linux-amd64 -p functional-598740 service list -o json
functional_test.go:1494: Took "931.696032ms" to run "out/minikube-linux-amd64 -p functional-598740 service list -o json"
--- PASS: TestFunctional/parallel/ServiceCmd/JSONOutput (0.93s)

TestFunctional/parallel/ServiceCmd/HTTPS (0.55s)

=== RUN   TestFunctional/parallel/ServiceCmd/HTTPS
functional_test.go:1509: (dbg) Run:  out/minikube-linux-amd64 -p functional-598740 service --namespace=default --https --url hello-node
functional_test.go:1522: found endpoint: https://192.168.49.2:31499
--- PASS: TestFunctional/parallel/ServiceCmd/HTTPS (0.55s)

TestFunctional/parallel/ServiceCmd/Format (0.6s)

=== RUN   TestFunctional/parallel/ServiceCmd/Format
functional_test.go:1540: (dbg) Run:  out/minikube-linux-amd64 -p functional-598740 service hello-node --url --format={{.IP}}
--- PASS: TestFunctional/parallel/ServiceCmd/Format (0.60s)

TestFunctional/parallel/ServiceCmd/URL (0.79s)

=== RUN   TestFunctional/parallel/ServiceCmd/URL
functional_test.go:1559: (dbg) Run:  out/minikube-linux-amd64 -p functional-598740 service hello-node --url
functional_test.go:1565: found endpoint for hello-node: http://192.168.49.2:31499
--- PASS: TestFunctional/parallel/ServiceCmd/URL (0.79s)

TestFunctional/parallel/ProfileCmd/profile_not_create (0.43s)

=== RUN   TestFunctional/parallel/ProfileCmd/profile_not_create
functional_test.go:1270: (dbg) Run:  out/minikube-linux-amd64 profile lis
functional_test.go:1275: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestFunctional/parallel/ProfileCmd/profile_not_create (0.43s)

TestFunctional/parallel/ProfileCmd/profile_list (0.46s)

=== RUN   TestFunctional/parallel/ProfileCmd/profile_list
functional_test.go:1310: (dbg) Run:  out/minikube-linux-amd64 profile list
functional_test.go:1315: Took "400.301924ms" to run "out/minikube-linux-amd64 profile list"
functional_test.go:1324: (dbg) Run:  out/minikube-linux-amd64 profile list -l
functional_test.go:1329: Took "60.253565ms" to run "out/minikube-linux-amd64 profile list -l"
--- PASS: TestFunctional/parallel/ProfileCmd/profile_list (0.46s)

TestFunctional/parallel/ProfileCmd/profile_json_output (0.41s)

=== RUN   TestFunctional/parallel/ProfileCmd/profile_json_output
functional_test.go:1361: (dbg) Run:  out/minikube-linux-amd64 profile list -o json
functional_test.go:1366: Took "341.45935ms" to run "out/minikube-linux-amd64 profile list -o json"
functional_test.go:1374: (dbg) Run:  out/minikube-linux-amd64 profile list -o json --light
functional_test.go:1379: Took "63.317531ms" to run "out/minikube-linux-amd64 profile list -o json --light"
--- PASS: TestFunctional/parallel/ProfileCmd/profile_json_output (0.41s)

TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP (0.09s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP
functional_test_tunnel_test.go:234: (dbg) Run:  kubectl --context functional-598740 get svc nginx-svc -o jsonpath={.status.loadBalancer.ingress[0].ip}
--- PASS: TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP (0.09s)

TestFunctional/parallel/TunnelCmd/serial/AccessDirect (0s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/AccessDirect
functional_test_tunnel_test.go:299: tunnel at http://10.102.81.199 is working!
--- PASS: TestFunctional/parallel/TunnelCmd/serial/AccessDirect (0.00s)

TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel (0.11s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel
functional_test_tunnel_test.go:434: (dbg) stopping [out/minikube-linux-amd64 -p functional-598740 tunnel --alsologtostderr] ...
--- PASS: TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel (0.11s)

TestFunctional/parallel/MountCmd/any-port (8.21s)

=== RUN   TestFunctional/parallel/MountCmd/any-port
functional_test_mount_test.go:73: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-598740 /tmp/TestFunctionalparallelMountCmdany-port847500864/001:/mount-9p --alsologtostderr -v=1]
functional_test_mount_test.go:107: wrote "test-1725879683072840504" to /tmp/TestFunctionalparallelMountCmdany-port847500864/001/created-by-test
functional_test_mount_test.go:107: wrote "test-1725879683072840504" to /tmp/TestFunctionalparallelMountCmdany-port847500864/001/created-by-test-removed-by-pod
functional_test_mount_test.go:107: wrote "test-1725879683072840504" to /tmp/TestFunctionalparallelMountCmdany-port847500864/001/test-1725879683072840504
functional_test_mount_test.go:115: (dbg) Run:  out/minikube-linux-amd64 -p functional-598740 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:115: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-598740 ssh "findmnt -T /mount-9p | grep 9p": exit status 1 (313.822137ms)

** stderr **
	ssh: Process exited with status 1

** /stderr **
functional_test_mount_test.go:115: (dbg) Run:  out/minikube-linux-amd64 -p functional-598740 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:129: (dbg) Run:  out/minikube-linux-amd64 -p functional-598740 ssh -- ls -la /mount-9p
functional_test_mount_test.go:133: guest mount directory contents
total 2
-rw-r--r-- 1 docker docker 24 Sep  9 11:01 created-by-test
-rw-r--r-- 1 docker docker 24 Sep  9 11:01 created-by-test-removed-by-pod
-rw-r--r-- 1 docker docker 24 Sep  9 11:01 test-1725879683072840504
functional_test_mount_test.go:137: (dbg) Run:  out/minikube-linux-amd64 -p functional-598740 ssh cat /mount-9p/test-1725879683072840504
functional_test_mount_test.go:148: (dbg) Run:  kubectl --context functional-598740 replace --force -f testdata/busybox-mount-test.yaml
functional_test_mount_test.go:153: (dbg) TestFunctional/parallel/MountCmd/any-port: waiting 4m0s for pods matching "integration-test=busybox-mount" in namespace "default" ...
helpers_test.go:344: "busybox-mount" [9f49d131-ff9d-49fc-bc1f-2a95da511fe9] Pending
helpers_test.go:344: "busybox-mount" [9f49d131-ff9d-49fc-bc1f-2a95da511fe9] Pending / Ready:ContainersNotReady (containers with unready status: [mount-munger]) / ContainersReady:ContainersNotReady (containers with unready status: [mount-munger])
helpers_test.go:344: "busybox-mount" [9f49d131-ff9d-49fc-bc1f-2a95da511fe9] Pending / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
helpers_test.go:344: "busybox-mount" [9f49d131-ff9d-49fc-bc1f-2a95da511fe9] Succeeded / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
functional_test_mount_test.go:153: (dbg) TestFunctional/parallel/MountCmd/any-port: integration-test=busybox-mount healthy within 5.023259668s
functional_test_mount_test.go:169: (dbg) Run:  kubectl --context functional-598740 logs busybox-mount
functional_test_mount_test.go:181: (dbg) Run:  out/minikube-linux-amd64 -p functional-598740 ssh stat /mount-9p/created-by-test
functional_test_mount_test.go:181: (dbg) Run:  out/minikube-linux-amd64 -p functional-598740 ssh stat /mount-9p/created-by-pod
functional_test_mount_test.go:90: (dbg) Run:  out/minikube-linux-amd64 -p functional-598740 ssh "sudo umount -f /mount-9p"
functional_test_mount_test.go:94: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-598740 /tmp/TestFunctionalparallelMountCmdany-port847500864/001:/mount-9p --alsologtostderr -v=1] ...
--- PASS: TestFunctional/parallel/MountCmd/any-port (8.21s)
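
The first failing findmnt above is expected: the 9p mount appears in the guest a moment after the `minikube mount` daemon starts, so the test simply retries. A minimal sketch of that wait, assuming this run's profile name and mount point:

	package main

	import (
		"os/exec"
		"time"
	)

	func main() {
		deadline := time.Now().Add(30 * time.Second)
		for time.Now().Before(deadline) {
			// Succeeds once the 9p mount is visible inside the guest.
			err := exec.Command("minikube", "-p", "functional-598740", "ssh",
				"findmnt -T /mount-9p").Run()
			if err == nil {
				return
			}
			time.Sleep(time.Second)
		}
		panic("/mount-9p never appeared")
	}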

TestFunctional/parallel/UpdateContextCmd/no_changes (0.14s)

=== RUN   TestFunctional/parallel/UpdateContextCmd/no_changes
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_changes

=== CONT  TestFunctional/parallel/UpdateContextCmd/no_changes
functional_test.go:2119: (dbg) Run:  out/minikube-linux-amd64 -p functional-598740 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_changes (0.14s)

TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster (0.12s)

=== RUN   TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster

=== CONT  TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
functional_test.go:2119: (dbg) Run:  out/minikube-linux-amd64 -p functional-598740 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster (0.12s)

TestFunctional/parallel/UpdateContextCmd/no_clusters (0.13s)

=== RUN   TestFunctional/parallel/UpdateContextCmd/no_clusters
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_clusters

=== CONT  TestFunctional/parallel/UpdateContextCmd/no_clusters
functional_test.go:2119: (dbg) Run:  out/minikube-linux-amd64 -p functional-598740 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_clusters (0.13s)

TestFunctional/parallel/ImageCommands/ImageListShort (0.19s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageListShort
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListShort

=== CONT  TestFunctional/parallel/ImageCommands/ImageListShort
functional_test.go:261: (dbg) Run:  out/minikube-linux-amd64 -p functional-598740 image ls --format short --alsologtostderr
functional_test.go:266: (dbg) Stdout: out/minikube-linux-amd64 -p functional-598740 image ls --format short --alsologtostderr:
registry.k8s.io/pause:latest
registry.k8s.io/pause:3.3
registry.k8s.io/pause:3.10
registry.k8s.io/pause:3.1
registry.k8s.io/kube-scheduler:v1.31.0
registry.k8s.io/kube-proxy:v1.31.0
registry.k8s.io/kube-controller-manager:v1.31.0
registry.k8s.io/kube-apiserver:v1.31.0
registry.k8s.io/etcd:3.5.15-0
registry.k8s.io/echoserver:1.8
registry.k8s.io/coredns/coredns:v1.11.1
gcr.io/k8s-minikube/storage-provisioner:v5
gcr.io/k8s-minikube/busybox:1.28.4-glibc
docker.io/library/nginx:latest
docker.io/library/nginx:alpine
docker.io/library/mysql:5.7
docker.io/library/minikube-local-cache-test:functional-598740
docker.io/kicbase/echo-server:functional-598740
functional_test.go:269: (dbg) Stderr: out/minikube-linux-amd64 -p functional-598740 image ls --format short --alsologtostderr:
I0909 11:01:32.097238   73173 out.go:345] Setting OutFile to fd 1 ...
I0909 11:01:32.097524   73173 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0909 11:01:32.097534   73173 out.go:358] Setting ErrFile to fd 2...
I0909 11:01:32.097539   73173 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0909 11:01:32.097730   73173 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19584-8635/.minikube/bin
I0909 11:01:32.098291   73173 config.go:182] Loaded profile config "functional-598740": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.31.0
I0909 11:01:32.098393   73173 config.go:182] Loaded profile config "functional-598740": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.31.0
I0909 11:01:32.098785   73173 cli_runner.go:164] Run: docker container inspect functional-598740 --format={{.State.Status}}
I0909 11:01:32.116159   73173 ssh_runner.go:195] Run: systemctl --version
I0909 11:01:32.116200   73173 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-598740
I0909 11:01:32.132755   73173 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32778 SSHKeyPath:/home/jenkins/minikube-integration/19584-8635/.minikube/machines/functional-598740/id_rsa Username:docker}
I0909 11:01:32.221140   73173 ssh_runner.go:195] Run: docker images --no-trunc --format "{{json .}}"
--- PASS: TestFunctional/parallel/ImageCommands/ImageListShort (0.19s)

TestFunctional/parallel/ImageCommands/ImageListTable (0.21s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageListTable
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListTable

=== CONT  TestFunctional/parallel/ImageCommands/ImageListTable
functional_test.go:261: (dbg) Run:  out/minikube-linux-amd64 -p functional-598740 image ls --format table --alsologtostderr
functional_test.go:266: (dbg) Stdout: out/minikube-linux-amd64 -p functional-598740 image ls --format table --alsologtostderr:
|---------------------------------------------|-------------------|---------------|--------|
|                    Image                    |        Tag        |   Image ID    |  Size  |
|---------------------------------------------|-------------------|---------------|--------|
| registry.k8s.io/kube-proxy                  | v1.31.0           | ad83b2ca7b09e | 91.5MB |
| registry.k8s.io/etcd                        | 3.5.15-0          | 2e96e5913fc06 | 148MB  |
| registry.k8s.io/pause                       | 3.10              | 873ed75102791 | 736kB  |
| registry.k8s.io/pause                       | latest            | 350b164e7ae1d | 240kB  |
| docker.io/library/minikube-local-cache-test | functional-598740 | a1a284ab59942 | 30B    |
| registry.k8s.io/kube-controller-manager     | v1.31.0           | 045733566833c | 88.4MB |
| registry.k8s.io/kube-scheduler              | v1.31.0           | 1766f54c897f0 | 67.4MB |
| docker.io/library/mysql                     | 5.7               | 5107333e08a87 | 501MB  |
| registry.k8s.io/coredns/coredns             | v1.11.1           | cbb01a7bd410d | 59.8MB |
| gcr.io/k8s-minikube/busybox                 | 1.28.4-glibc      | 56cc512116c8f | 4.4MB  |
| registry.k8s.io/pause                       | 3.1               | da86e6ba6ca19 | 742kB  |
| registry.k8s.io/echoserver                  | 1.8               | 82e4c8a736a4f | 95.4MB |
| docker.io/library/nginx                     | alpine            | c7b4f26a7d93f | 43.2MB |
| docker.io/library/nginx                     | latest            | 39286ab8a5e14 | 188MB  |
| gcr.io/k8s-minikube/storage-provisioner     | v5                | 6e38f40d628db | 31.5MB |
| registry.k8s.io/pause                       | 3.3               | 0184c1613d929 | 683kB  |
| registry.k8s.io/kube-apiserver              | v1.31.0           | 604f5db92eaa8 | 94.2MB |
| docker.io/kicbase/echo-server               | functional-598740 | 9056ab77afb8e | 4.94MB |
|---------------------------------------------|-------------------|---------------|--------|
functional_test.go:269: (dbg) Stderr: out/minikube-linux-amd64 -p functional-598740 image ls --format table --alsologtostderr:
I0909 11:01:34.933791   74572 out.go:345] Setting OutFile to fd 1 ...
I0909 11:01:34.933916   74572 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0909 11:01:34.933925   74572 out.go:358] Setting ErrFile to fd 2...
I0909 11:01:34.933929   74572 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0909 11:01:34.934108   74572 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19584-8635/.minikube/bin
I0909 11:01:34.934643   74572 config.go:182] Loaded profile config "functional-598740": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.31.0
I0909 11:01:34.934734   74572 config.go:182] Loaded profile config "functional-598740": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.31.0
I0909 11:01:34.935067   74572 cli_runner.go:164] Run: docker container inspect functional-598740 --format={{.State.Status}}
I0909 11:01:34.953170   74572 ssh_runner.go:195] Run: systemctl --version
I0909 11:01:34.953222   74572 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-598740
I0909 11:01:34.974445   74572 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32778 SSHKeyPath:/home/jenkins/minikube-integration/19584-8635/.minikube/machines/functional-598740/id_rsa Username:docker}
I0909 11:01:35.065912   74572 ssh_runner.go:195] Run: docker images --no-trunc --format "{{json .}}"
--- PASS: TestFunctional/parallel/ImageCommands/ImageListTable (0.21s)

TestFunctional/parallel/ImageCommands/ImageListJson (0.24s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageListJson
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListJson

=== CONT  TestFunctional/parallel/ImageCommands/ImageListJson
functional_test.go:261: (dbg) Run:  out/minikube-linux-amd64 -p functional-598740 image ls --format json --alsologtostderr
functional_test.go:266: (dbg) Stdout: out/minikube-linux-amd64 -p functional-598740 image ls --format json --alsologtostderr:
[{"id":"82e4c8a736a4fcf22b5ef9f6a4ff6207064c7187d7694bf97bd561605a538410","repoDigests":[],"repoTags":["registry.k8s.io/echoserver:1.8"],"size":"95400000"},{"id":"c7b4f26a7d93f4f1f276c51adb03ef0df54a82de89f254a9aec5c18bf0e45ee9","repoDigests":[],"repoTags":["docker.io/library/nginx:alpine"],"size":"43200000"},{"id":"5107333e08a87b836d48ff7528b1e84b9c86781cc9f1748bbc1b8c42a870d933","repoDigests":[],"repoTags":["docker.io/library/mysql:5.7"],"size":"501000000"},{"id":"0184c1613d92931126feb4c548e5da11015513b9e4c104e7305ee8b53b50a9da","repoDigests":[],"repoTags":["registry.k8s.io/pause:3.3"],"size":"683000"},{"id":"2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4","repoDigests":[],"repoTags":["registry.k8s.io/etcd:3.5.15-0"],"size":"148000000"},{"id":"873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136","repoDigests":[],"repoTags":["registry.k8s.io/pause:3.10"],"size":"736000"},{"id":"56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c","repoDigests":[],"repoTags"
:["gcr.io/k8s-minikube/busybox:1.28.4-glibc"],"size":"4400000"},{"id":"a1a284ab59942515df9319af1a1f94d00cc62565d580572f776f46cbe1743818","repoDigests":[],"repoTags":["docker.io/library/minikube-local-cache-test:functional-598740"],"size":"30"},{"id":"1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94","repoDigests":[],"repoTags":["registry.k8s.io/kube-scheduler:v1.31.0"],"size":"67400000"},{"id":"ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494","repoDigests":[],"repoTags":["registry.k8s.io/kube-proxy:v1.31.0"],"size":"91500000"},{"id":"6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562","repoDigests":[],"repoTags":["gcr.io/k8s-minikube/storage-provisioner:v5"],"size":"31500000"},{"id":"da86e6ba6ca197bf6bc5e9d900febd906b133eaa4750e6bed647b0fbe50ed43e","repoDigests":[],"repoTags":["registry.k8s.io/pause:3.1"],"size":"742000"},{"id":"350b164e7ae1dcddeffadd65c76226c9b6dc5553f5179153fb0e36b78f2a5e06","repoDigests":[],"repoTags":["registry.k8s.io/pause:latest"],"size":"2
40000"},{"id":"045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1","repoDigests":[],"repoTags":["registry.k8s.io/kube-controller-manager:v1.31.0"],"size":"88400000"},{"id":"cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4","repoDigests":[],"repoTags":["registry.k8s.io/coredns/coredns:v1.11.1"],"size":"59800000"},{"id":"9056ab77afb8e18e04303f11000a9d31b3f16b74c59475b899ae1b342d328d30","repoDigests":[],"repoTags":["docker.io/kicbase/echo-server:functional-598740"],"size":"4940000"},{"id":"39286ab8a5e14aeaf5fdd6e2fac76e0c8d31a0c07224f0ee5e6be502f12e93f3","repoDigests":[],"repoTags":["docker.io/library/nginx:latest"],"size":"188000000"},{"id":"604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3","repoDigests":[],"repoTags":["registry.k8s.io/kube-apiserver:v1.31.0"],"size":"94200000"}]
functional_test.go:269: (dbg) Stderr: out/minikube-linux-amd64 -p functional-598740 image ls --format json --alsologtostderr:
I0909 11:01:34.704397   74525 out.go:345] Setting OutFile to fd 1 ...
I0909 11:01:34.704528   74525 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0909 11:01:34.704537   74525 out.go:358] Setting ErrFile to fd 2...
I0909 11:01:34.704544   74525 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0909 11:01:34.704852   74525 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19584-8635/.minikube/bin
I0909 11:01:34.705589   74525 config.go:182] Loaded profile config "functional-598740": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.31.0
I0909 11:01:34.705745   74525 config.go:182] Loaded profile config "functional-598740": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.31.0
I0909 11:01:34.706300   74525 cli_runner.go:164] Run: docker container inspect functional-598740 --format={{.State.Status}}
I0909 11:01:34.729062   74525 ssh_runner.go:195] Run: systemctl --version
I0909 11:01:34.729120   74525 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-598740
I0909 11:01:34.746693   74525 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32778 SSHKeyPath:/home/jenkins/minikube-integration/19584-8635/.minikube/machines/functional-598740/id_rsa Username:docker}
I0909 11:01:34.853228   74525 ssh_runner.go:195] Run: docker images --no-trunc --format "{{json .}}"
--- PASS: TestFunctional/parallel/ImageCommands/ImageListJson (0.24s)

TestFunctional/parallel/ImageCommands/ImageListYaml (0.21s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageListYaml
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListYaml

=== CONT  TestFunctional/parallel/ImageCommands/ImageListYaml
functional_test.go:261: (dbg) Run:  out/minikube-linux-amd64 -p functional-598740 image ls --format yaml --alsologtostderr
functional_test.go:266: (dbg) Stdout: out/minikube-linux-amd64 -p functional-598740 image ls --format yaml --alsologtostderr:
- id: 6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562
repoDigests: []
repoTags:
- gcr.io/k8s-minikube/storage-provisioner:v5
size: "31500000"
- id: 0184c1613d92931126feb4c548e5da11015513b9e4c104e7305ee8b53b50a9da
repoDigests: []
repoTags:
- registry.k8s.io/pause:3.3
size: "683000"
- id: da86e6ba6ca197bf6bc5e9d900febd906b133eaa4750e6bed647b0fbe50ed43e
repoDigests: []
repoTags:
- registry.k8s.io/pause:3.1
size: "742000"
- id: 350b164e7ae1dcddeffadd65c76226c9b6dc5553f5179153fb0e36b78f2a5e06
repoDigests: []
repoTags:
- registry.k8s.io/pause:latest
size: "240000"
- id: 2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4
repoDigests: []
repoTags:
- registry.k8s.io/etcd:3.5.15-0
size: "148000000"
- id: 873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136
repoDigests: []
repoTags:
- registry.k8s.io/pause:3.10
size: "736000"
- id: 9056ab77afb8e18e04303f11000a9d31b3f16b74c59475b899ae1b342d328d30
repoDigests: []
repoTags:
- docker.io/kicbase/echo-server:functional-598740
size: "4940000"
- id: a1a284ab59942515df9319af1a1f94d00cc62565d580572f776f46cbe1743818
repoDigests: []
repoTags:
- docker.io/library/minikube-local-cache-test:functional-598740
size: "30"
- id: 1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94
repoDigests: []
repoTags:
- registry.k8s.io/kube-scheduler:v1.31.0
size: "67400000"
- id: cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4
repoDigests: []
repoTags:
- registry.k8s.io/coredns/coredns:v1.11.1
size: "59800000"
- id: ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494
repoDigests: []
repoTags:
- registry.k8s.io/kube-proxy:v1.31.0
size: "91500000"
- id: 5107333e08a87b836d48ff7528b1e84b9c86781cc9f1748bbc1b8c42a870d933
repoDigests: []
repoTags:
- docker.io/library/mysql:5.7
size: "501000000"
- id: c7b4f26a7d93f4f1f276c51adb03ef0df54a82de89f254a9aec5c18bf0e45ee9
repoDigests: []
repoTags:
- docker.io/library/nginx:alpine
size: "43200000"
- id: 604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3
repoDigests: []
repoTags:
- registry.k8s.io/kube-apiserver:v1.31.0
size: "94200000"
- id: 045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1
repoDigests: []
repoTags:
- registry.k8s.io/kube-controller-manager:v1.31.0
size: "88400000"
- id: 39286ab8a5e14aeaf5fdd6e2fac76e0c8d31a0c07224f0ee5e6be502f12e93f3
repoDigests: []
repoTags:
- docker.io/library/nginx:latest
size: "188000000"
- id: 56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c
repoDigests: []
repoTags:
- gcr.io/k8s-minikube/busybox:1.28.4-glibc
size: "4400000"
- id: 82e4c8a736a4fcf22b5ef9f6a4ff6207064c7187d7694bf97bd561605a538410
repoDigests: []
repoTags:
- registry.k8s.io/echoserver:1.8
size: "95400000"

functional_test.go:269: (dbg) Stderr: out/minikube-linux-amd64 -p functional-598740 image ls --format yaml --alsologtostderr:
I0909 11:01:32.298771   73231 out.go:345] Setting OutFile to fd 1 ...
I0909 11:01:32.298875   73231 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0909 11:01:32.298884   73231 out.go:358] Setting ErrFile to fd 2...
I0909 11:01:32.298887   73231 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0909 11:01:32.299053   73231 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19584-8635/.minikube/bin
I0909 11:01:32.299577   73231 config.go:182] Loaded profile config "functional-598740": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.31.0
I0909 11:01:32.299673   73231 config.go:182] Loaded profile config "functional-598740": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.31.0
I0909 11:01:32.300073   73231 cli_runner.go:164] Run: docker container inspect functional-598740 --format={{.State.Status}}
I0909 11:01:32.317670   73231 ssh_runner.go:195] Run: systemctl --version
I0909 11:01:32.317717   73231 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-598740
I0909 11:01:32.336315   73231 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32778 SSHKeyPath:/home/jenkins/minikube-integration/19584-8635/.minikube/machines/functional-598740/id_rsa Username:docker}
I0909 11:01:32.428974   73231 ssh_runner.go:195] Run: docker images --no-trunc --format "{{json .}}"
--- PASS: TestFunctional/parallel/ImageCommands/ImageListYaml (0.21s)

TestFunctional/parallel/ImageCommands/ImageBuild (4.13s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageBuild
=== PAUSE TestFunctional/parallel/ImageCommands/ImageBuild

=== CONT  TestFunctional/parallel/ImageCommands/ImageBuild
functional_test.go:308: (dbg) Run:  out/minikube-linux-amd64 -p functional-598740 ssh pgrep buildkitd
functional_test.go:308: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-598740 ssh pgrep buildkitd: exit status 1 (295.95517ms)

** stderr ** 
	ssh: Process exited with status 1

** /stderr **
functional_test.go:315: (dbg) Run:  out/minikube-linux-amd64 -p functional-598740 image build -t localhost/my-image:functional-598740 testdata/build --alsologtostderr
functional_test.go:315: (dbg) Done: out/minikube-linux-amd64 -p functional-598740 image build -t localhost/my-image:functional-598740 testdata/build --alsologtostderr: (3.6124088s)
functional_test.go:323: (dbg) Stderr: out/minikube-linux-amd64 -p functional-598740 image build -t localhost/my-image:functional-598740 testdata/build --alsologtostderr:
I0909 11:01:32.797020   73636 out.go:345] Setting OutFile to fd 1 ...
I0909 11:01:32.797140   73636 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0909 11:01:32.797148   73636 out.go:358] Setting ErrFile to fd 2...
I0909 11:01:32.797152   73636 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0909 11:01:32.797322   73636 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19584-8635/.minikube/bin
I0909 11:01:32.798036   73636 config.go:182] Loaded profile config "functional-598740": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.31.0
I0909 11:01:32.798801   73636 config.go:182] Loaded profile config "functional-598740": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.31.0
I0909 11:01:32.799387   73636 cli_runner.go:164] Run: docker container inspect functional-598740 --format={{.State.Status}}
I0909 11:01:32.819028   73636 ssh_runner.go:195] Run: systemctl --version
I0909 11:01:32.819092   73636 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-598740
I0909 11:01:32.835745   73636 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32778 SSHKeyPath:/home/jenkins/minikube-integration/19584-8635/.minikube/machines/functional-598740/id_rsa Username:docker}
I0909 11:01:32.924832   73636 build_images.go:161] Building image from path: /tmp/build.2776585908.tar
I0909 11:01:32.924894   73636 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/build
I0909 11:01:32.934741   73636 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/build/build.2776585908.tar
I0909 11:01:32.939740   73636 ssh_runner.go:352] existence check for /var/lib/minikube/build/build.2776585908.tar: stat -c "%s %y" /var/lib/minikube/build/build.2776585908.tar: Process exited with status 1
stdout:

stderr:
stat: cannot statx '/var/lib/minikube/build/build.2776585908.tar': No such file or directory
I0909 11:01:32.939775   73636 ssh_runner.go:362] scp /tmp/build.2776585908.tar --> /var/lib/minikube/build/build.2776585908.tar (3072 bytes)
I0909 11:01:32.963270   73636 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/build/build.2776585908
I0909 11:01:32.971965   73636 ssh_runner.go:195] Run: sudo tar -C /var/lib/minikube/build/build.2776585908 -xf /var/lib/minikube/build/build.2776585908.tar
I0909 11:01:32.982168   73636 docker.go:360] Building image: /var/lib/minikube/build/build.2776585908
I0909 11:01:32.982252   73636 ssh_runner.go:195] Run: docker build -t localhost/my-image:functional-598740 /var/lib/minikube/build/build.2776585908
#0 building with "default" instance using docker driver

#1 [internal] load build definition from Dockerfile
#1 transferring dockerfile: 97B done
#1 DONE 0.0s

#2 [internal] load metadata for gcr.io/k8s-minikube/busybox:latest
#2 DONE 1.5s

#3 [internal] load .dockerignore
#3 transferring context: 2B done
#3 DONE 0.0s

#4 [internal] load build context
#4 transferring context: 62B done
#4 DONE 0.0s

#5 [1/3] FROM gcr.io/k8s-minikube/busybox:latest@sha256:ca5ae90100d50772da31f3b5016209e25ad61972404e2ccd83d44f10dee7e79b
#5 resolve gcr.io/k8s-minikube/busybox:latest@sha256:ca5ae90100d50772da31f3b5016209e25ad61972404e2ccd83d44f10dee7e79b 0.0s done
#5 sha256:beae173ccac6ad749f76713cf4440fe3d21d1043fe616dfbe30775815d1d0f6a 1.46kB / 1.46kB done
#5 sha256:5cc84ad355aaa64f46ea9c7bbcc319a9d808ab15088a27209c9e70ef86e5a2aa 0B / 772.79kB 0.1s
#5 sha256:ca5ae90100d50772da31f3b5016209e25ad61972404e2ccd83d44f10dee7e79b 770B / 770B done
#5 sha256:62ffc2ed7554e4c6d360bce40bbcf196573dd27c4ce080641a2c59867e732dee 527B / 527B done
#5 extracting sha256:5cc84ad355aaa64f46ea9c7bbcc319a9d808ab15088a27209c9e70ef86e5a2aa
#5 sha256:5cc84ad355aaa64f46ea9c7bbcc319a9d808ab15088a27209c9e70ef86e5a2aa 772.79kB / 772.79kB 0.6s done
#5 extracting sha256:5cc84ad355aaa64f46ea9c7bbcc319a9d808ab15088a27209c9e70ef86e5a2aa 0.0s done
#5 DONE 0.8s

#6 [2/3] RUN true
#6 DONE 0.8s

#7 [3/3] ADD content.txt /
#7 DONE 0.0s

#8 exporting to image
#8 exporting layers 0.0s done
#8 writing image sha256:ad85e0a0eb356171ffedccfa8e313c14bcd7cc8bf582ce60a7072ea242a83f82 done
#8 naming to localhost/my-image:functional-598740 done
#8 DONE 0.0s
I0909 11:01:36.309155   73636 ssh_runner.go:235] Completed: docker build -t localhost/my-image:functional-598740 /var/lib/minikube/build/build.2776585908: (3.326867672s)
I0909 11:01:36.309244   73636 ssh_runner.go:195] Run: sudo rm -rf /var/lib/minikube/build/build.2776585908
I0909 11:01:36.319206   73636 ssh_runner.go:195] Run: sudo rm -f /var/lib/minikube/build/build.2776585908.tar
I0909 11:01:36.358580   73636 build_images.go:217] Built localhost/my-image:functional-598740 from /tmp/build.2776585908.tar
I0909 11:01:36.358616   73636 build_images.go:133] succeeded building to: functional-598740
I0909 11:01:36.358623   73636 build_images.go:134] failed building to: 
functional_test.go:451: (dbg) Run:  out/minikube-linux-amd64 -p functional-598740 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageBuild (4.13s)
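For reference, build steps #1-#8 above imply a three-instruction Dockerfile. The following is a sketch reconstructed from the log (the real contents of testdata/build and of content.txt are not shown in this report), reproducing the same build against this profile:

# Recreate a build context shaped like the one the test ships (reconstruction):
mkdir -p testdata/build
printf 'some content\n' > testdata/build/content.txt   # placeholder; actual file contents unknown
cat > testdata/build/Dockerfile <<'EOF'
FROM gcr.io/k8s-minikube/busybox:latest
RUN true
ADD content.txt /
EOF
# Build inside the cluster's container runtime, as the test does:
out/minikube-linux-amd64 -p functional-598740 image build \
  -t localhost/my-image:functional-598740 testdata/build --alsologtostderr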

TestFunctional/parallel/ImageCommands/Setup (1.91s)

=== RUN   TestFunctional/parallel/ImageCommands/Setup
functional_test.go:342: (dbg) Run:  docker pull kicbase/echo-server:1.0
functional_test.go:342: (dbg) Done: docker pull kicbase/echo-server:1.0: (1.884719312s)
functional_test.go:347: (dbg) Run:  docker tag kicbase/echo-server:1.0 kicbase/echo-server:functional-598740
--- PASS: TestFunctional/parallel/ImageCommands/Setup (1.91s)

TestFunctional/parallel/ImageCommands/ImageLoadDaemon (0.87s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageLoadDaemon
functional_test.go:355: (dbg) Run:  out/minikube-linux-amd64 -p functional-598740 image load --daemon kicbase/echo-server:functional-598740 --alsologtostderr
functional_test.go:451: (dbg) Run:  out/minikube-linux-amd64 -p functional-598740 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageLoadDaemon (0.87s)

TestFunctional/parallel/ImageCommands/ImageReloadDaemon (0.75s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageReloadDaemon
functional_test.go:365: (dbg) Run:  out/minikube-linux-amd64 -p functional-598740 image load --daemon kicbase/echo-server:functional-598740 --alsologtostderr
functional_test.go:451: (dbg) Run:  out/minikube-linux-amd64 -p functional-598740 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageReloadDaemon (0.75s)

TestFunctional/parallel/DockerEnv/bash (0.88s)

=== RUN   TestFunctional/parallel/DockerEnv/bash
functional_test.go:499: (dbg) Run:  /bin/bash -c "eval $(out/minikube-linux-amd64 -p functional-598740 docker-env) && out/minikube-linux-amd64 status -p functional-598740"
functional_test.go:522: (dbg) Run:  /bin/bash -c "eval $(out/minikube-linux-amd64 -p functional-598740 docker-env) && docker images"
--- PASS: TestFunctional/parallel/DockerEnv/bash (0.88s)

TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon (1.64s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon
functional_test.go:235: (dbg) Run:  docker pull kicbase/echo-server:latest
functional_test.go:240: (dbg) Run:  docker tag kicbase/echo-server:latest kicbase/echo-server:functional-598740
functional_test.go:245: (dbg) Run:  out/minikube-linux-amd64 -p functional-598740 image load --daemon kicbase/echo-server:functional-598740 --alsologtostderr
functional_test.go:451: (dbg) Run:  out/minikube-linux-amd64 -p functional-598740 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon (1.64s)

TestFunctional/parallel/ImageCommands/ImageSaveToFile (0.3s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageSaveToFile
functional_test.go:380: (dbg) Run:  out/minikube-linux-amd64 -p functional-598740 image save kicbase/echo-server:functional-598740 /home/jenkins/workspace/Docker_Linux_integration/echo-server-save.tar --alsologtostderr
--- PASS: TestFunctional/parallel/ImageCommands/ImageSaveToFile (0.30s)

TestFunctional/parallel/ImageCommands/ImageRemove (0.4s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageRemove
functional_test.go:392: (dbg) Run:  out/minikube-linux-amd64 -p functional-598740 image rm kicbase/echo-server:functional-598740 --alsologtostderr
functional_test.go:451: (dbg) Run:  out/minikube-linux-amd64 -p functional-598740 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageRemove (0.40s)

TestFunctional/parallel/ImageCommands/ImageLoadFromFile (0.64s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageLoadFromFile
functional_test.go:409: (dbg) Run:  out/minikube-linux-amd64 -p functional-598740 image load /home/jenkins/workspace/Docker_Linux_integration/echo-server-save.tar --alsologtostderr
functional_test.go:451: (dbg) Run:  out/minikube-linux-amd64 -p functional-598740 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageLoadFromFile (0.64s)

TestFunctional/parallel/ImageCommands/ImageSaveDaemon (0.55s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageSaveDaemon
functional_test.go:419: (dbg) Run:  docker rmi kicbase/echo-server:functional-598740
functional_test.go:424: (dbg) Run:  out/minikube-linux-amd64 -p functional-598740 image save --daemon kicbase/echo-server:functional-598740 --alsologtostderr
functional_test.go:432: (dbg) Run:  docker image inspect kicbase/echo-server:functional-598740
--- PASS: TestFunctional/parallel/ImageCommands/ImageSaveDaemon (0.55s)
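Together, ImageSaveToFile, ImageRemove, ImageLoadFromFile and ImageSaveDaemon exercise a full export/import round trip. Condensed from the commands logged above (this run used a Jenkins workspace path for the tar; any writable path works):

# Export the image from the cluster runtime, drop it, and bring it back:
out/minikube-linux-amd64 -p functional-598740 image save kicbase/echo-server:functional-598740 /tmp/echo-server-save.tar
out/minikube-linux-amd64 -p functional-598740 image rm kicbase/echo-server:functional-598740
out/minikube-linux-amd64 -p functional-598740 image load /tmp/echo-server-save.tar
# Copy it back into the host docker daemon and confirm it arrived:
out/minikube-linux-amd64 -p functional-598740 image save --daemon kicbase/echo-server:functional-598740
docker image inspect kicbase/echo-server:functional-598740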

TestFunctional/parallel/MountCmd/specific-port (1.98s)

=== RUN   TestFunctional/parallel/MountCmd/specific-port
functional_test_mount_test.go:213: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-598740 /tmp/TestFunctionalparallelMountCmdspecific-port126823712/001:/mount-9p --alsologtostderr -v=1 --port 46464]
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-linux-amd64 -p functional-598740 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:243: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-598740 ssh "findmnt -T /mount-9p | grep 9p": exit status 1 (381.808306ms)

** stderr ** 
	ssh: Process exited with status 1

** /stderr **
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-linux-amd64 -p functional-598740 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:257: (dbg) Run:  out/minikube-linux-amd64 -p functional-598740 ssh -- ls -la /mount-9p
functional_test_mount_test.go:261: guest mount directory contents
total 0
functional_test_mount_test.go:263: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-598740 /tmp/TestFunctionalparallelMountCmdspecific-port126823712/001:/mount-9p --alsologtostderr -v=1 --port 46464] ...
functional_test_mount_test.go:264: reading mount text
functional_test_mount_test.go:278: done reading mount text
functional_test_mount_test.go:230: (dbg) Run:  out/minikube-linux-amd64 -p functional-598740 ssh "sudo umount -f /mount-9p"
functional_test_mount_test.go:230: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-598740 ssh "sudo umount -f /mount-9p": exit status 1 (250.52032ms)

-- stdout --
	umount: /mount-9p: not mounted.

-- /stdout --
** stderr ** 
	ssh: Process exited with status 32

** /stderr **
functional_test_mount_test.go:232: "out/minikube-linux-amd64 -p functional-598740 ssh \"sudo umount -f /mount-9p\"": exit status 1
functional_test_mount_test.go:234: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-598740 /tmp/TestFunctionalparallelMountCmdspecific-port126823712/001:/mount-9p --alsologtostderr -v=1 --port 46464] ...
--- PASS: TestFunctional/parallel/MountCmd/specific-port (1.98s)
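The sequence above boils down to the following pattern; the first findmnt probe simply races the mount daemon coming up, which is why the test retries it once. A condensed sketch of the logged commands (/tmp/host-dir stands in for the per-test temp directory):

# Serve a host directory into the guest over 9p on a fixed port:
out/minikube-linux-amd64 mount -p functional-598740 /tmp/host-dir:/mount-9p --port 46464 &
# Verify the guest sees the 9p mount, then inspect it:
out/minikube-linux-amd64 -p functional-598740 ssh "findmnt -T /mount-9p | grep 9p"
out/minikube-linux-amd64 -p functional-598740 ssh -- ls -la /mount-9p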

TestFunctional/parallel/MountCmd/VerifyCleanup (1.4s)

=== RUN   TestFunctional/parallel/MountCmd/VerifyCleanup
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-598740 /tmp/TestFunctionalparallelMountCmdVerifyCleanup1251503370/001:/mount1 --alsologtostderr -v=1]
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-598740 /tmp/TestFunctionalparallelMountCmdVerifyCleanup1251503370/001:/mount2 --alsologtostderr -v=1]
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-598740 /tmp/TestFunctionalparallelMountCmdVerifyCleanup1251503370/001:/mount3 --alsologtostderr -v=1]
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-amd64 -p functional-598740 ssh "findmnt -T" /mount1
functional_test_mount_test.go:325: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-598740 ssh "findmnt -T" /mount1: exit status 1 (281.708112ms)

** stderr ** 
	ssh: Process exited with status 1

** /stderr **
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-amd64 -p functional-598740 ssh "findmnt -T" /mount1
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-amd64 -p functional-598740 ssh "findmnt -T" /mount2
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-amd64 -p functional-598740 ssh "findmnt -T" /mount3
functional_test_mount_test.go:370: (dbg) Run:  out/minikube-linux-amd64 mount -p functional-598740 --kill=true
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-598740 /tmp/TestFunctionalparallelMountCmdVerifyCleanup1251503370/001:/mount1 --alsologtostderr -v=1] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-598740 /tmp/TestFunctionalparallelMountCmdVerifyCleanup1251503370/001:/mount2 --alsologtostderr -v=1] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-598740 /tmp/TestFunctionalparallelMountCmdVerifyCleanup1251503370/001:/mount3 --alsologtostderr -v=1] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
--- PASS: TestFunctional/parallel/MountCmd/VerifyCleanup (1.40s)
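What this test pins down is the --kill flag: with several mount daemons running for one profile, a single invocation tears all of them down. A sketch of the logged commands (/tmp/host-dir stands in for the per-test temp directory):

out/minikube-linux-amd64 mount -p functional-598740 /tmp/host-dir:/mount1 &
out/minikube-linux-amd64 mount -p functional-598740 /tmp/host-dir:/mount2 &
out/minikube-linux-amd64 mount -p functional-598740 /tmp/host-dir:/mount3 &
# One cleanup call kills every mount process belonging to the profile:
out/minikube-linux-amd64 mount -p functional-598740 --kill=true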

TestFunctional/delete_echo-server_images (0.04s)

=== RUN   TestFunctional/delete_echo-server_images
functional_test.go:190: (dbg) Run:  docker rmi -f kicbase/echo-server:1.0
functional_test.go:190: (dbg) Run:  docker rmi -f kicbase/echo-server:functional-598740
--- PASS: TestFunctional/delete_echo-server_images (0.04s)

TestFunctional/delete_my-image_image (0.02s)

=== RUN   TestFunctional/delete_my-image_image
functional_test.go:198: (dbg) Run:  docker rmi -f localhost/my-image:functional-598740
--- PASS: TestFunctional/delete_my-image_image (0.02s)

TestFunctional/delete_minikube_cached_images (0.02s)

=== RUN   TestFunctional/delete_minikube_cached_images
functional_test.go:206: (dbg) Run:  docker rmi -f minikube-local-cache-test:functional-598740
--- PASS: TestFunctional/delete_minikube_cached_images (0.02s)

TestMultiControlPlane/serial/StartCluster (100.36s)

=== RUN   TestMultiControlPlane/serial/StartCluster
ha_test.go:101: (dbg) Run:  out/minikube-linux-amd64 start -p ha-743551 --wait=true --memory=2200 --ha -v=7 --alsologtostderr --driver=docker  --container-runtime=docker
E0909 11:02:44.831706   15404 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19584-8635/.minikube/profiles/addons-271785/client.crt: no such file or directory" logger="UnhandledError"
E0909 11:02:44.838697   15404 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19584-8635/.minikube/profiles/addons-271785/client.crt: no such file or directory" logger="UnhandledError"
E0909 11:02:44.850014   15404 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19584-8635/.minikube/profiles/addons-271785/client.crt: no such file or directory" logger="UnhandledError"
E0909 11:02:44.871363   15404 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19584-8635/.minikube/profiles/addons-271785/client.crt: no such file or directory" logger="UnhandledError"
E0909 11:02:44.912737   15404 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19584-8635/.minikube/profiles/addons-271785/client.crt: no such file or directory" logger="UnhandledError"
E0909 11:02:44.994130   15404 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19584-8635/.minikube/profiles/addons-271785/client.crt: no such file or directory" logger="UnhandledError"
E0909 11:02:45.156249   15404 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19584-8635/.minikube/profiles/addons-271785/client.crt: no such file or directory" logger="UnhandledError"
E0909 11:02:45.477759   15404 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19584-8635/.minikube/profiles/addons-271785/client.crt: no such file or directory" logger="UnhandledError"
E0909 11:02:46.119870   15404 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19584-8635/.minikube/profiles/addons-271785/client.crt: no such file or directory" logger="UnhandledError"
E0909 11:02:47.401289   15404 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19584-8635/.minikube/profiles/addons-271785/client.crt: no such file or directory" logger="UnhandledError"
E0909 11:02:49.963338   15404 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19584-8635/.minikube/profiles/addons-271785/client.crt: no such file or directory" logger="UnhandledError"
E0909 11:02:55.084985   15404 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19584-8635/.minikube/profiles/addons-271785/client.crt: no such file or directory" logger="UnhandledError"
E0909 11:03:05.326330   15404 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19584-8635/.minikube/profiles/addons-271785/client.crt: no such file or directory" logger="UnhandledError"
ha_test.go:101: (dbg) Done: out/minikube-linux-amd64 start -p ha-743551 --wait=true --memory=2200 --ha -v=7 --alsologtostderr --driver=docker  --container-runtime=docker: (1m39.695671654s)
ha_test.go:107: (dbg) Run:  out/minikube-linux-amd64 -p ha-743551 status -v=7 --alsologtostderr
E0909 11:03:25.808328   15404 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19584-8635/.minikube/profiles/addons-271785/client.crt: no such file or directory" logger="UnhandledError"
--- PASS: TestMultiControlPlane/serial/StartCluster (100.36s)
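For orientation: --ha requests a multi-control-plane topology, so the profile comes up with three control planes (a worker is added in a later test). Condensed from the command logged above:

out/minikube-linux-amd64 start -p ha-743551 --ha --wait=true --memory=2200 \
  --driver=docker --container-runtime=docker -v=7 --alsologtostderr
# One status block per node (control planes and, once added, workers):
out/minikube-linux-amd64 -p ha-743551 status -v=7 --alsologtostderr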

TestMultiControlPlane/serial/DeployApp (6.35s)

=== RUN   TestMultiControlPlane/serial/DeployApp
ha_test.go:128: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-743551 -- apply -f ./testdata/ha/ha-pod-dns-test.yaml
ha_test.go:133: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-743551 -- rollout status deployment/busybox
ha_test.go:133: (dbg) Done: out/minikube-linux-amd64 kubectl -p ha-743551 -- rollout status deployment/busybox: (4.508874494s)
ha_test.go:140: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-743551 -- get pods -o jsonpath='{.items[*].status.podIP}'
ha_test.go:163: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-743551 -- get pods -o jsonpath='{.items[*].metadata.name}'
ha_test.go:171: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-743551 -- exec busybox-7dff88458-945qn -- nslookup kubernetes.io
ha_test.go:171: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-743551 -- exec busybox-7dff88458-d4lc9 -- nslookup kubernetes.io
ha_test.go:171: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-743551 -- exec busybox-7dff88458-h672r -- nslookup kubernetes.io
ha_test.go:181: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-743551 -- exec busybox-7dff88458-945qn -- nslookup kubernetes.default
ha_test.go:181: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-743551 -- exec busybox-7dff88458-d4lc9 -- nslookup kubernetes.default
ha_test.go:181: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-743551 -- exec busybox-7dff88458-h672r -- nslookup kubernetes.default
ha_test.go:189: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-743551 -- exec busybox-7dff88458-945qn -- nslookup kubernetes.default.svc.cluster.local
ha_test.go:189: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-743551 -- exec busybox-7dff88458-d4lc9 -- nslookup kubernetes.default.svc.cluster.local
ha_test.go:189: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-743551 -- exec busybox-7dff88458-h672r -- nslookup kubernetes.default.svc.cluster.local
--- PASS: TestMultiControlPlane/serial/DeployApp (6.35s)
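Each busybox replica is checked with the same three lookups, covering external DNS, the short in-cluster service name, and the full FQDN. Per pod, the probe is (pod names vary per run; <pod> is a placeholder):

out/minikube-linux-amd64 kubectl -p ha-743551 -- exec <pod> -- nslookup kubernetes.io
out/minikube-linux-amd64 kubectl -p ha-743551 -- exec <pod> -- nslookup kubernetes.default
out/minikube-linux-amd64 kubectl -p ha-743551 -- exec <pod> -- nslookup kubernetes.default.svc.cluster.local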

TestMultiControlPlane/serial/PingHostFromPods (1.03s)

=== RUN   TestMultiControlPlane/serial/PingHostFromPods
ha_test.go:199: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-743551 -- get pods -o jsonpath='{.items[*].metadata.name}'
ha_test.go:207: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-743551 -- exec busybox-7dff88458-945qn -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
ha_test.go:218: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-743551 -- exec busybox-7dff88458-945qn -- sh -c "ping -c 1 192.168.49.1"
ha_test.go:207: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-743551 -- exec busybox-7dff88458-d4lc9 -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
ha_test.go:218: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-743551 -- exec busybox-7dff88458-d4lc9 -- sh -c "ping -c 1 192.168.49.1"
ha_test.go:207: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-743551 -- exec busybox-7dff88458-h672r -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
ha_test.go:218: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-743551 -- exec busybox-7dff88458-h672r -- sh -c "ping -c 1 192.168.49.1"
--- PASS: TestMultiControlPlane/serial/PingHostFromPods (1.03s)
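The host probe resolves host.minikube.internal inside each pod, extracts the address with awk/cut, and then pings it (192.168.49.1 is the docker network gateway in this run; <pod> is a placeholder):

# Resolve the host address from inside the pod, then ping it once:
out/minikube-linux-amd64 kubectl -p ha-743551 -- exec <pod> -- sh -c \
  "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
out/minikube-linux-amd64 kubectl -p ha-743551 -- exec <pod> -- sh -c "ping -c 1 192.168.49.1"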

TestMultiControlPlane/serial/AddWorkerNode (20.16s)

=== RUN   TestMultiControlPlane/serial/AddWorkerNode
ha_test.go:228: (dbg) Run:  out/minikube-linux-amd64 node add -p ha-743551 -v=7 --alsologtostderr
ha_test.go:228: (dbg) Done: out/minikube-linux-amd64 node add -p ha-743551 -v=7 --alsologtostderr: (19.353974474s)
ha_test.go:234: (dbg) Run:  out/minikube-linux-amd64 -p ha-743551 status -v=7 --alsologtostderr
--- PASS: TestMultiControlPlane/serial/AddWorkerNode (20.16s)

                                                
                                    
x
+
TestMultiControlPlane/serial/NodeLabels (0.06s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/NodeLabels
ha_test.go:255: (dbg) Run:  kubectl --context ha-743551 get nodes -o "jsonpath=[{range .items[*]}{.metadata.labels},{end}]"
--- PASS: TestMultiControlPlane/serial/NodeLabels (0.06s)

TestMultiControlPlane/serial/HAppyAfterClusterStart (0.62s)

=== RUN   TestMultiControlPlane/serial/HAppyAfterClusterStart
ha_test.go:281: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiControlPlane/serial/HAppyAfterClusterStart (0.62s)

TestMultiControlPlane/serial/CopyFile (15.29s)

=== RUN   TestMultiControlPlane/serial/CopyFile
ha_test.go:326: (dbg) Run:  out/minikube-linux-amd64 -p ha-743551 status --output json -v=7 --alsologtostderr
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-743551 cp testdata/cp-test.txt ha-743551:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-743551 ssh -n ha-743551 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-743551 cp ha-743551:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile3530091661/001/cp-test_ha-743551.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-743551 ssh -n ha-743551 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-743551 cp ha-743551:/home/docker/cp-test.txt ha-743551-m02:/home/docker/cp-test_ha-743551_ha-743551-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-743551 ssh -n ha-743551 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-743551 ssh -n ha-743551-m02 "sudo cat /home/docker/cp-test_ha-743551_ha-743551-m02.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-743551 cp ha-743551:/home/docker/cp-test.txt ha-743551-m03:/home/docker/cp-test_ha-743551_ha-743551-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-743551 ssh -n ha-743551 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-743551 ssh -n ha-743551-m03 "sudo cat /home/docker/cp-test_ha-743551_ha-743551-m03.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-743551 cp ha-743551:/home/docker/cp-test.txt ha-743551-m04:/home/docker/cp-test_ha-743551_ha-743551-m04.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-743551 ssh -n ha-743551 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-743551 ssh -n ha-743551-m04 "sudo cat /home/docker/cp-test_ha-743551_ha-743551-m04.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-743551 cp testdata/cp-test.txt ha-743551-m02:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-743551 ssh -n ha-743551-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-743551 cp ha-743551-m02:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile3530091661/001/cp-test_ha-743551-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-743551 ssh -n ha-743551-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-743551 cp ha-743551-m02:/home/docker/cp-test.txt ha-743551:/home/docker/cp-test_ha-743551-m02_ha-743551.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-743551 ssh -n ha-743551-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-743551 ssh -n ha-743551 "sudo cat /home/docker/cp-test_ha-743551-m02_ha-743551.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-743551 cp ha-743551-m02:/home/docker/cp-test.txt ha-743551-m03:/home/docker/cp-test_ha-743551-m02_ha-743551-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-743551 ssh -n ha-743551-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-743551 ssh -n ha-743551-m03 "sudo cat /home/docker/cp-test_ha-743551-m02_ha-743551-m03.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-743551 cp ha-743551-m02:/home/docker/cp-test.txt ha-743551-m04:/home/docker/cp-test_ha-743551-m02_ha-743551-m04.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-743551 ssh -n ha-743551-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-743551 ssh -n ha-743551-m04 "sudo cat /home/docker/cp-test_ha-743551-m02_ha-743551-m04.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-743551 cp testdata/cp-test.txt ha-743551-m03:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-743551 ssh -n ha-743551-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-743551 cp ha-743551-m03:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile3530091661/001/cp-test_ha-743551-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-743551 ssh -n ha-743551-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-743551 cp ha-743551-m03:/home/docker/cp-test.txt ha-743551:/home/docker/cp-test_ha-743551-m03_ha-743551.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-743551 ssh -n ha-743551-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-743551 ssh -n ha-743551 "sudo cat /home/docker/cp-test_ha-743551-m03_ha-743551.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-743551 cp ha-743551-m03:/home/docker/cp-test.txt ha-743551-m02:/home/docker/cp-test_ha-743551-m03_ha-743551-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-743551 ssh -n ha-743551-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-743551 ssh -n ha-743551-m02 "sudo cat /home/docker/cp-test_ha-743551-m03_ha-743551-m02.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-743551 cp ha-743551-m03:/home/docker/cp-test.txt ha-743551-m04:/home/docker/cp-test_ha-743551-m03_ha-743551-m04.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-743551 ssh -n ha-743551-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-743551 ssh -n ha-743551-m04 "sudo cat /home/docker/cp-test_ha-743551-m03_ha-743551-m04.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-743551 cp testdata/cp-test.txt ha-743551-m04:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-743551 ssh -n ha-743551-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-743551 cp ha-743551-m04:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile3530091661/001/cp-test_ha-743551-m04.txt
E0909 11:04:06.770749   15404 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19584-8635/.minikube/profiles/addons-271785/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-743551 ssh -n ha-743551-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-743551 cp ha-743551-m04:/home/docker/cp-test.txt ha-743551:/home/docker/cp-test_ha-743551-m04_ha-743551.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-743551 ssh -n ha-743551-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-743551 ssh -n ha-743551 "sudo cat /home/docker/cp-test_ha-743551-m04_ha-743551.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-743551 cp ha-743551-m04:/home/docker/cp-test.txt ha-743551-m02:/home/docker/cp-test_ha-743551-m04_ha-743551-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-743551 ssh -n ha-743551-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-743551 ssh -n ha-743551-m02 "sudo cat /home/docker/cp-test_ha-743551-m04_ha-743551-m02.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-743551 cp ha-743551-m04:/home/docker/cp-test.txt ha-743551-m03:/home/docker/cp-test_ha-743551-m04_ha-743551-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-743551 ssh -n ha-743551-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-743551 ssh -n ha-743551-m03 "sudo cat /home/docker/cp-test_ha-743551-m04_ha-743551-m03.txt"
--- PASS: TestMultiControlPlane/serial/CopyFile (15.29s)
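The 15 seconds spent here come from copying the same file across every node pair; each pair reduces to a cp plus a cat-over-ssh check. For one pair (node names as in this run):

# host -> node, then node -> node, each verified by reading the file back:
out/minikube-linux-amd64 -p ha-743551 cp testdata/cp-test.txt ha-743551-m02:/home/docker/cp-test.txt
out/minikube-linux-amd64 -p ha-743551 cp ha-743551-m02:/home/docker/cp-test.txt \
  ha-743551-m03:/home/docker/cp-test_ha-743551-m02_ha-743551-m03.txt
out/minikube-linux-amd64 -p ha-743551 ssh -n ha-743551-m03 "sudo cat /home/docker/cp-test_ha-743551-m02_ha-743551-m03.txt"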

TestMultiControlPlane/serial/StopSecondaryNode (11.36s)

=== RUN   TestMultiControlPlane/serial/StopSecondaryNode
ha_test.go:363: (dbg) Run:  out/minikube-linux-amd64 -p ha-743551 node stop m02 -v=7 --alsologtostderr
ha_test.go:363: (dbg) Done: out/minikube-linux-amd64 -p ha-743551 node stop m02 -v=7 --alsologtostderr: (10.735078846s)
ha_test.go:369: (dbg) Run:  out/minikube-linux-amd64 -p ha-743551 status -v=7 --alsologtostderr
ha_test.go:369: (dbg) Non-zero exit: out/minikube-linux-amd64 -p ha-743551 status -v=7 --alsologtostderr: exit status 7 (623.302202ms)

-- stdout --
	ha-743551
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-743551-m02
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	ha-743551-m03
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-743551-m04
	type: Worker
	host: Running
	kubelet: Running

-- /stdout --
** stderr ** 
	I0909 11:04:20.638800  101818 out.go:345] Setting OutFile to fd 1 ...
	I0909 11:04:20.638898  101818 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0909 11:04:20.638905  101818 out.go:358] Setting ErrFile to fd 2...
	I0909 11:04:20.638910  101818 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0909 11:04:20.639111  101818 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19584-8635/.minikube/bin
	I0909 11:04:20.639264  101818 out.go:352] Setting JSON to false
	I0909 11:04:20.639288  101818 mustload.go:65] Loading cluster: ha-743551
	I0909 11:04:20.639335  101818 notify.go:220] Checking for updates...
	I0909 11:04:20.639754  101818 config.go:182] Loaded profile config "ha-743551": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.31.0
	I0909 11:04:20.639773  101818 status.go:255] checking status of ha-743551 ...
	I0909 11:04:20.640263  101818 cli_runner.go:164] Run: docker container inspect ha-743551 --format={{.State.Status}}
	I0909 11:04:20.658340  101818 status.go:330] ha-743551 host status = "Running" (err=<nil>)
	I0909 11:04:20.658363  101818 host.go:66] Checking if "ha-743551" exists ...
	I0909 11:04:20.658653  101818 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-743551
	I0909 11:04:20.676199  101818 host.go:66] Checking if "ha-743551" exists ...
	I0909 11:04:20.676430  101818 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0909 11:04:20.676464  101818 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-743551
	I0909 11:04:20.693388  101818 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32783 SSHKeyPath:/home/jenkins/minikube-integration/19584-8635/.minikube/machines/ha-743551/id_rsa Username:docker}
	I0909 11:04:20.785422  101818 ssh_runner.go:195] Run: systemctl --version
	I0909 11:04:20.789104  101818 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0909 11:04:20.799227  101818 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0909 11:04:20.845045  101818 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:4 ContainersRunning:3 ContainersPaused:0 ContainersStopped:1 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:49 OomKillDisable:true NGoroutines:72 SystemTime:2024-09-09 11:04:20.835878846 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1067-gcp OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33647943680 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent Labels:[] ExperimentalBuild:false ServerVersion:27.2.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:472731909fa34bd7bc9c087e4c27943f9835f111 Expected:472731909fa34bd7bc9c087e4c27943f9835f111} RuncCommit:{ID:v1.1.13-0-g58aa920 Expected:v1.1.13-0-g58aa920} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.16.2] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.29.2] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I0909 11:04:20.845803  101818 kubeconfig.go:125] found "ha-743551" server: "https://192.168.49.254:8443"
	I0909 11:04:20.845835  101818 api_server.go:166] Checking apiserver status ...
	I0909 11:04:20.845883  101818 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0909 11:04:20.856574  101818 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/2407/cgroup
	I0909 11:04:20.864978  101818 api_server.go:182] apiserver freezer: "5:freezer:/docker/e53cd1192e3e76fe7988b7b767860ad0287c945e97184ebcef4ee7e4a755979a/kubepods/burstable/pod83d7f5f0bcf88c609b6a133be08bcbf1/1e82567068e705ae68863c50bd8c1d6a0b04613ae6263ab7ef76b4a92f823daf"
	I0909 11:04:20.865037  101818 ssh_runner.go:195] Run: sudo cat /sys/fs/cgroup/freezer/docker/e53cd1192e3e76fe7988b7b767860ad0287c945e97184ebcef4ee7e4a755979a/kubepods/burstable/pod83d7f5f0bcf88c609b6a133be08bcbf1/1e82567068e705ae68863c50bd8c1d6a0b04613ae6263ab7ef76b4a92f823daf/freezer.state
	I0909 11:04:20.872768  101818 api_server.go:204] freezer state: "THAWED"
	I0909 11:04:20.872805  101818 api_server.go:253] Checking apiserver healthz at https://192.168.49.254:8443/healthz ...
	I0909 11:04:20.876311  101818 api_server.go:279] https://192.168.49.254:8443/healthz returned 200:
	ok
	I0909 11:04:20.876332  101818 status.go:422] ha-743551 apiserver status = Running (err=<nil>)
	I0909 11:04:20.876342  101818 status.go:257] ha-743551 status: &{Name:ha-743551 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0909 11:04:20.876357  101818 status.go:255] checking status of ha-743551-m02 ...
	I0909 11:04:20.876584  101818 cli_runner.go:164] Run: docker container inspect ha-743551-m02 --format={{.State.Status}}
	I0909 11:04:20.895058  101818 status.go:330] ha-743551-m02 host status = "Stopped" (err=<nil>)
	I0909 11:04:20.895077  101818 status.go:343] host is not running, skipping remaining checks
	I0909 11:04:20.895085  101818 status.go:257] ha-743551-m02 status: &{Name:ha-743551-m02 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0909 11:04:20.895120  101818 status.go:255] checking status of ha-743551-m03 ...
	I0909 11:04:20.895337  101818 cli_runner.go:164] Run: docker container inspect ha-743551-m03 --format={{.State.Status}}
	I0909 11:04:20.911110  101818 status.go:330] ha-743551-m03 host status = "Running" (err=<nil>)
	I0909 11:04:20.911132  101818 host.go:66] Checking if "ha-743551-m03" exists ...
	I0909 11:04:20.911361  101818 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-743551-m03
	I0909 11:04:20.928365  101818 host.go:66] Checking if "ha-743551-m03" exists ...
	I0909 11:04:20.928649  101818 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0909 11:04:20.928688  101818 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-743551-m03
	I0909 11:04:20.945358  101818 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32793 SSHKeyPath:/home/jenkins/minikube-integration/19584-8635/.minikube/machines/ha-743551-m03/id_rsa Username:docker}
	I0909 11:04:21.033290  101818 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0909 11:04:21.043600  101818 kubeconfig.go:125] found "ha-743551" server: "https://192.168.49.254:8443"
	I0909 11:04:21.043623  101818 api_server.go:166] Checking apiserver status ...
	I0909 11:04:21.043659  101818 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0909 11:04:21.053472  101818 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/2276/cgroup
	I0909 11:04:21.061740  101818 api_server.go:182] apiserver freezer: "5:freezer:/docker/f1677d3904c4a9555938b05a6fa91e7da315065547361cdf9d3d12410cbe8efd/kubepods/burstable/pod504fe1933e3fa17cf272f83e0373b1d8/cc1055768e89622f28929e3c36b873c9dbfa0712ffbe71838e0de99ce7155779"
	I0909 11:04:21.061805  101818 ssh_runner.go:195] Run: sudo cat /sys/fs/cgroup/freezer/docker/f1677d3904c4a9555938b05a6fa91e7da315065547361cdf9d3d12410cbe8efd/kubepods/burstable/pod504fe1933e3fa17cf272f83e0373b1d8/cc1055768e89622f28929e3c36b873c9dbfa0712ffbe71838e0de99ce7155779/freezer.state
	I0909 11:04:21.069479  101818 api_server.go:204] freezer state: "THAWED"
	I0909 11:04:21.069502  101818 api_server.go:253] Checking apiserver healthz at https://192.168.49.254:8443/healthz ...
	I0909 11:04:21.073116  101818 api_server.go:279] https://192.168.49.254:8443/healthz returned 200:
	ok
	I0909 11:04:21.073138  101818 status.go:422] ha-743551-m03 apiserver status = Running (err=<nil>)
	I0909 11:04:21.073148  101818 status.go:257] ha-743551-m03 status: &{Name:ha-743551-m03 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0909 11:04:21.073183  101818 status.go:255] checking status of ha-743551-m04 ...
	I0909 11:04:21.073446  101818 cli_runner.go:164] Run: docker container inspect ha-743551-m04 --format={{.State.Status}}
	I0909 11:04:21.090903  101818 status.go:330] ha-743551-m04 host status = "Running" (err=<nil>)
	I0909 11:04:21.090924  101818 host.go:66] Checking if "ha-743551-m04" exists ...
	I0909 11:04:21.091176  101818 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-743551-m04
	I0909 11:04:21.108028  101818 host.go:66] Checking if "ha-743551-m04" exists ...
	I0909 11:04:21.108262  101818 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0909 11:04:21.108313  101818 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-743551-m04
	I0909 11:04:21.125098  101818 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32798 SSHKeyPath:/home/jenkins/minikube-integration/19584-8635/.minikube/machines/ha-743551-m04/id_rsa Username:docker}
	I0909 11:04:21.209258  101818 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0909 11:04:21.219897  101818 status.go:257] ha-743551-m04 status: &{Name:ha-743551-m04 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
--- PASS: TestMultiControlPlane/serial/StopSecondaryNode (11.36s)
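Note the exit status 7 above: once a control-plane node is stopped, minikube status deliberately exits non-zero while still printing every node's state, so scripts can detect degradation. Condensed from the logged commands:

out/minikube-linux-amd64 -p ha-743551 node stop m02
out/minikube-linux-amd64 -p ha-743551 status -v=7 --alsologtostderr
echo $?   # 7 while ha-743551-m02 reports Stopped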

TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop (0.46s)

=== RUN   TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop
ha_test.go:390: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop (0.46s)

TestMultiControlPlane/serial/RestartSecondaryNode (39.48s)

=== RUN   TestMultiControlPlane/serial/RestartSecondaryNode
ha_test.go:420: (dbg) Run:  out/minikube-linux-amd64 -p ha-743551 node start m02 -v=7 --alsologtostderr
ha_test.go:420: (dbg) Done: out/minikube-linux-amd64 -p ha-743551 node start m02 -v=7 --alsologtostderr: (38.553487625s)
ha_test.go:428: (dbg) Run:  out/minikube-linux-amd64 -p ha-743551 status -v=7 --alsologtostderr
ha_test.go:448: (dbg) Run:  kubectl get nodes
--- PASS: TestMultiControlPlane/serial/RestartSecondaryNode (39.48s)

TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart (1.8s)

=== RUN   TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart
ha_test.go:281: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
ha_test.go:281: (dbg) Done: out/minikube-linux-amd64 profile list --output json: (1.799922607s)
--- PASS: TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart (1.80s)

TestMultiControlPlane/serial/RestartClusterKeepsNodes (218.16s)

=== RUN   TestMultiControlPlane/serial/RestartClusterKeepsNodes
ha_test.go:456: (dbg) Run:  out/minikube-linux-amd64 node list -p ha-743551 -v=7 --alsologtostderr
ha_test.go:462: (dbg) Run:  out/minikube-linux-amd64 stop -p ha-743551 -v=7 --alsologtostderr
E0909 11:05:28.692761   15404 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19584-8635/.minikube/profiles/addons-271785/client.crt: no such file or directory" logger="UnhandledError"
ha_test.go:462: (dbg) Done: out/minikube-linux-amd64 stop -p ha-743551 -v=7 --alsologtostderr: (33.460798107s)
ha_test.go:467: (dbg) Run:  out/minikube-linux-amd64 start -p ha-743551 --wait=true -v=7 --alsologtostderr
E0909 11:05:59.340793   15404 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19584-8635/.minikube/profiles/functional-598740/client.crt: no such file or directory" logger="UnhandledError"
E0909 11:05:59.347212   15404 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19584-8635/.minikube/profiles/functional-598740/client.crt: no such file or directory" logger="UnhandledError"
E0909 11:05:59.358605   15404 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19584-8635/.minikube/profiles/functional-598740/client.crt: no such file or directory" logger="UnhandledError"
E0909 11:05:59.379999   15404 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19584-8635/.minikube/profiles/functional-598740/client.crt: no such file or directory" logger="UnhandledError"
E0909 11:05:59.421403   15404 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19584-8635/.minikube/profiles/functional-598740/client.crt: no such file or directory" logger="UnhandledError"
E0909 11:05:59.502818   15404 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19584-8635/.minikube/profiles/functional-598740/client.crt: no such file or directory" logger="UnhandledError"
E0909 11:05:59.664384   15404 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19584-8635/.minikube/profiles/functional-598740/client.crt: no such file or directory" logger="UnhandledError"
E0909 11:05:59.986786   15404 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19584-8635/.minikube/profiles/functional-598740/client.crt: no such file or directory" logger="UnhandledError"
E0909 11:06:00.628651   15404 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19584-8635/.minikube/profiles/functional-598740/client.crt: no such file or directory" logger="UnhandledError"
E0909 11:06:01.910729   15404 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19584-8635/.minikube/profiles/functional-598740/client.crt: no such file or directory" logger="UnhandledError"
E0909 11:06:04.472047   15404 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19584-8635/.minikube/profiles/functional-598740/client.crt: no such file or directory" logger="UnhandledError"
E0909 11:06:09.593344   15404 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19584-8635/.minikube/profiles/functional-598740/client.crt: no such file or directory" logger="UnhandledError"
E0909 11:06:19.835662   15404 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19584-8635/.minikube/profiles/functional-598740/client.crt: no such file or directory" logger="UnhandledError"
E0909 11:06:40.317192   15404 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19584-8635/.minikube/profiles/functional-598740/client.crt: no such file or directory" logger="UnhandledError"
E0909 11:07:21.279070   15404 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19584-8635/.minikube/profiles/functional-598740/client.crt: no such file or directory" logger="UnhandledError"
E0909 11:07:44.831143   15404 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19584-8635/.minikube/profiles/addons-271785/client.crt: no such file or directory" logger="UnhandledError"
E0909 11:08:12.534548   15404 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19584-8635/.minikube/profiles/addons-271785/client.crt: no such file or directory" logger="UnhandledError"
ha_test.go:467: (dbg) Done: out/minikube-linux-amd64 start -p ha-743551 --wait=true -v=7 --alsologtostderr: (3m4.612204718s)
ha_test.go:472: (dbg) Run:  out/minikube-linux-amd64 node list -p ha-743551
--- PASS: TestMultiControlPlane/serial/RestartClusterKeepsNodes (218.16s)

                                                
                                    
TestMultiControlPlane/serial/DeleteSecondaryNode (9.26s)
=== RUN   TestMultiControlPlane/serial/DeleteSecondaryNode
ha_test.go:487: (dbg) Run:  out/minikube-linux-amd64 -p ha-743551 node delete m03 -v=7 --alsologtostderr
E0909 11:08:43.200736   15404 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19584-8635/.minikube/profiles/functional-598740/client.crt: no such file or directory" logger="UnhandledError"
ha_test.go:487: (dbg) Done: out/minikube-linux-amd64 -p ha-743551 node delete m03 -v=7 --alsologtostderr: (8.518941337s)
ha_test.go:493: (dbg) Run:  out/minikube-linux-amd64 -p ha-743551 status -v=7 --alsologtostderr
ha_test.go:511: (dbg) Run:  kubectl get nodes
ha_test.go:519: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiControlPlane/serial/DeleteSecondaryNode (9.26s)
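
Note: the node-readiness check at ha_test.go:519 is easier to read unescaped. A minimal sketch of the same query (the context name matching the profile is an assumption, as minikube names contexts after profiles):

	# prints one Ready-condition status per node; expect "True" on every line
	kubectl --context ha-743551 get nodes -o go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'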

                                                
                                    
TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete (0.45s)
=== RUN   TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete
ha_test.go:390: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete (0.45s)

                                                
                                    
TestMultiControlPlane/serial/StopCluster (32.55s)
=== RUN   TestMultiControlPlane/serial/StopCluster
ha_test.go:531: (dbg) Run:  out/minikube-linux-amd64 -p ha-743551 stop -v=7 --alsologtostderr
ha_test.go:531: (dbg) Done: out/minikube-linux-amd64 -p ha-743551 stop -v=7 --alsologtostderr: (32.450825711s)
ha_test.go:537: (dbg) Run:  out/minikube-linux-amd64 -p ha-743551 status -v=7 --alsologtostderr
ha_test.go:537: (dbg) Non-zero exit: out/minikube-linux-amd64 -p ha-743551 status -v=7 --alsologtostderr: exit status 7 (98.681563ms)

-- stdout --
	ha-743551
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	ha-743551-m02
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	ha-743551-m04
	type: Worker
	host: Stopped
	kubelet: Stopped

-- /stdout --
** stderr ** 
	I0909 11:09:23.336409  132494 out.go:345] Setting OutFile to fd 1 ...
	I0909 11:09:23.336541  132494 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0909 11:09:23.336552  132494 out.go:358] Setting ErrFile to fd 2...
	I0909 11:09:23.336559  132494 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0909 11:09:23.336765  132494 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19584-8635/.minikube/bin
	I0909 11:09:23.336962  132494 out.go:352] Setting JSON to false
	I0909 11:09:23.336986  132494 mustload.go:65] Loading cluster: ha-743551
	I0909 11:09:23.337035  132494 notify.go:220] Checking for updates...
	I0909 11:09:23.337336  132494 config.go:182] Loaded profile config "ha-743551": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.31.0
	I0909 11:09:23.337348  132494 status.go:255] checking status of ha-743551 ...
	I0909 11:09:23.337744  132494 cli_runner.go:164] Run: docker container inspect ha-743551 --format={{.State.Status}}
	I0909 11:09:23.355917  132494 status.go:330] ha-743551 host status = "Stopped" (err=<nil>)
	I0909 11:09:23.355936  132494 status.go:343] host is not running, skipping remaining checks
	I0909 11:09:23.355942  132494 status.go:257] ha-743551 status: &{Name:ha-743551 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0909 11:09:23.355972  132494 status.go:255] checking status of ha-743551-m02 ...
	I0909 11:09:23.356194  132494 cli_runner.go:164] Run: docker container inspect ha-743551-m02 --format={{.State.Status}}
	I0909 11:09:23.374372  132494 status.go:330] ha-743551-m02 host status = "Stopped" (err=<nil>)
	I0909 11:09:23.374421  132494 status.go:343] host is not running, skipping remaining checks
	I0909 11:09:23.374434  132494 status.go:257] ha-743551-m02 status: &{Name:ha-743551-m02 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0909 11:09:23.374461  132494 status.go:255] checking status of ha-743551-m04 ...
	I0909 11:09:23.374708  132494 cli_runner.go:164] Run: docker container inspect ha-743551-m04 --format={{.State.Status}}
	I0909 11:09:23.391940  132494 status.go:330] ha-743551-m04 host status = "Stopped" (err=<nil>)
	I0909 11:09:23.391962  132494 status.go:343] host is not running, skipping remaining checks
	I0909 11:09:23.391970  132494 status.go:257] ha-743551-m04 status: &{Name:ha-743551-m04 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
--- PASS: TestMultiControlPlane/serial/StopCluster (32.55s)
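
Note: the exit status 7 above is expected rather than a failure: minikube status reports stopped hosts through a non-zero exit code. A sketch of consuming that from a script, with the profile name from this run:

	# capture the status exit code instead of aborting under `set -e`
	out/minikube-linux-amd64 -p ha-743551 status || echo "status exit code: $?"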

                                                
                                    
TestMultiControlPlane/serial/RestartCluster (48.13s)
=== RUN   TestMultiControlPlane/serial/RestartCluster
ha_test.go:560: (dbg) Run:  out/minikube-linux-amd64 start -p ha-743551 --wait=true -v=7 --alsologtostderr --driver=docker  --container-runtime=docker
ha_test.go:560: (dbg) Done: out/minikube-linux-amd64 start -p ha-743551 --wait=true -v=7 --alsologtostderr --driver=docker  --container-runtime=docker: (47.387043531s)
ha_test.go:566: (dbg) Run:  out/minikube-linux-amd64 -p ha-743551 status -v=7 --alsologtostderr
ha_test.go:584: (dbg) Run:  kubectl get nodes
ha_test.go:592: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiControlPlane/serial/RestartCluster (48.13s)

                                                
                                    
TestMultiControlPlane/serial/DegradedAfterClusterRestart (0.44s)
=== RUN   TestMultiControlPlane/serial/DegradedAfterClusterRestart
ha_test.go:390: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiControlPlane/serial/DegradedAfterClusterRestart (0.44s)

                                                
                                    
TestMultiControlPlane/serial/AddSecondaryNode (54.97s)
=== RUN   TestMultiControlPlane/serial/AddSecondaryNode
ha_test.go:605: (dbg) Run:  out/minikube-linux-amd64 node add -p ha-743551 --control-plane -v=7 --alsologtostderr
E0909 11:10:59.340764   15404 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19584-8635/.minikube/profiles/functional-598740/client.crt: no such file or directory" logger="UnhandledError"
ha_test.go:605: (dbg) Done: out/minikube-linux-amd64 node add -p ha-743551 --control-plane -v=7 --alsologtostderr: (54.153863513s)
ha_test.go:611: (dbg) Run:  out/minikube-linux-amd64 -p ha-743551 status -v=7 --alsologtostderr
--- PASS: TestMultiControlPlane/serial/AddSecondaryNode (54.97s)

                                                
                                    
TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd (0.64s)
=== RUN   TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd
ha_test.go:281: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd (0.64s)

                                                
                                    
TestImageBuild/serial/Setup (24.51s)
=== RUN   TestImageBuild/serial/Setup
image_test.go:69: (dbg) Run:  out/minikube-linux-amd64 start -p image-343278 --driver=docker  --container-runtime=docker
E0909 11:11:27.042323   15404 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19584-8635/.minikube/profiles/functional-598740/client.crt: no such file or directory" logger="UnhandledError"
image_test.go:69: (dbg) Done: out/minikube-linux-amd64 start -p image-343278 --driver=docker  --container-runtime=docker: (24.511542803s)
--- PASS: TestImageBuild/serial/Setup (24.51s)

                                                
                                    
TestImageBuild/serial/NormalBuild (2.39s)
=== RUN   TestImageBuild/serial/NormalBuild
image_test.go:78: (dbg) Run:  out/minikube-linux-amd64 image build -t aaa:latest ./testdata/image-build/test-normal -p image-343278
image_test.go:78: (dbg) Done: out/minikube-linux-amd64 image build -t aaa:latest ./testdata/image-build/test-normal -p image-343278: (2.387508221s)
--- PASS: TestImageBuild/serial/NormalBuild (2.39s)

                                                
                                    
TestImageBuild/serial/BuildWithBuildArg (0.93s)
=== RUN   TestImageBuild/serial/BuildWithBuildArg
image_test.go:99: (dbg) Run:  out/minikube-linux-amd64 image build -t aaa:latest --build-opt=build-arg=ENV_A=test_env_str --build-opt=no-cache ./testdata/image-build/test-arg -p image-343278
--- PASS: TestImageBuild/serial/BuildWithBuildArg (0.93s)
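
Note: minikube passes each --build-opt through to the underlying builder, so the invocation above is roughly equivalent to the following docker build run inside the node (a sketch of the pass-through, not a command taken from this log):

	docker build -t aaa:latest --build-arg ENV_A=test_env_str --no-cache ./testdata/image-build/test-arg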

                                                
                                    
TestImageBuild/serial/BuildWithDockerIgnore (0.73s)
=== RUN   TestImageBuild/serial/BuildWithDockerIgnore
image_test.go:133: (dbg) Run:  out/minikube-linux-amd64 image build -t aaa:latest ./testdata/image-build/test-normal --build-opt=no-cache -p image-343278
--- PASS: TestImageBuild/serial/BuildWithDockerIgnore (0.73s)

                                                
                                    
TestImageBuild/serial/BuildWithSpecifiedDockerfile (0.74s)
=== RUN   TestImageBuild/serial/BuildWithSpecifiedDockerfile
image_test.go:88: (dbg) Run:  out/minikube-linux-amd64 image build -t aaa:latest -f inner/Dockerfile ./testdata/image-build/test-f -p image-343278
--- PASS: TestImageBuild/serial/BuildWithSpecifiedDockerfile (0.74s)

                                                
                                    
TestJSONOutput/start/Command (59.63s)
=== RUN   TestJSONOutput/start/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-amd64 start -p json-output-575131 --output=json --user=testUser --memory=2200 --wait=true --driver=docker  --container-runtime=docker
E0909 11:12:44.831827   15404 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19584-8635/.minikube/profiles/addons-271785/client.crt: no such file or directory" logger="UnhandledError"
json_output_test.go:63: (dbg) Done: out/minikube-linux-amd64 start -p json-output-575131 --output=json --user=testUser --memory=2200 --wait=true --driver=docker  --container-runtime=docker: (59.633850351s)
--- PASS: TestJSONOutput/start/Command (59.63s)

                                                
                                    
TestJSONOutput/start/Audit (0s)
=== RUN   TestJSONOutput/start/Audit
--- PASS: TestJSONOutput/start/Audit (0.00s)

                                                
                                    
TestJSONOutput/start/parallel/DistinctCurrentSteps (0s)
=== RUN   TestJSONOutput/start/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/start/parallel/DistinctCurrentSteps
=== CONT  TestJSONOutput/start/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/start/parallel/DistinctCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/start/parallel/IncreasingCurrentSteps (0s)
=== RUN   TestJSONOutput/start/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/start/parallel/IncreasingCurrentSteps
=== CONT  TestJSONOutput/start/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/start/parallel/IncreasingCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/pause/Command (0.52s)
=== RUN   TestJSONOutput/pause/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-amd64 pause -p json-output-575131 --output=json --user=testUser
--- PASS: TestJSONOutput/pause/Command (0.52s)

                                                
                                    
TestJSONOutput/pause/Audit (0s)
=== RUN   TestJSONOutput/pause/Audit
--- PASS: TestJSONOutput/pause/Audit (0.00s)

                                                
                                    
TestJSONOutput/pause/parallel/DistinctCurrentSteps (0s)
=== RUN   TestJSONOutput/pause/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/pause/parallel/DistinctCurrentSteps
=== CONT  TestJSONOutput/pause/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/pause/parallel/DistinctCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/pause/parallel/IncreasingCurrentSteps (0s)
=== RUN   TestJSONOutput/pause/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/pause/parallel/IncreasingCurrentSteps
=== CONT  TestJSONOutput/pause/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/pause/parallel/IncreasingCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/unpause/Command (0.43s)
=== RUN   TestJSONOutput/unpause/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-amd64 unpause -p json-output-575131 --output=json --user=testUser
--- PASS: TestJSONOutput/unpause/Command (0.43s)

                                                
                                    
TestJSONOutput/unpause/Audit (0s)
=== RUN   TestJSONOutput/unpause/Audit
--- PASS: TestJSONOutput/unpause/Audit (0.00s)

                                                
                                    
TestJSONOutput/unpause/parallel/DistinctCurrentSteps (0s)
=== RUN   TestJSONOutput/unpause/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/unpause/parallel/DistinctCurrentSteps
=== CONT  TestJSONOutput/unpause/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/unpause/parallel/DistinctCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/unpause/parallel/IncreasingCurrentSteps (0s)
=== RUN   TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
=== CONT  TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/unpause/parallel/IncreasingCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/stop/Command (10.73s)
=== RUN   TestJSONOutput/stop/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-amd64 stop -p json-output-575131 --output=json --user=testUser
json_output_test.go:63: (dbg) Done: out/minikube-linux-amd64 stop -p json-output-575131 --output=json --user=testUser: (10.734564657s)
--- PASS: TestJSONOutput/stop/Command (10.73s)

                                                
                                    
TestJSONOutput/stop/Audit (0s)
=== RUN   TestJSONOutput/stop/Audit
--- PASS: TestJSONOutput/stop/Audit (0.00s)

                                                
                                    
TestJSONOutput/stop/parallel/DistinctCurrentSteps (0s)
=== RUN   TestJSONOutput/stop/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/stop/parallel/DistinctCurrentSteps
=== CONT  TestJSONOutput/stop/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/stop/parallel/DistinctCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/stop/parallel/IncreasingCurrentSteps (0s)
=== RUN   TestJSONOutput/stop/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/stop/parallel/IncreasingCurrentSteps
=== CONT  TestJSONOutput/stop/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/stop/parallel/IncreasingCurrentSteps (0.00s)

                                                
                                    
TestErrorJSONOutput (0.19s)
=== RUN   TestErrorJSONOutput
json_output_test.go:160: (dbg) Run:  out/minikube-linux-amd64 start -p json-output-error-835691 --memory=2200 --output=json --wait=true --driver=fail
json_output_test.go:160: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p json-output-error-835691 --memory=2200 --output=json --wait=true --driver=fail: exit status 56 (59.21181ms)

-- stdout --
	{"specversion":"1.0","id":"018e41f5-021f-47a4-a209-e1c692a253a2","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"0","message":"[json-output-error-835691] minikube v1.34.0 on Ubuntu 20.04 (kvm/amd64)","name":"Initial Minikube Setup","totalsteps":"19"}}
	{"specversion":"1.0","id":"a6128cc9-146b-4e27-bdce-8dc5647b081d","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_LOCATION=19584"}}
	{"specversion":"1.0","id":"bcfc7019-933e-47f5-841a-cadb437a5360","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true"}}
	{"specversion":"1.0","id":"986ee6b8-2e9e-4e57-b512-5b964c1e4bd6","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"KUBECONFIG=/home/jenkins/minikube-integration/19584-8635/kubeconfig"}}
	{"specversion":"1.0","id":"d124b80c-524a-4441-a09a-aebab37a9ce3","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_HOME=/home/jenkins/minikube-integration/19584-8635/.minikube"}}
	{"specversion":"1.0","id":"1e947810-0e1c-4296-bc8c-6ee41e6287ae","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_BIN=out/minikube-linux-amd64"}}
	{"specversion":"1.0","id":"636eb03b-224b-467b-abd6-93f0def05383","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_FORCE_SYSTEMD="}}
	{"specversion":"1.0","id":"0658f37d-b488-4066-a688-6f6a99be9585","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"advice":"","exitcode":"56","issues":"","message":"The driver 'fail' is not supported on linux/amd64","name":"DRV_UNSUPPORTED_OS","url":""}}

                                                
                                                
-- /stdout --
helpers_test.go:175: Cleaning up "json-output-error-835691" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p json-output-error-835691
--- PASS: TestErrorJSONOutput (0.19s)
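
Note: every stdout line above is a CloudEvents-style JSON object, so the stream is easy to post-process. A sketch with jq (jq itself is an assumption, not part of the harness):

	# print only the human-readable messages from the JSON event stream
	out/minikube-linux-amd64 start -p json-output-error-835691 --memory=2200 --output=json --wait=true --driver=fail | jq -r '.data.message // empty'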

                                                
                                    
TestKicCustomNetwork/create_custom_network (25.27s)
=== RUN   TestKicCustomNetwork/create_custom_network
kic_custom_network_test.go:57: (dbg) Run:  out/minikube-linux-amd64 start -p docker-network-172904 --network=
kic_custom_network_test.go:57: (dbg) Done: out/minikube-linux-amd64 start -p docker-network-172904 --network=: (23.309058957s)
kic_custom_network_test.go:150: (dbg) Run:  docker network ls --format {{.Name}}
helpers_test.go:175: Cleaning up "docker-network-172904" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p docker-network-172904
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p docker-network-172904: (1.945833351s)
--- PASS: TestKicCustomNetwork/create_custom_network (25.27s)

                                                
                                    
TestKicCustomNetwork/use_default_bridge_network (22.36s)
=== RUN   TestKicCustomNetwork/use_default_bridge_network
kic_custom_network_test.go:57: (dbg) Run:  out/minikube-linux-amd64 start -p docker-network-107298 --network=bridge
kic_custom_network_test.go:57: (dbg) Done: out/minikube-linux-amd64 start -p docker-network-107298 --network=bridge: (20.461003345s)
kic_custom_network_test.go:150: (dbg) Run:  docker network ls --format {{.Name}}
helpers_test.go:175: Cleaning up "docker-network-107298" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p docker-network-107298
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p docker-network-107298: (1.881561796s)
--- PASS: TestKicCustomNetwork/use_default_bridge_network (22.36s)

                                                
                                    
TestKicExistingNetwork (22.01s)
=== RUN   TestKicExistingNetwork
kic_custom_network_test.go:150: (dbg) Run:  docker network ls --format {{.Name}}
kic_custom_network_test.go:93: (dbg) Run:  out/minikube-linux-amd64 start -p existing-network-549023 --network=existing-network
kic_custom_network_test.go:93: (dbg) Done: out/minikube-linux-amd64 start -p existing-network-549023 --network=existing-network: (20.018670655s)
helpers_test.go:175: Cleaning up "existing-network-549023" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p existing-network-549023
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p existing-network-549023: (1.852113939s)
--- PASS: TestKicExistingNetwork (22.01s)
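
Note: this test points --network= at a Docker network that already exists. A sketch of reproducing that by hand (pre-creating the network is an assumption about the test's setup; the names come from the log):

	docker network create existing-network
	out/minikube-linux-amd64 start -p existing-network-549023 --network=existing-network
	out/minikube-linux-amd64 delete -p existing-network-549023
	docker network rm existing-network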

                                                
                                    
TestKicCustomSubnet (22.41s)
=== RUN   TestKicCustomSubnet
kic_custom_network_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p custom-subnet-493849 --subnet=192.168.60.0/24
kic_custom_network_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p custom-subnet-493849 --subnet=192.168.60.0/24: (20.365395729s)
kic_custom_network_test.go:161: (dbg) Run:  docker network inspect custom-subnet-493849 --format "{{(index .IPAM.Config 0).Subnet}}"
helpers_test.go:175: Cleaning up "custom-subnet-493849" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p custom-subnet-493849
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p custom-subnet-493849: (2.023591102s)
--- PASS: TestKicCustomSubnet (22.41s)
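
Note: the docker network inspect call above confirms the requested subnet round-tripped. Given the --subnet flag used in this run, the expected output is:

	docker network inspect custom-subnet-493849 --format "{{(index .IPAM.Config 0).Subnet}}"
	# expected: 192.168.60.0/24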

                                                
                                    
TestKicStaticIP (22.2s)
=== RUN   TestKicStaticIP
kic_custom_network_test.go:132: (dbg) Run:  out/minikube-linux-amd64 start -p static-ip-638487 --static-ip=192.168.200.200
kic_custom_network_test.go:132: (dbg) Done: out/minikube-linux-amd64 start -p static-ip-638487 --static-ip=192.168.200.200: (20.048965687s)
kic_custom_network_test.go:138: (dbg) Run:  out/minikube-linux-amd64 -p static-ip-638487 ip
helpers_test.go:175: Cleaning up "static-ip-638487" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p static-ip-638487
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p static-ip-638487: (2.037656381s)
--- PASS: TestKicStaticIP (22.20s)
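
Note: a quick one-liner check that the static IP took effect, using the flag value from this run:

	test "$(out/minikube-linux-amd64 -p static-ip-638487 ip)" = "192.168.200.200" && echo "static IP applied"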

                                                
                                    
TestMainNoArgs (0.04s)
=== RUN   TestMainNoArgs
main_test.go:68: (dbg) Run:  out/minikube-linux-amd64
--- PASS: TestMainNoArgs (0.04s)

                                                
                                    
TestMinikubeProfile (47.74s)
=== RUN   TestMinikubeProfile
minikube_profile_test.go:44: (dbg) Run:  out/minikube-linux-amd64 start -p first-828878 --driver=docker  --container-runtime=docker
minikube_profile_test.go:44: (dbg) Done: out/minikube-linux-amd64 start -p first-828878 --driver=docker  --container-runtime=docker: (20.899543223s)
minikube_profile_test.go:44: (dbg) Run:  out/minikube-linux-amd64 start -p second-831566 --driver=docker  --container-runtime=docker
minikube_profile_test.go:44: (dbg) Done: out/minikube-linux-amd64 start -p second-831566 --driver=docker  --container-runtime=docker: (21.80761988s)
minikube_profile_test.go:51: (dbg) Run:  out/minikube-linux-amd64 profile first-828878
minikube_profile_test.go:55: (dbg) Run:  out/minikube-linux-amd64 profile list -ojson
minikube_profile_test.go:51: (dbg) Run:  out/minikube-linux-amd64 profile second-831566
minikube_profile_test.go:55: (dbg) Run:  out/minikube-linux-amd64 profile list -ojson
helpers_test.go:175: Cleaning up "second-831566" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p second-831566
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p second-831566: (2.047233623s)
helpers_test.go:175: Cleaning up "first-828878" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p first-828878
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p first-828878: (1.970133918s)
--- PASS: TestMinikubeProfile (47.74s)
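
Note: profile list -ojson is the machine-readable view the test parses. A sketch of extracting profile names with jq (the valid/invalid split in minikube's JSON output is stated from memory; jq is an assumption):

	out/minikube-linux-amd64 profile list -ojson | jq -r '.valid[].Name'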

                                                
                                    
TestMountStart/serial/StartWithMountFirst (6.87s)
=== RUN   TestMountStart/serial/StartWithMountFirst
mount_start_test.go:98: (dbg) Run:  out/minikube-linux-amd64 start -p mount-start-1-696198 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=docker  --container-runtime=docker
mount_start_test.go:98: (dbg) Done: out/minikube-linux-amd64 start -p mount-start-1-696198 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=docker  --container-runtime=docker: (5.872898885s)
--- PASS: TestMountStart/serial/StartWithMountFirst (6.87s)

                                                
                                    
TestMountStart/serial/VerifyMountFirst (0.23s)
=== RUN   TestMountStart/serial/VerifyMountFirst
mount_start_test.go:114: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-1-696198 ssh -- ls /minikube-host
--- PASS: TestMountStart/serial/VerifyMountFirst (0.23s)

                                                
                                    
TestMountStart/serial/StartWithMountSecond (6.92s)
=== RUN   TestMountStart/serial/StartWithMountSecond
mount_start_test.go:98: (dbg) Run:  out/minikube-linux-amd64 start -p mount-start-2-709024 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46465 --mount-uid 0 --no-kubernetes --driver=docker  --container-runtime=docker
mount_start_test.go:98: (dbg) Done: out/minikube-linux-amd64 start -p mount-start-2-709024 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46465 --mount-uid 0 --no-kubernetes --driver=docker  --container-runtime=docker: (5.920686172s)
--- PASS: TestMountStart/serial/StartWithMountSecond (6.92s)

                                                
                                    
TestMountStart/serial/VerifyMountSecond (0.23s)
=== RUN   TestMountStart/serial/VerifyMountSecond
mount_start_test.go:114: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-709024 ssh -- ls /minikube-host
--- PASS: TestMountStart/serial/VerifyMountSecond (0.23s)

                                                
                                    
TestMountStart/serial/DeleteFirst (1.44s)
=== RUN   TestMountStart/serial/DeleteFirst
pause_test.go:132: (dbg) Run:  out/minikube-linux-amd64 delete -p mount-start-1-696198 --alsologtostderr -v=5
pause_test.go:132: (dbg) Done: out/minikube-linux-amd64 delete -p mount-start-1-696198 --alsologtostderr -v=5: (1.436293033s)
--- PASS: TestMountStart/serial/DeleteFirst (1.44s)

                                                
                                    
TestMountStart/serial/VerifyMountPostDelete (0.23s)
=== RUN   TestMountStart/serial/VerifyMountPostDelete
mount_start_test.go:114: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-709024 ssh -- ls /minikube-host
--- PASS: TestMountStart/serial/VerifyMountPostDelete (0.23s)

                                                
                                    
TestMountStart/serial/Stop (1.16s)
=== RUN   TestMountStart/serial/Stop
mount_start_test.go:155: (dbg) Run:  out/minikube-linux-amd64 stop -p mount-start-2-709024
mount_start_test.go:155: (dbg) Done: out/minikube-linux-amd64 stop -p mount-start-2-709024: (1.162909775s)
--- PASS: TestMountStart/serial/Stop (1.16s)

                                                
                                    
TestMountStart/serial/RestartStopped (8.39s)
=== RUN   TestMountStart/serial/RestartStopped
mount_start_test.go:166: (dbg) Run:  out/minikube-linux-amd64 start -p mount-start-2-709024
E0909 11:15:59.341281   15404 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19584-8635/.minikube/profiles/functional-598740/client.crt: no such file or directory" logger="UnhandledError"
mount_start_test.go:166: (dbg) Done: out/minikube-linux-amd64 start -p mount-start-2-709024: (7.388756218s)
--- PASS: TestMountStart/serial/RestartStopped (8.39s)

                                                
                                    
TestMountStart/serial/VerifyMountPostStop (0.23s)
=== RUN   TestMountStart/serial/VerifyMountPostStop
mount_start_test.go:114: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-709024 ssh -- ls /minikube-host
--- PASS: TestMountStart/serial/VerifyMountPostStop (0.23s)

                                                
                                    
TestMultiNode/serial/FreshStart2Nodes (56.9s)
=== RUN   TestMultiNode/serial/FreshStart2Nodes
multinode_test.go:96: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-415331 --wait=true --memory=2200 --nodes=2 -v=8 --alsologtostderr --driver=docker  --container-runtime=docker
multinode_test.go:96: (dbg) Done: out/minikube-linux-amd64 start -p multinode-415331 --wait=true --memory=2200 --nodes=2 -v=8 --alsologtostderr --driver=docker  --container-runtime=docker: (56.471950628s)
multinode_test.go:102: (dbg) Run:  out/minikube-linux-amd64 -p multinode-415331 status --alsologtostderr
--- PASS: TestMultiNode/serial/FreshStart2Nodes (56.90s)

                                                
                                    
TestMultiNode/serial/DeployApp2Nodes (49.32s)
=== RUN   TestMultiNode/serial/DeployApp2Nodes
multinode_test.go:493: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-415331 -- apply -f ./testdata/multinodes/multinode-pod-dns-test.yaml
multinode_test.go:498: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-415331 -- rollout status deployment/busybox
multinode_test.go:498: (dbg) Done: out/minikube-linux-amd64 kubectl -p multinode-415331 -- rollout status deployment/busybox: (3.158720773s)
multinode_test.go:505: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-415331 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:514: expected 2 Pod IPs but got 1 (may be temporary), output: "\n-- stdout --\n\t'10.244.0.3'\n\n-- /stdout --"
multinode_test.go:505: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-415331 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:514: expected 2 Pod IPs but got 1 (may be temporary), output: "\n-- stdout --\n\t'10.244.0.3'\n\n-- /stdout --"
multinode_test.go:505: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-415331 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:514: expected 2 Pod IPs but got 1 (may be temporary), output: "\n-- stdout --\n\t'10.244.0.3'\n\n-- /stdout --"
multinode_test.go:505: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-415331 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:514: expected 2 Pod IPs but got 1 (may be temporary), output: "\n-- stdout --\n\t'10.244.0.3'\n\n-- /stdout --"
multinode_test.go:505: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-415331 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:514: expected 2 Pod IPs but got 1 (may be temporary), output: "\n-- stdout --\n\t'10.244.0.3'\n\n-- /stdout --"
multinode_test.go:505: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-415331 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:514: expected 2 Pod IPs but got 1 (may be temporary), output: "\n-- stdout --\n\t'10.244.0.3'\n\n-- /stdout --"
multinode_test.go:505: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-415331 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:514: expected 2 Pod IPs but got 1 (may be temporary), output: "\n-- stdout --\n\t'10.244.0.3'\n\n-- /stdout --"
multinode_test.go:505: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-415331 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:514: expected 2 Pod IPs but got 1 (may be temporary), output: "\n-- stdout --\n\t'10.244.0.3'\n\n-- /stdout --"
E0909 11:17:44.831782   15404 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19584-8635/.minikube/profiles/addons-271785/client.crt: no such file or directory" logger="UnhandledError"
multinode_test.go:505: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-415331 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:528: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-415331 -- get pods -o jsonpath='{.items[*].metadata.name}'
multinode_test.go:536: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-415331 -- exec busybox-7dff88458-rlqkf -- nslookup kubernetes.io
multinode_test.go:536: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-415331 -- exec busybox-7dff88458-wldq2 -- nslookup kubernetes.io
multinode_test.go:546: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-415331 -- exec busybox-7dff88458-rlqkf -- nslookup kubernetes.default
multinode_test.go:546: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-415331 -- exec busybox-7dff88458-wldq2 -- nslookup kubernetes.default
multinode_test.go:554: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-415331 -- exec busybox-7dff88458-rlqkf -- nslookup kubernetes.default.svc.cluster.local
multinode_test.go:554: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-415331 -- exec busybox-7dff88458-wldq2 -- nslookup kubernetes.default.svc.cluster.local
--- PASS: TestMultiNode/serial/DeployApp2Nodes (49.32s)
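
Note: the repeated jsonpath queries above are the harness polling until both busybox replicas have pod IPs (one per node). A hand-rolled sketch of the same wait loop, with the context name from this run:

	# block until the deployment reports two pod IPs
	until [ "$(kubectl --context multinode-415331 get pods -o jsonpath='{.items[*].status.podIP}' | wc -w)" -ge 2 ]; do
	  sleep 2
	done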

                                                
                                    
TestMultiNode/serial/PingHostFrom2Pods (0.76s)
=== RUN   TestMultiNode/serial/PingHostFrom2Pods
multinode_test.go:564: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-415331 -- get pods -o jsonpath='{.items[*].metadata.name}'
multinode_test.go:572: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-415331 -- exec busybox-7dff88458-rlqkf -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
multinode_test.go:583: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-415331 -- exec busybox-7dff88458-rlqkf -- sh -c "ping -c 1 192.168.67.1"
multinode_test.go:572: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-415331 -- exec busybox-7dff88458-wldq2 -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
multinode_test.go:583: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-415331 -- exec busybox-7dff88458-wldq2 -- sh -c "ping -c 1 192.168.67.1"
--- PASS: TestMultiNode/serial/PingHostFrom2Pods (0.76s)
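
Note: the pipeline above extracts the resolved address of host.minikube.internal from busybox's nslookup output (line 5, field 3 in that layout) before pinging it. Broken out for readability (run inside a pod):

	HOST_IP=$(nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3)
	ping -c 1 "$HOST_IP"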

                                                
                                    
TestMultiNode/serial/AddNode (15.56s)
=== RUN   TestMultiNode/serial/AddNode
multinode_test.go:121: (dbg) Run:  out/minikube-linux-amd64 node add -p multinode-415331 -v 3 --alsologtostderr
multinode_test.go:121: (dbg) Done: out/minikube-linux-amd64 node add -p multinode-415331 -v 3 --alsologtostderr: (14.911879012s)
multinode_test.go:127: (dbg) Run:  out/minikube-linux-amd64 -p multinode-415331 status --alsologtostderr
--- PASS: TestMultiNode/serial/AddNode (15.56s)

                                                
                                    
TestMultiNode/serial/MultiNodeLabels (0.06s)
=== RUN   TestMultiNode/serial/MultiNodeLabels
multinode_test.go:221: (dbg) Run:  kubectl --context multinode-415331 get nodes -o "jsonpath=[{range .items[*]}{.metadata.labels},{end}]"
--- PASS: TestMultiNode/serial/MultiNodeLabels (0.06s)

                                                
                                    
TestMultiNode/serial/ProfileList (0.29s)
=== RUN   TestMultiNode/serial/ProfileList
multinode_test.go:143: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiNode/serial/ProfileList (0.29s)

                                                
                                    
TestMultiNode/serial/CopyFile (8.54s)
=== RUN   TestMultiNode/serial/CopyFile
multinode_test.go:184: (dbg) Run:  out/minikube-linux-amd64 -p multinode-415331 status --output json --alsologtostderr
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-415331 cp testdata/cp-test.txt multinode-415331:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-415331 ssh -n multinode-415331 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-415331 cp multinode-415331:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile1784989320/001/cp-test_multinode-415331.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-415331 ssh -n multinode-415331 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-415331 cp multinode-415331:/home/docker/cp-test.txt multinode-415331-m02:/home/docker/cp-test_multinode-415331_multinode-415331-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-415331 ssh -n multinode-415331 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-415331 ssh -n multinode-415331-m02 "sudo cat /home/docker/cp-test_multinode-415331_multinode-415331-m02.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-415331 cp multinode-415331:/home/docker/cp-test.txt multinode-415331-m03:/home/docker/cp-test_multinode-415331_multinode-415331-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-415331 ssh -n multinode-415331 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-415331 ssh -n multinode-415331-m03 "sudo cat /home/docker/cp-test_multinode-415331_multinode-415331-m03.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-415331 cp testdata/cp-test.txt multinode-415331-m02:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-415331 ssh -n multinode-415331-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-415331 cp multinode-415331-m02:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile1784989320/001/cp-test_multinode-415331-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-415331 ssh -n multinode-415331-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-415331 cp multinode-415331-m02:/home/docker/cp-test.txt multinode-415331:/home/docker/cp-test_multinode-415331-m02_multinode-415331.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-415331 ssh -n multinode-415331-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-415331 ssh -n multinode-415331 "sudo cat /home/docker/cp-test_multinode-415331-m02_multinode-415331.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-415331 cp multinode-415331-m02:/home/docker/cp-test.txt multinode-415331-m03:/home/docker/cp-test_multinode-415331-m02_multinode-415331-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-415331 ssh -n multinode-415331-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-415331 ssh -n multinode-415331-m03 "sudo cat /home/docker/cp-test_multinode-415331-m02_multinode-415331-m03.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-415331 cp testdata/cp-test.txt multinode-415331-m03:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-415331 ssh -n multinode-415331-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-415331 cp multinode-415331-m03:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile1784989320/001/cp-test_multinode-415331-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-415331 ssh -n multinode-415331-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-415331 cp multinode-415331-m03:/home/docker/cp-test.txt multinode-415331:/home/docker/cp-test_multinode-415331-m03_multinode-415331.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-415331 ssh -n multinode-415331-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-415331 ssh -n multinode-415331 "sudo cat /home/docker/cp-test_multinode-415331-m03_multinode-415331.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-415331 cp multinode-415331-m03:/home/docker/cp-test.txt multinode-415331-m02:/home/docker/cp-test_multinode-415331-m03_multinode-415331-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-415331 ssh -n multinode-415331-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-415331 ssh -n multinode-415331-m02 "sudo cat /home/docker/cp-test_multinode-415331-m03_multinode-415331-m02.txt"
--- PASS: TestMultiNode/serial/CopyFile (8.54s)

                                                
                                    
TestMultiNode/serial/StopNode (2.08s)
=== RUN   TestMultiNode/serial/StopNode
multinode_test.go:248: (dbg) Run:  out/minikube-linux-amd64 -p multinode-415331 node stop m03
multinode_test.go:248: (dbg) Done: out/minikube-linux-amd64 -p multinode-415331 node stop m03: (1.193918115s)
multinode_test.go:254: (dbg) Run:  out/minikube-linux-amd64 -p multinode-415331 status
multinode_test.go:254: (dbg) Non-zero exit: out/minikube-linux-amd64 -p multinode-415331 status: exit status 7 (445.333922ms)

-- stdout --
	multinode-415331
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	multinode-415331-m02
	type: Worker
	host: Running
	kubelet: Running
	
	multinode-415331-m03
	type: Worker
	host: Stopped
	kubelet: Stopped

-- /stdout --
multinode_test.go:261: (dbg) Run:  out/minikube-linux-amd64 -p multinode-415331 status --alsologtostderr
multinode_test.go:261: (dbg) Non-zero exit: out/minikube-linux-amd64 -p multinode-415331 status --alsologtostderr: exit status 7 (440.554967ms)

-- stdout --
	multinode-415331
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	multinode-415331-m02
	type: Worker
	host: Running
	kubelet: Running
	
	multinode-415331-m03
	type: Worker
	host: Stopped
	kubelet: Stopped

-- /stdout --
** stderr ** 
	I0909 11:18:22.062787  219083 out.go:345] Setting OutFile to fd 1 ...
	I0909 11:18:22.062901  219083 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0909 11:18:22.062910  219083 out.go:358] Setting ErrFile to fd 2...
	I0909 11:18:22.062913  219083 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0909 11:18:22.063076  219083 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19584-8635/.minikube/bin
	I0909 11:18:22.063228  219083 out.go:352] Setting JSON to false
	I0909 11:18:22.063250  219083 mustload.go:65] Loading cluster: multinode-415331
	I0909 11:18:22.063370  219083 notify.go:220] Checking for updates...
	I0909 11:18:22.063590  219083 config.go:182] Loaded profile config "multinode-415331": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.31.0
	I0909 11:18:22.063604  219083 status.go:255] checking status of multinode-415331 ...
	I0909 11:18:22.063974  219083 cli_runner.go:164] Run: docker container inspect multinode-415331 --format={{.State.Status}}
	I0909 11:18:22.083644  219083 status.go:330] multinode-415331 host status = "Running" (err=<nil>)
	I0909 11:18:22.083674  219083 host.go:66] Checking if "multinode-415331" exists ...
	I0909 11:18:22.083976  219083 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-415331
	I0909 11:18:22.099982  219083 host.go:66] Checking if "multinode-415331" exists ...
	I0909 11:18:22.100231  219083 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0909 11:18:22.100284  219083 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-415331
	I0909 11:18:22.116035  219083 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32908 SSHKeyPath:/home/jenkins/minikube-integration/19584-8635/.minikube/machines/multinode-415331/id_rsa Username:docker}
	I0909 11:18:22.201275  219083 ssh_runner.go:195] Run: systemctl --version
	I0909 11:18:22.204923  219083 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0909 11:18:22.215207  219083 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0909 11:18:22.263292  219083 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:3 ContainersRunning:2 ContainersPaused:0 ContainersStopped:1 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:42 OomKillDisable:true NGoroutines:62 SystemTime:2024-09-09 11:18:22.254338047 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1067-gcp OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33647943680 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent Labels:[] ExperimentalBuild:false ServerVersion:27.2.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:472731909fa34bd7bc9c087e4c27943f9835f111 Expected:472731909fa34bd7bc9c087e4c27943f9835f111} RuncCommit:{ID:v1.1.13-0-g58aa920 Expected:v1.1.13-0-g58aa920} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.16.2] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.29.2] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I0909 11:18:22.263825  219083 kubeconfig.go:125] found "multinode-415331" server: "https://192.168.67.2:8443"
	I0909 11:18:22.263861  219083 api_server.go:166] Checking apiserver status ...
	I0909 11:18:22.263897  219083 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0909 11:18:22.274526  219083 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/2314/cgroup
	I0909 11:18:22.282872  219083 api_server.go:182] apiserver freezer: "5:freezer:/docker/5c7d00e99951c9fb3cfd91638195ad77d7fa35c190301987ef2cbf16231d6259/kubepods/burstable/podc147263c7d2e79d850333f61d6aec707/08e15da40d0ecd31f73676714c16c365fda40526b9cac7e9d7c13567c093da3c"
	I0909 11:18:22.282926  219083 ssh_runner.go:195] Run: sudo cat /sys/fs/cgroup/freezer/docker/5c7d00e99951c9fb3cfd91638195ad77d7fa35c190301987ef2cbf16231d6259/kubepods/burstable/podc147263c7d2e79d850333f61d6aec707/08e15da40d0ecd31f73676714c16c365fda40526b9cac7e9d7c13567c093da3c/freezer.state
	I0909 11:18:22.290416  219083 api_server.go:204] freezer state: "THAWED"
	I0909 11:18:22.290442  219083 api_server.go:253] Checking apiserver healthz at https://192.168.67.2:8443/healthz ...
	I0909 11:18:22.294199  219083 api_server.go:279] https://192.168.67.2:8443/healthz returned 200:
	ok
	I0909 11:18:22.294222  219083 status.go:422] multinode-415331 apiserver status = Running (err=<nil>)
	I0909 11:18:22.294235  219083 status.go:257] multinode-415331 status: &{Name:multinode-415331 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0909 11:18:22.294263  219083 status.go:255] checking status of multinode-415331-m02 ...
	I0909 11:18:22.294549  219083 cli_runner.go:164] Run: docker container inspect multinode-415331-m02 --format={{.State.Status}}
	I0909 11:18:22.310942  219083 status.go:330] multinode-415331-m02 host status = "Running" (err=<nil>)
	I0909 11:18:22.310966  219083 host.go:66] Checking if "multinode-415331-m02" exists ...
	I0909 11:18:22.311257  219083 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-415331-m02
	I0909 11:18:22.327328  219083 host.go:66] Checking if "multinode-415331-m02" exists ...
	I0909 11:18:22.327571  219083 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0909 11:18:22.327601  219083 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-415331-m02
	I0909 11:18:22.345276  219083 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32913 SSHKeyPath:/home/jenkins/minikube-integration/19584-8635/.minikube/machines/multinode-415331-m02/id_rsa Username:docker}
	I0909 11:18:22.433243  219083 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0909 11:18:22.443274  219083 status.go:257] multinode-415331-m02 status: &{Name:multinode-415331-m02 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}
	I0909 11:18:22.443315  219083 status.go:255] checking status of multinode-415331-m03 ...
	I0909 11:18:22.443547  219083 cli_runner.go:164] Run: docker container inspect multinode-415331-m03 --format={{.State.Status}}
	I0909 11:18:22.460915  219083 status.go:330] multinode-415331-m03 host status = "Stopped" (err=<nil>)
	I0909 11:18:22.460937  219083 status.go:343] host is not running, skipping remaining checks
	I0909 11:18:22.460951  219083 status.go:257] multinode-415331-m03 status: &{Name:multinode-415331-m03 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
--- PASS: TestMultiNode/serial/StopNode (2.08s)
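
The StopNode status walk above follows a fixed probe order: docker container inspect for host state, ssh plus systemctl is-active for the kubelet, the cgroup freezer file for the apiserver process, and finally a plain HTTPS GET against /healthz. Below is a minimal Go sketch of that last probe only; the endpoint URL is copied from the log, probeHealthz is a hypothetical helper name, and skipping certificate verification is a simplification for the node's self-signed apiserver cert, not necessarily how minikube itself authenticates.

// healthz_probe.go - minimal sketch of the apiserver health probe seen in
// the status log above. probeHealthz is an invented name; skipping TLS
// verification is an assumption made for brevity.
package main

import (
	"crypto/tls"
	"fmt"
	"io"
	"net/http"
	"time"
)

func probeHealthz(endpoint string) error {
	client := &http.Client{
		Timeout: 5 * time.Second,
		Transport: &http.Transport{
			// The apiserver cert inside the minikube node is self-signed.
			TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
		},
	}
	resp, err := client.Get(endpoint)
	if err != nil {
		return err
	}
	defer resp.Body.Close()
	body, _ := io.ReadAll(resp.Body)
	if resp.StatusCode != http.StatusOK {
		return fmt.Errorf("healthz returned %d: %s", resp.StatusCode, body)
	}
	fmt.Printf("%s returned 200: %s\n", endpoint, body) // mirrors the log line
	return nil
}

func main() {
	_ = probeHealthz("https://192.168.67.2:8443/healthz")
}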

                                                
                                    
TestMultiNode/serial/StartAfterStop (9.56s)

                                                
                                                
=== RUN   TestMultiNode/serial/StartAfterStop
multinode_test.go:282: (dbg) Run:  out/minikube-linux-amd64 -p multinode-415331 node start m03 -v=7 --alsologtostderr
multinode_test.go:282: (dbg) Done: out/minikube-linux-amd64 -p multinode-415331 node start m03 -v=7 --alsologtostderr: (8.931908226s)
multinode_test.go:290: (dbg) Run:  out/minikube-linux-amd64 -p multinode-415331 status -v=7 --alsologtostderr
multinode_test.go:306: (dbg) Run:  kubectl get nodes
--- PASS: TestMultiNode/serial/StartAfterStop (9.56s)

                                                
                                    
TestMultiNode/serial/RestartKeepsNodes (103.11s)

                                                
                                                
=== RUN   TestMultiNode/serial/RestartKeepsNodes
multinode_test.go:314: (dbg) Run:  out/minikube-linux-amd64 node list -p multinode-415331
multinode_test.go:321: (dbg) Run:  out/minikube-linux-amd64 stop -p multinode-415331
multinode_test.go:321: (dbg) Done: out/minikube-linux-amd64 stop -p multinode-415331: (22.410708201s)
multinode_test.go:326: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-415331 --wait=true -v=8 --alsologtostderr
E0909 11:19:07.896300   15404 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19584-8635/.minikube/profiles/addons-271785/client.crt: no such file or directory" logger="UnhandledError"
multinode_test.go:326: (dbg) Done: out/minikube-linux-amd64 start -p multinode-415331 --wait=true -v=8 --alsologtostderr: (1m20.606557576s)
multinode_test.go:331: (dbg) Run:  out/minikube-linux-amd64 node list -p multinode-415331
--- PASS: TestMultiNode/serial/RestartKeepsNodes (103.11s)

                                                
                                    
TestMultiNode/serial/DeleteNode (5.11s)

                                                
                                                
=== RUN   TestMultiNode/serial/DeleteNode
multinode_test.go:416: (dbg) Run:  out/minikube-linux-amd64 -p multinode-415331 node delete m03
multinode_test.go:416: (dbg) Done: out/minikube-linux-amd64 -p multinode-415331 node delete m03: (4.578244286s)
multinode_test.go:422: (dbg) Run:  out/minikube-linux-amd64 -p multinode-415331 status --alsologtostderr
multinode_test.go:436: (dbg) Run:  kubectl get nodes
multinode_test.go:444: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiNode/serial/DeleteNode (5.11s)

                                                
                                    
TestMultiNode/serial/StopMultiNode (21.32s)

                                                
                                                
=== RUN   TestMultiNode/serial/StopMultiNode
multinode_test.go:345: (dbg) Run:  out/minikube-linux-amd64 -p multinode-415331 stop
multinode_test.go:345: (dbg) Done: out/minikube-linux-amd64 -p multinode-415331 stop: (21.162317427s)
multinode_test.go:351: (dbg) Run:  out/minikube-linux-amd64 -p multinode-415331 status
multinode_test.go:351: (dbg) Non-zero exit: out/minikube-linux-amd64 -p multinode-415331 status: exit status 7 (77.071237ms)

                                                
                                                
-- stdout --
	multinode-415331
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	multinode-415331-m02
	type: Worker
	host: Stopped
	kubelet: Stopped
	

                                                
                                                
-- /stdout --
multinode_test.go:358: (dbg) Run:  out/minikube-linux-amd64 -p multinode-415331 status --alsologtostderr
multinode_test.go:358: (dbg) Non-zero exit: out/minikube-linux-amd64 -p multinode-415331 status --alsologtostderr: exit status 7 (76.980015ms)

                                                
                                                
-- stdout --
	multinode-415331
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	multinode-415331-m02
	type: Worker
	host: Stopped
	kubelet: Stopped
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0909 11:20:41.518453  234425 out.go:345] Setting OutFile to fd 1 ...
	I0909 11:20:41.518548  234425 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0909 11:20:41.518556  234425 out.go:358] Setting ErrFile to fd 2...
	I0909 11:20:41.518560  234425 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0909 11:20:41.518770  234425 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19584-8635/.minikube/bin
	I0909 11:20:41.518921  234425 out.go:352] Setting JSON to false
	I0909 11:20:41.518943  234425 mustload.go:65] Loading cluster: multinode-415331
	I0909 11:20:41.518984  234425 notify.go:220] Checking for updates...
	I0909 11:20:41.519330  234425 config.go:182] Loaded profile config "multinode-415331": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.31.0
	I0909 11:20:41.519351  234425 status.go:255] checking status of multinode-415331 ...
	I0909 11:20:41.519778  234425 cli_runner.go:164] Run: docker container inspect multinode-415331 --format={{.State.Status}}
	I0909 11:20:41.538330  234425 status.go:330] multinode-415331 host status = "Stopped" (err=<nil>)
	I0909 11:20:41.538348  234425 status.go:343] host is not running, skipping remaining checks
	I0909 11:20:41.538354  234425 status.go:257] multinode-415331 status: &{Name:multinode-415331 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0909 11:20:41.538393  234425 status.go:255] checking status of multinode-415331-m02 ...
	I0909 11:20:41.538624  234425 cli_runner.go:164] Run: docker container inspect multinode-415331-m02 --format={{.State.Status}}
	I0909 11:20:41.554613  234425 status.go:330] multinode-415331-m02 host status = "Stopped" (err=<nil>)
	I0909 11:20:41.554654  234425 status.go:343] host is not running, skipping remaining checks
	I0909 11:20:41.554666  234425 status.go:257] multinode-415331-m02 status: &{Name:multinode-415331-m02 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
--- PASS: TestMultiNode/serial/StopMultiNode (21.32s)

                                                
                                    
TestMultiNode/serial/RestartMultiNode (52.74s)

                                                
                                                
=== RUN   TestMultiNode/serial/RestartMultiNode
multinode_test.go:376: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-415331 --wait=true -v=8 --alsologtostderr --driver=docker  --container-runtime=docker
E0909 11:20:59.341202   15404 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19584-8635/.minikube/profiles/functional-598740/client.crt: no such file or directory" logger="UnhandledError"
multinode_test.go:376: (dbg) Done: out/minikube-linux-amd64 start -p multinode-415331 --wait=true -v=8 --alsologtostderr --driver=docker  --container-runtime=docker: (52.210640201s)
multinode_test.go:382: (dbg) Run:  out/minikube-linux-amd64 -p multinode-415331 status --alsologtostderr
multinode_test.go:396: (dbg) Run:  kubectl get nodes
multinode_test.go:404: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiNode/serial/RestartMultiNode (52.74s)

                                                
                                    
TestMultiNode/serial/ValidateNameConflict (22.59s)

                                                
                                                
=== RUN   TestMultiNode/serial/ValidateNameConflict
multinode_test.go:455: (dbg) Run:  out/minikube-linux-amd64 node list -p multinode-415331
multinode_test.go:464: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-415331-m02 --driver=docker  --container-runtime=docker
multinode_test.go:464: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p multinode-415331-m02 --driver=docker  --container-runtime=docker: exit status 14 (56.897324ms)

                                                
                                                
-- stdout --
	* [multinode-415331-m02] minikube v1.34.0 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=19584
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/19584-8635/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/19584-8635/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	! Profile name 'multinode-415331-m02' is duplicated with machine name 'multinode-415331-m02' in profile 'multinode-415331'
	X Exiting due to MK_USAGE: Profile name should be unique

                                                
                                                
** /stderr **
multinode_test.go:472: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-415331-m03 --driver=docker  --container-runtime=docker
multinode_test.go:472: (dbg) Done: out/minikube-linux-amd64 start -p multinode-415331-m03 --driver=docker  --container-runtime=docker: (20.216472969s)
multinode_test.go:479: (dbg) Run:  out/minikube-linux-amd64 node add -p multinode-415331
multinode_test.go:479: (dbg) Non-zero exit: out/minikube-linux-amd64 node add -p multinode-415331: exit status 80 (260.941951ms)

                                                
                                                
-- stdout --
	* Adding node m03 to cluster multinode-415331 as [worker]
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to GUEST_NODE_ADD: failed to add node: Node multinode-415331-m03 already exists in multinode-415331-m03 profile
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_node_040ea7097fd6ed71e65be9a474587f81f0ccd21d_0.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
multinode_test.go:484: (dbg) Run:  out/minikube-linux-amd64 delete -p multinode-415331-m03
multinode_test.go:484: (dbg) Done: out/minikube-linux-amd64 delete -p multinode-415331-m03: (2.015341413s)
--- PASS: TestMultiNode/serial/ValidateNameConflict (22.59s)
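
Both non-zero exits above come from uniqueness guards: a new profile may not reuse a machine name that already belongs to another profile, and node add refuses a node name that already exists. A toy Go sketch of the first guard follows, assuming a hard-coded profile-to-machines map lifted from this log; minikube's real check reads its profile store, and validateProfileName is an invented name.

// name_conflict_sketch.go - toy version of the profile-name uniqueness
// guard. The machines map is hard-coded from the log above; the real
// implementation enumerates profiles on disk.
package main

import "fmt"

var machines = map[string][]string{
	"multinode-415331": {"multinode-415331", "multinode-415331-m02", "multinode-415331-m03"},
}

func validateProfileName(name string) error {
	for profile, nodes := range machines {
		for _, m := range nodes {
			if m == name {
				return fmt.Errorf(
					"profile name %q is duplicated with machine name %q in profile %q",
					name, m, profile)
			}
		}
	}
	return nil
}

func main() {
	if err := validateProfileName("multinode-415331-m02"); err != nil {
		fmt.Println("X Exiting due to MK_USAGE:", err) // mirrors the log's error
	}
}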

                                                
                                    
TestPreload (118.11s)

                                                
                                                
=== RUN   TestPreload
preload_test.go:44: (dbg) Run:  out/minikube-linux-amd64 start -p test-preload-455026 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=docker --kubernetes-version=v1.24.4
E0909 11:22:22.404564   15404 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19584-8635/.minikube/profiles/functional-598740/client.crt: no such file or directory" logger="UnhandledError"
E0909 11:22:44.830982   15404 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19584-8635/.minikube/profiles/addons-271785/client.crt: no such file or directory" logger="UnhandledError"
preload_test.go:44: (dbg) Done: out/minikube-linux-amd64 start -p test-preload-455026 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=docker --kubernetes-version=v1.24.4: (52.645193904s)
preload_test.go:52: (dbg) Run:  out/minikube-linux-amd64 -p test-preload-455026 image pull gcr.io/k8s-minikube/busybox
preload_test.go:52: (dbg) Done: out/minikube-linux-amd64 -p test-preload-455026 image pull gcr.io/k8s-minikube/busybox: (1.822896377s)
preload_test.go:58: (dbg) Run:  out/minikube-linux-amd64 stop -p test-preload-455026
preload_test.go:58: (dbg) Done: out/minikube-linux-amd64 stop -p test-preload-455026: (10.658327913s)
preload_test.go:66: (dbg) Run:  out/minikube-linux-amd64 start -p test-preload-455026 --memory=2200 --alsologtostderr -v=1 --wait=true --driver=docker  --container-runtime=docker
preload_test.go:66: (dbg) Done: out/minikube-linux-amd64 start -p test-preload-455026 --memory=2200 --alsologtostderr -v=1 --wait=true --driver=docker  --container-runtime=docker: (50.72900886s)
preload_test.go:71: (dbg) Run:  out/minikube-linux-amd64 -p test-preload-455026 image list
helpers_test.go:175: Cleaning up "test-preload-455026" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p test-preload-455026
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p test-preload-455026: (2.065070009s)
--- PASS: TestPreload (118.11s)

                                                
                                    
TestScheduledStopUnix (96.38s)

                                                
                                                
=== RUN   TestScheduledStopUnix
scheduled_stop_test.go:128: (dbg) Run:  out/minikube-linux-amd64 start -p scheduled-stop-293663 --memory=2048 --driver=docker  --container-runtime=docker
scheduled_stop_test.go:128: (dbg) Done: out/minikube-linux-amd64 start -p scheduled-stop-293663 --memory=2048 --driver=docker  --container-runtime=docker: (23.557257674s)
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-amd64 stop -p scheduled-stop-293663 --schedule 5m
scheduled_stop_test.go:191: (dbg) Run:  out/minikube-linux-amd64 status --format={{.TimeToStop}} -p scheduled-stop-293663 -n scheduled-stop-293663
scheduled_stop_test.go:169: signal error was:  <nil>
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-amd64 stop -p scheduled-stop-293663 --schedule 15s
scheduled_stop_test.go:169: signal error was:  os: process already finished
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-amd64 stop -p scheduled-stop-293663 --cancel-scheduled
scheduled_stop_test.go:176: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p scheduled-stop-293663 -n scheduled-stop-293663
scheduled_stop_test.go:205: (dbg) Run:  out/minikube-linux-amd64 status -p scheduled-stop-293663
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-amd64 stop -p scheduled-stop-293663 --schedule 15s
scheduled_stop_test.go:169: signal error was:  os: process already finished
scheduled_stop_test.go:205: (dbg) Run:  out/minikube-linux-amd64 status -p scheduled-stop-293663
scheduled_stop_test.go:205: (dbg) Non-zero exit: out/minikube-linux-amd64 status -p scheduled-stop-293663: exit status 7 (58.314206ms)

                                                
                                                
-- stdout --
	scheduled-stop-293663
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

                                                
                                                
-- /stdout --
scheduled_stop_test.go:176: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p scheduled-stop-293663 -n scheduled-stop-293663
scheduled_stop_test.go:176: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p scheduled-stop-293663 -n scheduled-stop-293663: exit status 7 (59.837074ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
scheduled_stop_test.go:176: status error: exit status 7 (may be ok)
helpers_test.go:175: Cleaning up "scheduled-stop-293663" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p scheduled-stop-293663
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p scheduled-stop-293663: (1.631488058s)
--- PASS: TestScheduledStopUnix (96.38s)
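
The whole scheduled-stop flow above is driven through the CLI: schedule a stop, optionally cancel it, then poll status until the host reports Stopped. A rough Go sketch of that loop using os/exec; the binary path and profile name are taken from the log, while the 5-second poll interval and 12-attempt cap are assumptions, not what the test itself does.

// scheduled_stop_sketch.go - drives the same CLI sequence as the test.
// Binary path and profile come from the log; the polling policy is assumed.
package main

import (
	"fmt"
	"os/exec"
	"strings"
	"time"
)

func run(args ...string) (string, error) {
	out, err := exec.Command("out/minikube-linux-amd64", args...).CombinedOutput()
	return string(out), err
}

func main() {
	profile := "scheduled-stop-293663"
	if _, err := run("stop", "-p", profile, "--schedule", "15s"); err != nil {
		fmt.Println("schedule failed:", err)
		return
	}
	// `status` exits non-zero once the host is stopped (exit status 7 in the
	// log), so only the printed host state matters here, not the error.
	for i := 0; i < 12; i++ {
		out, _ := run("status", "--format={{.Host}}", "-p", profile)
		if strings.TrimSpace(out) == "Stopped" {
			fmt.Println("host stopped as scheduled")
			return
		}
		time.Sleep(5 * time.Second)
	}
	fmt.Println("timed out waiting for scheduled stop")
}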

                                                
                                    
TestSkaffold (101.72s)

                                                
                                                
=== RUN   TestSkaffold
skaffold_test.go:59: (dbg) Run:  /tmp/skaffold.exe2266870875 version
skaffold_test.go:63: skaffold version: v2.13.2
skaffold_test.go:66: (dbg) Run:  out/minikube-linux-amd64 start -p skaffold-591364 --memory=2600 --driver=docker  --container-runtime=docker
E0909 11:25:59.341028   15404 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19584-8635/.minikube/profiles/functional-598740/client.crt: no such file or directory" logger="UnhandledError"
skaffold_test.go:66: (dbg) Done: out/minikube-linux-amd64 start -p skaffold-591364 --memory=2600 --driver=docker  --container-runtime=docker: (22.043845534s)
skaffold_test.go:86: copying out/minikube-linux-amd64 to /home/jenkins/workspace/Docker_Linux_integration/out/minikube
skaffold_test.go:105: (dbg) Run:  /tmp/skaffold.exe2266870875 run --minikube-profile skaffold-591364 --kube-context skaffold-591364 --status-check=true --port-forward=false --interactive=false
skaffold_test.go:105: (dbg) Done: /tmp/skaffold.exe2266870875 run --minikube-profile skaffold-591364 --kube-context skaffold-591364 --status-check=true --port-forward=false --interactive=false: (1m3.153757662s)
skaffold_test.go:111: (dbg) TestSkaffold: waiting 1m0s for pods matching "app=leeroy-app" in namespace "default" ...
helpers_test.go:344: "leeroy-app-55c9748845-8hpk7" [a7414f4e-5859-4538-92ab-c98e1dc65253] Running
skaffold_test.go:111: (dbg) TestSkaffold: app=leeroy-app healthy within 6.004080784s
skaffold_test.go:114: (dbg) TestSkaffold: waiting 1m0s for pods matching "app=leeroy-web" in namespace "default" ...
helpers_test.go:344: "leeroy-web-5cb588fcc7-ttzzv" [26c0117a-3e2e-4352-82fd-2570775920f5] Running
skaffold_test.go:114: (dbg) TestSkaffold: app=leeroy-web healthy within 5.003835169s
helpers_test.go:175: Cleaning up "skaffold-591364" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p skaffold-591364
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p skaffold-591364: (2.688839846s)
--- PASS: TestSkaffold (101.72s)

                                                
                                    
TestInsufficientStorage (9.84s)

                                                
                                                
=== RUN   TestInsufficientStorage
status_test.go:50: (dbg) Run:  out/minikube-linux-amd64 start -p insufficient-storage-772608 --memory=2048 --output=json --wait=true --driver=docker  --container-runtime=docker
status_test.go:50: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p insufficient-storage-772608 --memory=2048 --output=json --wait=true --driver=docker  --container-runtime=docker: exit status 26 (7.727419346s)

                                                
                                                
-- stdout --
	{"specversion":"1.0","id":"8cb9de18-8322-48e8-8a2b-f212aeea5e78","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"0","message":"[insufficient-storage-772608] minikube v1.34.0 on Ubuntu 20.04 (kvm/amd64)","name":"Initial Minikube Setup","totalsteps":"19"}}
	{"specversion":"1.0","id":"eda94555-d919-4f09-8827-02dc7a9940ee","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_LOCATION=19584"}}
	{"specversion":"1.0","id":"b3f1a4f2-51fe-4db1-a8b1-808764e978fc","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true"}}
	{"specversion":"1.0","id":"142d0982-0afb-49a2-af6f-c3ee716e36ff","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"KUBECONFIG=/home/jenkins/minikube-integration/19584-8635/kubeconfig"}}
	{"specversion":"1.0","id":"a326e2f6-71a8-435a-bb02-b96208d657f3","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_HOME=/home/jenkins/minikube-integration/19584-8635/.minikube"}}
	{"specversion":"1.0","id":"9124f3a2-4246-415c-aa6a-af26b8c6df34","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_BIN=out/minikube-linux-amd64"}}
	{"specversion":"1.0","id":"243b15cf-d27e-4944-a806-567ad045fabf","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_FORCE_SYSTEMD="}}
	{"specversion":"1.0","id":"de4c85b2-b140-417d-a7ff-4172209a366b","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_TEST_STORAGE_CAPACITY=100"}}
	{"specversion":"1.0","id":"b1f167c2-16a6-4f8f-afac-e3436a3c3e7a","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_TEST_AVAILABLE_STORAGE=19"}}
	{"specversion":"1.0","id":"e3567507-a520-4c10-bf8e-0265577e0c66","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"1","message":"Using the docker driver based on user configuration","name":"Selecting Driver","totalsteps":"19"}}
	{"specversion":"1.0","id":"124ba977-fdd1-4f82-b53f-2bd241462a65","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"Using Docker driver with root privileges"}}
	{"specversion":"1.0","id":"24d5068f-d3b5-4374-8156-b599854ca090","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"3","message":"Starting \"insufficient-storage-772608\" primary control-plane node in \"insufficient-storage-772608\" cluster","name":"Starting Node","totalsteps":"19"}}
	{"specversion":"1.0","id":"ee13e092-68a0-4c51-afd8-c5a354cd213e","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"5","message":"Pulling base image v0.0.45 ...","name":"Pulling Base Image","totalsteps":"19"}}
	{"specversion":"1.0","id":"7bb5651a-805b-47dd-aee4-b339a45b3b81","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"8","message":"Creating docker container (CPUs=2, Memory=2048MB) ...","name":"Creating Container","totalsteps":"19"}}
	{"specversion":"1.0","id":"9e00848d-9173-4777-89b5-08f45fbb58e5","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"advice":"Try one or more of the following to free up space on the device:\n\t\n\t\t\t1. Run \"docker system prune\" to remove unused Docker data (optionally with \"-a\")\n\t\t\t2. Increase the storage allocated to Docker for Desktop by clicking on:\n\t\t\t\tDocker icon \u003e Preferences \u003e Resources \u003e Disk Image Size\n\t\t\t3. Run \"minikube ssh -- docker system prune\" if using the Docker container runtime","exitcode":"26","issues":"https://github.com/kubernetes/minikube/issues/9024","message":"Docker is out of disk space! (/var is at 100% of capacity). You can pass '--force' to skip this check.","name":"RSRC_DOCKER_STORAGE","url":""}}

                                                
                                                
-- /stdout --
status_test.go:76: (dbg) Run:  out/minikube-linux-amd64 status -p insufficient-storage-772608 --output=json --layout=cluster
status_test.go:76: (dbg) Non-zero exit: out/minikube-linux-amd64 status -p insufficient-storage-772608 --output=json --layout=cluster: exit status 7 (243.607278ms)

                                                
                                                
-- stdout --
	{"Name":"insufficient-storage-772608","StatusCode":507,"StatusName":"InsufficientStorage","StatusDetail":"/var is almost out of disk space","Step":"Creating Container","StepDetail":"Creating docker container (CPUs=2, Memory=2048MB) ...","BinaryVersion":"v1.34.0","Components":{"kubeconfig":{"Name":"kubeconfig","StatusCode":500,"StatusName":"Error"}},"Nodes":[{"Name":"insufficient-storage-772608","StatusCode":507,"StatusName":"InsufficientStorage","Components":{"apiserver":{"Name":"apiserver","StatusCode":405,"StatusName":"Stopped"},"kubelet":{"Name":"kubelet","StatusCode":405,"StatusName":"Stopped"}}}]}

                                                
                                                
-- /stdout --
** stderr ** 
	E0909 11:27:24.946837  274592 status.go:417] kubeconfig endpoint: get endpoint: "insufficient-storage-772608" does not appear in /home/jenkins/minikube-integration/19584-8635/kubeconfig

                                                
                                                
** /stderr **
status_test.go:76: (dbg) Run:  out/minikube-linux-amd64 status -p insufficient-storage-772608 --output=json --layout=cluster
status_test.go:76: (dbg) Non-zero exit: out/minikube-linux-amd64 status -p insufficient-storage-772608 --output=json --layout=cluster: exit status 7 (245.446515ms)

                                                
                                                
-- stdout --
	{"Name":"insufficient-storage-772608","StatusCode":507,"StatusName":"InsufficientStorage","StatusDetail":"/var is almost out of disk space","BinaryVersion":"v1.34.0","Components":{"kubeconfig":{"Name":"kubeconfig","StatusCode":500,"StatusName":"Error"}},"Nodes":[{"Name":"insufficient-storage-772608","StatusCode":507,"StatusName":"InsufficientStorage","Components":{"apiserver":{"Name":"apiserver","StatusCode":405,"StatusName":"Stopped"},"kubelet":{"Name":"kubelet","StatusCode":405,"StatusName":"Stopped"}}}]}

                                                
                                                
-- /stdout --
** stderr ** 
	E0909 11:27:25.193328  274689 status.go:417] kubeconfig endpoint: get endpoint: "insufficient-storage-772608" does not appear in /home/jenkins/minikube-integration/19584-8635/kubeconfig
	E0909 11:27:25.202532  274689 status.go:560] unable to read event log: stat: stat /home/jenkins/minikube-integration/19584-8635/.minikube/profiles/insufficient-storage-772608/events.json: no such file or directory

                                                
                                                
** /stderr **
helpers_test.go:175: Cleaning up "insufficient-storage-772608" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p insufficient-storage-772608
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p insufficient-storage-772608: (1.621487116s)
--- PASS: TestInsufficientStorage (9.84s)
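
With --output=json, every progress line above is a CloudEvents-style envelope whose data field carries the step counters, or, for io.k8s.sigs.minikube.error events like RSRC_DOCKER_STORAGE, the advice and exitcode. A small Go decoder for one such line follows; the struct mirrors only the keys visible in this log and is a sketch, not minikube's own schema type.

// event_decode.go - parses one JSON progress line emitted by
// `minikube start --output=json`. Field names mirror the keys visible
// in the log above; all data values arrive as strings.
package main

import (
	"encoding/json"
	"fmt"
)

type minikubeEvent struct {
	SpecVersion string            `json:"specversion"`
	ID          string            `json:"id"`
	Type        string            `json:"type"`
	Data        map[string]string `json:"data"`
}

func main() {
	line := `{"specversion":"1.0","id":"e3567507-a520-4c10-bf8e-0265577e0c66",` +
		`"source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step",` +
		`"datacontenttype":"application/json","data":{"currentstep":"1",` +
		`"message":"Using the docker driver based on user configuration",` +
		`"name":"Selecting Driver","totalsteps":"19"}}`

	var ev minikubeEvent
	if err := json.Unmarshal([]byte(line), &ev); err != nil {
		panic(err)
	}
	// Error events carry "exitcode" and "advice" keys instead of step
	// counters, as the RSRC_DOCKER_STORAGE line above shows.
	fmt.Printf("[%s] step %s/%s: %s\n",
		ev.Type, ev.Data["currentstep"], ev.Data["totalsteps"], ev.Data["message"])
}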

                                                
                                    
TestRunningBinaryUpgrade (153.22s)

                                                
                                                
=== RUN   TestRunningBinaryUpgrade
=== PAUSE TestRunningBinaryUpgrade

                                                
                                                

                                                
                                                
=== CONT  TestRunningBinaryUpgrade
version_upgrade_test.go:120: (dbg) Run:  /tmp/minikube-v1.26.0.1660867235 start -p running-upgrade-317202 --memory=2200 --vm-driver=docker  --container-runtime=docker
version_upgrade_test.go:120: (dbg) Done: /tmp/minikube-v1.26.0.1660867235 start -p running-upgrade-317202 --memory=2200 --vm-driver=docker  --container-runtime=docker: (2m4.757285831s)
version_upgrade_test.go:130: (dbg) Run:  out/minikube-linux-amd64 start -p running-upgrade-317202 --memory=2200 --alsologtostderr -v=1 --driver=docker  --container-runtime=docker
version_upgrade_test.go:130: (dbg) Done: out/minikube-linux-amd64 start -p running-upgrade-317202 --memory=2200 --alsologtostderr -v=1 --driver=docker  --container-runtime=docker: (22.562141537s)
helpers_test.go:175: Cleaning up "running-upgrade-317202" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p running-upgrade-317202
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p running-upgrade-317202: (3.468040707s)
--- PASS: TestRunningBinaryUpgrade (153.22s)

                                                
                                    
TestKubernetesUpgrade (173.56s)

                                                
                                                
=== RUN   TestKubernetesUpgrade
=== PAUSE TestKubernetesUpgrade

                                                
                                                

                                                
                                                
=== CONT  TestKubernetesUpgrade
version_upgrade_test.go:222: (dbg) Run:  out/minikube-linux-amd64 start -p kubernetes-upgrade-668805 --memory=2200 --kubernetes-version=v1.20.0 --alsologtostderr -v=1 --driver=docker  --container-runtime=docker
version_upgrade_test.go:222: (dbg) Done: out/minikube-linux-amd64 start -p kubernetes-upgrade-668805 --memory=2200 --kubernetes-version=v1.20.0 --alsologtostderr -v=1 --driver=docker  --container-runtime=docker: (39.063796039s)
version_upgrade_test.go:227: (dbg) Run:  out/minikube-linux-amd64 stop -p kubernetes-upgrade-668805
version_upgrade_test.go:227: (dbg) Done: out/minikube-linux-amd64 stop -p kubernetes-upgrade-668805: (10.666648659s)
version_upgrade_test.go:232: (dbg) Run:  out/minikube-linux-amd64 -p kubernetes-upgrade-668805 status --format={{.Host}}
version_upgrade_test.go:232: (dbg) Non-zero exit: out/minikube-linux-amd64 -p kubernetes-upgrade-668805 status --format={{.Host}}: exit status 7 (70.893999ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
version_upgrade_test.go:234: status error: exit status 7 (may be ok)
version_upgrade_test.go:243: (dbg) Run:  out/minikube-linux-amd64 start -p kubernetes-upgrade-668805 --memory=2200 --kubernetes-version=v1.31.0 --alsologtostderr -v=1 --driver=docker  --container-runtime=docker
E0909 11:30:59.340687   15404 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19584-8635/.minikube/profiles/functional-598740/client.crt: no such file or directory" logger="UnhandledError"
version_upgrade_test.go:243: (dbg) Done: out/minikube-linux-amd64 start -p kubernetes-upgrade-668805 --memory=2200 --kubernetes-version=v1.31.0 --alsologtostderr -v=1 --driver=docker  --container-runtime=docker: (1m42.799607801s)
version_upgrade_test.go:248: (dbg) Run:  kubectl --context kubernetes-upgrade-668805 version --output=json
version_upgrade_test.go:267: Attempting to downgrade Kubernetes (should fail)
version_upgrade_test.go:269: (dbg) Run:  out/minikube-linux-amd64 start -p kubernetes-upgrade-668805 --memory=2200 --kubernetes-version=v1.20.0 --driver=docker  --container-runtime=docker
version_upgrade_test.go:269: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p kubernetes-upgrade-668805 --memory=2200 --kubernetes-version=v1.20.0 --driver=docker  --container-runtime=docker: exit status 106 (57.866483ms)

                                                
                                                
-- stdout --
	* [kubernetes-upgrade-668805] minikube v1.34.0 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=19584
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/19584-8635/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/19584-8635/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to K8S_DOWNGRADE_UNSUPPORTED: Unable to safely downgrade existing Kubernetes v1.31.0 cluster to v1.20.0
	* Suggestion: 
	
	    1) Recreate the cluster with Kubernetes 1.20.0, by running:
	    
	    minikube delete -p kubernetes-upgrade-668805
	    minikube start -p kubernetes-upgrade-668805 --kubernetes-version=v1.20.0
	    
	    2) Create a second cluster with Kubernetes 1.20.0, by running:
	    
	    minikube start -p kubernetes-upgrade-6688052 --kubernetes-version=v1.20.0
	    
	    3) Use the existing cluster at version Kubernetes 1.31.0, by running:
	    
	    minikube start -p kubernetes-upgrade-668805 --kubernetes-version=v1.31.0
	    

                                                
                                                
** /stderr **
version_upgrade_test.go:273: Attempting restart after unsuccessful downgrade
version_upgrade_test.go:275: (dbg) Run:  out/minikube-linux-amd64 start -p kubernetes-upgrade-668805 --memory=2200 --kubernetes-version=v1.31.0 --alsologtostderr -v=1 --driver=docker  --container-runtime=docker
E0909 11:32:44.256462   15404 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19584-8635/.minikube/profiles/skaffold-591364/client.crt: no such file or directory" logger="UnhandledError"
E0909 11:32:44.831438   15404 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19584-8635/.minikube/profiles/addons-271785/client.crt: no such file or directory" logger="UnhandledError"
version_upgrade_test.go:275: (dbg) Done: out/minikube-linux-amd64 start -p kubernetes-upgrade-668805 --memory=2200 --kubernetes-version=v1.31.0 --alsologtostderr -v=1 --driver=docker  --container-runtime=docker: (18.260614204s)
helpers_test.go:175: Cleaning up "kubernetes-upgrade-668805" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p kubernetes-upgrade-668805
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p kubernetes-upgrade-668805: (2.585455276s)
--- PASS: TestKubernetesUpgrade (173.56s)
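
The exit status 106 above is a version-ordering guard: a requested Kubernetes version older than the one the cluster already runs is refused outright rather than downgraded. A toy Go version of that comparison follows, with deliberately minimal "vMAJOR.MINOR.PATCH" parsing; minikube's actual check is more general, so treat this as an illustration of the rule, not its implementation.

// downgrade_guard.go - toy K8S_DOWNGRADE_UNSUPPORTED guard: refuse when the
// requested version sorts below the cluster's current one. Parsing handles
// only "vMAJOR.MINOR.PATCH"; this is illustrative, not minikube's code path.
package main

import (
	"fmt"
	"os"
	"strconv"
	"strings"
)

// parse turns "v1.31.0" into [1 31 0]; malformed fields simply become 0.
func parse(v string) [3]int {
	var out [3]int
	parts := strings.SplitN(strings.TrimPrefix(v, "v"), ".", 3)
	for i := 0; i < len(parts) && i < 3; i++ {
		out[i], _ = strconv.Atoi(parts[i])
	}
	return out
}

func main() {
	current, requested := "v1.31.0", "v1.20.0"
	cur, req := parse(current), parse(requested)
	for i := 0; i < 3; i++ {
		if req[i] < cur[i] {
			fmt.Printf("Unable to safely downgrade existing Kubernetes %s cluster to %s\n",
				current, requested)
			os.Exit(106) // the exit status the test asserts on
		}
		if req[i] > cur[i] {
			break
		}
	}
	fmt.Println("ok to proceed")
}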

                                                
                                    
TestMissingContainerUpgrade (107.41s)

                                                
                                                
=== RUN   TestMissingContainerUpgrade
=== PAUSE TestMissingContainerUpgrade

                                                
                                                

                                                
                                                
=== CONT  TestMissingContainerUpgrade
version_upgrade_test.go:309: (dbg) Run:  /tmp/minikube-v1.26.0.3945171152 start -p missing-upgrade-474538 --memory=2200 --driver=docker  --container-runtime=docker
version_upgrade_test.go:309: (dbg) Done: /tmp/minikube-v1.26.0.3945171152 start -p missing-upgrade-474538 --memory=2200 --driver=docker  --container-runtime=docker: (42.510256249s)
version_upgrade_test.go:318: (dbg) Run:  docker stop missing-upgrade-474538
version_upgrade_test.go:318: (dbg) Done: docker stop missing-upgrade-474538: (10.437839047s)
version_upgrade_test.go:323: (dbg) Run:  docker rm missing-upgrade-474538
version_upgrade_test.go:329: (dbg) Run:  out/minikube-linux-amd64 start -p missing-upgrade-474538 --memory=2200 --alsologtostderr -v=1 --driver=docker  --container-runtime=docker
version_upgrade_test.go:329: (dbg) Done: out/minikube-linux-amd64 start -p missing-upgrade-474538 --memory=2200 --alsologtostderr -v=1 --driver=docker  --container-runtime=docker: (47.663550625s)
helpers_test.go:175: Cleaning up "missing-upgrade-474538" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p missing-upgrade-474538
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p missing-upgrade-474538: (4.646287686s)
--- PASS: TestMissingContainerUpgrade (107.41s)

                                                
                                    
TestStoppedBinaryUpgrade/Setup (2.5s)

                                                
                                                
=== RUN   TestStoppedBinaryUpgrade/Setup
--- PASS: TestStoppedBinaryUpgrade/Setup (2.50s)

                                                
                                    
TestNoKubernetes/serial/StartNoK8sWithVersion (0.06s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/StartNoK8sWithVersion
no_kubernetes_test.go:83: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-971717 --no-kubernetes --kubernetes-version=1.20 --driver=docker  --container-runtime=docker
no_kubernetes_test.go:83: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p NoKubernetes-971717 --no-kubernetes --kubernetes-version=1.20 --driver=docker  --container-runtime=docker: exit status 14 (62.042971ms)

                                                
                                                
-- stdout --
	* [NoKubernetes-971717] minikube v1.34.0 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=19584
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/19584-8635/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/19584-8635/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to MK_USAGE: cannot specify --kubernetes-version with --no-kubernetes,
	to unset a global config run:
	
	$ minikube config unset kubernetes-version

                                                
                                                
** /stderr **
--- PASS: TestNoKubernetes/serial/StartNoK8sWithVersion (0.06s)

                                                
                                    
TestNoKubernetes/serial/StartWithK8s (26.75s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/StartWithK8s
no_kubernetes_test.go:95: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-971717 --driver=docker  --container-runtime=docker
no_kubernetes_test.go:95: (dbg) Done: out/minikube-linux-amd64 start -p NoKubernetes-971717 --driver=docker  --container-runtime=docker: (26.465235534s)
no_kubernetes_test.go:200: (dbg) Run:  out/minikube-linux-amd64 -p NoKubernetes-971717 status -o json
--- PASS: TestNoKubernetes/serial/StartWithK8s (26.75s)

                                                
                                    
TestStoppedBinaryUpgrade/Upgrade (152.12s)

                                                
                                                
=== RUN   TestStoppedBinaryUpgrade/Upgrade
version_upgrade_test.go:183: (dbg) Run:  /tmp/minikube-v1.26.0.308785751 start -p stopped-upgrade-990499 --memory=2200 --vm-driver=docker  --container-runtime=docker
E0909 11:27:44.831607   15404 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19584-8635/.minikube/profiles/addons-271785/client.crt: no such file or directory" logger="UnhandledError"
version_upgrade_test.go:183: (dbg) Done: /tmp/minikube-v1.26.0.308785751 start -p stopped-upgrade-990499 --memory=2200 --vm-driver=docker  --container-runtime=docker: (1m55.622513146s)
version_upgrade_test.go:192: (dbg) Run:  /tmp/minikube-v1.26.0.308785751 -p stopped-upgrade-990499 stop
version_upgrade_test.go:192: (dbg) Done: /tmp/minikube-v1.26.0.308785751 -p stopped-upgrade-990499 stop: (10.747092807s)
version_upgrade_test.go:198: (dbg) Run:  out/minikube-linux-amd64 start -p stopped-upgrade-990499 --memory=2200 --alsologtostderr -v=1 --driver=docker  --container-runtime=docker
version_upgrade_test.go:198: (dbg) Done: out/minikube-linux-amd64 start -p stopped-upgrade-990499 --memory=2200 --alsologtostderr -v=1 --driver=docker  --container-runtime=docker: (25.752281353s)
--- PASS: TestStoppedBinaryUpgrade/Upgrade (152.12s)

                                                
                                    
TestNoKubernetes/serial/StartWithStopK8s (6.95s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/StartWithStopK8s
no_kubernetes_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-971717 --no-kubernetes --driver=docker  --container-runtime=docker
no_kubernetes_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p NoKubernetes-971717 --no-kubernetes --driver=docker  --container-runtime=docker: (5.048443856s)
no_kubernetes_test.go:200: (dbg) Run:  out/minikube-linux-amd64 -p NoKubernetes-971717 status -o json
no_kubernetes_test.go:200: (dbg) Non-zero exit: out/minikube-linux-amd64 -p NoKubernetes-971717 status -o json: exit status 2 (252.482686ms)

                                                
                                                
-- stdout --
	{"Name":"NoKubernetes-971717","Host":"Running","Kubelet":"Stopped","APIServer":"Stopped","Kubeconfig":"Configured","Worker":false}

                                                
                                                
-- /stdout --
no_kubernetes_test.go:124: (dbg) Run:  out/minikube-linux-amd64 delete -p NoKubernetes-971717
no_kubernetes_test.go:124: (dbg) Done: out/minikube-linux-amd64 delete -p NoKubernetes-971717: (1.643831876s)
--- PASS: TestNoKubernetes/serial/StartWithStopK8s (6.95s)

                                                
                                    
TestNoKubernetes/serial/Start (6.31s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/Start
no_kubernetes_test.go:136: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-971717 --no-kubernetes --driver=docker  --container-runtime=docker
no_kubernetes_test.go:136: (dbg) Done: out/minikube-linux-amd64 start -p NoKubernetes-971717 --no-kubernetes --driver=docker  --container-runtime=docker: (6.311759019s)
--- PASS: TestNoKubernetes/serial/Start (6.31s)

                                                
                                    
TestNoKubernetes/serial/VerifyK8sNotRunning (0.24s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/VerifyK8sNotRunning
no_kubernetes_test.go:147: (dbg) Run:  out/minikube-linux-amd64 ssh -p NoKubernetes-971717 "sudo systemctl is-active --quiet service kubelet"
no_kubernetes_test.go:147: (dbg) Non-zero exit: out/minikube-linux-amd64 ssh -p NoKubernetes-971717 "sudo systemctl is-active --quiet service kubelet": exit status 1 (235.392648ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 3

                                                
                                                
** /stderr **
--- PASS: TestNoKubernetes/serial/VerifyK8sNotRunning (0.24s)
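
The non-zero exit here is the expected outcome: systemctl is-active returns 0 only for an active unit and 3 for an inactive one, and minikube ssh echoes that remote status on stderr while itself exiting 1. A Go sketch that runs the same probe and reads the exit code; the binary path and profile name are copied from the log.

// kubelet_inactive_check.go - runs the same is-active probe as the test and
// treats any non-zero exit as "kubelet not running". The remote systemd
// status (3 = inactive) is echoed on stderr; minikube itself exits 1.
package main

import (
	"errors"
	"fmt"
	"os/exec"
)

func main() {
	cmd := exec.Command("out/minikube-linux-amd64", "ssh", "-p", "NoKubernetes-971717",
		"sudo systemctl is-active --quiet service kubelet")
	err := cmd.Run()
	var exitErr *exec.ExitError
	switch {
	case err == nil:
		fmt.Println("unexpected: kubelet is active")
	case errors.As(err, &exitErr):
		fmt.Printf("kubelet not running (exit status %d)\n", exitErr.ExitCode())
	default:
		fmt.Println("could not run probe:", err)
	}
}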

                                                
                                    
TestNoKubernetes/serial/ProfileList (0.84s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/ProfileList
no_kubernetes_test.go:169: (dbg) Run:  out/minikube-linux-amd64 profile list
no_kubernetes_test.go:179: (dbg) Run:  out/minikube-linux-amd64 profile list --output=json
--- PASS: TestNoKubernetes/serial/ProfileList (0.84s)

                                                
                                    
TestNoKubernetes/serial/Stop (1.16s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/Stop
no_kubernetes_test.go:158: (dbg) Run:  out/minikube-linux-amd64 stop -p NoKubernetes-971717
no_kubernetes_test.go:158: (dbg) Done: out/minikube-linux-amd64 stop -p NoKubernetes-971717: (1.162753943s)
--- PASS: TestNoKubernetes/serial/Stop (1.16s)

                                                
                                    
TestNoKubernetes/serial/StartNoArgs (12.14s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/StartNoArgs
no_kubernetes_test.go:191: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-971717 --driver=docker  --container-runtime=docker
no_kubernetes_test.go:191: (dbg) Done: out/minikube-linux-amd64 start -p NoKubernetes-971717 --driver=docker  --container-runtime=docker: (12.14476364s)
--- PASS: TestNoKubernetes/serial/StartNoArgs (12.14s)

                                                
                                    
TestNoKubernetes/serial/VerifyK8sNotRunningSecond (0.31s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/VerifyK8sNotRunningSecond
no_kubernetes_test.go:147: (dbg) Run:  out/minikube-linux-amd64 ssh -p NoKubernetes-971717 "sudo systemctl is-active --quiet service kubelet"
no_kubernetes_test.go:147: (dbg) Non-zero exit: out/minikube-linux-amd64 ssh -p NoKubernetes-971717 "sudo systemctl is-active --quiet service kubelet": exit status 1 (305.780182ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 3

                                                
                                                
** /stderr **
--- PASS: TestNoKubernetes/serial/VerifyK8sNotRunningSecond (0.31s)

                                                
                                    
TestPause/serial/Start (34.62s)

                                                
                                                
=== RUN   TestPause/serial/Start
pause_test.go:80: (dbg) Run:  out/minikube-linux-amd64 start -p pause-350274 --memory=2048 --install-addons=false --wait=all --driver=docker  --container-runtime=docker
pause_test.go:80: (dbg) Done: out/minikube-linux-amd64 start -p pause-350274 --memory=2048 --install-addons=false --wait=all --driver=docker  --container-runtime=docker: (34.620984563s)
--- PASS: TestPause/serial/Start (34.62s)

                                                
                                    
TestPause/serial/SecondStartNoReconfiguration (33s)

                                                
                                                
=== RUN   TestPause/serial/SecondStartNoReconfiguration
pause_test.go:92: (dbg) Run:  out/minikube-linux-amd64 start -p pause-350274 --alsologtostderr -v=1 --driver=docker  --container-runtime=docker
pause_test.go:92: (dbg) Done: out/minikube-linux-amd64 start -p pause-350274 --alsologtostderr -v=1 --driver=docker  --container-runtime=docker: (32.988799222s)
--- PASS: TestPause/serial/SecondStartNoReconfiguration (33.00s)

                                                
                                    
TestPause/serial/Pause (0.53s)

                                                
                                                
=== RUN   TestPause/serial/Pause
pause_test.go:110: (dbg) Run:  out/minikube-linux-amd64 pause -p pause-350274 --alsologtostderr -v=5
--- PASS: TestPause/serial/Pause (0.53s)

                                                
                                    
TestPause/serial/VerifyStatus (0.27s)

                                                
                                                
=== RUN   TestPause/serial/VerifyStatus
status_test.go:76: (dbg) Run:  out/minikube-linux-amd64 status -p pause-350274 --output=json --layout=cluster
status_test.go:76: (dbg) Non-zero exit: out/minikube-linux-amd64 status -p pause-350274 --output=json --layout=cluster: exit status 2 (271.819711ms)

                                                
                                                
-- stdout --
	{"Name":"pause-350274","StatusCode":418,"StatusName":"Paused","Step":"Done","StepDetail":"* Paused 12 containers in: kube-system, kubernetes-dashboard, storage-gluster, istio-operator","BinaryVersion":"v1.34.0","Components":{"kubeconfig":{"Name":"kubeconfig","StatusCode":200,"StatusName":"OK"}},"Nodes":[{"Name":"pause-350274","StatusCode":200,"StatusName":"OK","Components":{"apiserver":{"Name":"apiserver","StatusCode":418,"StatusName":"Paused"},"kubelet":{"Name":"kubelet","StatusCode":405,"StatusName":"Stopped"}}}]}

                                                
                                                
-- /stdout --
--- PASS: TestPause/serial/VerifyStatus (0.27s)
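
The --layout=cluster JSON above encodes health with HTTP-style status codes: 200 OK, 405 Stopped, 418 Paused, 500 Error, and 507 InsufficientStorage (seen earlier in TestInsufficientStorage). A Go sketch that decodes a trimmed copy of that payload; the structs mirror only the keys visible in this log and are not minikube's own types.

// cluster_status_decode.go - parses the --layout=cluster JSON shown above.
// Status codes are HTTP-style (200 OK, 405 Stopped, 418 Paused, 500 Error,
// 507 InsufficientStorage); the structs cover only the keys used here.
package main

import (
	"encoding/json"
	"fmt"
)

type component struct {
	Name       string `json:"Name"`
	StatusCode int    `json:"StatusCode"`
	StatusName string `json:"StatusName"`
}

type clusterStatus struct {
	Name       string `json:"Name"`
	StatusCode int    `json:"StatusCode"`
	StatusName string `json:"StatusName"`
	Nodes      []struct {
		Name       string               `json:"Name"`
		StatusName string               `json:"StatusName"`
		Components map[string]component `json:"Components"`
	} `json:"Nodes"`
}

func main() {
	raw := `{"Name":"pause-350274","StatusCode":418,"StatusName":"Paused",` +
		`"Nodes":[{"Name":"pause-350274","StatusName":"OK","Components":` +
		`{"apiserver":{"Name":"apiserver","StatusCode":418,"StatusName":"Paused"},` +
		`"kubelet":{"Name":"kubelet","StatusCode":405,"StatusName":"Stopped"}}}]}`

	var st clusterStatus
	if err := json.Unmarshal([]byte(raw), &st); err != nil {
		panic(err)
	}
	fmt.Printf("cluster %s: %s (%d)\n", st.Name, st.StatusName, st.StatusCode)
	for _, n := range st.Nodes {
		for name, c := range n.Components {
			fmt.Printf("  %s: %s (%d)\n", name, c.StatusName, c.StatusCode)
		}
	}
}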

                                                
                                    
TestPause/serial/Unpause (0.45s)

                                                
                                                
=== RUN   TestPause/serial/Unpause
pause_test.go:121: (dbg) Run:  out/minikube-linux-amd64 unpause -p pause-350274 --alsologtostderr -v=5
--- PASS: TestPause/serial/Unpause (0.45s)

                                                
                                    
TestPause/serial/PauseAgain (0.58s)

                                                
                                                
=== RUN   TestPause/serial/PauseAgain
pause_test.go:110: (dbg) Run:  out/minikube-linux-amd64 pause -p pause-350274 --alsologtostderr -v=5
--- PASS: TestPause/serial/PauseAgain (0.58s)

                                                
                                    
TestPause/serial/DeletePaused (2.1s)

                                                
                                                
=== RUN   TestPause/serial/DeletePaused
pause_test.go:132: (dbg) Run:  out/minikube-linux-amd64 delete -p pause-350274 --alsologtostderr -v=5
pause_test.go:132: (dbg) Done: out/minikube-linux-amd64 delete -p pause-350274 --alsologtostderr -v=5: (2.099175991s)
--- PASS: TestPause/serial/DeletePaused (2.10s)

                                                
                                    
TestPause/serial/VerifyDeletedResources (18.21s)

                                                
                                                
=== RUN   TestPause/serial/VerifyDeletedResources
pause_test.go:142: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
pause_test.go:142: (dbg) Done: out/minikube-linux-amd64 profile list --output json: (18.146589865s)
pause_test.go:168: (dbg) Run:  docker ps -a
pause_test.go:173: (dbg) Run:  docker volume inspect pause-350274
pause_test.go:173: (dbg) Non-zero exit: docker volume inspect pause-350274: exit status 1 (27.10785ms)

-- stdout --
	[]

-- /stdout --
** stderr ** 
	Error response from daemon: get pause-350274: no such volume

** /stderr **
pause_test.go:178: (dbg) Run:  docker network ls
--- PASS: TestPause/serial/VerifyDeletedResources (18.21s)
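
The deletion check above leans on docker CLI exit codes: `docker volume inspect` on a removed volume exits non-zero with "no such volume". A minimal sketch of that pattern (profile name taken from the log; assumes docker is on PATH):

	package main

	import (
		"fmt"
		"os/exec"
	)

	// volumeDeleted reports whether `docker volume inspect` fails for name,
	// which is how the test above concludes the volume is gone.
	func volumeDeleted(name string) bool {
		err := exec.Command("docker", "volume", "inspect", name).Run()
		return err != nil // non-zero exit (or missing docker) counts as "not found"
	}

	func main() {
		fmt.Println("volume deleted:", volumeDeleted("pause-350274"))
	}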

TestStoppedBinaryUpgrade/MinikubeLogs (1.25s)

=== RUN   TestStoppedBinaryUpgrade/MinikubeLogs
version_upgrade_test.go:206: (dbg) Run:  out/minikube-linux-amd64 logs -p stopped-upgrade-990499
version_upgrade_test.go:206: (dbg) Done: out/minikube-linux-amd64 logs -p stopped-upgrade-990499: (1.248406756s)
--- PASS: TestStoppedBinaryUpgrade/MinikubeLogs (1.25s)

TestStartStop/group/old-k8s-version/serial/FirstStart (109.86s)

=== RUN   TestStartStop/group/old-k8s-version/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-linux-amd64 start -p old-k8s-version-394892 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=docker --kubernetes-version=v1.20.0
start_stop_delete_test.go:186: (dbg) Done: out/minikube-linux-amd64 start -p old-k8s-version-394892 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=docker --kubernetes-version=v1.20.0: (1m49.855819158s)
--- PASS: TestStartStop/group/old-k8s-version/serial/FirstStart (109.86s)

TestStartStop/group/no-preload/serial/FirstStart (66.59s)

=== RUN   TestStartStop/group/no-preload/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-linux-amd64 start -p no-preload-424368 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=docker --kubernetes-version=v1.31.0
E0909 11:32:03.279678   15404 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19584-8635/.minikube/profiles/skaffold-591364/client.crt: no such file or directory" logger="UnhandledError"
E0909 11:32:03.286046   15404 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19584-8635/.minikube/profiles/skaffold-591364/client.crt: no such file or directory" logger="UnhandledError"
E0909 11:32:03.297372   15404 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19584-8635/.minikube/profiles/skaffold-591364/client.crt: no such file or directory" logger="UnhandledError"
E0909 11:32:03.318733   15404 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19584-8635/.minikube/profiles/skaffold-591364/client.crt: no such file or directory" logger="UnhandledError"
E0909 11:32:03.360143   15404 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19584-8635/.minikube/profiles/skaffold-591364/client.crt: no such file or directory" logger="UnhandledError"
E0909 11:32:03.441596   15404 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19584-8635/.minikube/profiles/skaffold-591364/client.crt: no such file or directory" logger="UnhandledError"
E0909 11:32:03.603098   15404 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19584-8635/.minikube/profiles/skaffold-591364/client.crt: no such file or directory" logger="UnhandledError"
E0909 11:32:03.925079   15404 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19584-8635/.minikube/profiles/skaffold-591364/client.crt: no such file or directory" logger="UnhandledError"
E0909 11:32:04.567210   15404 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19584-8635/.minikube/profiles/skaffold-591364/client.crt: no such file or directory" logger="UnhandledError"
E0909 11:32:05.848728   15404 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19584-8635/.minikube/profiles/skaffold-591364/client.crt: no such file or directory" logger="UnhandledError"
E0909 11:32:08.410419   15404 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19584-8635/.minikube/profiles/skaffold-591364/client.crt: no such file or directory" logger="UnhandledError"
E0909 11:32:13.532404   15404 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19584-8635/.minikube/profiles/skaffold-591364/client.crt: no such file or directory" logger="UnhandledError"
E0909 11:32:23.774449   15404 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19584-8635/.minikube/profiles/skaffold-591364/client.crt: no such file or directory" logger="UnhandledError"
start_stop_delete_test.go:186: (dbg) Done: out/minikube-linux-amd64 start -p no-preload-424368 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=docker --kubernetes-version=v1.31.0: (1m6.589276377s)
--- PASS: TestStartStop/group/no-preload/serial/FirstStart (66.59s)

TestStartStop/group/no-preload/serial/DeployApp (9.24s)

=== RUN   TestStartStop/group/no-preload/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context no-preload-424368 create -f testdata/busybox.yaml
start_stop_delete_test.go:196: (dbg) TestStartStop/group/no-preload/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:344: "busybox" [dd4c3d93-5bb4-406c-acf5-67a5b5c30c76] Pending
helpers_test.go:344: "busybox" [dd4c3d93-5bb4-406c-acf5-67a5b5c30c76] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:344: "busybox" [dd4c3d93-5bb4-406c-acf5-67a5b5c30c76] Running
start_stop_delete_test.go:196: (dbg) TestStartStop/group/no-preload/serial/DeployApp: integration-test=busybox healthy within 9.003777884s
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context no-preload-424368 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/no-preload/serial/DeployApp (9.24s)
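
The deploy step above creates the pod and then polls for pods matching the label `integration-test=busybox` to become Ready. Outside the test harness, the same wait can be approximated with `kubectl wait` (context name and selector come from the log; the timeout mirrors the 8m0s above):

	package main

	import (
		"os"
		"os/exec"
	)

	func main() {
		// Rough equivalent of the harness's readiness poll, via kubectl.
		cmd := exec.Command("kubectl", "--context", "no-preload-424368",
			"wait", "--for=condition=Ready",
			"pod", "-l", "integration-test=busybox",
			"--timeout=8m0s")
		cmd.Stdout, cmd.Stderr = os.Stdout, os.Stderr
		if err := cmd.Run(); err != nil {
			os.Exit(1)
		}
	}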

TestStartStop/group/no-preload/serial/EnableAddonWhileActive (0.77s)

=== RUN   TestStartStop/group/no-preload/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p no-preload-424368 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context no-preload-424368 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/no-preload/serial/EnableAddonWhileActive (0.77s)

TestStartStop/group/no-preload/serial/Stop (10.70s)

=== RUN   TestStartStop/group/no-preload/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-linux-amd64 stop -p no-preload-424368 --alsologtostderr -v=3
start_stop_delete_test.go:228: (dbg) Done: out/minikube-linux-amd64 stop -p no-preload-424368 --alsologtostderr -v=3: (10.701636574s)
--- PASS: TestStartStop/group/no-preload/serial/Stop (10.70s)

TestStartStop/group/no-preload/serial/EnableAddonAfterStop (0.20s)

=== RUN   TestStartStop/group/no-preload/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-424368 -n no-preload-424368
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-424368 -n no-preload-424368: exit status 7 (103.613049ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p no-preload-424368 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/no-preload/serial/EnableAddonAfterStop (0.20s)

TestStartStop/group/no-preload/serial/SecondStart (262.88s)

=== RUN   TestStartStop/group/no-preload/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-linux-amd64 start -p no-preload-424368 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=docker --kubernetes-version=v1.31.0
start_stop_delete_test.go:256: (dbg) Done: out/minikube-linux-amd64 start -p no-preload-424368 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=docker --kubernetes-version=v1.31.0: (4m22.584772515s)
start_stop_delete_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-424368 -n no-preload-424368
--- PASS: TestStartStop/group/no-preload/serial/SecondStart (262.88s)

TestStartStop/group/old-k8s-version/serial/DeployApp (9.46s)

=== RUN   TestStartStop/group/old-k8s-version/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context old-k8s-version-394892 create -f testdata/busybox.yaml
start_stop_delete_test.go:196: (dbg) TestStartStop/group/old-k8s-version/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:344: "busybox" [7dc6a647-a64a-4661-8f6f-f5f3caf83183] Pending
helpers_test.go:344: "busybox" [7dc6a647-a64a-4661-8f6f-f5f3caf83183] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:344: "busybox" [7dc6a647-a64a-4661-8f6f-f5f3caf83183] Running
start_stop_delete_test.go:196: (dbg) TestStartStop/group/old-k8s-version/serial/DeployApp: integration-test=busybox healthy within 9.003655445s
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context old-k8s-version-394892 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/old-k8s-version/serial/DeployApp (9.46s)

TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive (0.87s)

=== RUN   TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p old-k8s-version-394892 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context old-k8s-version-394892 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive (0.87s)

TestStartStop/group/old-k8s-version/serial/Stop (10.87s)

=== RUN   TestStartStop/group/old-k8s-version/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-linux-amd64 stop -p old-k8s-version-394892 --alsologtostderr -v=3
start_stop_delete_test.go:228: (dbg) Done: out/minikube-linux-amd64 stop -p old-k8s-version-394892 --alsologtostderr -v=3: (10.870232198s)
--- PASS: TestStartStop/group/old-k8s-version/serial/Stop (10.87s)

TestStartStop/group/embed-certs/serial/FirstStart (72.12s)

=== RUN   TestStartStop/group/embed-certs/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-linux-amd64 start -p embed-certs-386910 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=docker --kubernetes-version=v1.31.0
start_stop_delete_test.go:186: (dbg) Done: out/minikube-linux-amd64 start -p embed-certs-386910 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=docker --kubernetes-version=v1.31.0: (1m12.118255451s)
--- PASS: TestStartStop/group/embed-certs/serial/FirstStart (72.12s)

TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop (0.17s)

=== RUN   TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-394892 -n old-k8s-version-394892
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-394892 -n old-k8s-version-394892: exit status 7 (68.8668ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p old-k8s-version-394892 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop (0.17s)

TestStartStop/group/old-k8s-version/serial/SecondStart (126.44s)

=== RUN   TestStartStop/group/old-k8s-version/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-linux-amd64 start -p old-k8s-version-394892 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=docker --kubernetes-version=v1.20.0
E0909 11:33:25.218582   15404 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19584-8635/.minikube/profiles/skaffold-591364/client.crt: no such file or directory" logger="UnhandledError"
start_stop_delete_test.go:256: (dbg) Done: out/minikube-linux-amd64 start -p old-k8s-version-394892 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=docker --kubernetes-version=v1.20.0: (2m6.09099018s)
start_stop_delete_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-394892 -n old-k8s-version-394892
--- PASS: TestStartStop/group/old-k8s-version/serial/SecondStart (126.44s)

TestStartStop/group/embed-certs/serial/DeployApp (9.29s)

=== RUN   TestStartStop/group/embed-certs/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context embed-certs-386910 create -f testdata/busybox.yaml
start_stop_delete_test.go:196: (dbg) TestStartStop/group/embed-certs/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:344: "busybox" [3c99d657-0ef5-412e-9ac5-297aad750544] Pending
helpers_test.go:344: "busybox" [3c99d657-0ef5-412e-9ac5-297aad750544] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:344: "busybox" [3c99d657-0ef5-412e-9ac5-297aad750544] Running
start_stop_delete_test.go:196: (dbg) TestStartStop/group/embed-certs/serial/DeployApp: integration-test=busybox healthy within 9.004039403s
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context embed-certs-386910 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/embed-certs/serial/DeployApp (9.29s)

TestStartStop/group/default-k8s-diff-port/serial/FirstStart (37.49s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-linux-amd64 start -p default-k8s-diff-port-352066 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=docker --kubernetes-version=v1.31.0
start_stop_delete_test.go:186: (dbg) Done: out/minikube-linux-amd64 start -p default-k8s-diff-port-352066 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=docker --kubernetes-version=v1.31.0: (37.49138755s)
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/FirstStart (37.49s)

TestStartStop/group/embed-certs/serial/EnableAddonWhileActive (2.03s)

=== RUN   TestStartStop/group/embed-certs/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p embed-certs-386910 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:205: (dbg) Done: out/minikube-linux-amd64 addons enable metrics-server -p embed-certs-386910 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: (1.928145697s)
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context embed-certs-386910 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/embed-certs/serial/EnableAddonWhileActive (2.03s)

TestStartStop/group/embed-certs/serial/Stop (10.84s)

=== RUN   TestStartStop/group/embed-certs/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-linux-amd64 stop -p embed-certs-386910 --alsologtostderr -v=3
start_stop_delete_test.go:228: (dbg) Done: out/minikube-linux-amd64 stop -p embed-certs-386910 --alsologtostderr -v=3: (10.838619257s)
--- PASS: TestStartStop/group/embed-certs/serial/Stop (10.84s)

TestStartStop/group/embed-certs/serial/EnableAddonAfterStop (0.18s)

=== RUN   TestStartStop/group/embed-certs/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-386910 -n embed-certs-386910
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-386910 -n embed-certs-386910: exit status 7 (82.290252ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p embed-certs-386910 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/embed-certs/serial/EnableAddonAfterStop (0.18s)

TestStartStop/group/embed-certs/serial/SecondStart (264.02s)

=== RUN   TestStartStop/group/embed-certs/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-linux-amd64 start -p embed-certs-386910 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=docker --kubernetes-version=v1.31.0
E0909 11:34:47.140071   15404 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19584-8635/.minikube/profiles/skaffold-591364/client.crt: no such file or directory" logger="UnhandledError"
start_stop_delete_test.go:256: (dbg) Done: out/minikube-linux-amd64 start -p embed-certs-386910 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=docker --kubernetes-version=v1.31.0: (4m23.685339938s)
start_stop_delete_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-386910 -n embed-certs-386910
--- PASS: TestStartStop/group/embed-certs/serial/SecondStart (264.02s)

TestStartStop/group/default-k8s-diff-port/serial/DeployApp (9.26s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context default-k8s-diff-port-352066 create -f testdata/busybox.yaml
start_stop_delete_test.go:196: (dbg) TestStartStop/group/default-k8s-diff-port/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:344: "busybox" [ab87783d-1fb7-4658-abca-bda415d80fb1] Pending
helpers_test.go:344: "busybox" [ab87783d-1fb7-4658-abca-bda415d80fb1] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:344: "busybox" [ab87783d-1fb7-4658-abca-bda415d80fb1] Running
start_stop_delete_test.go:196: (dbg) TestStartStop/group/default-k8s-diff-port/serial/DeployApp: integration-test=busybox healthy within 9.003816681s
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context default-k8s-diff-port-352066 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/DeployApp (9.26s)

TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive (0.82s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p default-k8s-diff-port-352066 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context default-k8s-diff-port-352066 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive (0.82s)

TestStartStop/group/default-k8s-diff-port/serial/Stop (10.69s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-linux-amd64 stop -p default-k8s-diff-port-352066 --alsologtostderr -v=3
start_stop_delete_test.go:228: (dbg) Done: out/minikube-linux-amd64 stop -p default-k8s-diff-port-352066 --alsologtostderr -v=3: (10.693327988s)
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/Stop (10.69s)

TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop (0.22s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-352066 -n default-k8s-diff-port-352066
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-352066 -n default-k8s-diff-port-352066: exit status 7 (109.243831ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p default-k8s-diff-port-352066 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop (0.22s)

TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop (6.01s)

=== RUN   TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop
start_stop_delete_test.go:274: (dbg) TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-cd95d586-574n4" [bc77fd47-0e65-4808-895d-62a2b0a1178b] Running
start_stop_delete_test.go:274: (dbg) TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 6.003923079s
--- PASS: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop (6.01s)

TestStartStop/group/default-k8s-diff-port/serial/SecondStart (306.80s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-linux-amd64 start -p default-k8s-diff-port-352066 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=docker --kubernetes-version=v1.31.0
start_stop_delete_test.go:256: (dbg) Done: out/minikube-linux-amd64 start -p default-k8s-diff-port-352066 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=docker --kubernetes-version=v1.31.0: (5m6.501025798s)
start_stop_delete_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-352066 -n default-k8s-diff-port-352066
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/SecondStart (306.80s)

TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop (5.08s)

=== RUN   TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop
start_stop_delete_test.go:287: (dbg) TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-cd95d586-574n4" [bc77fd47-0e65-4808-895d-62a2b0a1178b] Running
start_stop_delete_test.go:287: (dbg) TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.004053311s
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context old-k8s-version-394892 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop (5.08s)

TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages (0.22s)

=== RUN   TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-linux-amd64 -p old-k8s-version-394892 image list --format=json
start_stop_delete_test.go:304: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
--- PASS: TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages (0.22s)

TestStartStop/group/old-k8s-version/serial/Pause (2.70s)

=== RUN   TestStartStop/group/old-k8s-version/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 pause -p old-k8s-version-394892 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-394892 -n old-k8s-version-394892
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-394892 -n old-k8s-version-394892: exit status 2 (328.533456ms)

-- stdout --
	Paused

-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p old-k8s-version-394892 -n old-k8s-version-394892
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Kubelet}} -p old-k8s-version-394892 -n old-k8s-version-394892: exit status 2 (334.431689ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 unpause -p old-k8s-version-394892 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-394892 -n old-k8s-version-394892
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p old-k8s-version-394892 -n old-k8s-version-394892
--- PASS: TestStartStop/group/old-k8s-version/serial/Pause (2.70s)
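
The pause round-trip above encodes its expectations in exit codes: after `minikube pause`, `status --format={{.APIServer}}` exits 2 and prints "Paused", the kubelet query exits 2 with "Stopped", and after `unpause` both return 0. A sketch of reading one component's status that way (binary path and profile name taken from the log):

	package main

	import (
		"fmt"
		"os/exec"
		"strings"
	)

	// componentStatus runs the status query the test uses and returns the
	// printed state plus the process exit code (2 is expected while paused).
	func componentStatus(format, profile string) (string, int) {
		out, err := exec.Command("out/minikube-linux-amd64", "status",
			"--format", format, "-p", profile, "-n", profile).CombinedOutput()
		code := 0
		if ee, ok := err.(*exec.ExitError); ok {
			code = ee.ExitCode()
		}
		return strings.TrimSpace(string(out)), code
	}

	func main() {
		state, code := componentStatus("{{.APIServer}}", "old-k8s-version-394892")
		fmt.Printf("apiserver=%q exit=%d\n", state, code) // e.g. "Paused" / 2
	}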

TestStartStop/group/newest-cni/serial/FirstStart (28.94s)

=== RUN   TestStartStop/group/newest-cni/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-linux-amd64 start -p newest-cni-900452 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=docker --kubernetes-version=v1.31.0
E0909 11:35:47.897587   15404 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19584-8635/.minikube/profiles/addons-271785/client.crt: no such file or directory" logger="UnhandledError"
E0909 11:35:59.340268   15404 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19584-8635/.minikube/profiles/functional-598740/client.crt: no such file or directory" logger="UnhandledError"
start_stop_delete_test.go:186: (dbg) Done: out/minikube-linux-amd64 start -p newest-cni-900452 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=docker --kubernetes-version=v1.31.0: (28.935620498s)
--- PASS: TestStartStop/group/newest-cni/serial/FirstStart (28.94s)

TestStartStop/group/newest-cni/serial/DeployApp (0.00s)

=== RUN   TestStartStop/group/newest-cni/serial/DeployApp
--- PASS: TestStartStop/group/newest-cni/serial/DeployApp (0.00s)

TestStartStop/group/newest-cni/serial/EnableAddonWhileActive (1.05s)

=== RUN   TestStartStop/group/newest-cni/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p newest-cni-900452 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:205: (dbg) Done: out/minikube-linux-amd64 addons enable metrics-server -p newest-cni-900452 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: (1.047258947s)
start_stop_delete_test.go:211: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/EnableAddonWhileActive (1.05s)

TestStartStop/group/newest-cni/serial/Stop (10.74s)

=== RUN   TestStartStop/group/newest-cni/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-linux-amd64 stop -p newest-cni-900452 --alsologtostderr -v=3
start_stop_delete_test.go:228: (dbg) Done: out/minikube-linux-amd64 stop -p newest-cni-900452 --alsologtostderr -v=3: (10.739080537s)
--- PASS: TestStartStop/group/newest-cni/serial/Stop (10.74s)

TestStartStop/group/newest-cni/serial/EnableAddonAfterStop (0.19s)

=== RUN   TestStartStop/group/newest-cni/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p newest-cni-900452 -n newest-cni-900452
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p newest-cni-900452 -n newest-cni-900452: exit status 7 (75.634901ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p newest-cni-900452 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/newest-cni/serial/EnableAddonAfterStop (0.19s)

TestStartStop/group/newest-cni/serial/SecondStart (13.95s)

=== RUN   TestStartStop/group/newest-cni/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-linux-amd64 start -p newest-cni-900452 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=docker --kubernetes-version=v1.31.0
start_stop_delete_test.go:256: (dbg) Done: out/minikube-linux-amd64 start -p newest-cni-900452 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=docker --kubernetes-version=v1.31.0: (13.450597383s)
start_stop_delete_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p newest-cni-900452 -n newest-cni-900452
--- PASS: TestStartStop/group/newest-cni/serial/SecondStart (13.95s)

TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop (0.00s)

=== RUN   TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop
start_stop_delete_test.go:273: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop (0.00s)

TestStartStop/group/newest-cni/serial/AddonExistsAfterStop (0.00s)

=== RUN   TestStartStop/group/newest-cni/serial/AddonExistsAfterStop
start_stop_delete_test.go:284: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/AddonExistsAfterStop (0.00s)

TestStartStop/group/newest-cni/serial/VerifyKubernetesImages (0.21s)

=== RUN   TestStartStop/group/newest-cni/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-linux-amd64 -p newest-cni-900452 image list --format=json
--- PASS: TestStartStop/group/newest-cni/serial/VerifyKubernetesImages (0.21s)

TestStartStop/group/newest-cni/serial/Pause (2.39s)

=== RUN   TestStartStop/group/newest-cni/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 pause -p newest-cni-900452 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p newest-cni-900452 -n newest-cni-900452
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p newest-cni-900452 -n newest-cni-900452: exit status 2 (288.336257ms)

-- stdout --
	Paused

-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p newest-cni-900452 -n newest-cni-900452
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Kubelet}} -p newest-cni-900452 -n newest-cni-900452: exit status 2 (278.232297ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 unpause -p newest-cni-900452 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p newest-cni-900452 -n newest-cni-900452
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p newest-cni-900452 -n newest-cni-900452
--- PASS: TestStartStop/group/newest-cni/serial/Pause (2.39s)

TestNetworkPlugins/group/auto/Start (67.19s)

=== RUN   TestNetworkPlugins/group/auto/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p auto-297611 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --driver=docker  --container-runtime=docker
E0909 11:37:03.279965   15404 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19584-8635/.minikube/profiles/skaffold-591364/client.crt: no such file or directory" logger="UnhandledError"
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p auto-297611 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --driver=docker  --container-runtime=docker: (1m7.18716289s)
--- PASS: TestNetworkPlugins/group/auto/Start (67.19s)

TestStartStop/group/no-preload/serial/UserAppExistsAfterStop (6.01s)

=== RUN   TestStartStop/group/no-preload/serial/UserAppExistsAfterStop
start_stop_delete_test.go:274: (dbg) TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-695b96c756-wmltp" [0d1c0ba1-bf29-487d-ac00-8c3cf3b0a940] Running
start_stop_delete_test.go:274: (dbg) TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 6.003638275s
--- PASS: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop (6.01s)

TestStartStop/group/no-preload/serial/AddonExistsAfterStop (5.07s)

=== RUN   TestStartStop/group/no-preload/serial/AddonExistsAfterStop
start_stop_delete_test.go:287: (dbg) TestStartStop/group/no-preload/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-695b96c756-wmltp" [0d1c0ba1-bf29-487d-ac00-8c3cf3b0a940] Running
start_stop_delete_test.go:287: (dbg) TestStartStop/group/no-preload/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.00352686s
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context no-preload-424368 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/no-preload/serial/AddonExistsAfterStop (5.07s)

TestStartStop/group/no-preload/serial/VerifyKubernetesImages (0.20s)

=== RUN   TestStartStop/group/no-preload/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-linux-amd64 -p no-preload-424368 image list --format=json
start_stop_delete_test.go:304: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
--- PASS: TestStartStop/group/no-preload/serial/VerifyKubernetesImages (0.20s)

TestStartStop/group/no-preload/serial/Pause (2.28s)

=== RUN   TestStartStop/group/no-preload/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 pause -p no-preload-424368 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p no-preload-424368 -n no-preload-424368
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p no-preload-424368 -n no-preload-424368: exit status 2 (286.48054ms)

-- stdout --
	Paused

-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p no-preload-424368 -n no-preload-424368
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Kubelet}} -p no-preload-424368 -n no-preload-424368: exit status 2 (280.175342ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 unpause -p no-preload-424368 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p no-preload-424368 -n no-preload-424368
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p no-preload-424368 -n no-preload-424368
--- PASS: TestStartStop/group/no-preload/serial/Pause (2.28s)

TestNetworkPlugins/group/kindnet/Start (56.07s)

=== RUN   TestNetworkPlugins/group/kindnet/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p kindnet-297611 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=kindnet --driver=docker  --container-runtime=docker
E0909 11:37:25.072246   15404 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19584-8635/.minikube/profiles/no-preload-424368/client.crt: no such file or directory" logger="UnhandledError"
E0909 11:37:25.078635   15404 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19584-8635/.minikube/profiles/no-preload-424368/client.crt: no such file or directory" logger="UnhandledError"
E0909 11:37:25.090886   15404 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19584-8635/.minikube/profiles/no-preload-424368/client.crt: no such file or directory" logger="UnhandledError"
E0909 11:37:25.112977   15404 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19584-8635/.minikube/profiles/no-preload-424368/client.crt: no such file or directory" logger="UnhandledError"
E0909 11:37:25.154360   15404 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19584-8635/.minikube/profiles/no-preload-424368/client.crt: no such file or directory" logger="UnhandledError"
E0909 11:37:25.235823   15404 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19584-8635/.minikube/profiles/no-preload-424368/client.crt: no such file or directory" logger="UnhandledError"
E0909 11:37:25.397072   15404 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19584-8635/.minikube/profiles/no-preload-424368/client.crt: no such file or directory" logger="UnhandledError"
E0909 11:37:25.719161   15404 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19584-8635/.minikube/profiles/no-preload-424368/client.crt: no such file or directory" logger="UnhandledError"
E0909 11:37:26.361473   15404 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19584-8635/.minikube/profiles/no-preload-424368/client.crt: no such file or directory" logger="UnhandledError"
E0909 11:37:27.642762   15404 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19584-8635/.minikube/profiles/no-preload-424368/client.crt: no such file or directory" logger="UnhandledError"
E0909 11:37:30.204771   15404 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19584-8635/.minikube/profiles/no-preload-424368/client.crt: no such file or directory" logger="UnhandledError"
E0909 11:37:30.981568   15404 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19584-8635/.minikube/profiles/skaffold-591364/client.crt: no such file or directory" logger="UnhandledError"
E0909 11:37:35.326405   15404 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19584-8635/.minikube/profiles/no-preload-424368/client.crt: no such file or directory" logger="UnhandledError"
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p kindnet-297611 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=kindnet --driver=docker  --container-runtime=docker: (56.065861475s)
--- PASS: TestNetworkPlugins/group/kindnet/Start (56.07s)

TestNetworkPlugins/group/auto/KubeletFlags (0.28s)

=== RUN   TestNetworkPlugins/group/auto/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p auto-297611 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/auto/KubeletFlags (0.28s)

TestNetworkPlugins/group/auto/NetCatPod (10.21s)

=== RUN   TestNetworkPlugins/group/auto/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context auto-297611 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/auto/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-6fc964789b-k5k7x" [9eb12b52-e901-4d3e-b7f0-976e33147b79] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
E0909 11:37:44.831970   15404 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19584-8635/.minikube/profiles/addons-271785/client.crt: no such file or directory" logger="UnhandledError"
E0909 11:37:45.568287   15404 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19584-8635/.minikube/profiles/no-preload-424368/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:344: "netcat-6fc964789b-k5k7x" [9eb12b52-e901-4d3e-b7f0-976e33147b79] Running
E0909 11:37:50.907229   15404 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19584-8635/.minikube/profiles/old-k8s-version-394892/client.crt: no such file or directory" logger="UnhandledError"
E0909 11:37:50.913579   15404 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19584-8635/.minikube/profiles/old-k8s-version-394892/client.crt: no such file or directory" logger="UnhandledError"
E0909 11:37:50.924885   15404 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19584-8635/.minikube/profiles/old-k8s-version-394892/client.crt: no such file or directory" logger="UnhandledError"
E0909 11:37:50.946209   15404 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19584-8635/.minikube/profiles/old-k8s-version-394892/client.crt: no such file or directory" logger="UnhandledError"
E0909 11:37:50.987598   15404 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19584-8635/.minikube/profiles/old-k8s-version-394892/client.crt: no such file or directory" logger="UnhandledError"
E0909 11:37:51.069052   15404 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19584-8635/.minikube/profiles/old-k8s-version-394892/client.crt: no such file or directory" logger="UnhandledError"
E0909 11:37:51.231012   15404 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19584-8635/.minikube/profiles/old-k8s-version-394892/client.crt: no such file or directory" logger="UnhandledError"
E0909 11:37:51.552252   15404 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19584-8635/.minikube/profiles/old-k8s-version-394892/client.crt: no such file or directory" logger="UnhandledError"
E0909 11:37:52.194178   15404 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19584-8635/.minikube/profiles/old-k8s-version-394892/client.crt: no such file or directory" logger="UnhandledError"
net_test.go:163: (dbg) TestNetworkPlugins/group/auto/NetCatPod: app=netcat healthy within 10.003250746s
--- PASS: TestNetworkPlugins/group/auto/NetCatPod (10.21s)

                                                
                                    
TestNetworkPlugins/group/auto/DNS (0.15s)

=== RUN   TestNetworkPlugins/group/auto/DNS
net_test.go:175: (dbg) Run:  kubectl --context auto-297611 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/auto/DNS (0.15s)
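The DNS subtest only checks that in-cluster name resolution works from inside the netcat pod. The same probe against the fully qualified service name would look like this (a sketch, assuming the BusyBox nslookup shipped in the dnsutils container above):

	kubectl --context auto-297611 exec deployment/netcat -- nslookup kubernetes.default.svc.cluster.local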

                                                
                                    
TestNetworkPlugins/group/auto/Localhost (0.12s)

=== RUN   TestNetworkPlugins/group/auto/Localhost
net_test.go:194: (dbg) Run:  kubectl --context auto-297611 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/auto/Localhost (0.12s)

                                                
                                    
TestNetworkPlugins/group/auto/HairPin (0.11s)

=== RUN   TestNetworkPlugins/group/auto/HairPin
net_test.go:264: (dbg) Run:  kubectl --context auto-297611 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/auto/HairPin (0.11s)
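HairPin verifies hairpin NAT: the pod must be able to reach itself through the service that fronts it. The nc flags used here are standard netcat options: -w 5 sets the connect timeout, -i 5 the interval between probes, and -z does a zero-I/O port scan, so the command exits 0 only if the "netcat" service port accepts a connection:

	kubectl --context auto-297611 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"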

                                                
                                    
TestNetworkPlugins/group/calico/Start (35.74s)

=== RUN   TestNetworkPlugins/group/calico/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p calico-297611 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=calico --driver=docker  --container-runtime=docker
E0909 11:38:11.402643   15404 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19584-8635/.minikube/profiles/old-k8s-version-394892/client.crt: no such file or directory" logger="UnhandledError"
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p calico-297611 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=calico --driver=docker  --container-runtime=docker: (35.740683099s)
--- PASS: TestNetworkPlugins/group/calico/Start (35.74s)
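Start only confirms that the cluster comes up with the requested CNI. A hedged follow-up check that the Calico node agent actually rolled out (the daemonset name is inferred from the calico-node pods seen later in this report):

	kubectl --context calico-297611 -n kube-system rollout status daemonset/calico-node --timeout=10m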

                                                
                                    
TestNetworkPlugins/group/kindnet/ControllerPod (6.01s)

=== RUN   TestNetworkPlugins/group/kindnet/ControllerPod
net_test.go:120: (dbg) TestNetworkPlugins/group/kindnet/ControllerPod: waiting 10m0s for pods matching "app=kindnet" in namespace "kube-system" ...
helpers_test.go:344: "kindnet-q9llh" [a9cf89e1-fd77-4351-8a0a-09743e7dd02d] Running
net_test.go:120: (dbg) TestNetworkPlugins/group/kindnet/ControllerPod: app=kindnet healthy within 6.004200669s
--- PASS: TestNetworkPlugins/group/kindnet/ControllerPod (6.01s)

                                                
                                    
TestNetworkPlugins/group/kindnet/KubeletFlags (0.27s)

=== RUN   TestNetworkPlugins/group/kindnet/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p kindnet-297611 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/kindnet/KubeletFlags (0.27s)

                                                
                                    
TestNetworkPlugins/group/kindnet/NetCatPod (10.18s)

=== RUN   TestNetworkPlugins/group/kindnet/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context kindnet-297611 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/kindnet/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-6fc964789b-jm99b" [dd8d7ef8-d9c6-4a50-81fd-8008591a18c9] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-6fc964789b-jm99b" [dd8d7ef8-d9c6-4a50-81fd-8008591a18c9] Running
E0909 11:38:31.884954   15404 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19584-8635/.minikube/profiles/old-k8s-version-394892/client.crt: no such file or directory" logger="UnhandledError"
net_test.go:163: (dbg) TestNetworkPlugins/group/kindnet/NetCatPod: app=netcat healthy within 10.003971672s
--- PASS: TestNetworkPlugins/group/kindnet/NetCatPod (10.18s)

                                                
                                    
TestNetworkPlugins/group/kindnet/DNS (0.14s)

=== RUN   TestNetworkPlugins/group/kindnet/DNS
net_test.go:175: (dbg) Run:  kubectl --context kindnet-297611 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/kindnet/DNS (0.14s)

                                                
                                    
TestNetworkPlugins/group/kindnet/Localhost (0.14s)

=== RUN   TestNetworkPlugins/group/kindnet/Localhost
net_test.go:194: (dbg) Run:  kubectl --context kindnet-297611 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/kindnet/Localhost (0.14s)

                                                
                                    
TestNetworkPlugins/group/kindnet/HairPin (0.11s)

=== RUN   TestNetworkPlugins/group/kindnet/HairPin
net_test.go:264: (dbg) Run:  kubectl --context kindnet-297611 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/kindnet/HairPin (0.11s)

                                                
                                    
TestNetworkPlugins/group/calico/ControllerPod (19.01s)

=== RUN   TestNetworkPlugins/group/calico/ControllerPod
net_test.go:120: (dbg) TestNetworkPlugins/group/calico/ControllerPod: waiting 10m0s for pods matching "k8s-app=calico-node" in namespace "kube-system" ...
helpers_test.go:344: "calico-node-vgh46" [524bc280-8dc5-42cd-842b-73b3f4e60467] Pending / Initialized:ContainersNotInitialized (containers with incomplete status: [install-cni mount-bpffs]) / Ready:ContainersNotReady (containers with unready status: [calico-node]) / ContainersReady:ContainersNotReady (containers with unready status: [calico-node])
helpers_test.go:344: "calico-node-vgh46" [524bc280-8dc5-42cd-842b-73b3f4e60467] Pending / Initialized:ContainersNotInitialized (containers with incomplete status: [mount-bpffs]) / Ready:ContainersNotReady (containers with unready status: [calico-node]) / ContainersReady:ContainersNotReady (containers with unready status: [calico-node])
helpers_test.go:344: "calico-node-vgh46" [524bc280-8dc5-42cd-842b-73b3f4e60467] Pending / Ready:ContainersNotReady (containers with unready status: [calico-node]) / ContainersReady:ContainersNotReady (containers with unready status: [calico-node])
helpers_test.go:344: "calico-node-vgh46" [524bc280-8dc5-42cd-842b-73b3f4e60467] Running / Ready:ContainersNotReady (containers with unready status: [calico-node]) / ContainersReady:ContainersNotReady (containers with unready status: [calico-node])
E0909 11:39:02.406333   15404 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19584-8635/.minikube/profiles/functional-598740/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:344: "calico-node-vgh46" [524bc280-8dc5-42cd-842b-73b3f4e60467] Running
net_test.go:120: (dbg) TestNetworkPlugins/group/calico/ControllerPod: k8s-app=calico-node healthy within 19.005376583s
--- PASS: TestNetworkPlugins/group/calico/ControllerPod (19.01s)
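The status transitions above track Calico's init containers (install-cni, then mount-bpffs) completing before the main calico-node container turns Ready. A hedged way to watch those init phases directly (label from the log; the jsonpath fields are standard pod status):

	kubectl --context calico-297611 -n kube-system get pods -l k8s-app=calico-node \
	  -o jsonpath='{range .items[*]}{.metadata.name}{": init ready="}{.status.initContainerStatuses[*].ready}{"\n"}{end}'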

                                                
                                    
TestNetworkPlugins/group/custom-flannel/Start (43.65s)

=== RUN   TestNetworkPlugins/group/custom-flannel/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p custom-flannel-297611 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=testdata/kube-flannel.yaml --driver=docker  --container-runtime=docker
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p custom-flannel-297611 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=testdata/kube-flannel.yaml --driver=docker  --container-runtime=docker: (43.647364089s)
--- PASS: TestNetworkPlugins/group/custom-flannel/Start (43.65s)
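Unlike the named plugins, --cni is given a manifest path here, so minikube applies testdata/kube-flannel.yaml instead of a bundled CNI. A hedged sanity check that the custom manifest landed (namespace and label taken from the flannel ControllerPod test later in this report):

	kubectl --context custom-flannel-297611 -n kube-flannel get daemonset,pods -l app=flannel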

                                                
                                    
TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop (6.00s)

=== RUN   TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop
start_stop_delete_test.go:274: (dbg) TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-695b96c756-dkvpr" [b88c4974-aa1d-4e02-bb87-c416161d2c99] Running
start_stop_delete_test.go:274: (dbg) TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 6.003613543s
--- PASS: TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop (6.00s)

                                                
                                    
TestNetworkPlugins/group/calico/KubeletFlags (0.28s)

=== RUN   TestNetworkPlugins/group/calico/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p calico-297611 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/calico/KubeletFlags (0.28s)

                                                
                                    
TestNetworkPlugins/group/calico/NetCatPod (10.18s)

=== RUN   TestNetworkPlugins/group/calico/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context calico-297611 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/calico/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-6fc964789b-2pc4k" [acd7e047-0fca-4b95-8203-0ec47ab12ac0] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-6fc964789b-2pc4k" [acd7e047-0fca-4b95-8203-0ec47ab12ac0] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/calico/NetCatPod: app=netcat healthy within 10.004637377s
--- PASS: TestNetworkPlugins/group/calico/NetCatPod (10.18s)

                                                
                                    
TestStartStop/group/embed-certs/serial/AddonExistsAfterStop (5.09s)

=== RUN   TestStartStop/group/embed-certs/serial/AddonExistsAfterStop
start_stop_delete_test.go:287: (dbg) TestStartStop/group/embed-certs/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-695b96c756-dkvpr" [b88c4974-aa1d-4e02-bb87-c416161d2c99] Running
start_stop_delete_test.go:287: (dbg) TestStartStop/group/embed-certs/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.00343378s
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context embed-certs-386910 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/embed-certs/serial/AddonExistsAfterStop (5.09s)

                                                
                                    
TestStartStop/group/embed-certs/serial/VerifyKubernetesImages (0.21s)

=== RUN   TestStartStop/group/embed-certs/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-linux-amd64 -p embed-certs-386910 image list --format=json
start_stop_delete_test.go:304: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
--- PASS: TestStartStop/group/embed-certs/serial/VerifyKubernetesImages (0.21s)

                                                
                                    
TestStartStop/group/embed-certs/serial/Pause (2.49s)

=== RUN   TestStartStop/group/embed-certs/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 pause -p embed-certs-386910 --alsologtostderr -v=1
E0909 11:39:12.847207   15404 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19584-8635/.minikube/profiles/old-k8s-version-394892/client.crt: no such file or directory" logger="UnhandledError"
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p embed-certs-386910 -n embed-certs-386910
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p embed-certs-386910 -n embed-certs-386910: exit status 2 (327.352647ms)

                                                
                                                
-- stdout --
	Paused

                                                
                                                
-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p embed-certs-386910 -n embed-certs-386910
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Kubelet}} -p embed-certs-386910 -n embed-certs-386910: exit status 2 (299.503093ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 unpause -p embed-certs-386910 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p embed-certs-386910 -n embed-certs-386910
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p embed-certs-386910 -n embed-certs-386910
--- PASS: TestStartStop/group/embed-certs/serial/Pause (2.49s)
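The Pause subtest treats exit status 2 from `status` as acceptable while components are paused or stopped, which is why the two Non-zero exits above still end in a PASS. A sketch of the same round-trip by hand (flags exactly as used in the log):

	out/minikube-linux-amd64 pause -p embed-certs-386910
	out/minikube-linux-amd64 status --format={{.APIServer}} -p embed-certs-386910 || true   # exits 2 while paused
	out/minikube-linux-amd64 unpause -p embed-certs-386910
	out/minikube-linux-amd64 status --format={{.APIServer}} -p embed-certs-386910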

                                                
                                    
TestNetworkPlugins/group/calico/DNS (0.15s)

=== RUN   TestNetworkPlugins/group/calico/DNS
net_test.go:175: (dbg) Run:  kubectl --context calico-297611 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/calico/DNS (0.15s)

                                                
                                    
TestNetworkPlugins/group/calico/Localhost (0.14s)

=== RUN   TestNetworkPlugins/group/calico/Localhost
net_test.go:194: (dbg) Run:  kubectl --context calico-297611 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/calico/Localhost (0.14s)

                                                
                                    
TestNetworkPlugins/group/calico/HairPin (0.13s)

=== RUN   TestNetworkPlugins/group/calico/HairPin
net_test.go:264: (dbg) Run:  kubectl --context calico-297611 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/calico/HairPin (0.13s)

                                                
                                    
TestNetworkPlugins/group/false/Start (71.38s)

=== RUN   TestNetworkPlugins/group/false/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p false-297611 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=false --driver=docker  --container-runtime=docker
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p false-297611 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=false --driver=docker  --container-runtime=docker: (1m11.380943113s)
--- PASS: TestNetworkPlugins/group/false/Start (71.38s)

                                                
                                    
TestNetworkPlugins/group/enable-default-cni/Start (64.85s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p enable-default-cni-297611 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --enable-default-cni=true --driver=docker  --container-runtime=docker
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p enable-default-cni-297611 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --enable-default-cni=true --driver=docker  --container-runtime=docker: (1m4.845808768s)
--- PASS: TestNetworkPlugins/group/enable-default-cni/Start (64.85s)
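--enable-default-cni selects minikube's built-in basic bridge CNI config rather than a full plugin. A hedged way to inspect what was written on the node (/etc/cni/net.d is the standard CNI config directory; exact file names vary by minikube version):

	out/minikube-linux-amd64 ssh -p enable-default-cni-297611 "ls /etc/cni/net.d/"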

                                                
                                    
TestNetworkPlugins/group/custom-flannel/KubeletFlags (0.33s)

=== RUN   TestNetworkPlugins/group/custom-flannel/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p custom-flannel-297611 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/custom-flannel/KubeletFlags (0.33s)

                                                
                                    
TestNetworkPlugins/group/custom-flannel/NetCatPod (11.69s)

=== RUN   TestNetworkPlugins/group/custom-flannel/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context custom-flannel-297611 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/custom-flannel/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-6fc964789b-pwrt6" [42fa6a4c-f98b-49d7-852f-1b3319b47b1e] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-6fc964789b-pwrt6" [42fa6a4c-f98b-49d7-852f-1b3319b47b1e] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/custom-flannel/NetCatPod: app=netcat healthy within 11.004073222s
--- PASS: TestNetworkPlugins/group/custom-flannel/NetCatPod (11.69s)

                                                
                                    
TestNetworkPlugins/group/custom-flannel/DNS (0.13s)

=== RUN   TestNetworkPlugins/group/custom-flannel/DNS
net_test.go:175: (dbg) Run:  kubectl --context custom-flannel-297611 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/custom-flannel/DNS (0.13s)

                                                
                                    
TestNetworkPlugins/group/custom-flannel/Localhost (0.14s)

=== RUN   TestNetworkPlugins/group/custom-flannel/Localhost
net_test.go:194: (dbg) Run:  kubectl --context custom-flannel-297611 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/custom-flannel/Localhost (0.14s)

                                                
                                    
TestNetworkPlugins/group/custom-flannel/HairPin (0.13s)

=== RUN   TestNetworkPlugins/group/custom-flannel/HairPin
net_test.go:264: (dbg) Run:  kubectl --context custom-flannel-297611 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/custom-flannel/HairPin (0.13s)

                                                
                                    
TestNetworkPlugins/group/flannel/Start (43.74s)

=== RUN   TestNetworkPlugins/group/flannel/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p flannel-297611 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=flannel --driver=docker  --container-runtime=docker
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p flannel-297611 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=flannel --driver=docker  --container-runtime=docker: (43.742941436s)
--- PASS: TestNetworkPlugins/group/flannel/Start (43.74s)

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop (6.01s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop
start_stop_delete_test.go:274: (dbg) TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-695b96c756-nqqn8" [db0395c9-ef9c-44a6-8753-49ba9027d197] Running
start_stop_delete_test.go:274: (dbg) TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 6.004152069s
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop (6.01s)

                                                
                                    
TestNetworkPlugins/group/false/KubeletFlags (0.28s)

=== RUN   TestNetworkPlugins/group/false/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p false-297611 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/false/KubeletFlags (0.28s)

                                                
                                    
TestNetworkPlugins/group/false/NetCatPod (9.19s)

=== RUN   TestNetworkPlugins/group/false/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context false-297611 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/false/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-6fc964789b-hqbqs" [d0868c69-1cfb-48de-a642-4c92fdba5bd7] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-6fc964789b-hqbqs" [d0868c69-1cfb-48de-a642-4c92fdba5bd7] Running
E0909 11:40:34.769322   15404 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19584-8635/.minikube/profiles/old-k8s-version-394892/client.crt: no such file or directory" logger="UnhandledError"
net_test.go:163: (dbg) TestNetworkPlugins/group/false/NetCatPod: app=netcat healthy within 9.00410873s
--- PASS: TestNetworkPlugins/group/false/NetCatPod (9.19s)

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop (5.08s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop
start_stop_delete_test.go:287: (dbg) TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-695b96c756-nqqn8" [db0395c9-ef9c-44a6-8753-49ba9027d197] Running
start_stop_delete_test.go:287: (dbg) TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.003646367s
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context default-k8s-diff-port-352066 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop (5.08s)

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages (0.20s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-linux-amd64 -p default-k8s-diff-port-352066 image list --format=json
start_stop_delete_test.go:304: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages (0.20s)

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/Pause (2.41s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 pause -p default-k8s-diff-port-352066 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p default-k8s-diff-port-352066 -n default-k8s-diff-port-352066
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p default-k8s-diff-port-352066 -n default-k8s-diff-port-352066: exit status 2 (282.24284ms)

                                                
                                                
-- stdout --
	Paused

                                                
                                                
-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p default-k8s-diff-port-352066 -n default-k8s-diff-port-352066
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Kubelet}} -p default-k8s-diff-port-352066 -n default-k8s-diff-port-352066: exit status 2 (288.818611ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 unpause -p default-k8s-diff-port-352066 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p default-k8s-diff-port-352066 -n default-k8s-diff-port-352066
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p default-k8s-diff-port-352066 -n default-k8s-diff-port-352066
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/Pause (2.41s)

                                                
                                    
TestNetworkPlugins/group/false/DNS (0.23s)

=== RUN   TestNetworkPlugins/group/false/DNS
net_test.go:175: (dbg) Run:  kubectl --context false-297611 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/false/DNS (0.23s)

                                                
                                    
TestNetworkPlugins/group/false/Localhost (0.20s)

=== RUN   TestNetworkPlugins/group/false/Localhost
net_test.go:194: (dbg) Run:  kubectl --context false-297611 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/false/Localhost (0.20s)

                                                
                                    
TestNetworkPlugins/group/false/HairPin (0.14s)

=== RUN   TestNetworkPlugins/group/false/HairPin
net_test.go:264: (dbg) Run:  kubectl --context false-297611 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/false/HairPin (0.14s)

                                                
                                    
TestNetworkPlugins/group/bridge/Start (65.51s)

=== RUN   TestNetworkPlugins/group/bridge/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p bridge-297611 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=bridge --driver=docker  --container-runtime=docker
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p bridge-297611 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=bridge --driver=docker  --container-runtime=docker: (1m5.508896088s)
--- PASS: TestNetworkPlugins/group/bridge/Start (65.51s)

                                                
                                    
TestNetworkPlugins/group/enable-default-cni/KubeletFlags (0.27s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p enable-default-cni-297611 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/enable-default-cni/KubeletFlags (0.27s)

                                                
                                    
TestNetworkPlugins/group/enable-default-cni/NetCatPod (12.19s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context enable-default-cni-297611 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/enable-default-cni/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-6fc964789b-gslkv" [befa2412-6e0a-4b80-bcec-74ec2e7d9588] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-6fc964789b-gslkv" [befa2412-6e0a-4b80-bcec-74ec2e7d9588] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/enable-default-cni/NetCatPod: app=netcat healthy within 12.004048906s
--- PASS: TestNetworkPlugins/group/enable-default-cni/NetCatPod (12.19s)

                                                
                                    
TestNetworkPlugins/group/enable-default-cni/DNS (0.15s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/DNS
net_test.go:175: (dbg) Run:  kubectl --context enable-default-cni-297611 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/enable-default-cni/DNS (0.15s)

                                                
                                    
TestNetworkPlugins/group/enable-default-cni/Localhost (0.14s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/Localhost
net_test.go:194: (dbg) Run:  kubectl --context enable-default-cni-297611 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/enable-default-cni/Localhost (0.14s)

                                                
                                    
TestNetworkPlugins/group/enable-default-cni/HairPin (0.12s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/HairPin
net_test.go:264: (dbg) Run:  kubectl --context enable-default-cni-297611 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/enable-default-cni/HairPin (0.12s)

                                                
                                    
TestNetworkPlugins/group/flannel/ControllerPod (6.01s)

=== RUN   TestNetworkPlugins/group/flannel/ControllerPod
net_test.go:120: (dbg) TestNetworkPlugins/group/flannel/ControllerPod: waiting 10m0s for pods matching "app=flannel" in namespace "kube-flannel" ...
helpers_test.go:344: "kube-flannel-ds-6xcn8" [f9c5b8ac-9fdc-4add-8089-68284883ca84] Running
net_test.go:120: (dbg) TestNetworkPlugins/group/flannel/ControllerPod: app=flannel healthy within 6.004307271s
--- PASS: TestNetworkPlugins/group/flannel/ControllerPod (6.01s)

                                                
                                    
TestNetworkPlugins/group/kubenet/Start (40.35s)

=== RUN   TestNetworkPlugins/group/kubenet/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p kubenet-297611 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --network-plugin=kubenet --driver=docker  --container-runtime=docker
E0909 11:40:59.341014   15404 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19584-8635/.minikube/profiles/functional-598740/client.crt: no such file or directory" logger="UnhandledError"
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p kubenet-297611 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --network-plugin=kubenet --driver=docker  --container-runtime=docker: (40.345878809s)
--- PASS: TestNetworkPlugins/group/kubenet/Start (40.35s)
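kubenet is the legacy kubelet-level network plugin, so it is selected with --network-plugin=kubenet rather than --cni. The KubeletFlags subtest below confirms the wiring by dumping the kubelet command line; a hedged, narrowed variant of the same check:

	out/minikube-linux-amd64 ssh -p kubenet-297611 "pgrep -a kubelet | tr ' ' '\n' | grep -i -e network -e cni"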

                                                
                                    
TestNetworkPlugins/group/flannel/KubeletFlags (0.33s)

=== RUN   TestNetworkPlugins/group/flannel/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p flannel-297611 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/flannel/KubeletFlags (0.33s)

                                                
                                    
TestNetworkPlugins/group/flannel/NetCatPod (12.22s)

=== RUN   TestNetworkPlugins/group/flannel/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context flannel-297611 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/flannel/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-6fc964789b-tqjq4" [3076836b-bec8-4e8b-8498-c506ed383d07] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-6fc964789b-tqjq4" [3076836b-bec8-4e8b-8498-c506ed383d07] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/flannel/NetCatPod: app=netcat healthy within 12.003138543s
--- PASS: TestNetworkPlugins/group/flannel/NetCatPod (12.22s)

                                                
                                    
TestNetworkPlugins/group/flannel/DNS (0.14s)

=== RUN   TestNetworkPlugins/group/flannel/DNS
net_test.go:175: (dbg) Run:  kubectl --context flannel-297611 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/flannel/DNS (0.14s)

                                                
                                    
TestNetworkPlugins/group/flannel/Localhost (0.11s)

=== RUN   TestNetworkPlugins/group/flannel/Localhost
net_test.go:194: (dbg) Run:  kubectl --context flannel-297611 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/flannel/Localhost (0.11s)

                                                
                                    
TestNetworkPlugins/group/flannel/HairPin (0.13s)

=== RUN   TestNetworkPlugins/group/flannel/HairPin
net_test.go:264: (dbg) Run:  kubectl --context flannel-297611 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/flannel/HairPin (0.13s)

                                                
                                    
TestNetworkPlugins/group/kubenet/KubeletFlags (0.25s)

=== RUN   TestNetworkPlugins/group/kubenet/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p kubenet-297611 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/kubenet/KubeletFlags (0.25s)

                                                
                                    
TestNetworkPlugins/group/kubenet/NetCatPod (9.16s)

=== RUN   TestNetworkPlugins/group/kubenet/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context kubenet-297611 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/kubenet/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-6fc964789b-gm2x5" [a72af73b-520a-45da-9a70-0526538057eb] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-6fc964789b-gm2x5" [a72af73b-520a-45da-9a70-0526538057eb] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/kubenet/NetCatPod: app=netcat healthy within 9.004013441s
--- PASS: TestNetworkPlugins/group/kubenet/NetCatPod (9.16s)

                                                
                                    
TestNetworkPlugins/group/bridge/KubeletFlags (0.25s)

=== RUN   TestNetworkPlugins/group/bridge/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p bridge-297611 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/bridge/KubeletFlags (0.25s)

                                                
                                    
TestNetworkPlugins/group/bridge/NetCatPod (9.19s)

=== RUN   TestNetworkPlugins/group/bridge/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context bridge-297611 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/bridge/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-6fc964789b-2422m" [99d9f021-73ef-48ab-9fdb-61c7f9fa1c7b] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-6fc964789b-2422m" [99d9f021-73ef-48ab-9fdb-61c7f9fa1c7b] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/bridge/NetCatPod: app=netcat healthy within 9.004115619s
--- PASS: TestNetworkPlugins/group/bridge/NetCatPod (9.19s)

                                                
                                    
TestNetworkPlugins/group/kubenet/DNS (0.14s)

=== RUN   TestNetworkPlugins/group/kubenet/DNS
net_test.go:175: (dbg) Run:  kubectl --context kubenet-297611 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/kubenet/DNS (0.14s)

                                                
                                    
TestNetworkPlugins/group/kubenet/Localhost (0.13s)

=== RUN   TestNetworkPlugins/group/kubenet/Localhost
net_test.go:194: (dbg) Run:  kubectl --context kubenet-297611 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/kubenet/Localhost (0.13s)

                                                
                                    
TestNetworkPlugins/group/kubenet/HairPin (0.11s)

=== RUN   TestNetworkPlugins/group/kubenet/HairPin
net_test.go:264: (dbg) Run:  kubectl --context kubenet-297611 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/kubenet/HairPin (0.11s)

                                                
                                    
TestNetworkPlugins/group/bridge/DNS (0.14s)

=== RUN   TestNetworkPlugins/group/bridge/DNS
net_test.go:175: (dbg) Run:  kubectl --context bridge-297611 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/bridge/DNS (0.14s)

                                                
                                    
TestNetworkPlugins/group/bridge/Localhost (0.11s)

=== RUN   TestNetworkPlugins/group/bridge/Localhost
net_test.go:194: (dbg) Run:  kubectl --context bridge-297611 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/bridge/Localhost (0.11s)

                                                
                                    
TestNetworkPlugins/group/bridge/HairPin (0.11s)

=== RUN   TestNetworkPlugins/group/bridge/HairPin
net_test.go:264: (dbg) Run:  kubectl --context bridge-297611 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/bridge/HairPin (0.11s)
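That completes the network-plugin matrix: each group runs the same Start / KubeletFlags / NetCatPod / DNS / Localhost / HairPin ladder against a different CNI. A hedged sketch of replaying a single group from a minikube checkout (the harness flag names are assumptions and vary by version):

	go test -v -timeout 60m ./test/integration -run 'TestNetworkPlugins/group/bridge' \
	  -args --minikube-start-args="--driver=docker --container-runtime=docker"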

                                                
                                    

Test skip (20/343)

TestDownloadOnly/v1.20.0/cached-images (0s)

=== RUN   TestDownloadOnly/v1.20.0/cached-images
aaa_download_only_test.go:129: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.20.0/cached-images (0.00s)

                                                
                                    
TestDownloadOnly/v1.20.0/binaries (0s)

=== RUN   TestDownloadOnly/v1.20.0/binaries
aaa_download_only_test.go:151: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.20.0/binaries (0.00s)

                                                
                                    
TestDownloadOnly/v1.20.0/kubectl (0s)

=== RUN   TestDownloadOnly/v1.20.0/kubectl
aaa_download_only_test.go:167: Test for darwin and windows
--- SKIP: TestDownloadOnly/v1.20.0/kubectl (0.00s)

                                                
                                    
TestDownloadOnly/v1.31.0/cached-images (0s)

=== RUN   TestDownloadOnly/v1.31.0/cached-images
aaa_download_only_test.go:129: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.31.0/cached-images (0.00s)

                                                
                                    
TestDownloadOnly/v1.31.0/binaries (0s)

=== RUN   TestDownloadOnly/v1.31.0/binaries
aaa_download_only_test.go:151: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.31.0/binaries (0.00s)

                                                
                                    
TestDownloadOnly/v1.31.0/kubectl (0s)

=== RUN   TestDownloadOnly/v1.31.0/kubectl
aaa_download_only_test.go:167: Test for darwin and windows
--- SKIP: TestDownloadOnly/v1.31.0/kubectl (0.00s)

                                                
                                    
TestAddons/parallel/Olm (0s)

=== RUN   TestAddons/parallel/Olm
=== PAUSE TestAddons/parallel/Olm

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/Olm
addons_test.go:500: Skipping OLM addon test until https://github.com/operator-framework/operator-lifecycle-manager/issues/2534 is resolved
--- SKIP: TestAddons/parallel/Olm (0.00s)

                                                
                                    
TestDockerEnvContainerd (0s)

=== RUN   TestDockerEnvContainerd
docker_test.go:170: running with docker true linux amd64
docker_test.go:172: skipping: TestDockerEnvContainerd can only be run with the containerd runtime on Docker driver
--- SKIP: TestDockerEnvContainerd (0.00s)

                                                
                                    
TestHyperKitDriverInstallOrUpdate (0s)

=== RUN   TestHyperKitDriverInstallOrUpdate
driver_install_or_update_test.go:105: Skip if not darwin.
--- SKIP: TestHyperKitDriverInstallOrUpdate (0.00s)

                                                
                                    
TestHyperkitDriverSkipUpgrade (0s)

=== RUN   TestHyperkitDriverSkipUpgrade
driver_install_or_update_test.go:169: Skip if not darwin.
--- SKIP: TestHyperkitDriverSkipUpgrade (0.00s)

                                                
                                    
TestFunctional/parallel/PodmanEnv (0s)

=== RUN   TestFunctional/parallel/PodmanEnv
=== PAUSE TestFunctional/parallel/PodmanEnv

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/PodmanEnv
functional_test.go:550: only validate podman env with docker container runtime, currently testing docker
--- SKIP: TestFunctional/parallel/PodmanEnv (0.00s)

                                                
                                    
TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig (0s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig
functional_test_tunnel_test.go:99: DNS forwarding is only supported for Hyperkit on Darwin, skipping test DNS forwarding
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig (0.00s)

                                                
                                    
TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil
functional_test_tunnel_test.go:99: DNS forwarding is only supported for Hyperkit on Darwin, skipping test DNS forwarding
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0.00s)

                                                
                                    
TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS (0s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS
functional_test_tunnel_test.go:99: DNS forwarding is only supported for Hyperkit on Darwin, skipping test DNS forwarding
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS (0.00s)

                                                
                                    
TestGvisorAddon (0s)

=== RUN   TestGvisorAddon
gvisor_addon_test.go:34: skipping test because --gvisor=false
--- SKIP: TestGvisorAddon (0.00s)

                                                
                                    
TestImageBuild/serial/validateImageBuildWithBuildEnv (0s)

=== RUN   TestImageBuild/serial/validateImageBuildWithBuildEnv
image_test.go:114: skipping due to https://github.com/kubernetes/minikube/issues/12431
--- SKIP: TestImageBuild/serial/validateImageBuildWithBuildEnv (0.00s)

                                                
                                    
TestChangeNoneUser (0s)

=== RUN   TestChangeNoneUser
none_test.go:38: Test requires none driver and SUDO_USER env to not be empty
--- SKIP: TestChangeNoneUser (0.00s)

                                                
                                    
TestScheduledStopWindows (0s)

=== RUN   TestScheduledStopWindows
scheduled_stop_test.go:42: test only runs on windows
--- SKIP: TestScheduledStopWindows (0.00s)

                                                
                                    
TestStartStop/group/disable-driver-mounts (0.14s)

=== RUN   TestStartStop/group/disable-driver-mounts
=== PAUSE TestStartStop/group/disable-driver-mounts

                                                
                                                

                                                
                                                
=== CONT  TestStartStop/group/disable-driver-mounts
start_stop_delete_test.go:103: skipping TestStartStop/group/disable-driver-mounts - only runs on virtualbox
helpers_test.go:175: Cleaning up "disable-driver-mounts-897068" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p disable-driver-mounts-897068
--- SKIP: TestStartStop/group/disable-driver-mounts (0.14s)

                                                
                                    
TestNetworkPlugins/group/cilium (3.84s)

=== RUN   TestNetworkPlugins/group/cilium
net_test.go:102: Skipping the test as it's interfering with other tests and is outdated
panic.go:626: 
----------------------- debugLogs start: cilium-297611 [pass: true] --------------------------------
>>> netcat: nslookup kubernetes.default:
Error in configuration: context was not found for specified context: cilium-297611

                                                
                                                

                                                
                                                
>>> netcat: nslookup debug kubernetes.default a-records:
Error in configuration: context was not found for specified context: cilium-297611

                                                
                                                

                                                
                                                
>>> netcat: dig search kubernetes.default:
Error in configuration: context was not found for specified context: cilium-297611

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local udp/53:
Error in configuration: context was not found for specified context: cilium-297611

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local tcp/53:
Error in configuration: context was not found for specified context: cilium-297611

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 udp/53:
Error in configuration: context was not found for specified context: cilium-297611

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 tcp/53:
Error in configuration: context was not found for specified context: cilium-297611

                                                
                                                

                                                
                                                
>>> netcat: /etc/nsswitch.conf:
Error in configuration: context was not found for specified context: cilium-297611

>>> netcat: /etc/hosts:
Error in configuration: context was not found for specified context: cilium-297611

>>> netcat: /etc/resolv.conf:
Error in configuration: context was not found for specified context: cilium-297611

>>> host: /etc/nsswitch.conf:
* Profile "cilium-297611" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-297611"

>>> host: /etc/hosts:
* Profile "cilium-297611" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-297611"

>>> host: /etc/resolv.conf:
* Profile "cilium-297611" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-297611"

>>> k8s: nodes, services, endpoints, daemon sets, deployments and pods:
Error in configuration: context was not found for specified context: cilium-297611

>>> host: crictl pods:
* Profile "cilium-297611" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-297611"

>>> host: crictl containers:
* Profile "cilium-297611" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-297611"

>>> k8s: describe netcat deployment:
error: context "cilium-297611" does not exist

>>> k8s: describe netcat pod(s):
error: context "cilium-297611" does not exist

>>> k8s: netcat logs:
error: context "cilium-297611" does not exist

>>> k8s: describe coredns deployment:
error: context "cilium-297611" does not exist

>>> k8s: describe coredns pods:
error: context "cilium-297611" does not exist

>>> k8s: coredns logs:
error: context "cilium-297611" does not exist

>>> k8s: describe api server pod(s):
error: context "cilium-297611" does not exist

>>> k8s: api server logs:
error: context "cilium-297611" does not exist

>>> host: /etc/cni:
* Profile "cilium-297611" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-297611"

>>> host: ip a s:
* Profile "cilium-297611" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-297611"

>>> host: ip r s:
* Profile "cilium-297611" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-297611"

>>> host: iptables-save:
* Profile "cilium-297611" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-297611"

>>> host: iptables table nat:
* Profile "cilium-297611" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-297611"

>>> k8s: describe cilium daemon set:
Error in configuration: context was not found for specified context: cilium-297611

>>> k8s: describe cilium daemon set pod(s):
Error in configuration: context was not found for specified context: cilium-297611

>>> k8s: cilium daemon set container(s) logs (current):
error: context "cilium-297611" does not exist

>>> k8s: cilium daemon set container(s) logs (previous):
error: context "cilium-297611" does not exist

>>> k8s: describe cilium deployment:
Error in configuration: context was not found for specified context: cilium-297611

>>> k8s: describe cilium deployment pod(s):
Error in configuration: context was not found for specified context: cilium-297611

>>> k8s: cilium deployment container(s) logs (current):
error: context "cilium-297611" does not exist

>>> k8s: cilium deployment container(s) logs (previous):
error: context "cilium-297611" does not exist

>>> k8s: describe kube-proxy daemon set:
error: context "cilium-297611" does not exist

>>> k8s: describe kube-proxy pod(s):
error: context "cilium-297611" does not exist

>>> k8s: kube-proxy logs:
error: context "cilium-297611" does not exist

>>> host: kubelet daemon status:
* Profile "cilium-297611" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-297611"

>>> host: kubelet daemon config:
* Profile "cilium-297611" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-297611"

>>> k8s: kubelet logs:
* Profile "cilium-297611" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-297611"

>>> host: /etc/kubernetes/kubelet.conf:
* Profile "cilium-297611" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-297611"

>>> host: /var/lib/kubelet/config.yaml:
* Profile "cilium-297611" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-297611"

>>> k8s: kubectl config:
apiVersion: v1
clusters:
- cluster:
    certificate-authority: /home/jenkins/minikube-integration/19584-8635/.minikube/ca.crt
    extensions:
    - extension:
        last-update: Mon, 09 Sep 2024 11:29:23 UTC
        provider: minikube.sigs.k8s.io
        version: v1.26.0
      name: cluster_info
    server: https://192.168.103.2:8443
  name: missing-upgrade-474538
- cluster:
    certificate-authority: /home/jenkins/minikube-integration/19584-8635/.minikube/ca.crt
    extensions:
    - extension:
        last-update: Mon, 09 Sep 2024 11:30:00 UTC
        provider: minikube.sigs.k8s.io
        version: v1.34.0
      name: cluster_info
    server: https://192.168.94.2:8443
  name: stopped-upgrade-990499
contexts:
- context:
    cluster: missing-upgrade-474538
    extensions:
    - extension:
        last-update: Mon, 09 Sep 2024 11:29:23 UTC
        provider: minikube.sigs.k8s.io
        version: v1.26.0
      name: context_info
    namespace: default
    user: missing-upgrade-474538
  name: missing-upgrade-474538
- context:
    cluster: stopped-upgrade-990499
    extensions:
    - extension:
        last-update: Mon, 09 Sep 2024 11:30:00 UTC
        provider: minikube.sigs.k8s.io
        version: v1.34.0
      name: context_info
    namespace: default
    user: stopped-upgrade-990499
  name: stopped-upgrade-990499
current-context: stopped-upgrade-990499
kind: Config
preferences: {}
users:
- name: missing-upgrade-474538
  user:
    client-certificate: /home/jenkins/minikube-integration/19584-8635/.minikube/profiles/missing-upgrade-474538/client.crt
    client-key: /home/jenkins/minikube-integration/19584-8635/.minikube/profiles/missing-upgrade-474538/client.key
- name: stopped-upgrade-990499
  user:
    client-certificate: /home/jenkins/minikube-integration/19584-8635/.minikube/profiles/stopped-upgrade-990499/client.crt
    client-key: /home/jenkins/minikube-integration/19584-8635/.minikube/profiles/stopped-upgrade-990499/client.key
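The dump above confirms the diagnosis: only the missing-upgrade-474538 and stopped-upgrade-990499 contexts survive in this kubeconfig, and cilium-297611 is absent. As a hedged sketch (not part of the harness), a pre-flight check with client-go's clientcmd could make the probes fail fast; the kubeconfig path here is illustrative:

package main

import (
	"fmt"

	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	// Illustrative path; the harness's real kubeconfig location is not
	// shown in this report.
	cfg, err := clientcmd.LoadFromFile("/home/jenkins/.kube/config")
	if err != nil {
		panic(err)
	}
	if _, ok := cfg.Contexts["cilium-297611"]; !ok {
		fmt.Println("context cilium-297611 not found; skipping kubectl probes")
	}
}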

>>> k8s: cms:
Error in configuration: context was not found for specified context: cilium-297611

>>> host: docker daemon status:
* Profile "cilium-297611" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-297611"

>>> host: docker daemon config:
* Profile "cilium-297611" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-297611"

>>> host: /etc/docker/daemon.json:
* Profile "cilium-297611" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-297611"

>>> host: docker system info:
* Profile "cilium-297611" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-297611"

>>> host: cri-docker daemon status:
* Profile "cilium-297611" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-297611"

>>> host: cri-docker daemon config:
* Profile "cilium-297611" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-297611"

>>> host: /etc/systemd/system/cri-docker.service.d/10-cni.conf:
* Profile "cilium-297611" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-297611"

>>> host: /usr/lib/systemd/system/cri-docker.service:
* Profile "cilium-297611" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-297611"

>>> host: cri-dockerd version:
* Profile "cilium-297611" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-297611"

>>> host: containerd daemon status:
* Profile "cilium-297611" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-297611"

>>> host: containerd daemon config:
* Profile "cilium-297611" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-297611"

>>> host: /lib/systemd/system/containerd.service:
* Profile "cilium-297611" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-297611"

>>> host: /etc/containerd/config.toml:
* Profile "cilium-297611" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-297611"

>>> host: containerd config dump:
* Profile "cilium-297611" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-297611"

>>> host: crio daemon status:
* Profile "cilium-297611" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-297611"

>>> host: crio daemon config:
* Profile "cilium-297611" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-297611"

>>> host: /etc/crio:
* Profile "cilium-297611" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-297611"

>>> host: crio config:
* Profile "cilium-297611" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-297611"

----------------------- debugLogs end: cilium-297611 [took: 3.665270573s] --------------------------------
helpers_test.go:175: Cleaning up "cilium-297611" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p cilium-297611
--- SKIP: TestNetworkPlugins/group/cilium (3.84s)
