Test Report: Docker_Linux 19640

e5b440675da001c9bcd97e7df406aef1ef05cbc8:2024-09-14:36202

Test fail (1/343)

|-------|------------------------------|----------|
| Order | Failed test                  | Duration |
|-------|------------------------------|----------|
|    33 | TestAddons/parallel/Registry |   72.56s |
|-------|------------------------------|----------|

TestAddons/parallel/Registry (72.56s)

=== RUN   TestAddons/parallel/Registry
=== PAUSE TestAddons/parallel/Registry

=== CONT  TestAddons/parallel/Registry
addons_test.go:332: registry stabilized in 2.648873ms
addons_test.go:334: (dbg) TestAddons/parallel/Registry: waiting 6m0s for pods matching "actual-registry=true" in namespace "kube-system" ...
helpers_test.go:344: "registry-66c9cd494c-gfmv4" [0f09071c-4485-4e48-a170-d531d56fd35c] Running
addons_test.go:334: (dbg) TestAddons/parallel/Registry: actual-registry=true healthy within 5.003044805s
addons_test.go:337: (dbg) TestAddons/parallel/Registry: waiting 10m0s for pods matching "registry-proxy=true" in namespace "kube-system" ...
helpers_test.go:344: "registry-proxy-chxww" [f8ed4610-be7a-4b55-bcd6-dbb3920e9ff3] Running
addons_test.go:337: (dbg) TestAddons/parallel/Registry: registry-proxy=true healthy within 5.003611961s
addons_test.go:342: (dbg) Run:  kubectl --context addons-794116 delete po -l run=registry-test --now
addons_test.go:347: (dbg) Run:  kubectl --context addons-794116 run --rm registry-test --restart=Never --image=gcr.io/k8s-minikube/busybox -it -- sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local"
addons_test.go:347: (dbg) Non-zero exit: kubectl --context addons-794116 run --rm registry-test --restart=Never --image=gcr.io/k8s-minikube/busybox -it -- sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local": exit status 1 (1m0.07272278s)

-- stdout --
	pod "registry-test" deleted

-- /stdout --
** stderr ** 
	error: timed out waiting for the condition

** /stderr **
addons_test.go:349: failed to hit registry.kube-system.svc.cluster.local. args "kubectl --context addons-794116 run --rm registry-test --restart=Never --image=gcr.io/k8s-minikube/busybox -it -- sh -c \"wget --spider -S http://registry.kube-system.svc.cluster.local\"" failed: exit status 1
addons_test.go:353: expected curl response be "HTTP/1.1 200", but got *pod "registry-test" deleted
*
addons_test.go:361: (dbg) Run:  out/minikube-linux-amd64 -p addons-794116 ip
2024/09/13 23:40:18 [DEBUG] GET http://192.168.49.2:5000
addons_test.go:390: (dbg) Run:  out/minikube-linux-amd64 -p addons-794116 addons disable registry --alsologtostderr -v=1
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestAddons/parallel/Registry]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect addons-794116
helpers_test.go:235: (dbg) docker inspect addons-794116:

-- stdout --
	[
	    {
	        "Id": "e955bd978692d6a4122ed147d28e91050245e5024293afcac45d7ea94aff0dcb",
	        "Created": "2024-09-13T23:27:15.232503875Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 14120,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2024-09-13T23:27:15.364981776Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:fb4c54d38bc255b13af7cf88ad73ea4984e36b44c4e3dc7bb7254a48e49cb35f",
	        "ResolvConfPath": "/var/lib/docker/containers/e955bd978692d6a4122ed147d28e91050245e5024293afcac45d7ea94aff0dcb/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/e955bd978692d6a4122ed147d28e91050245e5024293afcac45d7ea94aff0dcb/hostname",
	        "HostsPath": "/var/lib/docker/containers/e955bd978692d6a4122ed147d28e91050245e5024293afcac45d7ea94aff0dcb/hosts",
	        "LogPath": "/var/lib/docker/containers/e955bd978692d6a4122ed147d28e91050245e5024293afcac45d7ea94aff0dcb/e955bd978692d6a4122ed147d28e91050245e5024293afcac45d7ea94aff0dcb-json.log",
	        "Name": "/addons-794116",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "addons-794116:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {
	                    "max-size": "100m"
	                }
	            },
	            "NetworkMode": "addons-794116",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 4194304000,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 8388608000,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "LowerDir": "/var/lib/docker/overlay2/760c18ec3e86aadcac815ba3292cd1bffcdeff3767fe37c3e462fa0ba7a7a7dc-init/diff:/var/lib/docker/overlay2/965d0374b84c91cf7c53733bff1ad21be77fb6940113f2e44415f380d7ed4fe1/diff",
	                "MergedDir": "/var/lib/docker/overlay2/760c18ec3e86aadcac815ba3292cd1bffcdeff3767fe37c3e462fa0ba7a7a7dc/merged",
	                "UpperDir": "/var/lib/docker/overlay2/760c18ec3e86aadcac815ba3292cd1bffcdeff3767fe37c3e462fa0ba7a7a7dc/diff",
	                "WorkDir": "/var/lib/docker/overlay2/760c18ec3e86aadcac815ba3292cd1bffcdeff3767fe37c3e462fa0ba7a7a7dc/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "addons-794116",
	                "Source": "/var/lib/docker/volumes/addons-794116/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "addons-794116",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726243947-19640@sha256:bb1287c9c0ec51ba7d8272f0f8073d6e9758ad79ff87c787fdce1c3513743243",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "addons-794116",
	                "name.minikube.sigs.k8s.io": "addons-794116",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "ae88d3a24157e6cfffcb5c988df56e90677a5592ff7202c51c9ac432227405ed",
	            "SandboxKey": "/var/run/docker/netns/ae88d3a24157",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32768"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32769"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32772"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32770"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32771"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "addons-794116": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.49.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "02:42:c0:a8:31:02",
	                    "DriverOpts": null,
	                    "NetworkID": "301df9a7e900579030cce613e7ca165a339a5fbabd0532d3f2fa268497a4bb2b",
	                    "EndpointID": "f9e31ad20468a84967f378fae1a9bc732829c7cf9eb446e396008504058cdc77",
	                    "Gateway": "192.168.49.1",
	                    "IPAddress": "192.168.49.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "addons-794116",
	                        "e955bd978692"
	                    ]
	                }
	            }
	        }
	    }
	]

-- /stdout --
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p addons-794116 -n addons-794116
helpers_test.go:244: <<< TestAddons/parallel/Registry FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestAddons/parallel/Registry]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p addons-794116 logs -n 25
helpers_test.go:252: TestAddons/parallel/Registry logs: 
-- stdout --
	
	==> Audit <==
	|---------|---------------------------------------------------------------------------------------------|------------------------|---------|---------|---------------------|---------------------|
	| Command |                                            Args                                             |        Profile         |  User   | Version |     Start Time      |      End Time       |
	|---------|---------------------------------------------------------------------------------------------|------------------------|---------|---------|---------------------|---------------------|
	| delete  | -p download-docker-398633                                                                   | download-docker-398633 | jenkins | v1.34.0 | 13 Sep 24 23:26 UTC | 13 Sep 24 23:26 UTC |
	| start   | --download-only -p                                                                          | binary-mirror-717802   | jenkins | v1.34.0 | 13 Sep 24 23:26 UTC |                     |
	|         | binary-mirror-717802                                                                        |                        |         |         |                     |                     |
	|         | --alsologtostderr                                                                           |                        |         |         |                     |                     |
	|         | --binary-mirror                                                                             |                        |         |         |                     |                     |
	|         | http://127.0.0.1:32877                                                                      |                        |         |         |                     |                     |
	|         | --driver=docker                                                                             |                        |         |         |                     |                     |
	|         | --container-runtime=docker                                                                  |                        |         |         |                     |                     |
	| delete  | -p binary-mirror-717802                                                                     | binary-mirror-717802   | jenkins | v1.34.0 | 13 Sep 24 23:26 UTC | 13 Sep 24 23:26 UTC |
	| addons  | enable dashboard -p                                                                         | addons-794116          | jenkins | v1.34.0 | 13 Sep 24 23:26 UTC |                     |
	|         | addons-794116                                                                               |                        |         |         |                     |                     |
	| addons  | disable dashboard -p                                                                        | addons-794116          | jenkins | v1.34.0 | 13 Sep 24 23:26 UTC |                     |
	|         | addons-794116                                                                               |                        |         |         |                     |                     |
	| start   | -p addons-794116 --wait=true                                                                | addons-794116          | jenkins | v1.34.0 | 13 Sep 24 23:26 UTC | 13 Sep 24 23:30 UTC |
	|         | --memory=4000 --alsologtostderr                                                             |                        |         |         |                     |                     |
	|         | --addons=registry                                                                           |                        |         |         |                     |                     |
	|         | --addons=metrics-server                                                                     |                        |         |         |                     |                     |
	|         | --addons=volumesnapshots                                                                    |                        |         |         |                     |                     |
	|         | --addons=csi-hostpath-driver                                                                |                        |         |         |                     |                     |
	|         | --addons=gcp-auth                                                                           |                        |         |         |                     |                     |
	|         | --addons=cloud-spanner                                                                      |                        |         |         |                     |                     |
	|         | --addons=inspektor-gadget                                                                   |                        |         |         |                     |                     |
	|         | --addons=storage-provisioner-rancher                                                        |                        |         |         |                     |                     |
	|         | --addons=nvidia-device-plugin                                                               |                        |         |         |                     |                     |
	|         | --addons=yakd --addons=volcano                                                              |                        |         |         |                     |                     |
	|         | --driver=docker                                                                             |                        |         |         |                     |                     |
	|         | --container-runtime=docker                                                                  |                        |         |         |                     |                     |
	|         | --addons=ingress                                                                            |                        |         |         |                     |                     |
	|         | --addons=ingress-dns                                                                        |                        |         |         |                     |                     |
	|         | --addons=helm-tiller                                                                        |                        |         |         |                     |                     |
	| addons  | addons-794116 addons disable                                                                | addons-794116          | jenkins | v1.34.0 | 13 Sep 24 23:30 UTC | 13 Sep 24 23:31 UTC |
	|         | volcano --alsologtostderr -v=1                                                              |                        |         |         |                     |                     |
	| addons  | addons-794116 addons disable                                                                | addons-794116          | jenkins | v1.34.0 | 13 Sep 24 23:39 UTC | 13 Sep 24 23:39 UTC |
	|         | yakd --alsologtostderr -v=1                                                                 |                        |         |         |                     |                     |
	| addons  | addons-794116 addons disable                                                                | addons-794116          | jenkins | v1.34.0 | 13 Sep 24 23:39 UTC | 13 Sep 24 23:39 UTC |
	|         | helm-tiller --alsologtostderr                                                               |                        |         |         |                     |                     |
	|         | -v=1                                                                                        |                        |         |         |                     |                     |
	| addons  | disable nvidia-device-plugin                                                                | addons-794116          | jenkins | v1.34.0 | 13 Sep 24 23:39 UTC | 13 Sep 24 23:39 UTC |
	|         | -p addons-794116                                                                            |                        |         |         |                     |                     |
	| ssh     | addons-794116 ssh cat                                                                       | addons-794116          | jenkins | v1.34.0 | 13 Sep 24 23:39 UTC | 13 Sep 24 23:39 UTC |
	|         | /opt/local-path-provisioner/pvc-6b1450aa-8425-40e9-a121-ba8dd1de215e_default_test-pvc/file1 |                        |         |         |                     |                     |
	| addons  | disable cloud-spanner -p                                                                    | addons-794116          | jenkins | v1.34.0 | 13 Sep 24 23:39 UTC | 13 Sep 24 23:39 UTC |
	|         | addons-794116                                                                               |                        |         |         |                     |                     |
	| addons  | addons-794116 addons disable                                                                | addons-794116          | jenkins | v1.34.0 | 13 Sep 24 23:39 UTC | 13 Sep 24 23:40 UTC |
	|         | storage-provisioner-rancher                                                                 |                        |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                        |         |         |                     |                     |
	| addons  | enable headlamp                                                                             | addons-794116          | jenkins | v1.34.0 | 13 Sep 24 23:39 UTC | 13 Sep 24 23:39 UTC |
	|         | -p addons-794116                                                                            |                        |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                        |         |         |                     |                     |
	| addons  | addons-794116 addons disable                                                                | addons-794116          | jenkins | v1.34.0 | 13 Sep 24 23:39 UTC | 13 Sep 24 23:39 UTC |
	|         | headlamp --alsologtostderr                                                                  |                        |         |         |                     |                     |
	|         | -v=1                                                                                        |                        |         |         |                     |                     |
	| addons  | addons-794116 addons                                                                        | addons-794116          | jenkins | v1.34.0 | 13 Sep 24 23:39 UTC | 13 Sep 24 23:39 UTC |
	|         | disable csi-hostpath-driver                                                                 |                        |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                        |         |         |                     |                     |
	| addons  | addons-794116 addons                                                                        | addons-794116          | jenkins | v1.34.0 | 13 Sep 24 23:39 UTC | 13 Sep 24 23:39 UTC |
	|         | disable metrics-server                                                                      |                        |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                        |         |         |                     |                     |
	| addons  | addons-794116 addons                                                                        | addons-794116          | jenkins | v1.34.0 | 13 Sep 24 23:39 UTC | 13 Sep 24 23:39 UTC |
	|         | disable volumesnapshots                                                                     |                        |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                        |         |         |                     |                     |
	| addons  | disable inspektor-gadget -p                                                                 | addons-794116          | jenkins | v1.34.0 | 13 Sep 24 23:39 UTC | 13 Sep 24 23:40 UTC |
	|         | addons-794116                                                                               |                        |         |         |                     |                     |
	| ssh     | addons-794116 ssh curl -s                                                                   | addons-794116          | jenkins | v1.34.0 | 13 Sep 24 23:40 UTC | 13 Sep 24 23:40 UTC |
	|         | http://127.0.0.1/ -H 'Host:                                                                 |                        |         |         |                     |                     |
	|         | nginx.example.com'                                                                          |                        |         |         |                     |                     |
	| ip      | addons-794116 ip                                                                            | addons-794116          | jenkins | v1.34.0 | 13 Sep 24 23:40 UTC | 13 Sep 24 23:40 UTC |
	| addons  | addons-794116 addons disable                                                                | addons-794116          | jenkins | v1.34.0 | 13 Sep 24 23:40 UTC | 13 Sep 24 23:40 UTC |
	|         | ingress-dns --alsologtostderr                                                               |                        |         |         |                     |                     |
	|         | -v=1                                                                                        |                        |         |         |                     |                     |
	| addons  | addons-794116 addons disable                                                                | addons-794116          | jenkins | v1.34.0 | 13 Sep 24 23:40 UTC | 13 Sep 24 23:40 UTC |
	|         | ingress --alsologtostderr -v=1                                                              |                        |         |         |                     |                     |
	| ip      | addons-794116 ip                                                                            | addons-794116          | jenkins | v1.34.0 | 13 Sep 24 23:40 UTC | 13 Sep 24 23:40 UTC |
	| addons  | addons-794116 addons disable                                                                | addons-794116          | jenkins | v1.34.0 | 13 Sep 24 23:40 UTC | 13 Sep 24 23:40 UTC |
	|         | registry --alsologtostderr                                                                  |                        |         |         |                     |                     |
	|         | -v=1                                                                                        |                        |         |         |                     |                     |
	|---------|---------------------------------------------------------------------------------------------|------------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/09/13 23:26:53
	Running on machine: ubuntu-20-agent
	Binary: Built with gc go1.23.0 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0913 23:26:53.726981   13384 out.go:345] Setting OutFile to fd 1 ...
	I0913 23:26:53.727232   13384 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0913 23:26:53.727242   13384 out.go:358] Setting ErrFile to fd 2...
	I0913 23:26:53.727248   13384 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0913 23:26:53.727458   13384 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19640-5233/.minikube/bin
	I0913 23:26:53.728048   13384 out.go:352] Setting JSON to false
	I0913 23:26:53.728857   13384 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent","uptime":560,"bootTime":1726269454,"procs":173,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1068-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0913 23:26:53.728953   13384 start.go:139] virtualization: kvm guest
	I0913 23:26:53.730998   13384 out.go:177] * [addons-794116] minikube v1.34.0 on Ubuntu 20.04 (kvm/amd64)
	I0913 23:26:53.732290   13384 notify.go:220] Checking for updates...
	I0913 23:26:53.732298   13384 out.go:177]   - MINIKUBE_LOCATION=19640
	I0913 23:26:53.733637   13384 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0913 23:26:53.734934   13384 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19640-5233/kubeconfig
	I0913 23:26:53.736190   13384 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19640-5233/.minikube
	I0913 23:26:53.737400   13384 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0913 23:26:53.738817   13384 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0913 23:26:53.740211   13384 driver.go:394] Setting default libvirt URI to qemu:///system
	I0913 23:26:53.761137   13384 docker.go:123] docker version: linux-27.2.1:Docker Engine - Community
	I0913 23:26:53.761223   13384 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0913 23:26:53.804051   13384 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:26 OomKillDisable:true NGoroutines:45 SystemTime:2024-09-13 23:26:53.795490691 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1068-gcp OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:x86
_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33647931392 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent Labels:[] ExperimentalBuild:false ServerVersion:27.2.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:7f7fdf5fed64eb6a7caf99b3e12efcf9d60e311c Expected:7f7fdf5fed64eb6a7caf99b3e12efcf9d60e311c} RuncCommit:{ID:v1.1.14-0-g2c9f560 Expected:v1.1.14-0-g2c9f560} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerError
s:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.16.2] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.29.2] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I0913 23:26:53.804153   13384 docker.go:318] overlay module found
	I0913 23:26:53.805907   13384 out.go:177] * Using the docker driver based on user configuration
	I0913 23:26:53.807447   13384 start.go:297] selected driver: docker
	I0913 23:26:53.807458   13384 start.go:901] validating driver "docker" against <nil>
	I0913 23:26:53.807468   13384 start.go:912] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0913 23:26:53.808243   13384 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0913 23:26:53.859551   13384 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:26 OomKillDisable:true NGoroutines:45 SystemTime:2024-09-13 23:26:53.850864418 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1068-gcp OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:x86
_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33647931392 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent Labels:[] ExperimentalBuild:false ServerVersion:27.2.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:7f7fdf5fed64eb6a7caf99b3e12efcf9d60e311c Expected:7f7fdf5fed64eb6a7caf99b3e12efcf9d60e311c} RuncCommit:{ID:v1.1.14-0-g2c9f560 Expected:v1.1.14-0-g2c9f560} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerError
s:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.16.2] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.29.2] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I0913 23:26:53.859711   13384 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0913 23:26:53.859922   13384 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0913 23:26:53.861547   13384 out.go:177] * Using Docker driver with root privileges
	I0913 23:26:53.862924   13384 cni.go:84] Creating CNI manager for ""
	I0913 23:26:53.862981   13384 cni.go:158] "docker" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0913 23:26:53.862992   13384 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0913 23:26:53.863049   13384 start.go:340] cluster config:
	{Name:addons-794116 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726243947-19640@sha256:bb1287c9c0ec51ba7d8272f0f8073d6e9758ad79ff87c787fdce1c3513743243 Memory:4000 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:addons-794116 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime
:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock:
SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0913 23:26:53.864301   13384 out.go:177] * Starting "addons-794116" primary control-plane node in "addons-794116" cluster
	I0913 23:26:53.865489   13384 cache.go:121] Beginning downloading kic base image for docker with docker
	I0913 23:26:53.866869   13384 out.go:177] * Pulling base image v0.0.45-1726243947-19640 ...
	I0913 23:26:53.868189   13384 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime docker
	I0913 23:26:53.868210   13384 image.go:79] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726243947-19640@sha256:bb1287c9c0ec51ba7d8272f0f8073d6e9758ad79ff87c787fdce1c3513743243 in local docker daemon
	I0913 23:26:53.868229   13384 preload.go:146] Found local preload: /home/jenkins/minikube-integration/19640-5233/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-docker-overlay2-amd64.tar.lz4
	I0913 23:26:53.868241   13384 cache.go:56] Caching tarball of preloaded images
	I0913 23:26:53.868316   13384 preload.go:172] Found /home/jenkins/minikube-integration/19640-5233/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I0913 23:26:53.868330   13384 cache.go:59] Finished verifying existence of preloaded tar for v1.31.1 on docker
	I0913 23:26:53.868687   13384 profile.go:143] Saving config to /home/jenkins/minikube-integration/19640-5233/.minikube/profiles/addons-794116/config.json ...
	I0913 23:26:53.868710   13384 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19640-5233/.minikube/profiles/addons-794116/config.json: {Name:mk639203eb5af59b52e495976b1b496dac05ebda Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0913 23:26:53.884849   13384 cache.go:149] Downloading gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726243947-19640@sha256:bb1287c9c0ec51ba7d8272f0f8073d6e9758ad79ff87c787fdce1c3513743243 to local cache
	I0913 23:26:53.884962   13384 image.go:63] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726243947-19640@sha256:bb1287c9c0ec51ba7d8272f0f8073d6e9758ad79ff87c787fdce1c3513743243 in local cache directory
	I0913 23:26:53.884983   13384 image.go:66] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726243947-19640@sha256:bb1287c9c0ec51ba7d8272f0f8073d6e9758ad79ff87c787fdce1c3513743243 in local cache directory, skipping pull
	I0913 23:26:53.884989   13384 image.go:135] gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726243947-19640@sha256:bb1287c9c0ec51ba7d8272f0f8073d6e9758ad79ff87c787fdce1c3513743243 exists in cache, skipping pull
	I0913 23:26:53.884996   13384 cache.go:152] successfully saved gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726243947-19640@sha256:bb1287c9c0ec51ba7d8272f0f8073d6e9758ad79ff87c787fdce1c3513743243 as a tarball
	I0913 23:26:53.885003   13384 cache.go:162] Loading gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726243947-19640@sha256:bb1287c9c0ec51ba7d8272f0f8073d6e9758ad79ff87c787fdce1c3513743243 from local cache
	I0913 23:27:05.885128   13384 cache.go:164] successfully loaded and using gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726243947-19640@sha256:bb1287c9c0ec51ba7d8272f0f8073d6e9758ad79ff87c787fdce1c3513743243 from cached tarball
	I0913 23:27:05.885173   13384 cache.go:194] Successfully downloaded all kic artifacts
	I0913 23:27:05.885219   13384 start.go:360] acquireMachinesLock for addons-794116: {Name:mke5a55736d17aa9d07bacd9d88d2045e8a8297c Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0913 23:27:05.885323   13384 start.go:364] duration metric: took 81.049µs to acquireMachinesLock for "addons-794116"
	I0913 23:27:05.885352   13384 start.go:93] Provisioning new machine with config: &{Name:addons-794116 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726243947-19640@sha256:bb1287c9c0ec51ba7d8272f0f8073d6e9758ad79ff87c787fdce1c3513743243 Memory:4000 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:addons-794116 Namespace:default APIServerHAVIP: APIServerName:min
ikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false Cust
omQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0913 23:27:05.885439   13384 start.go:125] createHost starting for "" (driver="docker")
	I0913 23:27:05.887256   13384 out.go:235] * Creating docker container (CPUs=2, Memory=4000MB) ...
	I0913 23:27:05.887520   13384 start.go:159] libmachine.API.Create for "addons-794116" (driver="docker")
	I0913 23:27:05.887552   13384 client.go:168] LocalClient.Create starting
	I0913 23:27:05.887655   13384 main.go:141] libmachine: Creating CA: /home/jenkins/minikube-integration/19640-5233/.minikube/certs/ca.pem
	I0913 23:27:05.939845   13384 main.go:141] libmachine: Creating client certificate: /home/jenkins/minikube-integration/19640-5233/.minikube/certs/cert.pem
	I0913 23:27:06.263210   13384 cli_runner.go:164] Run: docker network inspect addons-794116 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W0913 23:27:06.279491   13384 cli_runner.go:211] docker network inspect addons-794116 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I0913 23:27:06.279551   13384 network_create.go:284] running [docker network inspect addons-794116] to gather additional debugging logs...
	I0913 23:27:06.279571   13384 cli_runner.go:164] Run: docker network inspect addons-794116
	W0913 23:27:06.293810   13384 cli_runner.go:211] docker network inspect addons-794116 returned with exit code 1
	I0913 23:27:06.293849   13384 network_create.go:287] error running [docker network inspect addons-794116]: docker network inspect addons-794116: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network addons-794116 not found
	I0913 23:27:06.293866   13384 network_create.go:289] output of [docker network inspect addons-794116]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network addons-794116 not found
	
	** /stderr **
	I0913 23:27:06.293965   13384 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0913 23:27:06.309297   13384 network.go:206] using free private subnet 192.168.49.0/24: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc0015548e0}
	I0913 23:27:06.309350   13384 network_create.go:124] attempt to create docker network addons-794116 192.168.49.0/24 with gateway 192.168.49.1 and MTU of 1500 ...
	I0913 23:27:06.309392   13384 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.49.0/24 --gateway=192.168.49.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=addons-794116 addons-794116
	I0913 23:27:06.368396   13384 network_create.go:108] docker network addons-794116 192.168.49.0/24 created
	I0913 23:27:06.368429   13384 kic.go:121] calculated static IP "192.168.49.2" for the "addons-794116" container
	I0913 23:27:06.368487   13384 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I0913 23:27:06.383198   13384 cli_runner.go:164] Run: docker volume create addons-794116 --label name.minikube.sigs.k8s.io=addons-794116 --label created_by.minikube.sigs.k8s.io=true
	I0913 23:27:06.399970   13384 oci.go:103] Successfully created a docker volume addons-794116
	I0913 23:27:06.400053   13384 cli_runner.go:164] Run: docker run --rm --name addons-794116-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=addons-794116 --entrypoint /usr/bin/test -v addons-794116:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726243947-19640@sha256:bb1287c9c0ec51ba7d8272f0f8073d6e9758ad79ff87c787fdce1c3513743243 -d /var/lib
	I0913 23:27:11.320918   13384 cli_runner.go:217] Completed: docker run --rm --name addons-794116-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=addons-794116 --entrypoint /usr/bin/test -v addons-794116:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726243947-19640@sha256:bb1287c9c0ec51ba7d8272f0f8073d6e9758ad79ff87c787fdce1c3513743243 -d /var/lib: (4.920828431s)
	I0913 23:27:11.320951   13384 oci.go:107] Successfully prepared a docker volume addons-794116
	I0913 23:27:11.320979   13384 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime docker
	I0913 23:27:11.320998   13384 kic.go:194] Starting extracting preloaded images to volume ...
	I0913 23:27:11.321069   13384 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/19640-5233/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-docker-overlay2-amd64.tar.lz4:/preloaded.tar:ro -v addons-794116:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726243947-19640@sha256:bb1287c9c0ec51ba7d8272f0f8073d6e9758ad79ff87c787fdce1c3513743243 -I lz4 -xf /preloaded.tar -C /extractDir
	I0913 23:27:15.170500   13384 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/19640-5233/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-docker-overlay2-amd64.tar.lz4:/preloaded.tar:ro -v addons-794116:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726243947-19640@sha256:bb1287c9c0ec51ba7d8272f0f8073d6e9758ad79ff87c787fdce1c3513743243 -I lz4 -xf /preloaded.tar -C /extractDir: (3.849374666s)
	I0913 23:27:15.170529   13384 kic.go:203] duration metric: took 3.84952911s to extract preloaded images to volume ...
	W0913 23:27:15.170643   13384 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	I0913 23:27:15.170740   13384 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I0913 23:27:15.217785   13384 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname addons-794116 --name addons-794116 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=addons-794116 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=addons-794116 --network addons-794116 --ip 192.168.49.2 --volume addons-794116:/var --security-opt apparmor=unconfined --memory=4000mb --cpus=2 -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726243947-19640@sha256:bb1287c9c0ec51ba7d8272f0f8073d6e9758ad79ff87c787fdce1c3513743243
	I0913 23:27:15.523536   13384 cli_runner.go:164] Run: docker container inspect addons-794116 --format={{.State.Running}}
	I0913 23:27:15.541194   13384 cli_runner.go:164] Run: docker container inspect addons-794116 --format={{.State.Status}}
	I0913 23:27:15.560264   13384 cli_runner.go:164] Run: docker exec addons-794116 stat /var/lib/dpkg/alternatives/iptables
	I0913 23:27:15.601970   13384 oci.go:144] the created container "addons-794116" has a running status.
	I0913 23:27:15.602006   13384 kic.go:225] Creating ssh key for kic: /home/jenkins/minikube-integration/19640-5233/.minikube/machines/addons-794116/id_rsa...
	I0913 23:27:15.653388   13384 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/19640-5233/.minikube/machines/addons-794116/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I0913 23:27:15.673214   13384 cli_runner.go:164] Run: docker container inspect addons-794116 --format={{.State.Status}}
	I0913 23:27:15.690289   13384 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I0913 23:27:15.690316   13384 kic_runner.go:114] Args: [docker exec --privileged addons-794116 chown docker:docker /home/docker/.ssh/authorized_keys]
	I0913 23:27:15.729855   13384 cli_runner.go:164] Run: docker container inspect addons-794116 --format={{.State.Status}}
	I0913 23:27:15.747564   13384 machine.go:93] provisionDockerMachine start ...
	I0913 23:27:15.747658   13384 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-794116
	I0913 23:27:15.766955   13384 main.go:141] libmachine: Using SSH client type: native
	I0913 23:27:15.767220   13384 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x86c560] 0x86f240 <nil>  [] 0s} 127.0.0.1 32768 <nil> <nil>}
	I0913 23:27:15.767239   13384 main.go:141] libmachine: About to run SSH command:
	hostname
	I0913 23:27:15.768073   13384 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:35238->127.0.0.1:32768: read: connection reset by peer
	I0913 23:27:18.896801   13384 main.go:141] libmachine: SSH cmd err, output: <nil>: addons-794116
	
	I0913 23:27:18.896838   13384 ubuntu.go:169] provisioning hostname "addons-794116"
	I0913 23:27:18.896912   13384 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-794116
	I0913 23:27:18.913770   13384 main.go:141] libmachine: Using SSH client type: native
	I0913 23:27:18.913980   13384 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x86c560] 0x86f240 <nil>  [] 0s} 127.0.0.1 32768 <nil> <nil>}
	I0913 23:27:18.913994   13384 main.go:141] libmachine: About to run SSH command:
	sudo hostname addons-794116 && echo "addons-794116" | sudo tee /etc/hostname
	I0913 23:27:19.051516   13384 main.go:141] libmachine: SSH cmd err, output: <nil>: addons-794116
	
	I0913 23:27:19.051593   13384 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-794116
	I0913 23:27:19.067857   13384 main.go:141] libmachine: Using SSH client type: native
	I0913 23:27:19.068066   13384 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x86c560] 0x86f240 <nil>  [] 0s} 127.0.0.1 32768 <nil> <nil>}
	I0913 23:27:19.068084   13384 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\saddons-794116' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 addons-794116/g' /etc/hosts;
				else 
					echo '127.0.1.1 addons-794116' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0913 23:27:19.197346   13384 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0913 23:27:19.197371   13384 ubuntu.go:175] set auth options {CertDir:/home/jenkins/minikube-integration/19640-5233/.minikube CaCertPath:/home/jenkins/minikube-integration/19640-5233/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19640-5233/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19640-5233/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19640-5233/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19640-5233/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19640-5233/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19640-5233/.minikube}
	I0913 23:27:19.197407   13384 ubuntu.go:177] setting up certificates
	I0913 23:27:19.197422   13384 provision.go:84] configureAuth start
	I0913 23:27:19.197468   13384 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" addons-794116
	I0913 23:27:19.213908   13384 provision.go:143] copyHostCerts
	I0913 23:27:19.213981   13384 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19640-5233/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19640-5233/.minikube/ca.pem (1078 bytes)
	I0913 23:27:19.214130   13384 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19640-5233/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19640-5233/.minikube/cert.pem (1123 bytes)
	I0913 23:27:19.214224   13384 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19640-5233/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19640-5233/.minikube/key.pem (1679 bytes)
	I0913 23:27:19.214308   13384 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19640-5233/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19640-5233/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19640-5233/.minikube/certs/ca-key.pem org=jenkins.addons-794116 san=[127.0.0.1 192.168.49.2 addons-794116 localhost minikube]
	I0913 23:27:19.287011   13384 provision.go:177] copyRemoteCerts
	I0913 23:27:19.287075   13384 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0913 23:27:19.287106   13384 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-794116
	I0913 23:27:19.303863   13384 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/19640-5233/.minikube/machines/addons-794116/id_rsa Username:docker}
	I0913 23:27:19.397677   13384 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19640-5233/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0913 23:27:19.418967   13384 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19640-5233/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I0913 23:27:19.440259   13384 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19640-5233/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0913 23:27:19.461421   13384 provision.go:87] duration metric: took 263.987318ms to configureAuth
	I0913 23:27:19.461453   13384 ubuntu.go:193] setting minikube options for container-runtime
	I0913 23:27:19.461633   13384 config.go:182] Loaded profile config "addons-794116": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I0913 23:27:19.461678   13384 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-794116
	I0913 23:27:19.478515   13384 main.go:141] libmachine: Using SSH client type: native
	I0913 23:27:19.478681   13384 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x86c560] 0x86f240 <nil>  [] 0s} 127.0.0.1 32768 <nil> <nil>}
	I0913 23:27:19.478693   13384 main.go:141] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I0913 23:27:19.609741   13384 main.go:141] libmachine: SSH cmd err, output: <nil>: overlay
	
	I0913 23:27:19.609771   13384 ubuntu.go:71] root file system type: overlay
	I0913 23:27:19.609908   13384 provision.go:314] Updating docker unit: /lib/systemd/system/docker.service ...
	I0913 23:27:19.609988   13384 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-794116
	I0913 23:27:19.626734   13384 main.go:141] libmachine: Using SSH client type: native
	I0913 23:27:19.626899   13384 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x86c560] 0x86f240 <nil>  [] 0s} 127.0.0.1 32768 <nil> <nil>}
	I0913 23:27:19.626956   13384 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %s "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	BindsTo=containerd.service
	After=network-online.target firewalld.service containerd.service
	Wants=network-online.target
	Requires=docker.socket
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this option.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I0913 23:27:19.767736   13384 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	BindsTo=containerd.service
	After=network-online.target firewalld.service containerd.service
	Wants=network-online.target
	Requires=docker.socket
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this option.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I0913 23:27:19.767807   13384 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-794116
	I0913 23:27:19.784081   13384 main.go:141] libmachine: Using SSH client type: native
	I0913 23:27:19.784247   13384 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x86c560] 0x86f240 <nil>  [] 0s} 127.0.0.1 32768 <nil> <nil>}
	I0913 23:27:19.784263   13384 main.go:141] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
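The `diff ... || { mv ...; systemctl ...; }` one-liner above is an idempotent update: the unit is only replaced, and Docker only restarted, when the rendered file actually differs from what is installed. A sketch of the same pattern against throwaway files (paths are illustrative; the `systemctl` calls are replaced by an `echo`):

```shell
#!/usr/bin/env bash
set -eu
dir=$(mktemp -d)
printf 'old\n' > "$dir/docker.service"
printf 'new\n' > "$dir/docker.service.new"

# diff exits non-zero when the files differ, so the fallback branch
# installs the new unit and (in the real flow) reloads systemd.
if ! diff -u "$dir/docker.service" "$dir/docker.service.new" >/dev/null; then
  mv "$dir/docker.service.new" "$dir/docker.service"
  echo "unit changed: would run 'systemctl daemon-reload && systemctl restart docker'"
fi
cat "$dir/docker.service"   # prints "new"
```

Running it a second time with identical files takes the no-op path, which is why the log shows a restart here (first provision) but skips it on later reconciliations.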
	I0913 23:27:20.475845   13384 main.go:141] libmachine: SSH cmd err, output: <nil>: --- /lib/systemd/system/docker.service	2024-09-06 12:06:41.000000000 +0000
	+++ /lib/systemd/system/docker.service.new	2024-09-13 23:27:19.765088416 +0000
	@@ -1,46 +1,49 @@
	 [Unit]
	 Description=Docker Application Container Engine
	 Documentation=https://docs.docker.com
	-After=network-online.target docker.socket firewalld.service containerd.service time-set.target
	-Wants=network-online.target containerd.service
	+BindsTo=containerd.service
	+After=network-online.target firewalld.service containerd.service
	+Wants=network-online.target
	 Requires=docker.socket
	+StartLimitBurst=3
	+StartLimitIntervalSec=60
	 
	 [Service]
	 Type=notify
	-# the default is not to use systemd for cgroups because the delegate issues still
	-# exists and systemd currently does not support the cgroup feature set required
	-# for containers run by docker
	-ExecStart=/usr/bin/dockerd -H fd:// --containerd=/run/containerd/containerd.sock
	-ExecReload=/bin/kill -s HUP $MAINPID
	-TimeoutStartSec=0
	-RestartSec=2
	-Restart=always
	+Restart=on-failure
	 
	-# Note that StartLimit* options were moved from "Service" to "Unit" in systemd 229.
	-# Both the old, and new location are accepted by systemd 229 and up, so using the old location
	-# to make them work for either version of systemd.
	-StartLimitBurst=3
	 
	-# Note that StartLimitInterval was renamed to StartLimitIntervalSec in systemd 230.
	-# Both the old, and new name are accepted by systemd 230 and up, so using the old name to make
	-# this option work for either version of systemd.
	-StartLimitInterval=60s
	+
	+# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	+# The base configuration already specifies an 'ExecStart=...' command. The first directive
	+# here is to clear out that command inherited from the base configuration. Without this,
	+# the command from the base configuration and the command specified here are treated as
	+# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	+# will catch this invalid input and refuse to start the service with an error like:
	+#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	+
	+# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	+# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	+ExecStart=
	+ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	+ExecReload=/bin/kill -s HUP $MAINPID
	 
	 # Having non-zero Limit*s causes performance problems due to accounting overhead
	 # in the kernel. We recommend using cgroups to do container-local accounting.
	+LimitNOFILE=infinity
	 LimitNPROC=infinity
	 LimitCORE=infinity
	 
	-# Comment TasksMax if your systemd version does not support it.
	-# Only systemd 226 and above support this option.
	+# Uncomment TasksMax if your systemd version supports it.
	+# Only systemd 226 and above support this option.
	 TasksMax=infinity
	+TimeoutStartSec=0
	 
	 # set delegate yes so that systemd does not reset the cgroups of docker containers
	 Delegate=yes
	 
	 # kill only the docker process, not all processes in the cgroup
	 KillMode=process
	-OOMScoreAdjust=-500
	 
	 [Install]
	 WantedBy=multi-user.target
	Synchronizing state of docker.service with SysV service script with /lib/systemd/systemd-sysv-install.
	Executing: /lib/systemd/systemd-sysv-install enable docker
	
	I0913 23:27:20.475879   13384 machine.go:96] duration metric: took 4.72829498s to provisionDockerMachine
	I0913 23:27:20.475895   13384 client.go:171] duration metric: took 14.588332244s to LocalClient.Create
	I0913 23:27:20.475917   13384 start.go:167] duration metric: took 14.588395958s to libmachine.API.Create "addons-794116"
	I0913 23:27:20.475928   13384 start.go:293] postStartSetup for "addons-794116" (driver="docker")
	I0913 23:27:20.475940   13384 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0913 23:27:20.476011   13384 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0913 23:27:20.476057   13384 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-794116
	I0913 23:27:20.492031   13384 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/19640-5233/.minikube/machines/addons-794116/id_rsa Username:docker}
	I0913 23:27:20.585971   13384 ssh_runner.go:195] Run: cat /etc/os-release
	I0913 23:27:20.588983   13384 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I0913 23:27:20.589011   13384 main.go:141] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
	I0913 23:27:20.589019   13384 main.go:141] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
	I0913 23:27:20.589025   13384 info.go:137] Remote host: Ubuntu 22.04.4 LTS
	I0913 23:27:20.589034   13384 filesync.go:126] Scanning /home/jenkins/minikube-integration/19640-5233/.minikube/addons for local assets ...
	I0913 23:27:20.589091   13384 filesync.go:126] Scanning /home/jenkins/minikube-integration/19640-5233/.minikube/files for local assets ...
	I0913 23:27:20.589113   13384 start.go:296] duration metric: took 113.179341ms for postStartSetup
	I0913 23:27:20.589379   13384 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" addons-794116
	I0913 23:27:20.605951   13384 profile.go:143] Saving config to /home/jenkins/minikube-integration/19640-5233/.minikube/profiles/addons-794116/config.json ...
	I0913 23:27:20.606202   13384 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0913 23:27:20.606241   13384 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-794116
	I0913 23:27:20.623077   13384 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/19640-5233/.minikube/machines/addons-794116/id_rsa Username:docker}
	I0913 23:27:20.714019   13384 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0913 23:27:20.718186   13384 start.go:128] duration metric: took 14.832732017s to createHost
	I0913 23:27:20.718212   13384 start.go:83] releasing machines lock for "addons-794116", held for 14.83287507s
	I0913 23:27:20.718277   13384 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" addons-794116
	I0913 23:27:20.734937   13384 ssh_runner.go:195] Run: cat /version.json
	I0913 23:27:20.734984   13384 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-794116
	I0913 23:27:20.735058   13384 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0913 23:27:20.735138   13384 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-794116
	I0913 23:27:20.752329   13384 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/19640-5233/.minikube/machines/addons-794116/id_rsa Username:docker}
	I0913 23:27:20.752687   13384 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/19640-5233/.minikube/machines/addons-794116/id_rsa Username:docker}
	I0913 23:27:20.910974   13384 ssh_runner.go:195] Run: systemctl --version
	I0913 23:27:20.914928   13384 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I0913 23:27:20.918622   13384 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *loopback.conf* -not -name *.mk_disabled -exec sh -c "grep -q loopback {} && ( grep -q name {} || sudo sed -i '/"type": "loopback"/i \ \ \ \ "name": "loopback",' {} ) && sudo sed -i 's|"cniVersion": ".*"|"cniVersion": "1.0.0"|g' {}" ;
	I0913 23:27:20.940259   13384 cni.go:230] loopback cni configuration patched: "/etc/cni/net.d/*loopback.conf*" found
	I0913 23:27:20.940333   13384 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0913 23:27:20.965288   13384 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist, /etc/cni/net.d/100-crio-bridge.conf] bridge cni config(s)
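The two `find ... -exec` commands above patch CNI configs in place: the first inserts a missing `"name"` key into the loopback conf and pins `cniVersion` to 1.0.0, the second renames bridge/podman configs to `*.mk_disabled`. The sed edits can be sketched on a sample file (the file body here is a typical loopback conf, not copied from the host; GNU `sed -i` assumed):

```shell
#!/usr/bin/env bash
set -eu
conf=$(mktemp)
cat > "$conf" <<'EOF'
{
    "cniVersion": "0.3.1",
    "type": "loopback"
}
EOF
# Insert "name" before the loopback "type" line only if it is absent,
# then pin cniVersion -- the same sed expressions the log runs via find.
grep -q '"name"' "$conf" || \
  sed -i '/"type": "loopback"/i \ \ \ \ "name": "loopback",' "$conf"
sed -i 's|"cniVersion": ".*"|"cniVersion": "1.0.0"|g' "$conf"
cat "$conf"
```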
	I0913 23:27:20.965326   13384 start.go:495] detecting cgroup driver to use...
	I0913 23:27:20.965365   13384 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I0913 23:27:20.965470   13384 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0913 23:27:20.980145   13384 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.10"|' /etc/containerd/config.toml"
	I0913 23:27:20.988911   13384 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0913 23:27:20.997751   13384 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I0913 23:27:20.997804   13384 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I0913 23:27:21.006649   13384 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0913 23:27:21.015976   13384 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0913 23:27:21.025398   13384 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0913 23:27:21.034988   13384 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0913 23:27:21.043613   13384 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I0913 23:27:21.053318   13384 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I0913 23:27:21.062436   13384 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
	I0913 23:27:21.071858   13384 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0913 23:27:21.079664   13384 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0913 23:27:21.087009   13384 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0913 23:27:21.157768   13384 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I0913 23:27:21.232255   13384 start.go:495] detecting cgroup driver to use...
	I0913 23:27:21.232306   13384 detect.go:187] detected "cgroupfs" cgroup driver on host os
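`detect.go` reports "cgroupfs" here, but its heuristics are not shown in the log. A common shell probe for the related question of which cgroup hierarchy the host mounts (not necessarily what minikube does internally) is:

```shell
#!/usr/bin/env bash
set -eu
# cgroup v2 hosts mount a cgroup2 filesystem at /sys/fs/cgroup;
# cgroup v1/hybrid hosts mount a tmpfs there instead.
fs=$(stat -fc %T /sys/fs/cgroup 2>/dev/null || echo unknown)
case "$fs" in
  cgroup2fs) echo "cgroup v2 (unified hierarchy)" ;;
  tmpfs)     echo "cgroup v1 (legacy/hybrid hierarchy)" ;;
  *)         echo "cgroup layout unknown ($fs)" ;;
esac
```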
	I0913 23:27:21.232372   13384 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I0913 23:27:21.242778   13384 cruntime.go:279] skipping containerd shutdown because we are bound to it
	I0913 23:27:21.242846   13384 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0913 23:27:21.253609   13384 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0913 23:27:21.268991   13384 ssh_runner.go:195] Run: which cri-dockerd
	I0913 23:27:21.272594   13384 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I0913 23:27:21.281638   13384 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (190 bytes)
	I0913 23:27:21.298266   13384 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I0913 23:27:21.380455   13384 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I0913 23:27:21.477168   13384 docker.go:574] configuring docker to use "cgroupfs" as cgroup driver...
	I0913 23:27:21.477293   13384 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (130 bytes)
	I0913 23:27:21.493785   13384 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0913 23:27:21.566157   13384 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0913 23:27:21.827478   13384 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.socket
	I0913 23:27:21.838253   13384 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0913 23:27:21.848701   13384 ssh_runner.go:195] Run: sudo systemctl unmask cri-docker.socket
	I0913 23:27:21.929914   13384 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I0913 23:27:22.011171   13384 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0913 23:27:22.082536   13384 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.socket
	I0913 23:27:22.094402   13384 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0913 23:27:22.103750   13384 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0913 23:27:22.182412   13384 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.service
	I0913 23:27:22.240136   13384 start.go:542] Will wait 60s for socket path /var/run/cri-dockerd.sock
	I0913 23:27:22.240219   13384 ssh_runner.go:195] Run: stat /var/run/cri-dockerd.sock
	I0913 23:27:22.243453   13384 start.go:563] Will wait 60s for crictl version
	I0913 23:27:22.243497   13384 ssh_runner.go:195] Run: which crictl
	I0913 23:27:22.246516   13384 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0913 23:27:22.276026   13384 start.go:579] Version:  0.1.0
	RuntimeName:  docker
	RuntimeVersion:  27.2.1
	RuntimeApiVersion:  v1
	I0913 23:27:22.276077   13384 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0913 23:27:22.298349   13384 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0913 23:27:22.326113   13384 out.go:235] * Preparing Kubernetes v1.31.1 on Docker 27.2.1 ...
	I0913 23:27:22.326198   13384 cli_runner.go:164] Run: docker network inspect addons-794116 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0913 23:27:22.342014   13384 ssh_runner.go:195] Run: grep 192.168.49.1	host.minikube.internal$ /etc/hosts
	I0913 23:27:22.345341   13384 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.49.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
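The `/etc/hosts` update above uses a grep-then-append rewrite so the entry ends up present exactly once no matter how many times it runs. The pattern, sketched against a temp file instead of `/etc/hosts` (addresses are illustrative):

```shell
#!/usr/bin/env bash
set -eu
hosts=$(mktemp)
printf '127.0.0.1\tlocalhost\n10.0.0.9\thost.minikube.internal\n' > "$hosts"

# Drop any stale line for the name, then append the current mapping --
# the same { grep -v; echo; } > tmp; cp shape the log runs with sudo.
{ grep -v $'\thost.minikube.internal$' "$hosts"; \
  printf '192.168.49.1\thost.minikube.internal\n'; } > "$hosts.new"
mv "$hosts.new" "$hosts"
grep 'host.minikube.internal' "$hosts"   # one refreshed entry
```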
	I0913 23:27:22.355146   13384 kubeadm.go:883] updating cluster {Name:addons-794116 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726243947-19640@sha256:bb1287c9c0ec51ba7d8272f0f8073d6e9758ad79ff87c787fdce1c3513743243 Memory:4000 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:addons-794116 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0913 23:27:22.355257   13384 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime docker
	I0913 23:27:22.355303   13384 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0913 23:27:22.372959   13384 docker.go:685] Got preloaded images: -- stdout --
	registry.k8s.io/kube-apiserver:v1.31.1
	registry.k8s.io/kube-controller-manager:v1.31.1
	registry.k8s.io/kube-scheduler:v1.31.1
	registry.k8s.io/kube-proxy:v1.31.1
	registry.k8s.io/coredns/coredns:v1.11.3
	registry.k8s.io/etcd:3.5.15-0
	registry.k8s.io/pause:3.10
	gcr.io/k8s-minikube/storage-provisioner:v5
	
	-- /stdout --
	I0913 23:27:22.372983   13384 docker.go:615] Images already preloaded, skipping extraction
	I0913 23:27:22.373080   13384 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0913 23:27:22.390892   13384 docker.go:685] Got preloaded images: -- stdout --
	registry.k8s.io/kube-apiserver:v1.31.1
	registry.k8s.io/kube-scheduler:v1.31.1
	registry.k8s.io/kube-controller-manager:v1.31.1
	registry.k8s.io/kube-proxy:v1.31.1
	registry.k8s.io/coredns/coredns:v1.11.3
	registry.k8s.io/etcd:3.5.15-0
	registry.k8s.io/pause:3.10
	gcr.io/k8s-minikube/storage-provisioner:v5
	
	-- /stdout --
	I0913 23:27:22.390917   13384 cache_images.go:84] Images are preloaded, skipping loading
	I0913 23:27:22.390926   13384 kubeadm.go:934] updating node { 192.168.49.2 8443 v1.31.1 docker true true} ...
	I0913 23:27:22.391012   13384 kubeadm.go:946] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=addons-794116 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.49.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.1 ClusterName:addons-794116 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0913 23:27:22.391066   13384 ssh_runner.go:195] Run: docker info --format {{.CgroupDriver}}
	I0913 23:27:22.433250   13384 cni.go:84] Creating CNI manager for ""
	I0913 23:27:22.433277   13384 cni.go:158] "docker" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0913 23:27:22.433289   13384 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0913 23:27:22.433307   13384 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.49.2 APIServerPort:8443 KubernetesVersion:v1.31.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:addons-794116 NodeName:addons-794116 DNSDomain:cluster.local CRISocket:/var/run/cri-dockerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.49.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.49.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/cri-dockerd.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0913 23:27:22.433439   13384 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.49.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/cri-dockerd.sock
	  name: "addons-794116"
	  kubeletExtraArgs:
	    node-ip: 192.168.49.2
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.49.2"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.31.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/cri-dockerd.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0913 23:27:22.433493   13384 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.1
	I0913 23:27:22.441548   13384 binaries.go:44] Found k8s binaries, skipping transfer
	I0913 23:27:22.441609   13384 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0913 23:27:22.449605   13384 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (312 bytes)
	I0913 23:27:22.465219   13384 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0913 23:27:22.480521   13384 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2155 bytes)
	I0913 23:27:22.495574   13384 ssh_runner.go:195] Run: grep 192.168.49.2	control-plane.minikube.internal$ /etc/hosts
	I0913 23:27:22.498604   13384 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.49.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0913 23:27:22.508259   13384 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0913 23:27:22.588726   13384 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0913 23:27:22.601249   13384 certs.go:68] Setting up /home/jenkins/minikube-integration/19640-5233/.minikube/profiles/addons-794116 for IP: 192.168.49.2
	I0913 23:27:22.601275   13384 certs.go:194] generating shared ca certs ...
	I0913 23:27:22.601294   13384 certs.go:226] acquiring lock for ca certs: {Name:mke84d4959018a2308ed4ab133eed0abe61c0e34 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0913 23:27:22.601439   13384 certs.go:240] generating "minikubeCA" ca cert: /home/jenkins/minikube-integration/19640-5233/.minikube/ca.key
	I0913 23:27:22.847587   13384 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19640-5233/.minikube/ca.crt ...
	I0913 23:27:22.847615   13384 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19640-5233/.minikube/ca.crt: {Name:mkfd888702ea3ff1d9845998234205c98043829a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0913 23:27:22.847782   13384 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19640-5233/.minikube/ca.key ...
	I0913 23:27:22.847791   13384 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19640-5233/.minikube/ca.key: {Name:mk68122cad8a72e7a620733cdeef78e7f4ad7eb3 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0913 23:27:22.847879   13384 certs.go:240] generating "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19640-5233/.minikube/proxy-client-ca.key
	I0913 23:27:23.026505   13384 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19640-5233/.minikube/proxy-client-ca.crt ...
	I0913 23:27:23.026537   13384 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19640-5233/.minikube/proxy-client-ca.crt: {Name:mkea2dc0d53867b9003dd3ee6f4370cffc2c0bdf Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0913 23:27:23.026770   13384 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19640-5233/.minikube/proxy-client-ca.key ...
	I0913 23:27:23.026788   13384 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19640-5233/.minikube/proxy-client-ca.key: {Name:mk1b67a039c21c0894eaa296b050c5d2e9cf4f5b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0913 23:27:23.026887   13384 certs.go:256] generating profile certs ...
	I0913 23:27:23.026952   13384 certs.go:363] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/19640-5233/.minikube/profiles/addons-794116/client.key
	I0913 23:27:23.026966   13384 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19640-5233/.minikube/profiles/addons-794116/client.crt with IP's: []
	I0913 23:27:23.198702   13384 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19640-5233/.minikube/profiles/addons-794116/client.crt ...
	I0913 23:27:23.198735   13384 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19640-5233/.minikube/profiles/addons-794116/client.crt: {Name:mkcc95ffc356336551f33265759ace6bd5e292c0 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0913 23:27:23.198926   13384 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19640-5233/.minikube/profiles/addons-794116/client.key ...
	I0913 23:27:23.198947   13384 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19640-5233/.minikube/profiles/addons-794116/client.key: {Name:mk61d593593c22b9e0fd5dceaba0532ecdcb8a55 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0913 23:27:23.199044   13384 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/19640-5233/.minikube/profiles/addons-794116/apiserver.key.af2e173e
	I0913 23:27:23.199064   13384 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19640-5233/.minikube/profiles/addons-794116/apiserver.crt.af2e173e with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.49.2]
	I0913 23:27:23.311294   13384 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19640-5233/.minikube/profiles/addons-794116/apiserver.crt.af2e173e ...
	I0913 23:27:23.311325   13384 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19640-5233/.minikube/profiles/addons-794116/apiserver.crt.af2e173e: {Name:mkf030d10a0b74029d10abbb4b3a53378c1e7366 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0913 23:27:23.311515   13384 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19640-5233/.minikube/profiles/addons-794116/apiserver.key.af2e173e ...
	I0913 23:27:23.311532   13384 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19640-5233/.minikube/profiles/addons-794116/apiserver.key.af2e173e: {Name:mk5660e3df7a83e71fa37fca19419614217022a8 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0913 23:27:23.311631   13384 certs.go:381] copying /home/jenkins/minikube-integration/19640-5233/.minikube/profiles/addons-794116/apiserver.crt.af2e173e -> /home/jenkins/minikube-integration/19640-5233/.minikube/profiles/addons-794116/apiserver.crt
	I0913 23:27:23.311710   13384 certs.go:385] copying /home/jenkins/minikube-integration/19640-5233/.minikube/profiles/addons-794116/apiserver.key.af2e173e -> /home/jenkins/minikube-integration/19640-5233/.minikube/profiles/addons-794116/apiserver.key
	I0913 23:27:23.311755   13384 certs.go:363] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/19640-5233/.minikube/profiles/addons-794116/proxy-client.key
	I0913 23:27:23.311773   13384 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19640-5233/.minikube/profiles/addons-794116/proxy-client.crt with IP's: []
	I0913 23:27:23.403683   13384 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19640-5233/.minikube/profiles/addons-794116/proxy-client.crt ...
	I0913 23:27:23.403718   13384 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19640-5233/.minikube/profiles/addons-794116/proxy-client.crt: {Name:mk610cecd9dd11b897930bb8066a44ecf1c48697 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0913 23:27:23.403909   13384 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19640-5233/.minikube/profiles/addons-794116/proxy-client.key ...
	I0913 23:27:23.403925   13384 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19640-5233/.minikube/profiles/addons-794116/proxy-client.key: {Name:mk67bff6695e137dd8dd76db8ade74913b6f2821 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0913 23:27:23.404169   13384 certs.go:484] found cert: /home/jenkins/minikube-integration/19640-5233/.minikube/certs/ca-key.pem (1679 bytes)
	I0913 23:27:23.404208   13384 certs.go:484] found cert: /home/jenkins/minikube-integration/19640-5233/.minikube/certs/ca.pem (1078 bytes)
	I0913 23:27:23.404233   13384 certs.go:484] found cert: /home/jenkins/minikube-integration/19640-5233/.minikube/certs/cert.pem (1123 bytes)
	I0913 23:27:23.404255   13384 certs.go:484] found cert: /home/jenkins/minikube-integration/19640-5233/.minikube/certs/key.pem (1679 bytes)
	I0913 23:27:23.404855   13384 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19640-5233/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0913 23:27:23.425907   13384 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19640-5233/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0913 23:27:23.446762   13384 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19640-5233/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0913 23:27:23.467784   13384 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19640-5233/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0913 23:27:23.488580   13384 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19640-5233/.minikube/profiles/addons-794116/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1419 bytes)
	I0913 23:27:23.509287   13384 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19640-5233/.minikube/profiles/addons-794116/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0913 23:27:23.529859   13384 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19640-5233/.minikube/profiles/addons-794116/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0913 23:27:23.550861   13384 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19640-5233/.minikube/profiles/addons-794116/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0913 23:27:23.572098   13384 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19640-5233/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0913 23:27:23.593427   13384 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0913 23:27:23.609350   13384 ssh_runner.go:195] Run: openssl version
	I0913 23:27:23.614459   13384 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0913 23:27:23.623276   13384 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0913 23:27:23.626455   13384 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Sep 13 23:27 /usr/share/ca-certificates/minikubeCA.pem
	I0913 23:27:23.626509   13384 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0913 23:27:23.633020   13384 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0913 23:27:23.641713   13384 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0913 23:27:23.644815   13384 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0913 23:27:23.644857   13384 kubeadm.go:392] StartCluster: {Name:addons-794116 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726243947-19640@sha256:bb1287c9c0ec51ba7d8272f0f8073d6e9758ad79ff87c787fdce1c3513743243 Memory:4000 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:addons-794116 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0913 23:27:23.644947   13384 ssh_runner.go:195] Run: docker ps --filter status=paused --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I0913 23:27:23.662201   13384 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0913 23:27:23.670385   13384 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0913 23:27:23.678215   13384 kubeadm.go:214] ignoring SystemVerification for kubeadm because of docker driver
	I0913 23:27:23.678276   13384 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0913 23:27:23.685945   13384 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0913 23:27:23.685962   13384 kubeadm.go:157] found existing configuration files:
	
	I0913 23:27:23.685999   13384 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0913 23:27:23.693373   13384 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0913 23:27:23.693435   13384 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0913 23:27:23.700634   13384 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0913 23:27:23.708295   13384 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0913 23:27:23.708343   13384 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0913 23:27:23.715813   13384 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0913 23:27:23.723535   13384 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0913 23:27:23.723586   13384 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0913 23:27:23.731093   13384 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0913 23:27:23.738780   13384 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0913 23:27:23.738838   13384 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0913 23:27:23.746034   13384 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I0913 23:27:23.778088   13384 kubeadm.go:310] [init] Using Kubernetes version: v1.31.1
	I0913 23:27:23.778174   13384 kubeadm.go:310] [preflight] Running pre-flight checks
	I0913 23:27:23.796070   13384 kubeadm.go:310] [preflight] The system verification failed. Printing the output from the verification:
	I0913 23:27:23.796156   13384 kubeadm.go:310] KERNEL_VERSION: 5.15.0-1068-gcp
	I0913 23:27:23.796242   13384 kubeadm.go:310] OS: Linux
	I0913 23:27:23.796329   13384 kubeadm.go:310] CGROUPS_CPU: enabled
	I0913 23:27:23.796404   13384 kubeadm.go:310] CGROUPS_CPUACCT: enabled
	I0913 23:27:23.796468   13384 kubeadm.go:310] CGROUPS_CPUSET: enabled
	I0913 23:27:23.796530   13384 kubeadm.go:310] CGROUPS_DEVICES: enabled
	I0913 23:27:23.796577   13384 kubeadm.go:310] CGROUPS_FREEZER: enabled
	I0913 23:27:23.796624   13384 kubeadm.go:310] CGROUPS_MEMORY: enabled
	I0913 23:27:23.796697   13384 kubeadm.go:310] CGROUPS_PIDS: enabled
	I0913 23:27:23.796768   13384 kubeadm.go:310] CGROUPS_HUGETLB: enabled
	I0913 23:27:23.796836   13384 kubeadm.go:310] CGROUPS_BLKIO: enabled
	I0913 23:27:23.842856   13384 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0913 23:27:23.842987   13384 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0913 23:27:23.843131   13384 kubeadm.go:310] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I0913 23:27:23.853500   13384 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0913 23:27:23.855516   13384 out.go:235]   - Generating certificates and keys ...
	I0913 23:27:23.855629   13384 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0913 23:27:23.855721   13384 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0913 23:27:24.028521   13384 kubeadm.go:310] [certs] Generating "apiserver-kubelet-client" certificate and key
	I0913 23:27:24.342607   13384 kubeadm.go:310] [certs] Generating "front-proxy-ca" certificate and key
	I0913 23:27:24.579627   13384 kubeadm.go:310] [certs] Generating "front-proxy-client" certificate and key
	I0913 23:27:24.679051   13384 kubeadm.go:310] [certs] Generating "etcd/ca" certificate and key
	I0913 23:27:24.806335   13384 kubeadm.go:310] [certs] Generating "etcd/server" certificate and key
	I0913 23:27:24.806480   13384 kubeadm.go:310] [certs] etcd/server serving cert is signed for DNS names [addons-794116 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
	I0913 23:27:24.982516   13384 kubeadm.go:310] [certs] Generating "etcd/peer" certificate and key
	I0913 23:27:24.982646   13384 kubeadm.go:310] [certs] etcd/peer serving cert is signed for DNS names [addons-794116 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
	I0913 23:27:25.054576   13384 kubeadm.go:310] [certs] Generating "etcd/healthcheck-client" certificate and key
	I0913 23:27:25.382562   13384 kubeadm.go:310] [certs] Generating "apiserver-etcd-client" certificate and key
	I0913 23:27:25.633223   13384 kubeadm.go:310] [certs] Generating "sa" key and public key
	I0913 23:27:25.633295   13384 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0913 23:27:25.772529   13384 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0913 23:27:25.872088   13384 kubeadm.go:310] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I0913 23:27:26.022205   13384 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0913 23:27:26.122658   13384 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0913 23:27:26.465384   13384 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0913 23:27:26.465844   13384 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0913 23:27:26.468087   13384 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0913 23:27:26.471208   13384 out.go:235]   - Booting up control plane ...
	I0913 23:27:26.471343   13384 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0913 23:27:26.471414   13384 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0913 23:27:26.471520   13384 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0913 23:27:26.479394   13384 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0913 23:27:26.484293   13384 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0913 23:27:26.484379   13384 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0913 23:27:26.564911   13384 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I0913 23:27:26.565062   13384 kubeadm.go:310] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I0913 23:27:27.066352   13384 kubeadm.go:310] [kubelet-check] The kubelet is healthy after 501.560771ms
	I0913 23:27:27.066470   13384 kubeadm.go:310] [api-check] Waiting for a healthy API server. This can take up to 4m0s
	I0913 23:27:31.568020   13384 kubeadm.go:310] [api-check] The API server is healthy after 4.501604244s
	I0913 23:27:31.578559   13384 kubeadm.go:310] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0913 23:27:31.590024   13384 kubeadm.go:310] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0913 23:27:31.607458   13384 kubeadm.go:310] [upload-certs] Skipping phase. Please see --upload-certs
	I0913 23:27:31.607804   13384 kubeadm.go:310] [mark-control-plane] Marking the node addons-794116 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0913 23:27:31.615917   13384 kubeadm.go:310] [bootstrap-token] Using token: d1ywyo.k8bjzmbvapa273ho
	I0913 23:27:31.617689   13384 out.go:235]   - Configuring RBAC rules ...
	I0913 23:27:31.617845   13384 kubeadm.go:310] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0913 23:27:31.621680   13384 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0913 23:27:31.627229   13384 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0913 23:27:31.629969   13384 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0913 23:27:31.632668   13384 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0913 23:27:31.635054   13384 kubeadm.go:310] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0913 23:27:31.974293   13384 kubeadm.go:310] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0913 23:27:32.394914   13384 kubeadm.go:310] [addons] Applied essential addon: CoreDNS
	I0913 23:27:32.973909   13384 kubeadm.go:310] [addons] Applied essential addon: kube-proxy
	I0913 23:27:32.974766   13384 kubeadm.go:310] 
	I0913 23:27:32.974858   13384 kubeadm.go:310] Your Kubernetes control-plane has initialized successfully!
	I0913 23:27:32.974869   13384 kubeadm.go:310] 
	I0913 23:27:32.974972   13384 kubeadm.go:310] To start using your cluster, you need to run the following as a regular user:
	I0913 23:27:32.974985   13384 kubeadm.go:310] 
	I0913 23:27:32.975021   13384 kubeadm.go:310]   mkdir -p $HOME/.kube
	I0913 23:27:32.975116   13384 kubeadm.go:310]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0913 23:27:32.975192   13384 kubeadm.go:310]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0913 23:27:32.975201   13384 kubeadm.go:310] 
	I0913 23:27:32.975269   13384 kubeadm.go:310] Alternatively, if you are the root user, you can run:
	I0913 23:27:32.975284   13384 kubeadm.go:310] 
	I0913 23:27:32.975325   13384 kubeadm.go:310]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0913 23:27:32.975331   13384 kubeadm.go:310] 
	I0913 23:27:32.975407   13384 kubeadm.go:310] You should now deploy a pod network to the cluster.
	I0913 23:27:32.975520   13384 kubeadm.go:310] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0913 23:27:32.975624   13384 kubeadm.go:310]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0913 23:27:32.975650   13384 kubeadm.go:310] 
	I0913 23:27:32.975764   13384 kubeadm.go:310] You can now join any number of control-plane nodes by copying certificate authorities
	I0913 23:27:32.975848   13384 kubeadm.go:310] and service account keys on each node and then running the following as root:
	I0913 23:27:32.975858   13384 kubeadm.go:310] 
	I0913 23:27:32.975955   13384 kubeadm.go:310]   kubeadm join control-plane.minikube.internal:8443 --token d1ywyo.k8bjzmbvapa273ho \
	I0913 23:27:32.976081   13384 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:9db4175747bb737d8a39c512bf3352efde389e43962c19157f0336ad176c42b0 \
	I0913 23:27:32.976108   13384 kubeadm.go:310] 	--control-plane 
	I0913 23:27:32.976116   13384 kubeadm.go:310] 
	I0913 23:27:32.976207   13384 kubeadm.go:310] Then you can join any number of worker nodes by running the following on each as root:
	I0913 23:27:32.976215   13384 kubeadm.go:310] 
	I0913 23:27:32.976305   13384 kubeadm.go:310] kubeadm join control-plane.minikube.internal:8443 --token d1ywyo.k8bjzmbvapa273ho \
	I0913 23:27:32.976452   13384 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:9db4175747bb737d8a39c512bf3352efde389e43962c19157f0336ad176c42b0 
	I0913 23:27:32.978404   13384 kubeadm.go:310] W0913 23:27:23.775723    1923 common.go:101] your configuration file uses a deprecated API spec: "kubeadm.k8s.io/v1beta3" (kind: "ClusterConfiguration"). Please use 'kubeadm config migrate --old-config old.yaml --new-config new.yaml', which will write the new, similar spec using a newer API version.
	I0913 23:27:32.978665   13384 kubeadm.go:310] W0913 23:27:23.776297    1923 common.go:101] your configuration file uses a deprecated API spec: "kubeadm.k8s.io/v1beta3" (kind: "InitConfiguration"). Please use 'kubeadm config migrate --old-config old.yaml --new-config new.yaml', which will write the new, similar spec using a newer API version.
	I0913 23:27:32.978867   13384 kubeadm.go:310] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1068-gcp\n", err: exit status 1
	I0913 23:27:32.978977   13384 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0913 23:27:32.979007   13384 cni.go:84] Creating CNI manager for ""
	I0913 23:27:32.979026   13384 cni.go:158] "docker" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0913 23:27:32.981091   13384 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0913 23:27:32.982485   13384 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0913 23:27:32.990751   13384 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
	I0913 23:27:33.007493   13384 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0913 23:27:33.007557   13384 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0913 23:27:33.007566   13384 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes addons-794116 minikube.k8s.io/updated_at=2024_09_13T23_27_33_0700 minikube.k8s.io/version=v1.34.0 minikube.k8s.io/commit=7ca96ba7bd97af6e0063398921096f1cca785d18 minikube.k8s.io/name=addons-794116 minikube.k8s.io/primary=true
	I0913 23:27:33.092949   13384 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0913 23:27:33.092960   13384 ops.go:34] apiserver oom_adj: -16
	I0913 23:27:33.593818   13384 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0913 23:27:34.093672   13384 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0913 23:27:34.593840   13384 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0913 23:27:35.093910   13384 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0913 23:27:35.593161   13384 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0913 23:27:36.093489   13384 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0913 23:27:36.594038   13384 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0913 23:27:37.093615   13384 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0913 23:27:37.593073   13384 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0913 23:27:37.656093   13384 kubeadm.go:1113] duration metric: took 4.648598468s to wait for elevateKubeSystemPrivileges
	I0913 23:27:37.656128   13384 kubeadm.go:394] duration metric: took 14.011273786s to StartCluster
	I0913 23:27:37.656150   13384 settings.go:142] acquiring lock: {Name:mka91989d6d7a65515c533a78dd563f87912137c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0913 23:27:37.656272   13384 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/19640-5233/kubeconfig
	I0913 23:27:37.656789   13384 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19640-5233/kubeconfig: {Name:mkc8aab5a1909b3591993f223a6ef363a51b3cc1 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0913 23:27:37.657026   13384 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0913 23:27:37.657049   13384 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0913 23:27:37.657090   13384 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:true csi-hostpath-driver:true dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:true gvisor:false headlamp:false helm-tiller:true inaccel:false ingress:true ingress-dns:true inspektor-gadget:true istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:true nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:true registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:true volcano:true volumesnapshots:true yakd:true]
	I0913 23:27:37.657206   13384 addons.go:69] Setting csi-hostpath-driver=true in profile "addons-794116"
	I0913 23:27:37.657211   13384 addons.go:69] Setting yakd=true in profile "addons-794116"
	I0913 23:27:37.657227   13384 addons.go:234] Setting addon yakd=true in "addons-794116"
	I0913 23:27:37.657247   13384 addons.go:234] Setting addon csi-hostpath-driver=true in "addons-794116"
	I0913 23:27:37.657260   13384 host.go:66] Checking if "addons-794116" exists ...
	I0913 23:27:37.657249   13384 addons.go:69] Setting metrics-server=true in profile "addons-794116"
	I0913 23:27:37.657268   13384 host.go:66] Checking if "addons-794116" exists ...
	I0913 23:27:37.657279   13384 addons.go:234] Setting addon metrics-server=true in "addons-794116"
	I0913 23:27:37.657280   13384 config.go:182] Loaded profile config "addons-794116": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I0913 23:27:37.657316   13384 host.go:66] Checking if "addons-794116" exists ...
	I0913 23:27:37.657320   13384 addons.go:69] Setting cloud-spanner=true in profile "addons-794116"
	I0913 23:27:37.657319   13384 addons.go:69] Setting storage-provisioner=true in profile "addons-794116"
	I0913 23:27:37.657345   13384 addons.go:69] Setting helm-tiller=true in profile "addons-794116"
	I0913 23:27:37.657356   13384 addons.go:234] Setting addon helm-tiller=true in "addons-794116"
	I0913 23:27:37.657383   13384 host.go:66] Checking if "addons-794116" exists ...
	I0913 23:27:37.657386   13384 addons.go:234] Setting addon storage-provisioner=true in "addons-794116"
	I0913 23:27:37.657396   13384 addons.go:69] Setting nvidia-device-plugin=true in profile "addons-794116"
	I0913 23:27:37.657425   13384 addons.go:234] Setting addon nvidia-device-plugin=true in "addons-794116"
	I0913 23:27:37.657426   13384 host.go:66] Checking if "addons-794116" exists ...
	I0913 23:27:37.657821   13384 addons.go:234] Setting addon cloud-spanner=true in "addons-794116"
	I0913 23:27:37.657892   13384 host.go:66] Checking if "addons-794116" exists ...
	I0913 23:27:37.657984   13384 addons.go:69] Setting registry=true in profile "addons-794116"
	I0913 23:27:37.657998   13384 addons.go:69] Setting ingress=true in profile "addons-794116"
	I0913 23:27:37.658013   13384 addons.go:234] Setting addon registry=true in "addons-794116"
	I0913 23:27:37.658025   13384 addons.go:234] Setting addon ingress=true in "addons-794116"
	I0913 23:27:37.658037   13384 host.go:66] Checking if "addons-794116" exists ...
	I0913 23:27:37.658076   13384 host.go:66] Checking if "addons-794116" exists ...
	I0913 23:27:37.658090   13384 addons.go:69] Setting ingress-dns=true in profile "addons-794116"
	I0913 23:27:37.658119   13384 addons.go:234] Setting addon ingress-dns=true in "addons-794116"
	I0913 23:27:37.658163   13384 host.go:66] Checking if "addons-794116" exists ...
	I0913 23:27:37.658184   13384 host.go:66] Checking if "addons-794116" exists ...
	I0913 23:27:37.658436   13384 cli_runner.go:164] Run: docker container inspect addons-794116 --format={{.State.Status}}
	I0913 23:27:37.658575   13384 cli_runner.go:164] Run: docker container inspect addons-794116 --format={{.State.Status}}
	I0913 23:27:37.658587   13384 addons.go:69] Setting default-storageclass=true in profile "addons-794116"
	I0913 23:27:37.658604   13384 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "addons-794116"
	I0913 23:27:37.658739   13384 cli_runner.go:164] Run: docker container inspect addons-794116 --format={{.State.Status}}
	I0913 23:27:37.657994   13384 addons.go:69] Setting inspektor-gadget=true in profile "addons-794116"
	I0913 23:27:37.658901   13384 addons.go:234] Setting addon inspektor-gadget=true in "addons-794116"
	I0913 23:27:37.658923   13384 host.go:66] Checking if "addons-794116" exists ...
	I0913 23:27:37.658746   13384 cli_runner.go:164] Run: docker container inspect addons-794116 --format={{.State.Status}}
	I0913 23:27:37.659330   13384 addons.go:69] Setting volcano=true in profile "addons-794116"
	I0913 23:27:37.659353   13384 addons.go:234] Setting addon volcano=true in "addons-794116"
	I0913 23:27:37.659426   13384 cli_runner.go:164] Run: docker container inspect addons-794116 --format={{.State.Status}}
	I0913 23:27:37.659475   13384 host.go:66] Checking if "addons-794116" exists ...
	I0913 23:27:37.659594   13384 cli_runner.go:164] Run: docker container inspect addons-794116 --format={{.State.Status}}
	I0913 23:27:37.658929   13384 cli_runner.go:164] Run: docker container inspect addons-794116 --format={{.State.Status}}
	I0913 23:27:37.660232   13384 addons.go:69] Setting storage-provisioner-rancher=true in profile "addons-794116"
	I0913 23:27:37.660273   13384 addons_storage_classes.go:33] enableOrDisableStorageClasses storage-provisioner-rancher=true on "addons-794116"
	I0913 23:27:37.660698   13384 cli_runner.go:164] Run: docker container inspect addons-794116 --format={{.State.Status}}
	I0913 23:27:37.660754   13384 cli_runner.go:164] Run: docker container inspect addons-794116 --format={{.State.Status}}
	I0913 23:27:37.661895   13384 out.go:177] * Verifying Kubernetes components...
	I0913 23:27:37.661912   13384 cli_runner.go:164] Run: docker container inspect addons-794116 --format={{.State.Status}}
	I0913 23:27:37.662130   13384 addons.go:69] Setting volumesnapshots=true in profile "addons-794116"
	I0913 23:27:37.662156   13384 addons.go:234] Setting addon volumesnapshots=true in "addons-794116"
	I0913 23:27:37.662192   13384 host.go:66] Checking if "addons-794116" exists ...
	I0913 23:27:37.662265   13384 cli_runner.go:164] Run: docker container inspect addons-794116 --format={{.State.Status}}
	I0913 23:27:37.662943   13384 cli_runner.go:164] Run: docker container inspect addons-794116 --format={{.State.Status}}
	I0913 23:27:37.658576   13384 cli_runner.go:164] Run: docker container inspect addons-794116 --format={{.State.Status}}
	I0913 23:27:37.664362   13384 addons.go:69] Setting gcp-auth=true in profile "addons-794116"
	I0913 23:27:37.664389   13384 mustload.go:65] Loading cluster: addons-794116
	I0913 23:27:37.670547   13384 cli_runner.go:164] Run: docker container inspect addons-794116 --format={{.State.Status}}
	I0913 23:27:37.674460   13384 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0913 23:27:37.689107   13384 out.go:177]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.4.3
	I0913 23:27:37.689933   13384 config.go:182] Loaded profile config "addons-794116": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I0913 23:27:37.690186   13384 cli_runner.go:164] Run: docker container inspect addons-794116 --format={{.State.Status}}
	I0913 23:27:37.690295   13384 cli_runner.go:164] Run: docker container inspect addons-794116 --format={{.State.Status}}
	I0913 23:27:37.691421   13384 out.go:177]   - Using image registry.k8s.io/ingress-nginx/controller:v1.11.2
	I0913 23:27:37.692775   13384 out.go:177]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.4.3
	I0913 23:27:37.694573   13384 addons.go:431] installing /etc/kubernetes/addons/ingress-deploy.yaml
	I0913 23:27:37.694601   13384 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-deploy.yaml (16078 bytes)
	I0913 23:27:37.694664   13384 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-794116
	I0913 23:27:37.722944   13384 out.go:177]   - Using image gcr.io/cloud-spanner-emulator/emulator:1.5.23
	I0913 23:27:37.724653   13384 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-snapshotter:v6.1.0
	I0913 23:27:37.724842   13384 addons.go:431] installing /etc/kubernetes/addons/deployment.yaml
	I0913 23:27:37.724855   13384 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/deployment.yaml (1004 bytes)
	I0913 23:27:37.724912   13384 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-794116
	I0913 23:27:37.725077   13384 out.go:177]   - Using image registry.k8s.io/metrics-server/metrics-server:v0.7.2
	I0913 23:27:37.725105   13384 out.go:177]   - Using image ghcr.io/inspektor-gadget/inspektor-gadget:v0.32.0
	I0913 23:27:37.725852   13384 out.go:177]   - Using image nvcr.io/nvidia/k8s-device-plugin:v0.16.2
	I0913 23:27:37.726704   13384 addons.go:431] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0913 23:27:37.732743   13384 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0913 23:27:37.732803   13384 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-794116
	I0913 23:27:37.726761   13384 addons.go:431] installing /etc/kubernetes/addons/ig-namespace.yaml
	I0913 23:27:37.732866   13384 ssh_runner.go:362] scp inspektor-gadget/ig-namespace.yaml --> /etc/kubernetes/addons/ig-namespace.yaml (55 bytes)
	I0913 23:27:37.732927   13384 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-794116
	I0913 23:27:37.732608   13384 addons.go:234] Setting addon default-storageclass=true in "addons-794116"
	I0913 23:27:37.733905   13384 host.go:66] Checking if "addons-794116" exists ...
	I0913 23:27:37.734209   13384 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-provisioner:v3.3.0
	I0913 23:27:37.734388   13384 addons.go:431] installing /etc/kubernetes/addons/nvidia-device-plugin.yaml
	I0913 23:27:37.734403   13384 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/nvidia-device-plugin.yaml (1966 bytes)
	I0913 23:27:37.734459   13384 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-794116
	I0913 23:27:37.734994   13384 cli_runner.go:164] Run: docker container inspect addons-794116 --format={{.State.Status}}
	I0913 23:27:37.735546   13384 out.go:177]   - Using image gcr.io/k8s-minikube/minikube-ingress-dns:0.0.3
	I0913 23:27:37.736435   13384 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0913 23:27:37.737453   13384 addons.go:431] installing /etc/kubernetes/addons/ingress-dns-pod.yaml
	I0913 23:27:37.737475   13384 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-dns-pod.yaml (2442 bytes)
	I0913 23:27:37.737550   13384 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-794116
	I0913 23:27:37.737959   13384 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-attacher:v4.0.0
	I0913 23:27:37.738324   13384 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0913 23:27:37.738339   13384 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0913 23:27:37.738385   13384 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-794116
	I0913 23:27:37.739379   13384 out.go:177]   - Using image docker.io/marcnuri/yakd:0.0.5
	I0913 23:27:37.740782   13384 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-external-health-monitor-controller:v0.7.0
	I0913 23:27:37.740892   13384 addons.go:431] installing /etc/kubernetes/addons/yakd-ns.yaml
	I0913 23:27:37.740906   13384 ssh_runner.go:362] scp yakd/yakd-ns.yaml --> /etc/kubernetes/addons/yakd-ns.yaml (171 bytes)
	I0913 23:27:37.740983   13384 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-794116
	I0913 23:27:37.741379   13384 host.go:66] Checking if "addons-794116" exists ...
	I0913 23:27:37.742869   13384 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/19640-5233/.minikube/machines/addons-794116/id_rsa Username:docker}
	I0913 23:27:37.743317   13384 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-node-driver-registrar:v2.6.0
	I0913 23:27:37.745101   13384 out.go:177]   - Using image registry.k8s.io/sig-storage/hostpathplugin:v1.9.0
	I0913 23:27:37.746751   13384 out.go:177]   - Using image registry.k8s.io/sig-storage/livenessprobe:v2.8.0
	I0913 23:27:37.748094   13384 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-resizer:v1.6.0
	I0913 23:27:37.749322   13384 addons.go:431] installing /etc/kubernetes/addons/rbac-external-attacher.yaml
	I0913 23:27:37.749345   13384 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-attacher.yaml --> /etc/kubernetes/addons/rbac-external-attacher.yaml (3073 bytes)
	I0913 23:27:37.749425   13384 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-794116
	I0913 23:27:37.768019   13384 addons.go:234] Setting addon storage-provisioner-rancher=true in "addons-794116"
	I0913 23:27:37.768080   13384 host.go:66] Checking if "addons-794116" exists ...
	I0913 23:27:37.768589   13384 cli_runner.go:164] Run: docker container inspect addons-794116 --format={{.State.Status}}
	I0913 23:27:37.774508   13384 out.go:177]   - Using image docker.io/registry:2.8.3
	I0913 23:27:37.776116   13384 out.go:177]   - Using image gcr.io/k8s-minikube/kube-registry-proxy:0.0.6
	I0913 23:27:37.778055   13384 addons.go:431] installing /etc/kubernetes/addons/registry-rc.yaml
	I0913 23:27:37.778076   13384 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-rc.yaml (860 bytes)
	I0913 23:27:37.778136   13384 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-794116
	I0913 23:27:37.785697   13384 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/19640-5233/.minikube/machines/addons-794116/id_rsa Username:docker}
	I0913 23:27:37.787743   13384 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/19640-5233/.minikube/machines/addons-794116/id_rsa Username:docker}
	I0913 23:27:37.790276   13384 out.go:177]   - Using image ghcr.io/helm/tiller:v2.17.0
	I0913 23:27:37.791620   13384 addons.go:431] installing /etc/kubernetes/addons/helm-tiller-dp.yaml
	I0913 23:27:37.791642   13384 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/helm-tiller-dp.yaml (2422 bytes)
	I0913 23:27:37.791698   13384 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-794116
	I0913 23:27:37.796894   13384 out.go:177]   - Using image docker.io/volcanosh/vc-scheduler:v1.9.0
	I0913 23:27:37.798635   13384 out.go:177]   - Using image docker.io/volcanosh/vc-webhook-manager:v1.9.0
	I0913 23:27:37.799803   13384 out.go:177]   - Using image registry.k8s.io/sig-storage/snapshot-controller:v6.1.0
	I0913 23:27:37.801047   13384 out.go:177]   - Using image docker.io/volcanosh/vc-controller-manager:v1.9.0
	I0913 23:27:37.801050   13384 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml
	I0913 23:27:37.801118   13384 ssh_runner.go:362] scp volumesnapshots/csi-hostpath-snapshotclass.yaml --> /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml (934 bytes)
	I0913 23:27:37.801189   13384 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-794116
	I0913 23:27:37.803536   13384 addons.go:431] installing /etc/kubernetes/addons/volcano-deployment.yaml
	I0913 23:27:37.803558   13384 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/volcano-deployment.yaml (434001 bytes)
	I0913 23:27:37.803633   13384 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-794116
	I0913 23:27:37.817699   13384 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/19640-5233/.minikube/machines/addons-794116/id_rsa Username:docker}
	I0913 23:27:37.817772   13384 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/19640-5233/.minikube/machines/addons-794116/id_rsa Username:docker}
	I0913 23:27:37.819268   13384 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/19640-5233/.minikube/machines/addons-794116/id_rsa Username:docker}
	I0913 23:27:37.819537   13384 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/19640-5233/.minikube/machines/addons-794116/id_rsa Username:docker}
	I0913 23:27:37.822010   13384 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/19640-5233/.minikube/machines/addons-794116/id_rsa Username:docker}
	I0913 23:27:37.823296   13384 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I0913 23:27:37.823314   13384 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0913 23:27:37.823407   13384 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-794116
	I0913 23:27:37.823431   13384 out.go:177]   - Using image docker.io/busybox:stable
	I0913 23:27:37.824729   13384 out.go:177]   - Using image docker.io/rancher/local-path-provisioner:v0.0.22
	I0913 23:27:37.826218   13384 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner-rancher.yaml
	I0913 23:27:37.826243   13384 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner-rancher.yaml (3113 bytes)
	I0913 23:27:37.826301   13384 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-794116
	I0913 23:27:37.828746   13384 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/19640-5233/.minikube/machines/addons-794116/id_rsa Username:docker}
	I0913 23:27:37.828953   13384 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/19640-5233/.minikube/machines/addons-794116/id_rsa Username:docker}
	I0913 23:27:37.836478   13384 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/19640-5233/.minikube/machines/addons-794116/id_rsa Username:docker}
	I0913 23:27:37.837330   13384 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/19640-5233/.minikube/machines/addons-794116/id_rsa Username:docker}
	I0913 23:27:37.838050   13384 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/19640-5233/.minikube/machines/addons-794116/id_rsa Username:docker}
	I0913 23:27:37.850558   13384 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.49.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I0913 23:27:37.851143   13384 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/19640-5233/.minikube/machines/addons-794116/id_rsa Username:docker}
	W0913 23:27:37.854567   13384 sshutil.go:64] dial failure (will retry): ssh: handshake failed: EOF
	I0913 23:27:37.854595   13384 retry.go:31] will retry after 256.347169ms: ssh: handshake failed: EOF
	I0913 23:27:37.867783   13384 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/19640-5233/.minikube/machines/addons-794116/id_rsa Username:docker}
	I0913 23:27:37.869852   13384 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0913 23:27:38.072337   13384 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/nvidia-device-plugin.yaml
	I0913 23:27:38.149990   13384 addons.go:431] installing /etc/kubernetes/addons/yakd-sa.yaml
	I0913 23:27:38.150073   13384 ssh_runner.go:362] scp yakd/yakd-sa.yaml --> /etc/kubernetes/addons/yakd-sa.yaml (247 bytes)
	I0913 23:27:38.247457   13384 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml
	I0913 23:27:38.248762   13384 addons.go:431] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml
	I0913 23:27:38.248786   13384 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshotclasses.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml (6471 bytes)
	I0913 23:27:38.252094   13384 addons.go:431] installing /etc/kubernetes/addons/yakd-crb.yaml
	I0913 23:27:38.252167   13384 ssh_runner.go:362] scp yakd/yakd-crb.yaml --> /etc/kubernetes/addons/yakd-crb.yaml (422 bytes)
	I0913 23:27:38.343785   13384 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0913 23:27:38.345035   13384 addons.go:431] installing /etc/kubernetes/addons/ig-serviceaccount.yaml
	I0913 23:27:38.345061   13384 ssh_runner.go:362] scp inspektor-gadget/ig-serviceaccount.yaml --> /etc/kubernetes/addons/ig-serviceaccount.yaml (80 bytes)
	I0913 23:27:38.346049   13384 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/volcano-deployment.yaml
	I0913 23:27:38.362969   13384 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml
	I0913 23:27:38.445629   13384 addons.go:431] installing /etc/kubernetes/addons/registry-svc.yaml
	I0913 23:27:38.445722   13384 ssh_runner.go:362] scp registry/registry-svc.yaml --> /etc/kubernetes/addons/registry-svc.yaml (398 bytes)
	I0913 23:27:38.454280   13384 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/deployment.yaml
	I0913 23:27:38.543274   13384 addons.go:431] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml
	I0913 23:27:38.543368   13384 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshotcontents.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml (23126 bytes)
	I0913 23:27:38.551224   13384 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0913 23:27:38.556042   13384 addons.go:431] installing /etc/kubernetes/addons/yakd-svc.yaml
	I0913 23:27:38.556070   13384 ssh_runner.go:362] scp yakd/yakd-svc.yaml --> /etc/kubernetes/addons/yakd-svc.yaml (412 bytes)
	I0913 23:27:38.560508   13384 addons.go:431] installing /etc/kubernetes/addons/helm-tiller-rbac.yaml
	I0913 23:27:38.560530   13384 ssh_runner.go:362] scp helm-tiller/helm-tiller-rbac.yaml --> /etc/kubernetes/addons/helm-tiller-rbac.yaml (1188 bytes)
	I0913 23:27:38.560611   13384 addons.go:431] installing /etc/kubernetes/addons/ig-role.yaml
	I0913 23:27:38.560625   13384 ssh_runner.go:362] scp inspektor-gadget/ig-role.yaml --> /etc/kubernetes/addons/ig-role.yaml (210 bytes)
	I0913 23:27:38.658598   13384 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner-rancher.yaml
	I0913 23:27:38.743596   13384 addons.go:431] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0913 23:27:38.743906   13384 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1907 bytes)
	I0913 23:27:38.862842   13384 addons.go:431] installing /etc/kubernetes/addons/registry-proxy.yaml
	I0913 23:27:38.862930   13384 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-proxy.yaml (947 bytes)
	I0913 23:27:38.943055   13384 addons.go:431] installing /etc/kubernetes/addons/yakd-dp.yaml
	I0913 23:27:38.943127   13384 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/yakd-dp.yaml (2017 bytes)
	I0913 23:27:38.950084   13384 addons.go:431] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml
	I0913 23:27:38.950158   13384 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshots.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml (19582 bytes)
	I0913 23:27:38.960211   13384 addons.go:431] installing /etc/kubernetes/addons/rbac-hostpath.yaml
	I0913 23:27:38.960240   13384 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-hostpath.yaml --> /etc/kubernetes/addons/rbac-hostpath.yaml (4266 bytes)
	I0913 23:27:39.155190   13384 addons.go:431] installing /etc/kubernetes/addons/helm-tiller-svc.yaml
	I0913 23:27:39.155269   13384 ssh_runner.go:362] scp helm-tiller/helm-tiller-svc.yaml --> /etc/kubernetes/addons/helm-tiller-svc.yaml (951 bytes)
	I0913 23:27:39.167354   13384 addons.go:431] installing /etc/kubernetes/addons/ig-rolebinding.yaml
	I0913 23:27:39.167444   13384 ssh_runner.go:362] scp inspektor-gadget/ig-rolebinding.yaml --> /etc/kubernetes/addons/ig-rolebinding.yaml (244 bytes)
	I0913 23:27:39.250588   13384 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml
	I0913 23:27:39.262411   13384 addons.go:431] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0913 23:27:39.262512   13384 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0913 23:27:39.344017   13384 ssh_runner.go:235] Completed: sudo systemctl start kubelet: (1.474095855s)
	I0913 23:27:39.345075   13384 node_ready.go:35] waiting up to 6m0s for node "addons-794116" to be "Ready" ...
	I0913 23:27:39.345350   13384 ssh_runner.go:235] Completed: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.49.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -": (1.493299023s)
	I0913 23:27:39.345380   13384 start.go:971] {"host.minikube.internal": 192.168.49.1} host record injected into CoreDNS's ConfigMap
	I0913 23:27:39.356593   13384 node_ready.go:49] node "addons-794116" has status "Ready":"True"
	I0913 23:27:39.356624   13384 node_ready.go:38] duration metric: took 11.504372ms for node "addons-794116" to be "Ready" ...
	I0913 23:27:39.356635   13384 pod_ready.go:36] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0913 23:27:39.357410   13384 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/yakd-ns.yaml -f /etc/kubernetes/addons/yakd-sa.yaml -f /etc/kubernetes/addons/yakd-crb.yaml -f /etc/kubernetes/addons/yakd-svc.yaml -f /etc/kubernetes/addons/yakd-dp.yaml
	I0913 23:27:39.369751   13384 pod_ready.go:79] waiting up to 6m0s for pod "coredns-7c65d6cfc9-qq29j" in "kube-system" namespace to be "Ready" ...
	I0913 23:27:39.746203   13384 addons.go:431] installing /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml
	I0913 23:27:39.746238   13384 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-health-monitor-controller.yaml --> /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml (3038 bytes)
	I0913 23:27:39.746915   13384 addons.go:431] installing /etc/kubernetes/addons/ig-clusterrole.yaml
	I0913 23:27:39.746981   13384 ssh_runner.go:362] scp inspektor-gadget/ig-clusterrole.yaml --> /etc/kubernetes/addons/ig-clusterrole.yaml (1485 bytes)
	I0913 23:27:39.756206   13384 addons.go:431] installing /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml
	I0913 23:27:39.756233   13384 ssh_runner.go:362] scp volumesnapshots/rbac-volume-snapshot-controller.yaml --> /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml (3545 bytes)
	I0913 23:27:39.848351   13384 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/nvidia-device-plugin.yaml: (1.775979301s)
	I0913 23:27:39.850257   13384 kapi.go:214] "coredns" deployment in "kube-system" namespace and "addons-794116" context rescaled to 1 replicas
	I0913 23:27:40.044754   13384 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/helm-tiller-dp.yaml -f /etc/kubernetes/addons/helm-tiller-rbac.yaml -f /etc/kubernetes/addons/helm-tiller-svc.yaml
	I0913 23:27:40.052732   13384 addons.go:431] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0913 23:27:40.052816   13384 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0913 23:27:40.258859   13384 addons.go:431] installing /etc/kubernetes/addons/rbac-external-provisioner.yaml
	I0913 23:27:40.258954   13384 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-provisioner.yaml --> /etc/kubernetes/addons/rbac-external-provisioner.yaml (4442 bytes)
	I0913 23:27:40.351218   13384 addons.go:431] installing /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I0913 23:27:40.351249   13384 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml (1475 bytes)
	I0913 23:27:40.353274   13384 addons.go:431] installing /etc/kubernetes/addons/ig-clusterrolebinding.yaml
	I0913 23:27:40.353298   13384 ssh_runner.go:362] scp inspektor-gadget/ig-clusterrolebinding.yaml --> /etc/kubernetes/addons/ig-clusterrolebinding.yaml (274 bytes)
	I0913 23:27:40.653150   13384 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0913 23:27:40.746269   13384 addons.go:431] installing /etc/kubernetes/addons/ig-crd.yaml
	I0913 23:27:40.746362   13384 ssh_runner.go:362] scp inspektor-gadget/ig-crd.yaml --> /etc/kubernetes/addons/ig-crd.yaml (5216 bytes)
	I0913 23:27:41.042272   13384 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I0913 23:27:41.055140   13384 addons.go:431] installing /etc/kubernetes/addons/rbac-external-resizer.yaml
	I0913 23:27:41.055237   13384 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-resizer.yaml --> /etc/kubernetes/addons/rbac-external-resizer.yaml (2943 bytes)
	I0913 23:27:41.447335   13384 pod_ready.go:103] pod "coredns-7c65d6cfc9-qq29j" in "kube-system" namespace has status "Ready":"False"
	I0913 23:27:41.742437   13384 addons.go:431] installing /etc/kubernetes/addons/ig-daemonset.yaml
	I0913 23:27:41.742525   13384 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-daemonset.yaml (7735 bytes)
	I0913 23:27:41.754099   13384 addons.go:431] installing /etc/kubernetes/addons/rbac-external-snapshotter.yaml
	I0913 23:27:41.754183   13384 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-snapshotter.yaml --> /etc/kubernetes/addons/rbac-external-snapshotter.yaml (3149 bytes)
	I0913 23:27:42.165333   13384 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/ig-namespace.yaml -f /etc/kubernetes/addons/ig-serviceaccount.yaml -f /etc/kubernetes/addons/ig-role.yaml -f /etc/kubernetes/addons/ig-rolebinding.yaml -f /etc/kubernetes/addons/ig-clusterrole.yaml -f /etc/kubernetes/addons/ig-clusterrolebinding.yaml -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-daemonset.yaml
	I0913 23:27:42.246037   13384 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-attacher.yaml
	I0913 23:27:42.246063   13384 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-attacher.yaml (2143 bytes)
	I0913 23:27:42.965366   13384 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml
	I0913 23:27:42.965434   13384 ssh_runner.go:362] scp csi-hostpath-driver/deploy/csi-hostpath-driverinfo.yaml --> /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml (1274 bytes)
	I0913 23:27:43.255238   13384 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-plugin.yaml
	I0913 23:27:43.255278   13384 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-plugin.yaml (8201 bytes)
	I0913 23:27:43.448883   13384 pod_ready.go:103] pod "coredns-7c65d6cfc9-qq29j" in "kube-system" namespace has status "Ready":"False"
	I0913 23:27:43.458924   13384 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-resizer.yaml
	I0913 23:27:43.459008   13384 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-resizer.yaml (2191 bytes)
	I0913 23:27:43.965948   13384 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
	I0913 23:27:43.965973   13384 ssh_runner.go:362] scp csi-hostpath-driver/deploy/csi-hostpath-storageclass.yaml --> /etc/kubernetes/addons/csi-hostpath-storageclass.yaml (846 bytes)
	I0913 23:27:44.651471   13384 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
	I0913 23:27:44.757928   13384 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_application_credentials.json (162 bytes)
	I0913 23:27:44.758054   13384 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-794116
	I0913 23:27:44.782155   13384 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/19640-5233/.minikube/machines/addons-794116/id_rsa Username:docker}
	I0913 23:27:45.450906   13384 pod_ready.go:103] pod "coredns-7c65d6cfc9-qq29j" in "kube-system" namespace has status "Ready":"False"
	I0913 23:27:45.954437   13384 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_cloud_project (12 bytes)
	I0913 23:27:46.355200   13384 addons.go:234] Setting addon gcp-auth=true in "addons-794116"
	I0913 23:27:46.355287   13384 host.go:66] Checking if "addons-794116" exists ...
	I0913 23:27:46.355928   13384 cli_runner.go:164] Run: docker container inspect addons-794116 --format={{.State.Status}}
	I0913 23:27:46.375782   13384 ssh_runner.go:195] Run: cat /var/lib/minikube/google_application_credentials.json
	I0913 23:27:46.375826   13384 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-794116
	I0913 23:27:46.393110   13384 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/19640-5233/.minikube/machines/addons-794116/id_rsa Username:docker}
	I0913 23:27:47.453222   13384 pod_ready.go:103] pod "coredns-7c65d6cfc9-qq29j" in "kube-system" namespace has status "Ready":"False"
	I0913 23:27:47.662003   13384 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml: (9.414505407s)
	I0913 23:27:47.662041   13384 addons.go:475] Verifying addon ingress=true in "addons-794116"
	I0913 23:27:47.662320   13384 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (9.318503642s)
	I0913 23:27:47.664104   13384 out.go:177] * Verifying ingress addon...
	I0913 23:27:47.667942   13384 kapi.go:75] Waiting for pod with label "app.kubernetes.io/name=ingress-nginx" in ns "ingress-nginx" ...
	I0913 23:27:47.746826   13384 kapi.go:86] Found 3 Pods for label selector app.kubernetes.io/name=ingress-nginx
	I0913 23:27:47.746910   13384 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0913 23:27:48.248039   13384 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0913 23:27:48.749773   13384 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0913 23:27:49.248152   13384 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0913 23:27:49.752669   13384 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0913 23:27:49.951823   13384 pod_ready.go:103] pod "coredns-7c65d6cfc9-qq29j" in "kube-system" namespace has status "Ready":"False"
	I0913 23:27:50.252118   13384 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0913 23:27:50.456263   13384 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/volcano-deployment.yaml: (12.110175552s)
	I0913 23:27:50.456440   13384 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml: (12.093433145s)
	I0913 23:27:50.456533   13384 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/deployment.yaml: (12.002168223s)
	I0913 23:27:50.456621   13384 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (11.905374564s)
	I0913 23:27:50.456707   13384 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner-rancher.yaml: (11.798021232s)
	I0913 23:27:50.456980   13384 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml: (11.206260979s)
	I0913 23:27:50.457044   13384 addons.go:475] Verifying addon registry=true in "addons-794116"
	I0913 23:27:50.457444   13384 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/yakd-ns.yaml -f /etc/kubernetes/addons/yakd-sa.yaml -f /etc/kubernetes/addons/yakd-crb.yaml -f /etc/kubernetes/addons/yakd-svc.yaml -f /etc/kubernetes/addons/yakd-dp.yaml: (11.100002087s)
	I0913 23:27:50.457666   13384 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/helm-tiller-dp.yaml -f /etc/kubernetes/addons/helm-tiller-rbac.yaml -f /etc/kubernetes/addons/helm-tiller-svc.yaml: (10.412830848s)
	I0913 23:27:50.457788   13384 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (9.804533523s)
	I0913 23:27:50.458113   13384 addons.go:475] Verifying addon metrics-server=true in "addons-794116"
	I0913 23:27:50.457881   13384 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (9.415476021s)
	I0913 23:27:50.457959   13384 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/ig-namespace.yaml -f /etc/kubernetes/addons/ig-serviceaccount.yaml -f /etc/kubernetes/addons/ig-role.yaml -f /etc/kubernetes/addons/ig-rolebinding.yaml -f /etc/kubernetes/addons/ig-clusterrole.yaml -f /etc/kubernetes/addons/ig-clusterrolebinding.yaml -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-daemonset.yaml: (8.292530803s)
	W0913 23:27:50.458155   13384 addons.go:457] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
	stdout:
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
	serviceaccount/snapshot-controller created
	clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
	clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
	role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	deployment.apps/snapshot-controller created
	
	stderr:
	error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
	ensure CRDs are installed first
	I0913 23:27:50.458248   13384 retry.go:31] will retry after 183.964038ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
	stdout:
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
	serviceaccount/snapshot-controller created
	clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
	clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
	role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	deployment.apps/snapshot-controller created
	
	stderr:
	error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
	ensure CRDs are installed first
	I0913 23:27:50.459212   13384 out.go:177] * Verifying registry addon...
	I0913 23:27:50.459295   13384 out.go:177] * To access YAKD - Kubernetes Dashboard, wait for Pod to be ready and run the following command:
	
		minikube -p addons-794116 service yakd-dashboard -n yakd-dashboard
	
	I0913 23:27:50.463311   13384 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=registry" in ns "kube-system" ...
	I0913 23:27:50.467289   13384 kapi.go:86] Found 2 Pods for label selector kubernetes.io/minikube-addons=registry
	I0913 23:27:50.467784   13384 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0913 23:27:50.643141   13384 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I0913 23:27:50.747551   13384 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0913 23:27:51.045122   13384 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0913 23:27:51.173277   13384 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0913 23:27:51.466934   13384 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0913 23:27:51.746445   13384 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0913 23:27:51.954682   13384 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml: (7.303107976s)
	I0913 23:27:51.954718   13384 addons.go:475] Verifying addon csi-hostpath-driver=true in "addons-794116"
	I0913 23:27:51.954758   13384 ssh_runner.go:235] Completed: cat /var/lib/minikube/google_application_credentials.json: (5.578946605s)
	I0913 23:27:51.956478   13384 out.go:177] * Verifying csi-hostpath-driver addon...
	I0913 23:27:51.956479   13384 out.go:177]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.4.3
	I0913 23:27:51.961642   13384 out.go:177]   - Using image gcr.io/k8s-minikube/gcp-auth-webhook:v0.1.2
	I0913 23:27:51.962336   13384 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=csi-hostpath-driver" in ns "kube-system" ...
	I0913 23:27:51.963571   13384 addons.go:431] installing /etc/kubernetes/addons/gcp-auth-ns.yaml
	I0913 23:27:51.963630   13384 ssh_runner.go:362] scp gcp-auth/gcp-auth-ns.yaml --> /etc/kubernetes/addons/gcp-auth-ns.yaml (700 bytes)
	I0913 23:27:51.968010   13384 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0913 23:27:51.969874   13384 kapi.go:86] Found 3 Pods for label selector kubernetes.io/minikube-addons=csi-hostpath-driver
	I0913 23:27:51.969897   13384 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0913 23:27:52.062610   13384 addons.go:431] installing /etc/kubernetes/addons/gcp-auth-service.yaml
	I0913 23:27:52.062636   13384 ssh_runner.go:362] scp gcp-auth/gcp-auth-service.yaml --> /etc/kubernetes/addons/gcp-auth-service.yaml (788 bytes)
	I0913 23:27:52.085261   13384 addons.go:431] installing /etc/kubernetes/addons/gcp-auth-webhook.yaml
	I0913 23:27:52.085280   13384 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/gcp-auth-webhook.yaml (5421 bytes)
	I0913 23:27:52.161266   13384 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/gcp-auth-ns.yaml -f /etc/kubernetes/addons/gcp-auth-service.yaml -f /etc/kubernetes/addons/gcp-auth-webhook.yaml
	I0913 23:27:52.172646   13384 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0913 23:27:52.446784   13384 pod_ready.go:98] pod "coredns-7c65d6cfc9-qq29j" in "kube-system" namespace has status phase "Succeeded" (skipping!): {Phase:Succeeded Conditions:[{Type:PodReadyToStartContainers Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2024-09-13 23:27:52 +0000 UTC Reason: Message:} {Type:Initialized Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2024-09-13 23:27:37 +0000 UTC Reason:PodCompleted Message:} {Type:Ready Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2024-09-13 23:27:37 +0000 UTC Reason:PodCompleted Message:} {Type:ContainersReady Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2024-09-13 23:27:37 +0000 UTC Reason:PodCompleted Message:} {Type:PodScheduled Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2024-09-13 23:27:37 +0000 UTC Reason: Message:}] Message: Reason: NominatedNodeName: HostIP:192.168.49.2 HostIPs:[{IP:192.168.49.2}] PodIP:10.244.0.2 PodIPs:[{IP:10.244.0.2}] StartTime:2024-09-13 23:27:37 +0000 UTC InitContainerStatuses:[] ContainerStatuses:[{Name:coredns State:{Waiting:nil Running:nil Terminated:&ContainerStateTerminated{ExitCode:0,Signal:0,Reason:Completed,Message:,StartedAt:2024-09-13 23:27:40 +0000 UTC,FinishedAt:2024-09-13 23:27:51 +0000 UTC,ContainerID:docker://b82c283fcb278732a0b6dee3a9f940d92f30e9b38a7687a2f1759fa8f3bd42c5,}} LastTerminationState:{Waiting:nil Running:nil Terminated:nil} Ready:false RestartCount:0 Image:registry.k8s.io/coredns/coredns:v1.11.3 ImageID:docker-pullable://registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e ContainerID:docker://b82c283fcb278732a0b6dee3a9f940d92f30e9b38a7687a2f1759fa8f3bd42c5 Started:0xc000d7f7d0 AllocatedResources:map[] Resources:nil VolumeMounts:[{Name:config-volume MountPath:/etc/coredns ReadOnly:true RecursiveReadOnly:0xc0006c8ce0} {Name:kube-api-access-bwvpn MountPath:/var/run/secrets/kubernetes.io/serviceaccount ReadOnly:true RecursiveReadOnly:0xc0006c8d00}] User:nil AllocatedResourcesStatus:[]}] QOSClass:Burstable EphemeralContainerStatuses:[] Resize: ResourceClaimStatuses:[]}
	I0913 23:27:52.446817   13384 pod_ready.go:82] duration metric: took 13.077037935s for pod "coredns-7c65d6cfc9-qq29j" in "kube-system" namespace to be "Ready" ...
	E0913 23:27:52.446830   13384 pod_ready.go:67] WaitExtra: waitPodCondition: pod "coredns-7c65d6cfc9-qq29j" in "kube-system" namespace has status phase "Succeeded" (skipping!): {Phase:Succeeded Conditions:[{Type:PodReadyToStartContainers Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2024-09-13 23:27:52 +0000 UTC Reason: Message:} {Type:Initialized Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2024-09-13 23:27:37 +0000 UTC Reason:PodCompleted Message:} {Type:Ready Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2024-09-13 23:27:37 +0000 UTC Reason:PodCompleted Message:} {Type:ContainersReady Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2024-09-13 23:27:37 +0000 UTC Reason:PodCompleted Message:} {Type:PodScheduled Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2024-09-13 23:27:37 +0000 UTC Reason: Message:}] Message: Reason: NominatedNodeName: HostIP:192.168.49.2 HostIPs:[{IP:192.168.49.2}] PodIP:10.244.0.2 PodIPs:[{IP:10.244.0.2}] StartTime:2024-09-13 23:27:37 +0000 UTC InitContainerStatuses:[] ContainerStatuses:[{Name:coredns State:{Waiting:nil Running:nil Terminated:&ContainerStateTerminated{ExitCode:0,Signal:0,Reason:Completed,Message:,StartedAt:2024-09-13 23:27:40 +0000 UTC,FinishedAt:2024-09-13 23:27:51 +0000 UTC,ContainerID:docker://b82c283fcb278732a0b6dee3a9f940d92f30e9b38a7687a2f1759fa8f3bd42c5,}} LastTerminationState:{Waiting:nil Running:nil Terminated:nil} Ready:false RestartCount:0 Image:registry.k8s.io/coredns/coredns:v1.11.3 ImageID:docker-pullable://registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e ContainerID:docker://b82c283fcb278732a0b6dee3a9f940d92f30e9b38a7687a2f1759fa8f3bd42c5 Started:0xc000d7f7d0 AllocatedResources:map[] Resources:nil VolumeMounts:[{Name:config-volume MountPath:/etc/coredns ReadOnly:true RecursiveReadOnly:0xc0006c8ce0} {Name:kube-api-access-bwvpn MountPath:/var/run/secrets/kubernetes.io/serviceaccount ReadOnly:true RecursiveReadOnly:0xc0006c8d00}] User:nil AllocatedResourcesStatus:[]}] QOSClass:Burstable EphemeralContainerStatuses:[] Resize: ResourceClaimStatuses:[]}
	I0913 23:27:52.446842   13384 pod_ready.go:79] waiting up to 6m0s for pod "coredns-7c65d6cfc9-sh6nk" in "kube-system" namespace to be "Ready" ...
	I0913 23:27:52.466815   13384 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0913 23:27:52.467265   13384 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0913 23:27:52.743413   13384 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0913 23:27:52.967897   13384 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0913 23:27:53.044248   13384 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0913 23:27:53.166888   13384 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (2.523692357s)
	I0913 23:27:53.243918   13384 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0913 23:27:53.466892   13384 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0913 23:27:53.566114   13384 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0913 23:27:53.582848   13384 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/gcp-auth-ns.yaml -f /etc/kubernetes/addons/gcp-auth-service.yaml -f /etc/kubernetes/addons/gcp-auth-webhook.yaml: (1.421532449s)
	I0913 23:27:53.585148   13384 addons.go:475] Verifying addon gcp-auth=true in "addons-794116"
	I0913 23:27:53.587641   13384 out.go:177] * Verifying gcp-auth addon...
	I0913 23:27:53.589631   13384 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=gcp-auth" in ns "gcp-auth" ...
	I0913 23:27:53.666419   13384 kapi.go:86] Found 0 Pods for label selector kubernetes.io/minikube-addons=gcp-auth
	I0913 23:27:53.672332   13384 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0913 23:27:53.966315   13384 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0913 23:27:53.966955   13384 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0913 23:27:54.171592   13384 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0913 23:27:54.451990   13384 pod_ready.go:103] pod "coredns-7c65d6cfc9-sh6nk" in "kube-system" namespace has status "Ready":"False"
	I0913 23:27:54.466589   13384 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0913 23:27:54.467195   13384 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0913 23:27:54.694038   13384 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0913 23:27:54.966723   13384 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0913 23:27:54.967037   13384 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0913 23:27:55.172297   13384 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0913 23:27:55.466219   13384 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0913 23:27:55.466674   13384 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0913 23:27:55.672687   13384 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0913 23:27:55.966900   13384 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0913 23:27:55.967296   13384 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0913 23:27:56.172103   13384 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0913 23:27:56.453183   13384 pod_ready.go:103] pod "coredns-7c65d6cfc9-sh6nk" in "kube-system" namespace has status "Ready":"False"
	I0913 23:27:56.466812   13384 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0913 23:27:56.467275   13384 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0913 23:27:56.695487   13384 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0913 23:27:56.966176   13384 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0913 23:27:56.967201   13384 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0913 23:27:57.171414   13384 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0913 23:27:57.467113   13384 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0913 23:27:57.468630   13384 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0913 23:27:57.671704   13384 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0913 23:27:57.966437   13384 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0913 23:27:57.966446   13384 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0913 23:27:58.171769   13384 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0913 23:27:58.466816   13384 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0913 23:27:58.467765   13384 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0913 23:27:58.672281   13384 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0913 23:27:58.952399   13384 pod_ready.go:103] pod "coredns-7c65d6cfc9-sh6nk" in "kube-system" namespace has status "Ready":"False"
	I0913 23:27:58.966344   13384 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0913 23:27:58.966854   13384 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0913 23:27:59.171380   13384 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0913 23:27:59.466902   13384 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0913 23:27:59.467312   13384 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0913 23:27:59.671851   13384 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0913 23:27:59.966539   13384 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0913 23:27:59.966836   13384 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0913 23:28:00.172081   13384 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0913 23:28:00.466707   13384 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0913 23:28:00.466773   13384 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0913 23:28:00.694879   13384 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0913 23:28:00.966423   13384 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0913 23:28:00.967431   13384 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0913 23:28:01.172707   13384 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0913 23:28:01.452052   13384 pod_ready.go:103] pod "coredns-7c65d6cfc9-sh6nk" in "kube-system" namespace has status "Ready":"False"
	I0913 23:28:01.466615   13384 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0913 23:28:01.466825   13384 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0913 23:28:01.694628   13384 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0913 23:28:01.966713   13384 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0913 23:28:01.967085   13384 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0913 23:28:02.171519   13384 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0913 23:28:02.468285   13384 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0913 23:28:02.468791   13384 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0913 23:28:02.671981   13384 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0913 23:28:02.966492   13384 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0913 23:28:02.966585   13384 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0913 23:28:03.171610   13384 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0913 23:28:03.466723   13384 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0913 23:28:03.566149   13384 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0913 23:28:03.671239   13384 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0913 23:28:03.952887   13384 pod_ready.go:103] pod "coredns-7c65d6cfc9-sh6nk" in "kube-system" namespace has status "Ready":"False"
	I0913 23:28:03.966195   13384 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0913 23:28:03.966769   13384 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0913 23:28:04.173788   13384 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0913 23:28:04.467164   13384 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0913 23:28:04.467711   13384 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0913 23:28:04.672326   13384 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0913 23:28:04.966649   13384 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0913 23:28:04.967518   13384 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0913 23:28:05.172524   13384 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0913 23:28:05.466221   13384 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0913 23:28:05.467059   13384 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0913 23:28:05.672126   13384 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0913 23:28:05.953176   13384 pod_ready.go:103] pod "coredns-7c65d6cfc9-sh6nk" in "kube-system" namespace has status "Ready":"False"
	I0913 23:28:05.966709   13384 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0913 23:28:05.967059   13384 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0913 23:28:06.171583   13384 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0913 23:28:06.466344   13384 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0913 23:28:06.466707   13384 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0913 23:28:06.672449   13384 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0913 23:28:06.967916   13384 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0913 23:28:06.969180   13384 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0913 23:28:07.172323   13384 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0913 23:28:07.467450   13384 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0913 23:28:07.468259   13384 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0913 23:28:07.672665   13384 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0913 23:28:07.966196   13384 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0913 23:28:07.966220   13384 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0913 23:28:08.174483   13384 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0913 23:28:08.467136   13384 pod_ready.go:103] pod "coredns-7c65d6cfc9-sh6nk" in "kube-system" namespace has status "Ready":"False"
	I0913 23:28:08.468150   13384 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0913 23:28:08.469056   13384 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0913 23:28:08.696723   13384 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0913 23:28:08.966844   13384 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0913 23:28:08.967137   13384 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0913 23:28:09.172038   13384 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0913 23:28:09.467321   13384 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0913 23:28:09.468755   13384 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0913 23:28:09.672543   13384 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0913 23:28:09.966692   13384 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0913 23:28:09.967363   13384 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0913 23:28:10.172098   13384 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0913 23:28:10.466658   13384 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0913 23:28:10.467201   13384 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0913 23:28:10.672734   13384 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0913 23:28:10.952666   13384 pod_ready.go:103] pod "coredns-7c65d6cfc9-sh6nk" in "kube-system" namespace has status "Ready":"False"
	I0913 23:28:10.967020   13384 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0913 23:28:10.967085   13384 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0913 23:28:11.172305   13384 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0913 23:28:11.466625   13384 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0913 23:28:11.467210   13384 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0913 23:28:11.671890   13384 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0913 23:28:11.966527   13384 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0913 23:28:11.966847   13384 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0913 23:28:12.172326   13384 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0913 23:28:12.466497   13384 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0913 23:28:12.467124   13384 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0913 23:28:12.672753   13384 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0913 23:28:12.966327   13384 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0913 23:28:12.966590   13384 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0913 23:28:13.172197   13384 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0913 23:28:13.452335   13384 pod_ready.go:103] pod "coredns-7c65d6cfc9-sh6nk" in "kube-system" namespace has status "Ready":"False"
	I0913 23:28:13.466728   13384 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0913 23:28:13.467233   13384 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0913 23:28:13.672722   13384 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0913 23:28:13.952994   13384 pod_ready.go:93] pod "coredns-7c65d6cfc9-sh6nk" in "kube-system" namespace has status "Ready":"True"
	I0913 23:28:13.953020   13384 pod_ready.go:82] duration metric: took 21.506168334s for pod "coredns-7c65d6cfc9-sh6nk" in "kube-system" namespace to be "Ready" ...
	I0913 23:28:13.953034   13384 pod_ready.go:79] waiting up to 6m0s for pod "etcd-addons-794116" in "kube-system" namespace to be "Ready" ...
	I0913 23:28:13.957879   13384 pod_ready.go:93] pod "etcd-addons-794116" in "kube-system" namespace has status "Ready":"True"
	I0913 23:28:13.957899   13384 pod_ready.go:82] duration metric: took 4.857968ms for pod "etcd-addons-794116" in "kube-system" namespace to be "Ready" ...
	I0913 23:28:13.957912   13384 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-addons-794116" in "kube-system" namespace to be "Ready" ...
	I0913 23:28:13.962429   13384 pod_ready.go:93] pod "kube-apiserver-addons-794116" in "kube-system" namespace has status "Ready":"True"
	I0913 23:28:13.962477   13384 pod_ready.go:82] duration metric: took 4.531205ms for pod "kube-apiserver-addons-794116" in "kube-system" namespace to be "Ready" ...
	I0913 23:28:13.962486   13384 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-addons-794116" in "kube-system" namespace to be "Ready" ...
	I0913 23:28:13.966269   13384 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0913 23:28:13.966511   13384 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0913 23:28:13.966903   13384 pod_ready.go:93] pod "kube-controller-manager-addons-794116" in "kube-system" namespace has status "Ready":"True"
	I0913 23:28:13.966924   13384 pod_ready.go:82] duration metric: took 4.432041ms for pod "kube-controller-manager-addons-794116" in "kube-system" namespace to be "Ready" ...
	I0913 23:28:13.966934   13384 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-ssdhx" in "kube-system" namespace to be "Ready" ...
	I0913 23:28:13.971456   13384 pod_ready.go:93] pod "kube-proxy-ssdhx" in "kube-system" namespace has status "Ready":"True"
	I0913 23:28:13.971480   13384 pod_ready.go:82] duration metric: took 4.535922ms for pod "kube-proxy-ssdhx" in "kube-system" namespace to be "Ready" ...
	I0913 23:28:13.971488   13384 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-addons-794116" in "kube-system" namespace to be "Ready" ...
	I0913 23:28:14.172607   13384 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0913 23:28:14.350670   13384 pod_ready.go:93] pod "kube-scheduler-addons-794116" in "kube-system" namespace has status "Ready":"True"
	I0913 23:28:14.350698   13384 pod_ready.go:82] duration metric: took 379.201694ms for pod "kube-scheduler-addons-794116" in "kube-system" namespace to be "Ready" ...
	I0913 23:28:14.350712   13384 pod_ready.go:79] waiting up to 6m0s for pod "nvidia-device-plugin-daemonset-4bdd4" in "kube-system" namespace to be "Ready" ...
	I0913 23:28:14.466615   13384 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0913 23:28:14.467261   13384 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0913 23:28:14.672230   13384 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0913 23:28:14.750811   13384 pod_ready.go:93] pod "nvidia-device-plugin-daemonset-4bdd4" in "kube-system" namespace has status "Ready":"True"
	I0913 23:28:14.750836   13384 pod_ready.go:82] duration metric: took 400.116738ms for pod "nvidia-device-plugin-daemonset-4bdd4" in "kube-system" namespace to be "Ready" ...
	I0913 23:28:14.750845   13384 pod_ready.go:39] duration metric: took 35.394197309s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0913 23:28:14.750864   13384 api_server.go:52] waiting for apiserver process to appear ...
	I0913 23:28:14.750921   13384 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0913 23:28:14.765020   13384 api_server.go:72] duration metric: took 37.107931497s to wait for apiserver process to appear ...
	I0913 23:28:14.765045   13384 api_server.go:88] waiting for apiserver healthz status ...
	I0913 23:28:14.765065   13384 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I0913 23:28:14.768727   13384 api_server.go:279] https://192.168.49.2:8443/healthz returned 200:
	ok
	I0913 23:28:14.769640   13384 api_server.go:141] control plane version: v1.31.1
	I0913 23:28:14.769665   13384 api_server.go:131] duration metric: took 4.613297ms to wait for apiserver health ...
	I0913 23:28:14.769679   13384 system_pods.go:43] waiting for kube-system pods to appear ...
	I0913 23:28:14.956404   13384 system_pods.go:59] 18 kube-system pods found
	I0913 23:28:14.956437   13384 system_pods.go:61] "coredns-7c65d6cfc9-sh6nk" [8e308f00-c8d6-4392-a336-8615ac072fa0] Running
	I0913 23:28:14.956446   13384 system_pods.go:61] "csi-hostpath-attacher-0" [df4d5c0f-83db-4b83-bd18-5a59a20aa374] Pending / Ready:ContainersNotReady (containers with unready status: [csi-attacher]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-attacher])
	I0913 23:28:14.956453   13384 system_pods.go:61] "csi-hostpath-resizer-0" [07b18f2d-e4b3-465b-9d9e-6a498955529c] Pending / Ready:ContainersNotReady (containers with unready status: [csi-resizer]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-resizer])
	I0913 23:28:14.956460   13384 system_pods.go:61] "csi-hostpathplugin-bfdr2" [0f066616-a108-43e1-b9aa-34f7e860b2c0] Pending / Ready:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter])
	I0913 23:28:14.956464   13384 system_pods.go:61] "etcd-addons-794116" [6d616382-87d8-4323-bd84-5162702f5c13] Running
	I0913 23:28:14.956471   13384 system_pods.go:61] "kube-apiserver-addons-794116" [ac1afe8c-5d7d-4aa9-901b-98836a7ee2f2] Running
	I0913 23:28:14.956475   13384 system_pods.go:61] "kube-controller-manager-addons-794116" [f1b2fabb-82ec-4c87-8c64-c2abc548f80f] Running
	I0913 23:28:14.956481   13384 system_pods.go:61] "kube-ingress-dns-minikube" [bc6d8257-6483-4171-a6be-d6104a246881] Running
	I0913 23:28:14.956484   13384 system_pods.go:61] "kube-proxy-ssdhx" [bc334323-1000-4d4a-924e-e123e244817d] Running
	I0913 23:28:14.956488   13384 system_pods.go:61] "kube-scheduler-addons-794116" [4d927762-586d-43cf-b3f9-f429702cbe44] Running
	I0913 23:28:14.956493   13384 system_pods.go:61] "metrics-server-84c5f94fbc-nvvcp" [0d109fff-d448-40b7-8f31-d74ccc5dc0a2] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0913 23:28:14.956499   13384 system_pods.go:61] "nvidia-device-plugin-daemonset-4bdd4" [02179234-f14e-40d1-ad09-9dfc38705284] Running
	I0913 23:28:14.956504   13384 system_pods.go:61] "registry-66c9cd494c-gfmv4" [0f09071c-4485-4e48-a170-d531d56fd35c] Pending / Ready:ContainersNotReady (containers with unready status: [registry]) / ContainersReady:ContainersNotReady (containers with unready status: [registry])
	I0913 23:28:14.956508   13384 system_pods.go:61] "registry-proxy-chxww" [f8ed4610-be7a-4b55-bcd6-dbb3920e9ff3] Pending / Ready:ContainersNotReady (containers with unready status: [registry-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-proxy])
	I0913 23:28:14.956517   13384 system_pods.go:61] "snapshot-controller-56fcc65765-hzxxc" [803d5cfe-d912-4c76-a86e-a7c08f9ee8f5] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I0913 23:28:14.956525   13384 system_pods.go:61] "snapshot-controller-56fcc65765-s92x7" [a96a3d64-0e82-4965-b116-5d8b96181760] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I0913 23:28:14.956529   13384 system_pods.go:61] "storage-provisioner" [30354958-21c2-4f13-af4f-fcce4f4a7cde] Running
	I0913 23:28:14.956534   13384 system_pods.go:61] "tiller-deploy-b48cc5f79-cx8kt" [ac846881-5d31-4bc5-8680-f2742edc0d2f] Pending / Ready:ContainersNotReady (containers with unready status: [tiller]) / ContainersReady:ContainersNotReady (containers with unready status: [tiller])
	I0913 23:28:14.956541   13384 system_pods.go:74] duration metric: took 186.857118ms to wait for pod list to return data ...
	I0913 23:28:14.956551   13384 default_sa.go:34] waiting for default service account to be created ...
	I0913 23:28:14.966684   13384 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0913 23:28:14.967617   13384 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0913 23:28:15.149941   13384 default_sa.go:45] found service account: "default"
	I0913 23:28:15.149967   13384 default_sa.go:55] duration metric: took 193.409779ms for default service account to be created ...
	I0913 23:28:15.149976   13384 system_pods.go:116] waiting for k8s-apps to be running ...
	I0913 23:28:15.172348   13384 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0913 23:28:15.354785   13384 system_pods.go:86] 18 kube-system pods found
	I0913 23:28:15.354814   13384 system_pods.go:89] "coredns-7c65d6cfc9-sh6nk" [8e308f00-c8d6-4392-a336-8615ac072fa0] Running
	I0913 23:28:15.354824   13384 system_pods.go:89] "csi-hostpath-attacher-0" [df4d5c0f-83db-4b83-bd18-5a59a20aa374] Pending / Ready:ContainersNotReady (containers with unready status: [csi-attacher]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-attacher])
	I0913 23:28:15.354831   13384 system_pods.go:89] "csi-hostpath-resizer-0" [07b18f2d-e4b3-465b-9d9e-6a498955529c] Pending / Ready:ContainersNotReady (containers with unready status: [csi-resizer]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-resizer])
	I0913 23:28:15.354839   13384 system_pods.go:89] "csi-hostpathplugin-bfdr2" [0f066616-a108-43e1-b9aa-34f7e860b2c0] Pending / Ready:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter])
	I0913 23:28:15.354843   13384 system_pods.go:89] "etcd-addons-794116" [6d616382-87d8-4323-bd84-5162702f5c13] Running
	I0913 23:28:15.354848   13384 system_pods.go:89] "kube-apiserver-addons-794116" [ac1afe8c-5d7d-4aa9-901b-98836a7ee2f2] Running
	I0913 23:28:15.354852   13384 system_pods.go:89] "kube-controller-manager-addons-794116" [f1b2fabb-82ec-4c87-8c64-c2abc548f80f] Running
	I0913 23:28:15.354857   13384 system_pods.go:89] "kube-ingress-dns-minikube" [bc6d8257-6483-4171-a6be-d6104a246881] Running
	I0913 23:28:15.354860   13384 system_pods.go:89] "kube-proxy-ssdhx" [bc334323-1000-4d4a-924e-e123e244817d] Running
	I0913 23:28:15.354864   13384 system_pods.go:89] "kube-scheduler-addons-794116" [4d927762-586d-43cf-b3f9-f429702cbe44] Running
	I0913 23:28:15.354871   13384 system_pods.go:89] "metrics-server-84c5f94fbc-nvvcp" [0d109fff-d448-40b7-8f31-d74ccc5dc0a2] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0913 23:28:15.354874   13384 system_pods.go:89] "nvidia-device-plugin-daemonset-4bdd4" [02179234-f14e-40d1-ad09-9dfc38705284] Running
	I0913 23:28:15.354880   13384 system_pods.go:89] "registry-66c9cd494c-gfmv4" [0f09071c-4485-4e48-a170-d531d56fd35c] Pending / Ready:ContainersNotReady (containers with unready status: [registry]) / ContainersReady:ContainersNotReady (containers with unready status: [registry])
	I0913 23:28:15.354888   13384 system_pods.go:89] "registry-proxy-chxww" [f8ed4610-be7a-4b55-bcd6-dbb3920e9ff3] Pending / Ready:ContainersNotReady (containers with unready status: [registry-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-proxy])
	I0913 23:28:15.354896   13384 system_pods.go:89] "snapshot-controller-56fcc65765-hzxxc" [803d5cfe-d912-4c76-a86e-a7c08f9ee8f5] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I0913 23:28:15.354904   13384 system_pods.go:89] "snapshot-controller-56fcc65765-s92x7" [a96a3d64-0e82-4965-b116-5d8b96181760] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I0913 23:28:15.354910   13384 system_pods.go:89] "storage-provisioner" [30354958-21c2-4f13-af4f-fcce4f4a7cde] Running
	I0913 23:28:15.354918   13384 system_pods.go:89] "tiller-deploy-b48cc5f79-cx8kt" [ac846881-5d31-4bc5-8680-f2742edc0d2f] Pending / Ready:ContainersNotReady (containers with unready status: [tiller]) / ContainersReady:ContainersNotReady (containers with unready status: [tiller])
	I0913 23:28:15.354924   13384 system_pods.go:126] duration metric: took 204.942662ms to wait for k8s-apps to be running ...
	I0913 23:28:15.354933   13384 system_svc.go:44] waiting for kubelet service to be running ....
	I0913 23:28:15.354982   13384 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0913 23:28:15.366152   13384 system_svc.go:56] duration metric: took 11.20851ms WaitForService to wait for kubelet
	I0913 23:28:15.366182   13384 kubeadm.go:582] duration metric: took 37.709098248s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0913 23:28:15.366204   13384 node_conditions.go:102] verifying NodePressure condition ...
	I0913 23:28:15.466312   13384 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0913 23:28:15.467130   13384 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0913 23:28:15.551068   13384 node_conditions.go:122] node storage ephemeral capacity is 304681132Ki
	I0913 23:28:15.551093   13384 node_conditions.go:123] node cpu capacity is 8
	I0913 23:28:15.551105   13384 node_conditions.go:105] duration metric: took 184.895514ms to run NodePressure ...
	I0913 23:28:15.551115   13384 start.go:241] waiting for startup goroutines ...
	I0913 23:28:15.672383   13384 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0913 23:28:15.966946   13384 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0913 23:28:15.968184   13384 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0913 23:28:16.171884   13384 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0913 23:28:16.467057   13384 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0913 23:28:16.467258   13384 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0913 23:28:16.672189   13384 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0913 23:28:16.966596   13384 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0913 23:28:16.967033   13384 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0913 23:28:17.172866   13384 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0913 23:28:17.466565   13384 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0913 23:28:17.467377   13384 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0913 23:28:17.672833   13384 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0913 23:28:17.966772   13384 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0913 23:28:17.967280   13384 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0913 23:28:18.173076   13384 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0913 23:28:18.466299   13384 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0913 23:28:18.467175   13384 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0913 23:28:18.672182   13384 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0913 23:28:18.966282   13384 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0913 23:28:18.966490   13384 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0913 23:28:19.171605   13384 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0913 23:28:19.467587   13384 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0913 23:28:19.467850   13384 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0913 23:28:19.672524   13384 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0913 23:28:19.966609   13384 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0913 23:28:19.966886   13384 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0913 23:28:20.171952   13384 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0913 23:28:20.466189   13384 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0913 23:28:20.466408   13384 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0913 23:28:20.672089   13384 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0913 23:28:20.966491   13384 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0913 23:28:20.966734   13384 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0913 23:28:21.171733   13384 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0913 23:28:21.466750   13384 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0913 23:28:21.467672   13384 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0913 23:28:21.694746   13384 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0913 23:28:21.966592   13384 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0913 23:28:21.967102   13384 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0913 23:28:22.172596   13384 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0913 23:28:22.466906   13384 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0913 23:28:22.466962   13384 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0913 23:28:22.672183   13384 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0913 23:28:23.017799   13384 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0913 23:28:23.018519   13384 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0913 23:28:23.172258   13384 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0913 23:28:23.466449   13384 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0913 23:28:23.466566   13384 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0913 23:28:23.672146   13384 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0913 23:28:23.966952   13384 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0913 23:28:23.967142   13384 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0913 23:28:24.171236   13384 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0913 23:28:24.466347   13384 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0913 23:28:24.466515   13384 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0913 23:28:24.695168   13384 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0913 23:28:24.966244   13384 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0913 23:28:24.966834   13384 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0913 23:28:25.171556   13384 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0913 23:28:25.466330   13384 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0913 23:28:25.466871   13384 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0913 23:28:25.671607   13384 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0913 23:28:25.967227   13384 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0913 23:28:25.967340   13384 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0913 23:28:26.172283   13384 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0913 23:28:26.467063   13384 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0913 23:28:26.467433   13384 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0913 23:28:26.672236   13384 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0913 23:28:26.966636   13384 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0913 23:28:26.966954   13384 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0913 23:28:27.172690   13384 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0913 23:28:27.466469   13384 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0913 23:28:27.466634   13384 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0913 23:28:27.672033   13384 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0913 23:28:27.966832   13384 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0913 23:28:27.967589   13384 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0913 23:28:28.172305   13384 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0913 23:28:28.466479   13384 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0913 23:28:28.466761   13384 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0913 23:28:28.672105   13384 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0913 23:28:28.966644   13384 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0913 23:28:28.967469   13384 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0913 23:28:29.171794   13384 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0913 23:28:29.466361   13384 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0913 23:28:29.466703   13384 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0913 23:28:29.672240   13384 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0913 23:28:29.966684   13384 kapi.go:107] duration metric: took 39.503374263s to wait for kubernetes.io/minikube-addons=registry ...
	I0913 23:28:29.967178   13384 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0913 23:28:30.171962   13384 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0913 23:28:30.466413   13384 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0913 23:28:30.672503   13384 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0913 23:28:30.969178   13384 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0913 23:28:31.172457   13384 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0913 23:28:31.467095   13384 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0913 23:28:31.671585   13384 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0913 23:28:31.967283   13384 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0913 23:28:32.171842   13384 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0913 23:28:32.466636   13384 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0913 23:28:32.672330   13384 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0913 23:28:32.969766   13384 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0913 23:28:33.174756   13384 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0913 23:28:33.467039   13384 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0913 23:28:33.673171   13384 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0913 23:28:33.967164   13384 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0913 23:28:34.172408   13384 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0913 23:28:34.466529   13384 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0913 23:28:34.672255   13384 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0913 23:28:34.966832   13384 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0913 23:28:35.171527   13384 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0913 23:28:35.466666   13384 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0913 23:28:35.671921   13384 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0913 23:28:35.966752   13384 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0913 23:28:36.171781   13384 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0913 23:28:36.467692   13384 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0913 23:28:36.672492   13384 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0913 23:28:36.966728   13384 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0913 23:28:37.171646   13384 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0913 23:28:37.466993   13384 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0913 23:28:37.671924   13384 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0913 23:28:37.966122   13384 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0913 23:28:38.172847   13384 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0913 23:28:38.467257   13384 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0913 23:28:38.672914   13384 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0913 23:28:38.966421   13384 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0913 23:28:39.171582   13384 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0913 23:28:39.467296   13384 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0913 23:28:39.672672   13384 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0913 23:28:39.967017   13384 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0913 23:28:40.194606   13384 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0913 23:28:40.467469   13384 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0913 23:28:40.672429   13384 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0913 23:28:40.967010   13384 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0913 23:28:41.203728   13384 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0913 23:28:41.467457   13384 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0913 23:28:41.672300   13384 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0913 23:28:41.967086   13384 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0913 23:28:42.194335   13384 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0913 23:28:42.468346   13384 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0913 23:28:42.673186   13384 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0913 23:28:42.967060   13384 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0913 23:28:43.172635   13384 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0913 23:28:43.466835   13384 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0913 23:28:43.671434   13384 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0913 23:28:43.966799   13384 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0913 23:28:44.172532   13384 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0913 23:28:44.466861   13384 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0913 23:28:44.672156   13384 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0913 23:28:44.966229   13384 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0913 23:28:45.172565   13384 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0913 23:28:45.467074   13384 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0913 23:28:45.671513   13384 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0913 23:28:45.967124   13384 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0913 23:28:46.172532   13384 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0913 23:28:46.467113   13384 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0913 23:28:46.694892   13384 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0913 23:28:46.967363   13384 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0913 23:28:47.171849   13384 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0913 23:28:47.468145   13384 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0913 23:28:47.694340   13384 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0913 23:28:47.966671   13384 kapi.go:107] duration metric: took 56.004334338s to wait for kubernetes.io/minikube-addons=csi-hostpath-driver ...
	I0913 23:28:48.171165   13384 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0913 23:28:48.672036   13384 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0913 23:28:49.172087   13384 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0913 23:28:49.672525   13384 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0913 23:28:50.171437   13384 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0913 23:28:50.694303   13384 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0913 23:28:51.171612   13384 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0913 23:28:51.672641   13384 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0913 23:28:52.172663   13384 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0913 23:28:52.744871   13384 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0913 23:28:53.172399   13384 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0913 23:28:53.672032   13384 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0913 23:28:54.173432   13384 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0913 23:28:54.672527   13384 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0913 23:28:55.172069   13384 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0913 23:28:55.672136   13384 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0913 23:28:56.171879   13384 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0913 23:28:56.694964   13384 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0913 23:28:57.171723   13384 kapi.go:107] duration metric: took 1m9.503780357s to wait for app.kubernetes.io/name=ingress-nginx ...
	I0913 23:29:17.093566   13384 kapi.go:86] Found 1 Pods for label selector kubernetes.io/minikube-addons=gcp-auth
	I0913 23:29:17.093585   13384 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0913 23:29:17.593578   13384 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0913 23:29:18.092500   13384 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0913 23:29:18.593644   13384 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0913 23:29:19.092765   13384 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0913 23:29:19.593275   13384 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0913 23:29:20.093149   13384 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0913 23:29:20.593649   13384 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0913 23:29:21.092904   13384 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0913 23:29:21.594126   13384 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0913 23:29:22.093277   13384 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0913 23:29:22.593373   13384 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0913 23:29:23.093753   13384 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0913 23:29:23.593625   13384 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0913 23:29:24.092639   13384 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0913 23:29:24.593043   13384 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0913 23:29:25.093356   13384 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0913 23:29:25.592944   13384 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0913 23:29:26.093400   13384 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0913 23:29:26.593271   13384 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0913 23:29:27.093730   13384 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0913 23:29:27.593421   13384 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0913 23:29:28.092402   13384 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0913 23:29:28.593684   13384 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0913 23:29:29.093027   13384 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0913 23:29:29.592325   13384 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0913 23:29:30.093266   13384 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0913 23:29:30.593103   13384 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0913 23:29:31.093553   13384 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0913 23:29:31.593832   13384 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0913 23:29:32.093029   13384 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0913 23:29:32.593630   13384 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0913 23:29:33.093081   13384 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0913 23:29:33.593794   13384 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0913 23:29:34.093019   13384 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0913 23:29:34.592695   13384 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0913 23:29:35.092907   13384 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0913 23:29:35.593588   13384 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0913 23:29:36.093382   13384 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0913 23:29:36.593589   13384 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0913 23:29:37.093727   13384 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0913 23:29:37.593509   13384 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0913 23:29:38.093485   13384 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0913 23:29:38.593327   13384 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0913 23:29:39.093883   13384 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0913 23:29:39.593552   13384 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0913 23:29:40.093300   13384 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0913 23:29:40.593406   13384 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0913 23:29:41.093735   13384 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0913 23:29:41.593549   13384 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0913 23:29:42.092848   13384 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0913 23:29:42.593291   13384 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0913 23:29:43.093724   13384 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0913 23:29:43.594038   13384 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0913 23:29:44.093279   13384 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0913 23:29:44.593356   13384 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0913 23:29:45.092925   13384 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0913 23:29:45.592980   13384 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0913 23:29:46.093016   13384 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0913 23:29:46.593208   13384 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0913 23:29:47.093370   13384 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0913 23:29:47.593214   13384 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0913 23:29:48.093375   13384 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0913 23:29:48.593795   13384 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0913 23:29:49.093285   13384 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0913 23:29:49.593130   13384 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0913 23:29:50.092950   13384 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0913 23:29:50.592782   13384 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0913 23:29:51.093670   13384 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0913 23:29:51.594055   13384 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0913 23:29:52.093622   13384 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0913 23:29:52.593387   13384 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0913 23:29:53.093823   13384 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0913 23:29:53.594203   13384 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0913 23:29:54.092433   13384 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0913 23:29:54.593548   13384 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0913 23:29:55.092802   13384 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0913 23:29:55.593872   13384 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0913 23:29:56.093799   13384 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0913 23:29:56.592640   13384 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0913 23:29:57.094191   13384 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0913 23:29:57.593432   13384 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0913 23:29:58.093874   13384 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0913 23:29:58.593889   13384 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0913 23:29:59.093242   13384 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0913 23:29:59.593516   13384 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0913 23:30:00.093162   13384 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0913 23:30:00.593795   13384 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0913 23:30:01.093918   13384 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0913 23:30:01.593252   13384 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0913 23:30:02.094020   13384 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0913 23:30:02.593824   13384 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0913 23:30:03.093939   13384 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0913 23:30:03.592978   13384 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0913 23:30:04.093988   13384 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0913 23:30:04.592921   13384 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0913 23:30:05.093795   13384 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0913 23:30:05.593649   13384 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0913 23:30:06.093662   13384 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0913 23:30:06.593967   13384 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0913 23:30:07.093170   13384 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0913 23:30:07.593309   13384 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0913 23:30:08.093423   13384 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0913 23:30:08.593619   13384 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0913 23:30:09.092971   13384 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0913 23:30:09.592924   13384 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0913 23:30:10.093038   13384 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0913 23:30:10.593116   13384 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0913 23:30:11.093478   13384 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0913 23:30:11.594029   13384 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0913 23:30:12.093584   13384 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0913 23:30:12.593507   13384 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0913 23:30:13.093854   13384 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0913 23:30:13.593864   13384 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0913 23:30:14.093710   13384 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0913 23:30:14.592566   13384 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0913 23:30:15.092829   13384 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0913 23:30:15.592701   13384 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0913 23:30:16.093056   13384 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0913 23:30:16.593269   13384 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0913 23:30:17.093230   13384 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0913 23:30:17.593355   13384 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0913 23:30:18.092565   13384 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0913 23:30:18.592718   13384 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0913 23:30:19.093294   13384 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0913 23:30:19.593179   13384 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0913 23:30:20.093497   13384 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0913 23:30:20.593444   13384 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0913 23:30:21.093872   13384 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0913 23:30:21.593958   13384 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0913 23:30:22.093167   13384 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0913 23:30:22.593396   13384 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0913 23:30:23.093857   13384 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0913 23:30:23.592844   13384 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0913 23:30:24.093632   13384 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0913 23:30:24.593354   13384 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0913 23:30:25.093875   13384 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0913 23:30:25.593635   13384 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0913 23:30:26.093425   13384 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0913 23:30:26.593654   13384 kapi.go:107] duration metric: took 2m33.004020241s to wait for kubernetes.io/minikube-addons=gcp-auth ...
	I0913 23:30:26.596044   13384 out.go:177] * Your GCP credentials will now be mounted into every pod created in the addons-794116 cluster.
	I0913 23:30:26.597791   13384 out.go:177] * If you don't want your credentials mounted into a specific pod, add a label with the `gcp-auth-skip-secret` key to your pod configuration.
	I0913 23:30:26.599672   13384 out.go:177] * If you want existing pods to be mounted with credentials, either recreate them or rerun addons enable with --refresh.
	I0913 23:30:26.601644   13384 out.go:177] * Enabled addons: nvidia-device-plugin, default-storageclass, volcano, ingress-dns, cloud-spanner, storage-provisioner, helm-tiller, metrics-server, inspektor-gadget, yakd, storage-provisioner-rancher, volumesnapshots, registry, csi-hostpath-driver, ingress, gcp-auth
	I0913 23:30:26.603076   13384 addons.go:510] duration metric: took 2m48.945995664s for enable addons: enabled=[nvidia-device-plugin default-storageclass volcano ingress-dns cloud-spanner storage-provisioner helm-tiller metrics-server inspektor-gadget yakd storage-provisioner-rancher volumesnapshots registry csi-hostpath-driver ingress gcp-auth]
	I0913 23:30:26.603123   13384 start.go:246] waiting for cluster config update ...
	I0913 23:30:26.603153   13384 start.go:255] writing updated cluster config ...
	I0913 23:30:26.603436   13384 ssh_runner.go:195] Run: rm -f paused
	I0913 23:30:26.653138   13384 start.go:600] kubectl: 1.31.0, cluster: 1.31.1 (minor skew: 0)
	I0913 23:30:26.655801   13384 out.go:177] * Done! kubectl is now configured to use "addons-794116" cluster and "default" namespace by default
	
	
	==> Docker <==
	Sep 13 23:39:55 addons-794116 cri-dockerd[1607]: time="2024-09-13T23:39:55Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/aa43a0bf38647cd31cf7bbb47cda4559896fc9d07b334d852417ec646dee84db/resolv.conf as [nameserver 10.96.0.10 search default.svc.cluster.local svc.cluster.local cluster.local europe-west1-b.c.k8s-minikube.internal c.k8s-minikube.internal google.internal options ndots:5]"
	Sep 13 23:39:55 addons-794116 cri-dockerd[1607]: time="2024-09-13T23:39:55Z" level=info msg="Stop pulling image ghcr.io/inspektor-gadget/inspektor-gadget:v0.32.0@sha256:03e677e1cf9d2c9bea454e3dbcbcef20b3022e987534a2874eb1abc5bc3e73ec: Status: Image is up to date for ghcr.io/inspektor-gadget/inspektor-gadget@sha256:03e677e1cf9d2c9bea454e3dbcbcef20b3022e987534a2874eb1abc5bc3e73ec"
	Sep 13 23:39:56 addons-794116 dockerd[1342]: time="2024-09-13T23:39:56.678664775Z" level=error msg="stream copy error: reading from a closed fifo"
	Sep 13 23:39:56 addons-794116 dockerd[1342]: time="2024-09-13T23:39:56.678664773Z" level=error msg="stream copy error: reading from a closed fifo"
	Sep 13 23:39:56 addons-794116 dockerd[1342]: time="2024-09-13T23:39:56.680708032Z" level=error msg="Error running exec 3ac0b1e1c4db32508f1cffc972dc7ecdd346bf097322c6c0165076adeada6802 in container: OCI runtime exec failed: exec failed: unable to start container process: error executing setns process: exit status 1: unknown"
	Sep 13 23:39:56 addons-794116 dockerd[1342]: time="2024-09-13T23:39:56.880710502Z" level=info msg="ignoring event" container=6ca504e3289df78e3ab488d3cdda0a3c72017222fbae51d9732fc56e5e211db4 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Sep 13 23:39:58 addons-794116 cri-dockerd[1607]: time="2024-09-13T23:39:58Z" level=info msg="Stop pulling image docker.io/nginx:alpine: Status: Downloaded newer image for nginx:alpine"
	Sep 13 23:39:58 addons-794116 dockerd[1342]: time="2024-09-13T23:39:58.780250801Z" level=info msg="Attempting next endpoint for pull after error: Head \"https://gcr.io/v2/k8s-minikube/busybox/manifests/latest\": unauthorized: authentication failed"
	Sep 13 23:39:58 addons-794116 dockerd[1342]: time="2024-09-13T23:39:58.782470499Z" level=error msg="Handler for POST /v1.43/images/create returned error: Head \"https://gcr.io/v2/k8s-minikube/busybox/manifests/latest\": unauthorized: authentication failed"
	Sep 13 23:39:59 addons-794116 dockerd[1342]: time="2024-09-13T23:39:59.043069620Z" level=info msg="Container failed to exit within 30s of signal 15 - using the force" container=6058bddc21cebec8a04d1cf4b69bb2dec9d441c4a4769f9f05387e3e926e7632
	Sep 13 23:39:59 addons-794116 dockerd[1342]: time="2024-09-13T23:39:59.066019495Z" level=info msg="ignoring event" container=6058bddc21cebec8a04d1cf4b69bb2dec9d441c4a4769f9f05387e3e926e7632 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Sep 13 23:39:59 addons-794116 dockerd[1342]: time="2024-09-13T23:39:59.187885565Z" level=info msg="ignoring event" container=16364e8a8e63689728ebb339aea96afa0198b902c8fd531d154940931ea3ad30 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Sep 13 23:39:59 addons-794116 dockerd[1342]: time="2024-09-13T23:39:59.258397739Z" level=info msg="ignoring event" container=33848f6605e83cf38195fc56e76cafaabb3e7a2dbd83fedde537ca4fefbd6a14 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Sep 13 23:40:05 addons-794116 cri-dockerd[1607]: time="2024-09-13T23:40:05Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/78198a3add2c5dab4c47a24126526a5b088a54405ebedd13f42b51fee27b1b0d/resolv.conf as [nameserver 10.96.0.10 search default.svc.cluster.local svc.cluster.local cluster.local europe-west1-b.c.k8s-minikube.internal c.k8s-minikube.internal google.internal options ndots:5]"
	Sep 13 23:40:05 addons-794116 dockerd[1342]: time="2024-09-13T23:40:05.936540982Z" level=info msg="ignoring event" container=e7e374c7c4d9e57190cf279a772475d580896354e40ef0a54326c9d7e4080bda module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Sep 13 23:40:05 addons-794116 dockerd[1342]: time="2024-09-13T23:40:05.982536255Z" level=info msg="ignoring event" container=ca2d2b720ed70170c7a3af031a9e58770fad934a935c8ca143108d956d9ab37b module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Sep 13 23:40:07 addons-794116 cri-dockerd[1607]: time="2024-09-13T23:40:07Z" level=info msg="Stop pulling image docker.io/kicbase/echo-server:1.0: Status: Downloaded newer image for kicbase/echo-server:1.0"
	Sep 13 23:40:09 addons-794116 dockerd[1342]: time="2024-09-13T23:40:09.858588502Z" level=info msg="Container failed to exit within 2s of signal 15 - using the force" container=0cb2f67069b12047697724e26706d23584334c148f854c2a67e0a66751b00909
	Sep 13 23:40:09 addons-794116 dockerd[1342]: time="2024-09-13T23:40:09.917392095Z" level=info msg="ignoring event" container=0cb2f67069b12047697724e26706d23584334c148f854c2a67e0a66751b00909 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Sep 13 23:40:10 addons-794116 dockerd[1342]: time="2024-09-13T23:40:10.068977079Z" level=info msg="ignoring event" container=deb0d25b87fae9eeb6689cb3d7f93f39779646d54ca64014ed99fd5c4f94766b module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Sep 13 23:40:18 addons-794116 dockerd[1342]: time="2024-09-13T23:40:18.161107350Z" level=info msg="ignoring event" container=5788be0a541ca2b1d19f926c5d649f9cab06dbd0d11f24fc9271dd99da696d63 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Sep 13 23:40:18 addons-794116 dockerd[1342]: time="2024-09-13T23:40:18.666937020Z" level=info msg="ignoring event" container=56416f9da8cd1bb77c3edbfa3d1fea78f0f82f42f14f047a732517c2c86c45cb module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Sep 13 23:40:18 addons-794116 dockerd[1342]: time="2024-09-13T23:40:18.755361685Z" level=info msg="ignoring event" container=8eff0fa59ec2bbd6a61e40dfd6f44066ceadc9694e098cb8097ba6deb04d6811 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Sep 13 23:40:18 addons-794116 dockerd[1342]: time="2024-09-13T23:40:18.824685524Z" level=info msg="ignoring event" container=77881dfebcdf59cbc298dfe797f5171fe9cf40a180707d0d3847915e9adaccaf module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Sep 13 23:40:18 addons-794116 dockerd[1342]: time="2024-09-13T23:40:18.925923982Z" level=info msg="ignoring event" container=24a2f08f38fda7be55c978f3130000e954abaa8f90fcb430f7ebedb60d31496c module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                                        CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	67edd3b012855       kicbase/echo-server@sha256:127ac38a2bb9537b7f252addff209ea6801edcac8a92c8b1104dacd66a583ed6                                  12 seconds ago      Running             hello-world-app           0                   78198a3add2c5       hello-world-app-55bf9c44b4-ml7wb
	d84ee93502a04       nginx@sha256:a5127daff3d6f4606be3100a252419bfa84fd6ee5cd74d0feaca1a5068f97dcf                                                21 seconds ago      Running             nginx                     0                   aa43a0bf38647       nginx
	804d8cbaf01f4       a416a98b71e22                                                                                                                51 seconds ago      Exited              helper-pod                0                   6a11a154763d8       helper-pod-delete-pvc-6b1450aa-8425-40e9-a121-ba8dd1de215e
	be0509575f77e       gcr.io/k8s-minikube/gcp-auth-webhook@sha256:e6c5b3bc32072ea370d34c27836efd11b3519d25bd444c2a8efc339cff0e20fb                 9 minutes ago       Running             gcp-auth                  0                   cf8a10a0ea851       gcp-auth-89d5ffd79-stlrf
	9d523912d9d27       registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:a320a50cc91bd15fd2d6fa6de58bd98c1bd64b9a6f926ce23a600d87043455a3   11 minutes ago      Exited              patch                     0                   892996805ad55       ingress-nginx-admission-patch-lmqxq
	c69b93593535a       registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:a320a50cc91bd15fd2d6fa6de58bd98c1bd64b9a6f926ce23a600d87043455a3   11 minutes ago      Exited              create                    0                   6f122dd6cb61c       ingress-nginx-admission-create-8htpj
	7df3761985c43       6e38f40d628db                                                                                                                12 minutes ago      Running             storage-provisioner       0                   ece6fbfc7e5f1       storage-provisioner
	4838dc100cd2e       c69fa2e9cbf5f                                                                                                                12 minutes ago      Running             coredns                   0                   f3dc729632ace       coredns-7c65d6cfc9-sh6nk
	0c4bc70da4b09       60c005f310ff3                                                                                                                12 minutes ago      Running             kube-proxy                0                   d46e7bea5f9e4       kube-proxy-ssdhx
	da914588a1926       2e96e5913fc06                                                                                                                12 minutes ago      Running             etcd                      0                   f034be59877e2       etcd-addons-794116
	a1fd5e27686b7       6bab7719df100                                                                                                                12 minutes ago      Running             kube-apiserver            0                   7f5a25984aef5       kube-apiserver-addons-794116
	f486296d84ea6       9aa1fad941575                                                                                                                12 minutes ago      Running             kube-scheduler            0                   3082e388bcdaa       kube-scheduler-addons-794116
	d610571bfc7bf       175ffd71cce3d                                                                                                                12 minutes ago      Running             kube-controller-manager   0                   302302b5eafa2       kube-controller-manager-addons-794116
	
	
	==> coredns [4838dc100cd2] <==
	[INFO] 10.244.0.9:43946 - 30013 "A IN registry.kube-system.svc.cluster.local.svc.cluster.local. udp 74 false 512" NXDOMAIN qr,aa,rd 167 0.000117179s
	[INFO] 10.244.0.9:39985 - 58315 "AAAA IN registry.kube-system.svc.cluster.local.cluster.local. udp 70 false 512" NXDOMAIN qr,aa,rd 163 0.000072278s
	[INFO] 10.244.0.9:39985 - 20431 "A IN registry.kube-system.svc.cluster.local.cluster.local. udp 70 false 512" NXDOMAIN qr,aa,rd 163 0.000111573s
	[INFO] 10.244.0.9:60966 - 43401 "A IN registry.kube-system.svc.cluster.local.europe-west1-b.c.k8s-minikube.internal. udp 95 false 512" NXDOMAIN qr,rd,ra 95 0.004888396s
	[INFO] 10.244.0.9:60966 - 48013 "AAAA IN registry.kube-system.svc.cluster.local.europe-west1-b.c.k8s-minikube.internal. udp 95 false 512" NXDOMAIN qr,rd,ra 95 0.006281633s
	[INFO] 10.244.0.9:38352 - 55485 "AAAA IN registry.kube-system.svc.cluster.local.c.k8s-minikube.internal. udp 80 false 512" NXDOMAIN qr,rd,ra 80 0.004319561s
	[INFO] 10.244.0.9:38352 - 64958 "A IN registry.kube-system.svc.cluster.local.c.k8s-minikube.internal. udp 80 false 512" NXDOMAIN qr,rd,ra 80 0.004406885s
	[INFO] 10.244.0.9:43358 - 12163 "A IN registry.kube-system.svc.cluster.local.google.internal. udp 72 false 512" NXDOMAIN qr,rd,ra 72 0.004346011s
	[INFO] 10.244.0.9:43358 - 10630 "AAAA IN registry.kube-system.svc.cluster.local.google.internal. udp 72 false 512" NXDOMAIN qr,rd,ra 72 0.004669968s
	[INFO] 10.244.0.9:42333 - 8006 "AAAA IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 149 0.000062268s
	[INFO] 10.244.0.9:42333 - 35653 "A IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 110 0.000102764s
	[INFO] 10.244.0.26:38147 - 50907 "A IN storage.googleapis.com.gcp-auth.svc.cluster.local. udp 78 false 1232" NXDOMAIN qr,aa,rd 160 0.00033364s
	[INFO] 10.244.0.26:41524 - 12111 "AAAA IN storage.googleapis.com.gcp-auth.svc.cluster.local. udp 78 false 1232" NXDOMAIN qr,aa,rd 160 0.000404747s
	[INFO] 10.244.0.26:36134 - 26032 "A IN storage.googleapis.com.svc.cluster.local. udp 69 false 1232" NXDOMAIN qr,aa,rd 151 0.000140402s
	[INFO] 10.244.0.26:53047 - 50919 "AAAA IN storage.googleapis.com.svc.cluster.local. udp 69 false 1232" NXDOMAIN qr,aa,rd 151 0.00019173s
	[INFO] 10.244.0.26:49106 - 7615 "AAAA IN storage.googleapis.com.cluster.local. udp 65 false 1232" NXDOMAIN qr,aa,rd 147 0.000092295s
	[INFO] 10.244.0.26:55456 - 42516 "A IN storage.googleapis.com.cluster.local. udp 65 false 1232" NXDOMAIN qr,aa,rd 147 0.000152778s
	[INFO] 10.244.0.26:46794 - 14809 "AAAA IN storage.googleapis.com.europe-west1-b.c.k8s-minikube.internal. udp 90 false 1232" NXDOMAIN qr,rd,ra 79 0.008159742s
	[INFO] 10.244.0.26:55664 - 51047 "A IN storage.googleapis.com.europe-west1-b.c.k8s-minikube.internal. udp 90 false 1232" NXDOMAIN qr,rd,ra 79 0.009567764s
	[INFO] 10.244.0.26:48991 - 59276 "AAAA IN storage.googleapis.com.c.k8s-minikube.internal. udp 75 false 1232" NXDOMAIN qr,rd,ra 64 0.006965088s
	[INFO] 10.244.0.26:60792 - 2858 "A IN storage.googleapis.com.c.k8s-minikube.internal. udp 75 false 1232" NXDOMAIN qr,rd,ra 64 0.008014326s
	[INFO] 10.244.0.26:43251 - 38676 "AAAA IN storage.googleapis.com.google.internal. udp 67 false 1232" NXDOMAIN qr,rd,ra 56 0.005361184s
	[INFO] 10.244.0.26:52245 - 6779 "A IN storage.googleapis.com.google.internal. udp 67 false 1232" NXDOMAIN qr,rd,ra 56 0.005888296s
	[INFO] 10.244.0.26:49539 - 37788 "AAAA IN storage.googleapis.com. udp 51 false 1232" NOERROR qr,rd,ra 140 0.00193958s
	[INFO] 10.244.0.26:40697 - 59604 "A IN storage.googleapis.com. udp 51 false 1232" NOERROR qr,rd,ra 116 0.002108012s
	
	
	==> describe nodes <==
	Name:               addons-794116
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=addons-794116
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=7ca96ba7bd97af6e0063398921096f1cca785d18
	                    minikube.k8s.io/name=addons-794116
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_09_13T23_27_33_0700
	                    minikube.k8s.io/version=v1.34.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	                    topology.hostpath.csi/node=addons-794116
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/cri-dockerd.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Fri, 13 Sep 2024 23:27:30 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  addons-794116
	  AcquireTime:     <unset>
	  RenewTime:       Fri, 13 Sep 2024 23:40:18 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Fri, 13 Sep 2024 23:40:07 +0000   Fri, 13 Sep 2024 23:27:28 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Fri, 13 Sep 2024 23:40:07 +0000   Fri, 13 Sep 2024 23:27:28 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Fri, 13 Sep 2024 23:40:07 +0000   Fri, 13 Sep 2024 23:27:28 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Fri, 13 Sep 2024 23:40:07 +0000   Fri, 13 Sep 2024 23:27:30 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.49.2
	  Hostname:    addons-794116
	Capacity:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32859308Ki
	  pods:               110
	Allocatable:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32859308Ki
	  pods:               110
	System Info:
	  Machine ID:                 e087975bdc0f4d19bbf51db5d21375de
	  System UUID:                c8253a2f-18f2-4c02-8e7c-501ee2982979
	  Boot ID:                    7c833354-6da6-42b8-a687-6ba7895616fb
	  Kernel Version:             5.15.0-1068-gcp
	  OS Image:                   Ubuntu 22.04.4 LTS
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  docker://27.2.1
	  Kubelet Version:            v1.31.1
	  Kube-Proxy Version:         v1.31.1
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (11 in total)
	  Namespace                   Name                                     CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                     ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         9m14s
	  default                     hello-world-app-55bf9c44b4-ml7wb         0 (0%)        0 (0%)      0 (0%)           0 (0%)         14s
	  default                     nginx                                    0 (0%)        0 (0%)      0 (0%)           0 (0%)         25s
	  gcp-auth                    gcp-auth-89d5ffd79-stlrf                 0 (0%)        0 (0%)      0 (0%)           0 (0%)         11m
	  kube-system                 coredns-7c65d6cfc9-sh6nk                 100m (1%)     0 (0%)      70Mi (0%)        170Mi (0%)     12m
	  kube-system                 etcd-addons-794116                       100m (1%)     0 (0%)      100Mi (0%)       0 (0%)         12m
	  kube-system                 kube-apiserver-addons-794116             250m (3%)     0 (0%)      0 (0%)           0 (0%)         12m
	  kube-system                 kube-controller-manager-addons-794116    200m (2%)     0 (0%)      0 (0%)           0 (0%)         12m
	  kube-system                 kube-proxy-ssdhx                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         12m
	  kube-system                 kube-scheduler-addons-794116             100m (1%)     0 (0%)      0 (0%)           0 (0%)         12m
	  kube-system                 storage-provisioner                      0 (0%)        0 (0%)      0 (0%)           0 (0%)         12m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (9%)   0 (0%)
	  memory             170Mi (0%)  170Mi (0%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type     Reason                   Age                From             Message
	  ----     ------                   ----               ----             -------
	  Normal   Starting                 12m                kube-proxy       
	  Warning  CgroupV1                 12m                kubelet          Cgroup v1 support is in maintenance mode, please migrate to Cgroup v2.
	  Normal   NodeHasSufficientMemory  12m (x8 over 12m)  kubelet          Node addons-794116 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    12m (x7 over 12m)  kubelet          Node addons-794116 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     12m (x7 over 12m)  kubelet          Node addons-794116 status is now: NodeHasSufficientPID
	  Normal   NodeAllocatableEnforced  12m                kubelet          Updated Node Allocatable limit across pods
	  Normal   Starting                 12m                kubelet          Starting kubelet.
	  Warning  CgroupV1                 12m                kubelet          Cgroup v1 support is in maintenance mode, please migrate to Cgroup v2.
	  Normal   NodeAllocatableEnforced  12m                kubelet          Updated Node Allocatable limit across pods
	  Normal   NodeHasSufficientMemory  12m                kubelet          Node addons-794116 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    12m                kubelet          Node addons-794116 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     12m                kubelet          Node addons-794116 status is now: NodeHasSufficientPID
	  Normal   RegisteredNode           12m                node-controller  Node addons-794116 event: Registered Node addons-794116 in Controller
	
	
	==> dmesg <==
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff 5e 8f 5b 64 25 1f 08 06
	[  +2.281823] IPv4: martian source 10.244.0.1 from 10.244.0.18, on dev eth0
	[  +0.000008] ll header: 00000000: ff ff ff ff ff ff 76 1f ae b5 60 38 08 06
	[  +6.162619] IPv4: martian source 10.244.0.1 from 10.244.0.19, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff 62 f1 62 c3 18 7a 08 06
	[  +0.008560] IPv4: martian source 10.244.0.1 from 10.244.0.20, on dev eth0
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff d2 eb b7 c2 dd d9 08 06
	[  +0.126699] IPv4: martian source 10.244.0.1 from 10.244.0.21, on dev eth0
	[  +0.000008] ll header: 00000000: ff ff ff ff ff ff 5e 02 0d a3 5a 1a 08 06
	[ +11.211566] IPv4: martian source 10.244.0.1 from 10.244.0.22, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff 0e 1a 90 a4 23 bb 08 06
	[  +1.073370] IPv4: martian source 10.244.0.1 from 10.244.0.23, on dev eth0
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff 16 80 ea d7 0e 8b 08 06
	[Sep13 23:29] IPv4: martian source 10.244.0.1 from 10.244.0.24, on dev eth0
	[  +0.000008] ll header: 00000000: ff ff ff ff ff ff da d1 02 1f 8c ac 08 06
	[  +0.270156] IPv4: martian source 10.244.0.1 from 10.244.0.25, on dev eth0
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff aa 20 d5 81 d0 16 08 06
	[Sep13 23:30] IPv4: martian source 10.244.0.1 from 10.244.0.26, on dev eth0
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff 8a 82 85 14 e3 87 08 06
	[  +0.000509] IPv4: martian source 10.244.0.26 from 10.244.0.3, on dev eth0
	[  +0.000004] ll header: 00000000: ff ff ff ff ff ff a2 39 80 4f cf 28 08 06
	[Sep13 23:39] IPv4: martian source 10.244.0.1 from 10.244.0.29, on dev eth0
	[  +0.000009] ll header: 00000000: ff ff ff ff ff ff f6 e1 11 b4 8c c0 08 06
	[Sep13 23:40] IPv4: martian source 10.244.0.37 from 10.244.0.22, on dev eth0
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff 0e 1a 90 a4 23 bb 08 06
	
	
	==> etcd [da914588a192] <==
	{"level":"info","ts":"2024-09-13T23:27:28.277607Z","caller":"etcdserver/server.go:2118","msg":"published local member to cluster through raft","local-member-id":"aec36adc501070cc","local-member-attributes":"{Name:addons-794116 ClientURLs:[https://192.168.49.2:2379]}","request-path":"/0/members/aec36adc501070cc/attributes","cluster-id":"fa54960ea34d58be","publish-timeout":"7s"}
	{"level":"info","ts":"2024-09-13T23:27:28.277679Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-09-13T23:27:28.277659Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-09-13T23:27:28.277757Z","caller":"etcdserver/server.go:2629","msg":"setting up initial cluster version using v2 API","cluster-version":"3.5"}
	{"level":"info","ts":"2024-09-13T23:27:28.277836Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2024-09-13T23:27:28.277850Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2024-09-13T23:27:28.278455Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"fa54960ea34d58be","local-member-id":"aec36adc501070cc","cluster-version":"3.5"}
	{"level":"info","ts":"2024-09-13T23:27:28.278575Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2024-09-13T23:27:28.278597Z","caller":"etcdserver/server.go:2653","msg":"cluster version is updated","cluster-version":"3.5"}
	{"level":"info","ts":"2024-09-13T23:27:28.278840Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2024-09-13T23:27:28.278916Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2024-09-13T23:27:28.279929Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.49.2:2379"}
	{"level":"info","ts":"2024-09-13T23:27:28.279934Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2024-09-13T23:27:41.046381Z","caller":"traceutil/trace.go:171","msg":"trace[1155656943] transaction","detail":"{read_only:false; response_revision:436; number_of_response:1; }","duration":"182.689028ms","start":"2024-09-13T23:27:40.863674Z","end":"2024-09-13T23:27:41.046363Z","steps":["trace[1155656943] 'process raft request'  (duration: 178.340864ms)"],"step_count":1}
	{"level":"info","ts":"2024-09-13T23:27:41.047087Z","caller":"traceutil/trace.go:171","msg":"trace[1316566703] transaction","detail":"{read_only:false; response_revision:437; number_of_response:1; }","duration":"100.372436ms","start":"2024-09-13T23:27:40.946701Z","end":"2024-09-13T23:27:41.047073Z","steps":["trace[1316566703] 'process raft request'  (duration: 100.082934ms)"],"step_count":1}
	{"level":"info","ts":"2024-09-13T23:27:45.043820Z","caller":"traceutil/trace.go:171","msg":"trace[1300249400] transaction","detail":"{read_only:false; response_revision:611; number_of_response:1; }","duration":"101.704839ms","start":"2024-09-13T23:27:44.942094Z","end":"2024-09-13T23:27:45.043799Z","steps":["trace[1300249400] 'process raft request'  (duration: 100.949509ms)"],"step_count":1}
	{"level":"info","ts":"2024-09-13T23:27:45.045874Z","caller":"traceutil/trace.go:171","msg":"trace[723066149] linearizableReadLoop","detail":"{readStateIndex:624; appliedIndex:621; }","duration":"100.042114ms","start":"2024-09-13T23:27:44.945817Z","end":"2024-09-13T23:27:45.045859Z","steps":["trace[723066149] 'read index received'  (duration: 97.353862ms)","trace[723066149] 'applied index is now lower than readState.Index'  (duration: 2.687588ms)"],"step_count":2}
	{"level":"info","ts":"2024-09-13T23:27:45.045962Z","caller":"traceutil/trace.go:171","msg":"trace[893760601] transaction","detail":"{read_only:false; response_revision:613; number_of_response:1; }","duration":"100.264032ms","start":"2024-09-13T23:27:44.945688Z","end":"2024-09-13T23:27:45.045952Z","steps":["trace[893760601] 'process raft request'  (duration: 100.110617ms)"],"step_count":1}
	{"level":"warn","ts":"2024-09-13T23:27:45.046166Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"100.327695ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods/kube-system/coredns-7c65d6cfc9-qq29j\" ","response":"range_response_count:1 size:5103"}
	{"level":"info","ts":"2024-09-13T23:27:45.046217Z","caller":"traceutil/trace.go:171","msg":"trace[1621282902] range","detail":"{range_begin:/registry/pods/kube-system/coredns-7c65d6cfc9-qq29j; range_end:; response_count:1; response_revision:614; }","duration":"100.393573ms","start":"2024-09-13T23:27:44.945813Z","end":"2024-09-13T23:27:45.046207Z","steps":["trace[1621282902] 'agreement among raft nodes before linearized reading'  (duration: 100.24951ms)"],"step_count":1}
	{"level":"info","ts":"2024-09-13T23:27:45.046279Z","caller":"traceutil/trace.go:171","msg":"trace[1092276014] transaction","detail":"{read_only:false; response_revision:612; number_of_response:1; }","duration":"103.96721ms","start":"2024-09-13T23:27:44.942303Z","end":"2024-09-13T23:27:45.046270Z","steps":["trace[1092276014] 'process raft request'  (duration: 103.419124ms)"],"step_count":1}
	{"level":"info","ts":"2024-09-13T23:37:28.869572Z","caller":"mvcc/index.go:214","msg":"compact tree index","revision":1918}
	{"level":"info","ts":"2024-09-13T23:37:28.892844Z","caller":"mvcc/kvstore_compaction.go:69","msg":"finished scheduled compaction","compact-revision":1918,"took":"22.698441ms","hash":3347508936,"current-db-size-bytes":9199616,"current-db-size":"9.2 MB","current-db-size-in-use-bytes":5066752,"current-db-size-in-use":"5.1 MB"}
	{"level":"info","ts":"2024-09-13T23:37:28.892887Z","caller":"mvcc/hash.go:137","msg":"storing new hash","hash":3347508936,"revision":1918,"compact-revision":-1}
	{"level":"info","ts":"2024-09-13T23:39:33.998937Z","caller":"traceutil/trace.go:171","msg":"trace[53261662] transaction","detail":"{read_only:false; response_revision:2736; number_of_response:1; }","duration":"119.823514ms","start":"2024-09-13T23:39:33.879089Z","end":"2024-09-13T23:39:33.998913Z","steps":["trace[53261662] 'process raft request'  (duration: 59.70237ms)","trace[53261662] 'compare'  (duration: 60.003418ms)"],"step_count":2}
	
	
	==> gcp-auth [be0509575f77] <==
	2024/09/13 23:31:05 Ready to write response ...
	2024/09/13 23:39:12 Ready to marshal response ...
	2024/09/13 23:39:12 Ready to write response ...
	2024/09/13 23:39:18 Ready to marshal response ...
	2024/09/13 23:39:18 Ready to write response ...
	2024/09/13 23:39:18 Ready to marshal response ...
	2024/09/13 23:39:18 Ready to write response ...
	2024/09/13 23:39:18 Ready to marshal response ...
	2024/09/13 23:39:18 Ready to write response ...
	2024/09/13 23:39:20 Ready to marshal response ...
	2024/09/13 23:39:20 Ready to write response ...
	2024/09/13 23:39:28 Ready to marshal response ...
	2024/09/13 23:39:28 Ready to write response ...
	2024/09/13 23:39:29 Ready to marshal response ...
	2024/09/13 23:39:29 Ready to write response ...
	2024/09/13 23:39:29 Ready to marshal response ...
	2024/09/13 23:39:29 Ready to write response ...
	2024/09/13 23:39:29 Ready to marshal response ...
	2024/09/13 23:39:29 Ready to write response ...
	2024/09/13 23:39:38 Ready to marshal response ...
	2024/09/13 23:39:38 Ready to write response ...
	2024/09/13 23:39:54 Ready to marshal response ...
	2024/09/13 23:39:54 Ready to write response ...
	2024/09/13 23:40:05 Ready to marshal response ...
	2024/09/13 23:40:05 Ready to write response ...
	
	
	==> kernel <==
	 23:40:19 up 22 min,  0 users,  load average: 0.83, 0.43, 0.30
	Linux addons-794116 5.15.0-1068-gcp #76~20.04.1-Ubuntu SMP Tue Aug 20 15:52:45 UTC 2024 x86_64 x86_64 x86_64 GNU/Linux
	PRETTY_NAME="Ubuntu 22.04.4 LTS"
	
	
	==> kube-apiserver [a1fd5e27686b] <==
	W0913 23:30:57.654773       1 cacher.go:171] Terminating all watchers from cacher jobs.batch.volcano.sh
	W0913 23:30:57.767507       1 cacher.go:171] Terminating all watchers from cacher jobflows.flow.volcano.sh
	W0913 23:30:58.148023       1 cacher.go:171] Terminating all watchers from cacher jobtemplates.flow.volcano.sh
	I0913 23:39:29.281041       1 controller.go:615] quota admission added evaluator for: volumesnapshots.snapshot.storage.k8s.io
	E0913 23:39:29.344238       1 authentication.go:73] "Unable to authenticate the request" err="[invalid bearer token, serviceaccounts \"local-path-provisioner-service-account\" not found]"
	E0913 23:39:29.351687       1 authentication.go:73] "Unable to authenticate the request" err="[invalid bearer token, serviceaccounts \"local-path-provisioner-service-account\" not found]"
	E0913 23:39:29.358348       1 authentication.go:73] "Unable to authenticate the request" err="[invalid bearer token, serviceaccounts \"local-path-provisioner-service-account\" not found]"
	I0913 23:39:29.458227       1 alloc.go:330] "allocated clusterIPs" service="headlamp/headlamp" clusterIPs={"IPv4":"10.100.204.59"}
	E0913 23:39:44.358317       1 authentication.go:73] "Unable to authenticate the request" err="[invalid bearer token, serviceaccounts \"local-path-provisioner-service-account\" not found]"
	I0913 23:39:54.077763       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0913 23:39:54.077823       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I0913 23:39:54.142224       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0913 23:39:54.142280       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I0913 23:39:54.157915       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0913 23:39:54.157967       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I0913 23:39:54.167786       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0913 23:39:54.167822       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I0913 23:39:54.629605       1 controller.go:615] quota admission added evaluator for: ingresses.networking.k8s.io
	I0913 23:39:54.794846       1 alloc.go:330] "allocated clusterIPs" service="default/nginx" clusterIPs={"IPv4":"10.96.145.216"}
	W0913 23:39:55.142894       1 cacher.go:171] Terminating all watchers from cacher volumesnapshotclasses.snapshot.storage.k8s.io
	W0913 23:39:55.168834       1 cacher.go:171] Terminating all watchers from cacher volumesnapshotcontents.snapshot.storage.k8s.io
	W0913 23:39:55.176683       1 cacher.go:171] Terminating all watchers from cacher volumesnapshots.snapshot.storage.k8s.io
	I0913 23:39:58.493275       1 handler.go:286] Adding GroupVersion gadget.kinvolk.io v1alpha1 to ResourceManager
	W0913 23:39:59.551305       1 cacher.go:171] Terminating all watchers from cacher traces.gadget.kinvolk.io
	I0913 23:40:05.295583       1 alloc.go:330] "allocated clusterIPs" service="default/hello-world-app" clusterIPs={"IPv4":"10.104.152.25"}
	
	
	==> kube-controller-manager [d610571bfc7b] <==
	E0913 23:40:05.342061       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	I0913 23:40:06.818431       1 job_controller.go:568] "enqueueing job" logger="job-controller" key="ingress-nginx/ingress-nginx-admission-create" delay="0s"
	I0913 23:40:06.819863       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="ingress-nginx/ingress-nginx-controller-bc57996ff" duration="9.562µs"
	I0913 23:40:06.822146       1 job_controller.go:568] "enqueueing job" logger="job-controller" key="ingress-nginx/ingress-nginx-admission-patch" delay="0s"
	I0913 23:40:07.271802       1 shared_informer.go:313] Waiting for caches to sync for resource quota
	I0913 23:40:07.271839       1 shared_informer.go:320] Caches are synced for resource quota
	W0913 23:40:07.435694       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0913 23:40:07.435742       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	I0913 23:40:07.453972       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="addons-794116"
	I0913 23:40:07.576868       1 shared_informer.go:313] Waiting for caches to sync for garbage collector
	I0913 23:40:07.576907       1 shared_informer.go:320] Caches are synced for garbage collector
	I0913 23:40:08.379499       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/hello-world-app-55bf9c44b4" duration="4.786231ms"
	I0913 23:40:08.379585       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/hello-world-app-55bf9c44b4" duration="49.198µs"
	I0913 23:40:08.604408       1 namespace_controller.go:187] "Namespace has been deleted" logger="namespace-controller" namespace="gadget"
	W0913 23:40:13.101506       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0913 23:40:13.101577       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W0913 23:40:13.271839       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0913 23:40:13.271892       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W0913 23:40:15.286626       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0913 23:40:15.286668       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	I0913 23:40:16.765663       1 namespace_controller.go:187] "Namespace has been deleted" logger="namespace-controller" namespace="local-path-storage"
	I0913 23:40:16.985050       1 namespace_controller.go:187] "Namespace has been deleted" logger="namespace-controller" namespace="ingress-nginx"
	W0913 23:40:17.129400       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0913 23:40:17.129437       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	I0913 23:40:18.611989       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/registry-66c9cd494c" duration="12.753µs"
	
	
	==> kube-proxy [0c4bc70da4b0] <==
	I0913 23:27:41.358155       1 server_linux.go:66] "Using iptables proxy"
	I0913 23:27:41.855130       1 server.go:677] "Successfully retrieved node IP(s)" IPs=["192.168.49.2"]
	E0913 23:27:41.855205       1 server.go:234] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I0913 23:27:42.255086       1 server.go:243] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I0913 23:27:42.255153       1 server_linux.go:169] "Using iptables Proxier"
	I0913 23:27:42.265039       1 proxier.go:255] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I0913 23:27:42.265443       1 server.go:483] "Version info" version="v1.31.1"
	I0913 23:27:42.265468       1 server.go:485] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0913 23:27:42.342393       1 config.go:199] "Starting service config controller"
	I0913 23:27:42.342416       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0913 23:27:42.342447       1 config.go:105] "Starting endpoint slice config controller"
	I0913 23:27:42.342455       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0913 23:27:42.351832       1 config.go:328] "Starting node config controller"
	I0913 23:27:42.351861       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0913 23:27:42.443075       1 shared_informer.go:320] Caches are synced for endpoint slice config
	I0913 23:27:42.443137       1 shared_informer.go:320] Caches are synced for service config
	I0913 23:27:42.452413       1 shared_informer.go:320] Caches are synced for node config
	
	
	==> kube-scheduler [f486296d84ea] <==
	W0913 23:27:30.064113       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E0913 23:27:30.064218       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0913 23:27:30.064295       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	W0913 23:27:30.064320       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E0913 23:27:30.064321       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W0913 23:27:30.064341       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	E0913 23:27:30.064376       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError"
	E0913 23:27:30.064342       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0913 23:27:30.967057       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E0913 23:27:30.967099       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W0913 23:27:30.998407       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E0913 23:27:30.998445       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W0913 23:27:31.002846       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E0913 23:27:31.002901       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0913 23:27:31.089987       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E0913 23:27:31.090024       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0913 23:27:31.115530       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E0913 23:27:31.115577       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W0913 23:27:31.148843       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E0913 23:27:31.148890       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W0913 23:27:31.237187       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E0913 23:27:31.237241       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError"
	W0913 23:27:31.363664       1 reflector.go:561] runtime/asm_amd64.s:1695: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E0913 23:27:31.363704       1 reflector.go:158] "Unhandled Error" err="runtime/asm_amd64.s:1695: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError"
	I0913 23:27:34.460724       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kubelet <==
	Sep 13 23:40:10 addons-794116 kubelet[2449]: I0913 23:40:10.389606    2449 scope.go:117] "RemoveContainer" containerID="0cb2f67069b12047697724e26706d23584334c148f854c2a67e0a66751b00909"
	Sep 13 23:40:10 addons-794116 kubelet[2449]: I0913 23:40:10.403671    2449 scope.go:117] "RemoveContainer" containerID="0cb2f67069b12047697724e26706d23584334c148f854c2a67e0a66751b00909"
	Sep 13 23:40:10 addons-794116 kubelet[2449]: E0913 23:40:10.404415    2449 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = Unknown desc = Error response from daemon: No such container: 0cb2f67069b12047697724e26706d23584334c148f854c2a67e0a66751b00909" containerID="0cb2f67069b12047697724e26706d23584334c148f854c2a67e0a66751b00909"
	Sep 13 23:40:10 addons-794116 kubelet[2449]: I0913 23:40:10.404468    2449 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"docker","ID":"0cb2f67069b12047697724e26706d23584334c148f854c2a67e0a66751b00909"} err="failed to get container status \"0cb2f67069b12047697724e26706d23584334c148f854c2a67e0a66751b00909\": rpc error: code = Unknown desc = Error response from daemon: No such container: 0cb2f67069b12047697724e26706d23584334c148f854c2a67e0a66751b00909"
	Sep 13 23:40:11 addons-794116 kubelet[2449]: E0913 23:40:11.252464    2449 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"busybox\" with ImagePullBackOff: \"Back-off pulling image \\\"gcr.io/k8s-minikube/busybox:1.28.4-glibc\\\"\"" pod="default/busybox" podUID="36d98053-723d-4edc-a338-f1d737f47cea"
	Sep 13 23:40:18 addons-794116 kubelet[2449]: I0913 23:40:18.370707    2449 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mrr96\" (UniqueName: \"kubernetes.io/projected/5c820258-f11d-4487-96d8-8e944543e1ba-kube-api-access-mrr96\") pod \"5c820258-f11d-4487-96d8-8e944543e1ba\" (UID: \"5c820258-f11d-4487-96d8-8e944543e1ba\") "
	Sep 13 23:40:18 addons-794116 kubelet[2449]: I0913 23:40:18.370754    2449 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"gcp-creds\" (UniqueName: \"kubernetes.io/host-path/5c820258-f11d-4487-96d8-8e944543e1ba-gcp-creds\") pod \"5c820258-f11d-4487-96d8-8e944543e1ba\" (UID: \"5c820258-f11d-4487-96d8-8e944543e1ba\") "
	Sep 13 23:40:18 addons-794116 kubelet[2449]: I0913 23:40:18.370837    2449 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/5c820258-f11d-4487-96d8-8e944543e1ba-gcp-creds" (OuterVolumeSpecName: "gcp-creds") pod "5c820258-f11d-4487-96d8-8e944543e1ba" (UID: "5c820258-f11d-4487-96d8-8e944543e1ba"). InnerVolumeSpecName "gcp-creds". PluginName "kubernetes.io/host-path", VolumeGidValue ""
	Sep 13 23:40:18 addons-794116 kubelet[2449]: I0913 23:40:18.372552    2449 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5c820258-f11d-4487-96d8-8e944543e1ba-kube-api-access-mrr96" (OuterVolumeSpecName: "kube-api-access-mrr96") pod "5c820258-f11d-4487-96d8-8e944543e1ba" (UID: "5c820258-f11d-4487-96d8-8e944543e1ba"). InnerVolumeSpecName "kube-api-access-mrr96". PluginName "kubernetes.io/projected", VolumeGidValue ""
	Sep 13 23:40:18 addons-794116 kubelet[2449]: I0913 23:40:18.470937    2449 reconciler_common.go:288] "Volume detached for volume \"kube-api-access-mrr96\" (UniqueName: \"kubernetes.io/projected/5c820258-f11d-4487-96d8-8e944543e1ba-kube-api-access-mrr96\") on node \"addons-794116\" DevicePath \"\""
	Sep 13 23:40:18 addons-794116 kubelet[2449]: I0913 23:40:18.470973    2449 reconciler_common.go:288] "Volume detached for volume \"gcp-creds\" (UniqueName: \"kubernetes.io/host-path/5c820258-f11d-4487-96d8-8e944543e1ba-gcp-creds\") on node \"addons-794116\" DevicePath \"\""
	Sep 13 23:40:18 addons-794116 kubelet[2449]: I0913 23:40:18.973708    2449 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-5gp5x\" (UniqueName: \"kubernetes.io/projected/0f09071c-4485-4e48-a170-d531d56fd35c-kube-api-access-5gp5x\") pod \"0f09071c-4485-4e48-a170-d531d56fd35c\" (UID: \"0f09071c-4485-4e48-a170-d531d56fd35c\") "
	Sep 13 23:40:18 addons-794116 kubelet[2449]: I0913 23:40:18.978151    2449 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0f09071c-4485-4e48-a170-d531d56fd35c-kube-api-access-5gp5x" (OuterVolumeSpecName: "kube-api-access-5gp5x") pod "0f09071c-4485-4e48-a170-d531d56fd35c" (UID: "0f09071c-4485-4e48-a170-d531d56fd35c"). InnerVolumeSpecName "kube-api-access-5gp5x". PluginName "kubernetes.io/projected", VolumeGidValue ""
	Sep 13 23:40:19 addons-794116 kubelet[2449]: I0913 23:40:19.074418    2449 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zfzt5\" (UniqueName: \"kubernetes.io/projected/f8ed4610-be7a-4b55-bcd6-dbb3920e9ff3-kube-api-access-zfzt5\") pod \"f8ed4610-be7a-4b55-bcd6-dbb3920e9ff3\" (UID: \"f8ed4610-be7a-4b55-bcd6-dbb3920e9ff3\") "
	Sep 13 23:40:19 addons-794116 kubelet[2449]: I0913 23:40:19.074528    2449 reconciler_common.go:288] "Volume detached for volume \"kube-api-access-5gp5x\" (UniqueName: \"kubernetes.io/projected/0f09071c-4485-4e48-a170-d531d56fd35c-kube-api-access-5gp5x\") on node \"addons-794116\" DevicePath \"\""
	Sep 13 23:40:19 addons-794116 kubelet[2449]: I0913 23:40:19.076132    2449 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f8ed4610-be7a-4b55-bcd6-dbb3920e9ff3-kube-api-access-zfzt5" (OuterVolumeSpecName: "kube-api-access-zfzt5") pod "f8ed4610-be7a-4b55-bcd6-dbb3920e9ff3" (UID: "f8ed4610-be7a-4b55-bcd6-dbb3920e9ff3"). InnerVolumeSpecName "kube-api-access-zfzt5". PluginName "kubernetes.io/projected", VolumeGidValue ""
	Sep 13 23:40:19 addons-794116 kubelet[2449]: I0913 23:40:19.175439    2449 reconciler_common.go:288] "Volume detached for volume \"kube-api-access-zfzt5\" (UniqueName: \"kubernetes.io/projected/f8ed4610-be7a-4b55-bcd6-dbb3920e9ff3-kube-api-access-zfzt5\") on node \"addons-794116\" DevicePath \"\""
	Sep 13 23:40:19 addons-794116 kubelet[2449]: I0913 23:40:19.490535    2449 scope.go:117] "RemoveContainer" containerID="56416f9da8cd1bb77c3edbfa3d1fea78f0f82f42f14f047a732517c2c86c45cb"
	Sep 13 23:40:19 addons-794116 kubelet[2449]: I0913 23:40:19.509740    2449 scope.go:117] "RemoveContainer" containerID="56416f9da8cd1bb77c3edbfa3d1fea78f0f82f42f14f047a732517c2c86c45cb"
	Sep 13 23:40:19 addons-794116 kubelet[2449]: E0913 23:40:19.510628    2449 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = Unknown desc = Error response from daemon: No such container: 56416f9da8cd1bb77c3edbfa3d1fea78f0f82f42f14f047a732517c2c86c45cb" containerID="56416f9da8cd1bb77c3edbfa3d1fea78f0f82f42f14f047a732517c2c86c45cb"
	Sep 13 23:40:19 addons-794116 kubelet[2449]: I0913 23:40:19.510671    2449 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"docker","ID":"56416f9da8cd1bb77c3edbfa3d1fea78f0f82f42f14f047a732517c2c86c45cb"} err="failed to get container status \"56416f9da8cd1bb77c3edbfa3d1fea78f0f82f42f14f047a732517c2c86c45cb\": rpc error: code = Unknown desc = Error response from daemon: No such container: 56416f9da8cd1bb77c3edbfa3d1fea78f0f82f42f14f047a732517c2c86c45cb"
	Sep 13 23:40:19 addons-794116 kubelet[2449]: I0913 23:40:19.510699    2449 scope.go:117] "RemoveContainer" containerID="8eff0fa59ec2bbd6a61e40dfd6f44066ceadc9694e098cb8097ba6deb04d6811"
	Sep 13 23:40:19 addons-794116 kubelet[2449]: I0913 23:40:19.545102    2449 scope.go:117] "RemoveContainer" containerID="8eff0fa59ec2bbd6a61e40dfd6f44066ceadc9694e098cb8097ba6deb04d6811"
	Sep 13 23:40:19 addons-794116 kubelet[2449]: E0913 23:40:19.546149    2449 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = Unknown desc = Error response from daemon: No such container: 8eff0fa59ec2bbd6a61e40dfd6f44066ceadc9694e098cb8097ba6deb04d6811" containerID="8eff0fa59ec2bbd6a61e40dfd6f44066ceadc9694e098cb8097ba6deb04d6811"
	Sep 13 23:40:19 addons-794116 kubelet[2449]: I0913 23:40:19.546189    2449 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"docker","ID":"8eff0fa59ec2bbd6a61e40dfd6f44066ceadc9694e098cb8097ba6deb04d6811"} err="failed to get container status \"8eff0fa59ec2bbd6a61e40dfd6f44066ceadc9694e098cb8097ba6deb04d6811\": rpc error: code = Unknown desc = Error response from daemon: No such container: 8eff0fa59ec2bbd6a61e40dfd6f44066ceadc9694e098cb8097ba6deb04d6811"
	
	
	==> storage-provisioner [7df3761985c4] <==
	I0913 23:27:46.051049       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I0913 23:27:46.149951       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I0913 23:27:46.150002       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I0913 23:27:46.160678       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I0913 23:27:46.160843       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_addons-794116_559bfd78-9809-45c1-b17f-a9ed8a04a4f1!
	I0913 23:27:46.161834       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"f479bc33-23a8-47cf-bd88-dadc3e9aec02", APIVersion:"v1", ResourceVersion:"660", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' addons-794116_559bfd78-9809-45c1-b17f-a9ed8a04a4f1 became leader
	I0913 23:27:46.261585       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_addons-794116_559bfd78-9809-45c1-b17f-a9ed8a04a4f1!
	

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p addons-794116 -n addons-794116
helpers_test.go:261: (dbg) Run:  kubectl --context addons-794116 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:272: non-running pods: busybox
helpers_test.go:274: ======> post-mortem[TestAddons/parallel/Registry]: describe non-running pods <======
helpers_test.go:277: (dbg) Run:  kubectl --context addons-794116 describe pod busybox
helpers_test.go:282: (dbg) kubectl --context addons-794116 describe pod busybox:

                                                
                                                
-- stdout --
	Name:             busybox
	Namespace:        default
	Priority:         0
	Service Account:  default
	Node:             addons-794116/192.168.49.2
	Start Time:       Fri, 13 Sep 2024 23:31:05 +0000
	Labels:           integration-test=busybox
	Annotations:      <none>
	Status:           Pending
	IP:               10.244.0.28
	IPs:
	  IP:  10.244.0.28
	Containers:
	  busybox:
	    Container ID:  
	    Image:         gcr.io/k8s-minikube/busybox:1.28.4-glibc
	    Image ID:      
	    Port:          <none>
	    Host Port:     <none>
	    Command:
	      sleep
	      3600
	    State:          Waiting
	      Reason:       ImagePullBackOff
	    Ready:          False
	    Restart Count:  0
	    Environment:
	      GOOGLE_APPLICATION_CREDENTIALS:  /google-app-creds.json
	      PROJECT_ID:                      this_is_fake
	      GCP_PROJECT:                     this_is_fake
	      GCLOUD_PROJECT:                  this_is_fake
	      GOOGLE_CLOUD_PROJECT:            this_is_fake
	      CLOUDSDK_CORE_PROJECT:           this_is_fake
	    Mounts:
	      /google-app-creds.json from gcp-creds (ro)
	      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-2bhh4 (ro)
	Conditions:
	  Type                        Status
	  PodReadyToStartContainers   True 
	  Initialized                 True 
	  Ready                       False 
	  ContainersReady             False 
	  PodScheduled                True 
	Volumes:
	  kube-api-access-2bhh4:
	    Type:                    Projected (a volume that contains injected data from multiple sources)
	    TokenExpirationSeconds:  3607
	    ConfigMapName:           kube-root-ca.crt
	    ConfigMapOptional:       <nil>
	    DownwardAPI:             true
	  gcp-creds:
	    Type:          HostPath (bare host directory volume)
	    Path:          /var/lib/minikube/google_application_credentials.json
	    HostPathType:  File
	QoS Class:         BestEffort
	Node-Selectors:    <none>
	Tolerations:       node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
	                   node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
	Events:
	  Type     Reason     Age                    From               Message
	  ----     ------     ----                   ----               -------
	  Normal   Scheduled  9m15s                  default-scheduler  Successfully assigned default/busybox to addons-794116
	  Normal   Pulling    7m50s (x4 over 9m14s)  kubelet            Pulling image "gcr.io/k8s-minikube/busybox:1.28.4-glibc"
	  Warning  Failed     7m50s (x4 over 9m14s)  kubelet            Failed to pull image "gcr.io/k8s-minikube/busybox:1.28.4-glibc": Error response from daemon: Head "https://gcr.io/v2/k8s-minikube/busybox/manifests/1.28.4-glibc": unauthorized: authentication failed
	  Warning  Failed     7m50s (x4 over 9m14s)  kubelet            Error: ErrImagePull
	  Warning  Failed     7m22s (x6 over 9m14s)  kubelet            Error: ImagePullBackOff
	  Normal   BackOff    4m4s (x20 over 9m14s)  kubelet            Back-off pulling image "gcr.io/k8s-minikube/busybox:1.28.4-glibc"

                                                
                                                
-- /stdout --
helpers_test.go:285: <<< TestAddons/parallel/Registry FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
--- FAIL: TestAddons/parallel/Registry (72.56s)

                                                
                                    

Test pass (322/343)

Order passed test Duration
3 TestDownloadOnly/v1.20.0/json-events 17.46
4 TestDownloadOnly/v1.20.0/preload-exists 0
8 TestDownloadOnly/v1.20.0/LogsDuration 0.06
9 TestDownloadOnly/v1.20.0/DeleteAll 0.2
10 TestDownloadOnly/v1.20.0/DeleteAlwaysSucceeds 0.13
12 TestDownloadOnly/v1.31.1/json-events 10.97
13 TestDownloadOnly/v1.31.1/preload-exists 0
17 TestDownloadOnly/v1.31.1/LogsDuration 0.06
18 TestDownloadOnly/v1.31.1/DeleteAll 0.2
19 TestDownloadOnly/v1.31.1/DeleteAlwaysSucceeds 0.13
20 TestDownloadOnlyKic 0.96
21 TestBinaryMirror 0.74
22 TestOffline 76.12
25 TestAddons/PreSetup/EnablingAddonOnNonExistingCluster 0.05
26 TestAddons/PreSetup/DisablingAddonOnNonExistingCluster 0.05
27 TestAddons/Setup 212.98
29 TestAddons/serial/Volcano 38.69
31 TestAddons/serial/GCPAuth/Namespaces 0.11
34 TestAddons/parallel/Ingress 19.64
35 TestAddons/parallel/InspektorGadget 10.8
36 TestAddons/parallel/MetricsServer 5.57
37 TestAddons/parallel/HelmTiller 9.78
39 TestAddons/parallel/CSI 46.46
40 TestAddons/parallel/Headlamp 18.76
41 TestAddons/parallel/CloudSpanner 5.46
42 TestAddons/parallel/LocalPath 53.25
43 TestAddons/parallel/NvidiaDevicePlugin 5.41
44 TestAddons/parallel/Yakd 10.64
45 TestAddons/StoppedEnableDisable 5.87
46 TestCertOptions 32.18
47 TestCertExpiration 239.92
48 TestDockerFlags 26.97
49 TestForceSystemdFlag 38.68
50 TestForceSystemdEnv 31.05
52 TestKVMDriverInstallOrUpdate 4.74
56 TestErrorSpam/setup 21.21
57 TestErrorSpam/start 0.57
58 TestErrorSpam/status 0.88
59 TestErrorSpam/pause 1.15
60 TestErrorSpam/unpause 1.39
61 TestErrorSpam/stop 10.83
64 TestFunctional/serial/CopySyncFile 0
65 TestFunctional/serial/StartWithProxy 33.57
66 TestFunctional/serial/AuditLog 0
67 TestFunctional/serial/SoftStart 32.13
68 TestFunctional/serial/KubeContext 0.05
69 TestFunctional/serial/KubectlGetPods 0.06
72 TestFunctional/serial/CacheCmd/cache/add_remote 2.3
73 TestFunctional/serial/CacheCmd/cache/add_local 1.43
74 TestFunctional/serial/CacheCmd/cache/CacheDelete 0.05
75 TestFunctional/serial/CacheCmd/cache/list 0.05
76 TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node 0.27
77 TestFunctional/serial/CacheCmd/cache/cache_reload 1.27
78 TestFunctional/serial/CacheCmd/cache/delete 0.09
79 TestFunctional/serial/MinikubeKubectlCmd 0.11
80 TestFunctional/serial/MinikubeKubectlCmdDirectly 0.1
81 TestFunctional/serial/ExtraConfig 40.24
82 TestFunctional/serial/ComponentHealth 0.07
83 TestFunctional/serial/LogsCmd 0.96
84 TestFunctional/serial/LogsFileCmd 1.01
85 TestFunctional/serial/InvalidService 4.75
87 TestFunctional/parallel/ConfigCmd 0.36
88 TestFunctional/parallel/DashboardCmd 13.95
89 TestFunctional/parallel/DryRun 0.49
90 TestFunctional/parallel/InternationalLanguage 0.22
91 TestFunctional/parallel/StatusCmd 1.11
95 TestFunctional/parallel/ServiceCmdConnect 9.73
96 TestFunctional/parallel/AddonsCmd 0.14
97 TestFunctional/parallel/PersistentVolumeClaim 36.5
99 TestFunctional/parallel/SSHCmd 0.63
100 TestFunctional/parallel/CpCmd 2.03
101 TestFunctional/parallel/MySQL 25.42
102 TestFunctional/parallel/FileSync 0.32
103 TestFunctional/parallel/CertSync 2.12
107 TestFunctional/parallel/NodeLabels 0.08
109 TestFunctional/parallel/NonActiveRuntimeDisabled 0.34
111 TestFunctional/parallel/License 0.75
112 TestFunctional/parallel/ServiceCmd/DeployApp 9.2
113 TestFunctional/parallel/ProfileCmd/profile_not_create 0.43
114 TestFunctional/parallel/ProfileCmd/profile_list 0.42
115 TestFunctional/parallel/MountCmd/any-port 7.98
116 TestFunctional/parallel/ProfileCmd/profile_json_output 0.57
117 TestFunctional/parallel/Version/short 0.07
118 TestFunctional/parallel/Version/components 0.55
119 TestFunctional/parallel/ImageCommands/ImageListShort 0.22
120 TestFunctional/parallel/ImageCommands/ImageListTable 0.21
121 TestFunctional/parallel/ImageCommands/ImageListJson 0.27
122 TestFunctional/parallel/ImageCommands/ImageListYaml 0.2
123 TestFunctional/parallel/ImageCommands/ImageBuild 4.41
124 TestFunctional/parallel/ImageCommands/Setup 1.97
125 TestFunctional/parallel/ImageCommands/ImageLoadDaemon 0.92
126 TestFunctional/parallel/ImageCommands/ImageReloadDaemon 0.76
127 TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon 1.72
128 TestFunctional/parallel/ImageCommands/ImageSaveToFile 0.31
129 TestFunctional/parallel/ImageCommands/ImageRemove 0.41
130 TestFunctional/parallel/ImageCommands/ImageLoadFromFile 0.66
131 TestFunctional/parallel/MountCmd/specific-port 2.24
132 TestFunctional/parallel/ServiceCmd/List 0.6
133 TestFunctional/parallel/ImageCommands/ImageSaveDaemon 0.48
134 TestFunctional/parallel/ServiceCmd/JSONOutput 0.56
135 TestFunctional/parallel/ServiceCmd/HTTPS 0.45
136 TestFunctional/parallel/ServiceCmd/Format 0.45
137 TestFunctional/parallel/ServiceCmd/URL 0.51
138 TestFunctional/parallel/MountCmd/VerifyCleanup 1.21
139 TestFunctional/parallel/DockerEnv/bash 1.27
140 TestFunctional/parallel/UpdateContextCmd/no_changes 0.13
141 TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster 0.13
142 TestFunctional/parallel/UpdateContextCmd/no_clusters 0.25
144 TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel 0.48
145 TestFunctional/parallel/TunnelCmd/serial/StartTunnel 0
147 TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup 20.2
148 TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP 0.06
149 TestFunctional/parallel/TunnelCmd/serial/AccessDirect 0
153 TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel 0.11
154 TestFunctional/delete_echo-server_images 0.04
155 TestFunctional/delete_my-image_image 0.02
156 TestFunctional/delete_minikube_cached_images 0.02
160 TestMultiControlPlane/serial/StartCluster 99.65
161 TestMultiControlPlane/serial/DeployApp 5.69
162 TestMultiControlPlane/serial/PingHostFromPods 1.06
163 TestMultiControlPlane/serial/AddWorkerNode 20.48
164 TestMultiControlPlane/serial/NodeLabels 0.06
165 TestMultiControlPlane/serial/HAppyAfterClusterStart 0.64
166 TestMultiControlPlane/serial/CopyFile 15.78
167 TestMultiControlPlane/serial/StopSecondaryNode 11.35
168 TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop 0.48
169 TestMultiControlPlane/serial/RestartSecondaryNode 33.84
170 TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart 16.26
171 TestMultiControlPlane/serial/RestartClusterKeepsNodes 177.33
172 TestMultiControlPlane/serial/DeleteSecondaryNode 9.29
173 TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete 0.46
174 TestMultiControlPlane/serial/StopCluster 32.57
175 TestMultiControlPlane/serial/RestartCluster 80.09
176 TestMultiControlPlane/serial/DegradedAfterClusterRestart 0.47
177 TestMultiControlPlane/serial/AddSecondaryNode 38.65
178 TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd 0.65
181 TestImageBuild/serial/Setup 21.35
182 TestImageBuild/serial/NormalBuild 2.42
183 TestImageBuild/serial/BuildWithBuildArg 0.93
184 TestImageBuild/serial/BuildWithDockerIgnore 0.77
185 TestImageBuild/serial/BuildWithSpecifiedDockerfile 0.75
189 TestJSONOutput/start/Command 65.68
190 TestJSONOutput/start/Audit 0
192 TestJSONOutput/start/parallel/DistinctCurrentSteps 0
193 TestJSONOutput/start/parallel/IncreasingCurrentSteps 0
195 TestJSONOutput/pause/Command 0.51
196 TestJSONOutput/pause/Audit 0
198 TestJSONOutput/pause/parallel/DistinctCurrentSteps 0
199 TestJSONOutput/pause/parallel/IncreasingCurrentSteps 0
201 TestJSONOutput/unpause/Command 0.44
202 TestJSONOutput/unpause/Audit 0
204 TestJSONOutput/unpause/parallel/DistinctCurrentSteps 0
205 TestJSONOutput/unpause/parallel/IncreasingCurrentSteps 0
207 TestJSONOutput/stop/Command 10.71
208 TestJSONOutput/stop/Audit 0
210 TestJSONOutput/stop/parallel/DistinctCurrentSteps 0
211 TestJSONOutput/stop/parallel/IncreasingCurrentSteps 0
212 TestErrorJSONOutput 0.2
214 TestKicCustomNetwork/create_custom_network 24.53
215 TestKicCustomNetwork/use_default_bridge_network 24.09
216 TestKicExistingNetwork 23.45
217 TestKicCustomSubnet 25.63
218 TestKicStaticIP 23.7
219 TestMainNoArgs 0.04
220 TestMinikubeProfile 54.27
223 TestMountStart/serial/StartWithMountFirst 7.81
224 TestMountStart/serial/VerifyMountFirst 0.24
225 TestMountStart/serial/StartWithMountSecond 7.17
226 TestMountStart/serial/VerifyMountSecond 0.25
227 TestMountStart/serial/DeleteFirst 1.47
228 TestMountStart/serial/VerifyMountPostDelete 0.25
229 TestMountStart/serial/Stop 1.17
230 TestMountStart/serial/RestartStopped 8.73
231 TestMountStart/serial/VerifyMountPostStop 0.24
234 TestMultiNode/serial/FreshStart2Nodes 55.45
235 TestMultiNode/serial/DeployApp2Nodes 44.12
236 TestMultiNode/serial/PingHostFrom2Pods 0.76
237 TestMultiNode/serial/AddNode 18.99
238 TestMultiNode/serial/MultiNodeLabels 0.08
239 TestMultiNode/serial/ProfileList 0.36
240 TestMultiNode/serial/CopyFile 9.75
241 TestMultiNode/serial/StopNode 2.2
242 TestMultiNode/serial/StartAfterStop 10.57
243 TestMultiNode/serial/RestartKeepsNodes 98.7
244 TestMultiNode/serial/DeleteNode 5.19
245 TestMultiNode/serial/StopMultiNode 21.55
246 TestMultiNode/serial/RestartMultiNode 54.12
247 TestMultiNode/serial/ValidateNameConflict 24.04
252 TestPreload 138.16
254 TestScheduledStopUnix 94.84
255 TestSkaffold 105.67
257 TestInsufficientStorage 10.14
258 TestRunningBinaryUpgrade 60.18
260 TestKubernetesUpgrade 338.78
261 TestMissingContainerUpgrade 177.73
263 TestNoKubernetes/serial/StartNoK8sWithVersion 0.09
264 TestNoKubernetes/serial/StartWithK8s 37.17
265 TestNoKubernetes/serial/StartWithStopK8s 17.45
266 TestNoKubernetes/serial/Start 10.11
267 TestNoKubernetes/serial/VerifyK8sNotRunning 0.29
268 TestNoKubernetes/serial/ProfileList 1.31
269 TestNoKubernetes/serial/Stop 1.2
270 TestNoKubernetes/serial/StartNoArgs 8.61
282 TestStoppedBinaryUpgrade/Setup 2.49
283 TestNoKubernetes/serial/VerifyK8sNotRunningSecond 0.27
284 TestStoppedBinaryUpgrade/Upgrade 156.46
285 TestStoppedBinaryUpgrade/MinikubeLogs 1.53
294 TestPause/serial/Start 70.37
295 TestNetworkPlugins/group/auto/Start 69.18
296 TestNetworkPlugins/group/flannel/Start 44.59
297 TestPause/serial/SecondStartNoReconfiguration 35.46
298 TestNetworkPlugins/group/flannel/ControllerPod 6.01
299 TestNetworkPlugins/group/flannel/KubeletFlags 0.28
300 TestNetworkPlugins/group/flannel/NetCatPod 10.21
301 TestPause/serial/Pause 0.55
302 TestPause/serial/VerifyStatus 0.32
303 TestPause/serial/Unpause 0.52
304 TestPause/serial/PauseAgain 0.67
305 TestPause/serial/DeletePaused 2.19
306 TestNetworkPlugins/group/flannel/DNS 0.18
307 TestNetworkPlugins/group/flannel/Localhost 0.16
308 TestNetworkPlugins/group/flannel/HairPin 0.12
309 TestPause/serial/VerifyDeletedResources 0.78
310 TestNetworkPlugins/group/enable-default-cni/Start 41.64
311 TestNetworkPlugins/group/auto/KubeletFlags 0.31
312 TestNetworkPlugins/group/auto/NetCatPod 10.54
313 TestNetworkPlugins/group/auto/DNS 0.2
314 TestNetworkPlugins/group/auto/Localhost 0.13
315 TestNetworkPlugins/group/auto/HairPin 0.14
316 TestNetworkPlugins/group/bridge/Start 39.72
317 TestNetworkPlugins/group/kubenet/Start 68.54
318 TestNetworkPlugins/group/enable-default-cni/KubeletFlags 0.37
319 TestNetworkPlugins/group/enable-default-cni/NetCatPod 10.36
320 TestNetworkPlugins/group/calico/Start 68.4
321 TestNetworkPlugins/group/enable-default-cni/DNS 0.17
322 TestNetworkPlugins/group/enable-default-cni/Localhost 0.16
323 TestNetworkPlugins/group/enable-default-cni/HairPin 0.13
324 TestNetworkPlugins/group/bridge/KubeletFlags 0.35
325 TestNetworkPlugins/group/bridge/NetCatPod 10.21
326 TestNetworkPlugins/group/bridge/DNS 0.2
327 TestNetworkPlugins/group/bridge/Localhost 0.15
328 TestNetworkPlugins/group/bridge/HairPin 0.19
329 TestNetworkPlugins/group/kindnet/Start 63.22
330 TestNetworkPlugins/group/custom-flannel/Start 48.79
331 TestNetworkPlugins/group/kubenet/KubeletFlags 0.34
332 TestNetworkPlugins/group/kubenet/NetCatPod 10.28
333 TestNetworkPlugins/group/kubenet/DNS 0.18
334 TestNetworkPlugins/group/kubenet/Localhost 0.14
335 TestNetworkPlugins/group/kubenet/HairPin 0.14
336 TestNetworkPlugins/group/calico/ControllerPod 6.01
337 TestNetworkPlugins/group/calico/KubeletFlags 0.34
338 TestNetworkPlugins/group/calico/NetCatPod 11.25
339 TestNetworkPlugins/group/kindnet/ControllerPod 6.01
340 TestNetworkPlugins/group/false/Start 62.93
341 TestNetworkPlugins/group/calico/DNS 0.15
342 TestNetworkPlugins/group/calico/Localhost 0.13
343 TestNetworkPlugins/group/calico/HairPin 0.13
344 TestNetworkPlugins/group/custom-flannel/KubeletFlags 0.3
345 TestNetworkPlugins/group/custom-flannel/NetCatPod 12.2
346 TestNetworkPlugins/group/kindnet/KubeletFlags 0.49
347 TestNetworkPlugins/group/kindnet/NetCatPod 9.65
348 TestNetworkPlugins/group/custom-flannel/DNS 0.15
349 TestNetworkPlugins/group/custom-flannel/Localhost 0.17
350 TestNetworkPlugins/group/custom-flannel/HairPin 0.12
351 TestNetworkPlugins/group/kindnet/DNS 0.15
352 TestNetworkPlugins/group/kindnet/Localhost 0.13
353 TestNetworkPlugins/group/kindnet/HairPin 0.12
355 TestStartStop/group/old-k8s-version/serial/FirstStart 134.42
357 TestStartStop/group/no-preload/serial/FirstStart 78.66
359 TestStartStop/group/embed-certs/serial/FirstStart 71.81
360 TestNetworkPlugins/group/false/KubeletFlags 0.35
361 TestNetworkPlugins/group/false/NetCatPod 9.22
362 TestNetworkPlugins/group/false/DNS 0.17
363 TestNetworkPlugins/group/false/Localhost 0.17
364 TestNetworkPlugins/group/false/HairPin 0.16
366 TestStartStop/group/default-k8s-diff-port/serial/FirstStart 68.27
367 TestStartStop/group/embed-certs/serial/DeployApp 9.26
368 TestStartStop/group/no-preload/serial/DeployApp 10.24
369 TestStartStop/group/embed-certs/serial/EnableAddonWhileActive 0.92
370 TestStartStop/group/embed-certs/serial/Stop 10.68
371 TestStartStop/group/no-preload/serial/EnableAddonWhileActive 0.96
372 TestStartStop/group/no-preload/serial/Stop 10.67
373 TestStartStop/group/embed-certs/serial/EnableAddonAfterStop 0.18
374 TestStartStop/group/embed-certs/serial/SecondStart 263.06
375 TestStartStop/group/no-preload/serial/EnableAddonAfterStop 0.27
376 TestStartStop/group/no-preload/serial/SecondStart 270.68
377 TestStartStop/group/old-k8s-version/serial/DeployApp 9.4
378 TestStartStop/group/default-k8s-diff-port/serial/DeployApp 9.31
379 TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive 0.83
380 TestStartStop/group/old-k8s-version/serial/Stop 10.77
381 TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive 0.87
382 TestStartStop/group/default-k8s-diff-port/serial/Stop 10.7
383 TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop 0.18
384 TestStartStop/group/old-k8s-version/serial/SecondStart 135.48
385 TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop 0.26
386 TestStartStop/group/default-k8s-diff-port/serial/SecondStart 265.64
387 TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop 6.01
388 TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop 5.07
389 TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages 0.22
390 TestStartStop/group/old-k8s-version/serial/Pause 2.43
392 TestStartStop/group/newest-cni/serial/FirstStart 27.62
393 TestStartStop/group/newest-cni/serial/DeployApp 0
394 TestStartStop/group/newest-cni/serial/EnableAddonWhileActive 0.86
395 TestStartStop/group/newest-cni/serial/Stop 5.72
396 TestStartStop/group/newest-cni/serial/EnableAddonAfterStop 0.17
397 TestStartStop/group/newest-cni/serial/SecondStart 14.21
398 TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop 0
399 TestStartStop/group/newest-cni/serial/AddonExistsAfterStop 0
400 TestStartStop/group/newest-cni/serial/VerifyKubernetesImages 0.23
401 TestStartStop/group/newest-cni/serial/Pause 2.63
402 TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop 6
403 TestStartStop/group/embed-certs/serial/AddonExistsAfterStop 5.07
404 TestStartStop/group/embed-certs/serial/VerifyKubernetesImages 0.22
405 TestStartStop/group/embed-certs/serial/Pause 2.38
406 TestStartStop/group/no-preload/serial/UserAppExistsAfterStop 6.01
407 TestStartStop/group/no-preload/serial/AddonExistsAfterStop 5.07
408 TestStartStop/group/no-preload/serial/VerifyKubernetesImages 0.21
409 TestStartStop/group/no-preload/serial/Pause 2.34
410 TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop 6
411 TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop 5.07
412 TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages 0.22
413 TestStartStop/group/default-k8s-diff-port/serial/Pause 2.38
TestDownloadOnly/v1.20.0/json-events (17.46s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.20.0/json-events
aaa_download_only_test.go:81: (dbg) Run:  out/minikube-linux-amd64 start -o=json --download-only -p download-only-521684 --force --alsologtostderr --kubernetes-version=v1.20.0 --container-runtime=docker --driver=docker  --container-runtime=docker
aaa_download_only_test.go:81: (dbg) Done: out/minikube-linux-amd64 start -o=json --download-only -p download-only-521684 --force --alsologtostderr --kubernetes-version=v1.20.0 --container-runtime=docker --driver=docker  --container-runtime=docker: (17.455632933s)
--- PASS: TestDownloadOnly/v1.20.0/json-events (17.46s)

                                                
                                    
TestDownloadOnly/v1.20.0/preload-exists (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.20.0/preload-exists
--- PASS: TestDownloadOnly/v1.20.0/preload-exists (0.00s)

                                                
                                    
TestDownloadOnly/v1.20.0/LogsDuration (0.06s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.20.0/LogsDuration
aaa_download_only_test.go:184: (dbg) Run:  out/minikube-linux-amd64 logs -p download-only-521684
aaa_download_only_test.go:184: (dbg) Non-zero exit: out/minikube-linux-amd64 logs -p download-only-521684: exit status 85 (59.31979ms)

                                                
                                                
-- stdout --
	
	==> Audit <==
	|---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	| Command |              Args              |       Profile        |  User   | Version |     Start Time      | End Time |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	| start   | -o=json --download-only        | download-only-521684 | jenkins | v1.34.0 | 13 Sep 24 23:26 UTC |          |
	|         | -p download-only-521684        |                      |         |         |                     |          |
	|         | --force --alsologtostderr      |                      |         |         |                     |          |
	|         | --kubernetes-version=v1.20.0   |                      |         |         |                     |          |
	|         | --container-runtime=docker     |                      |         |         |                     |          |
	|         | --driver=docker                |                      |         |         |                     |          |
	|         | --container-runtime=docker     |                      |         |         |                     |          |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	
	
	==> Last Start <==
	Log file created at: 2024/09/13 23:26:22
	Running on machine: ubuntu-20-agent
	Binary: Built with gc go1.23.0 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0913 23:26:22.536730   12032 out.go:345] Setting OutFile to fd 1 ...
	I0913 23:26:22.536865   12032 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0913 23:26:22.536876   12032 out.go:358] Setting ErrFile to fd 2...
	I0913 23:26:22.536881   12032 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0913 23:26:22.537067   12032 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19640-5233/.minikube/bin
	W0913 23:26:22.537231   12032 root.go:314] Error reading config file at /home/jenkins/minikube-integration/19640-5233/.minikube/config/config.json: open /home/jenkins/minikube-integration/19640-5233/.minikube/config/config.json: no such file or directory
	I0913 23:26:22.537896   12032 out.go:352] Setting JSON to true
	I0913 23:26:22.538920   12032 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent","uptime":528,"bootTime":1726269454,"procs":175,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1068-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0913 23:26:22.539032   12032 start.go:139] virtualization: kvm guest
	I0913 23:26:22.541764   12032 out.go:97] [download-only-521684] minikube v1.34.0 on Ubuntu 20.04 (kvm/amd64)
	W0913 23:26:22.541907   12032 preload.go:293] Failed to list preload files: open /home/jenkins/minikube-integration/19640-5233/.minikube/cache/preloaded-tarball: no such file or directory
	I0913 23:26:22.541966   12032 notify.go:220] Checking for updates...
	I0913 23:26:22.543539   12032 out.go:169] MINIKUBE_LOCATION=19640
	I0913 23:26:22.545160   12032 out.go:169] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0913 23:26:22.546676   12032 out.go:169] KUBECONFIG=/home/jenkins/minikube-integration/19640-5233/kubeconfig
	I0913 23:26:22.548425   12032 out.go:169] MINIKUBE_HOME=/home/jenkins/minikube-integration/19640-5233/.minikube
	I0913 23:26:22.550095   12032 out.go:169] MINIKUBE_BIN=out/minikube-linux-amd64
	W0913 23:26:22.553203   12032 out.go:321] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I0913 23:26:22.553560   12032 driver.go:394] Setting default libvirt URI to qemu:///system
	I0913 23:26:22.577917   12032 docker.go:123] docker version: linux-27.2.1:Docker Engine - Community
	I0913 23:26:22.578011   12032 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0913 23:26:22.967851   12032 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:30 OomKillDisable:true NGoroutines:53 SystemTime:2024-09-13 23:26:22.958237323 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1068-gcp OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:x86
_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33647931392 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent Labels:[] ExperimentalBuild:false ServerVersion:27.2.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:7f7fdf5fed64eb6a7caf99b3e12efcf9d60e311c Expected:7f7fdf5fed64eb6a7caf99b3e12efcf9d60e311c} RuncCommit:{ID:v1.1.14-0-g2c9f560 Expected:v1.1.14-0-g2c9f560} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerError
s:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.16.2] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.29.2] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I0913 23:26:22.967971   12032 docker.go:318] overlay module found
	I0913 23:26:22.969844   12032 out.go:97] Using the docker driver based on user configuration
	I0913 23:26:22.969872   12032 start.go:297] selected driver: docker
	I0913 23:26:22.969880   12032 start.go:901] validating driver "docker" against <nil>
	I0913 23:26:22.969970   12032 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0913 23:26:23.017872   12032 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:30 OomKillDisable:true NGoroutines:53 SystemTime:2024-09-13 23:26:23.009274472 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1068-gcp OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:x86
_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33647931392 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent Labels:[] ExperimentalBuild:false ServerVersion:27.2.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:7f7fdf5fed64eb6a7caf99b3e12efcf9d60e311c Expected:7f7fdf5fed64eb6a7caf99b3e12efcf9d60e311c} RuncCommit:{ID:v1.1.14-0-g2c9f560 Expected:v1.1.14-0-g2c9f560} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerError
s:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.16.2] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.29.2] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I0913 23:26:23.018037   12032 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0913 23:26:23.018615   12032 start_flags.go:393] Using suggested 8000MB memory alloc based on sys=32089MB, container=32089MB
	I0913 23:26:23.018796   12032 start_flags.go:929] Wait components to verify : map[apiserver:true system_pods:true]
	I0913 23:26:23.020732   12032 out.go:169] Using Docker driver with root privileges
	I0913 23:26:23.022085   12032 cni.go:84] Creating CNI manager for ""
	I0913 23:26:23.022170   12032 cni.go:162] CNI unnecessary in this configuration, recommending no CNI
	I0913 23:26:23.022232   12032 start.go:340] cluster config:
	{Name:download-only-521684 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726243947-19640@sha256:bb1287c9c0ec51ba7d8272f0f8073d6e9758ad79ff87c787fdce1c3513743243 Memory:8000 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:download-only-521684 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local Co
ntainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0913 23:26:23.023889   12032 out.go:97] Starting "download-only-521684" primary control-plane node in "download-only-521684" cluster
	I0913 23:26:23.023903   12032 cache.go:121] Beginning downloading kic base image for docker with docker
	I0913 23:26:23.025281   12032 out.go:97] Pulling base image v0.0.45-1726243947-19640 ...
	I0913 23:26:23.025301   12032 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime docker
	I0913 23:26:23.025341   12032 image.go:79] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726243947-19640@sha256:bb1287c9c0ec51ba7d8272f0f8073d6e9758ad79ff87c787fdce1c3513743243 in local docker daemon
	I0913 23:26:23.040812   12032 cache.go:149] Downloading gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726243947-19640@sha256:bb1287c9c0ec51ba7d8272f0f8073d6e9758ad79ff87c787fdce1c3513743243 to local cache
	I0913 23:26:23.040977   12032 image.go:63] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726243947-19640@sha256:bb1287c9c0ec51ba7d8272f0f8073d6e9758ad79ff87c787fdce1c3513743243 in local cache directory
	I0913 23:26:23.041065   12032 image.go:148] Writing gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726243947-19640@sha256:bb1287c9c0ec51ba7d8272f0f8073d6e9758ad79ff87c787fdce1c3513743243 to local cache
	I0913 23:26:23.164567   12032 preload.go:118] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.20.0/preloaded-images-k8s-v18-v1.20.0-docker-overlay2-amd64.tar.lz4
	I0913 23:26:23.164596   12032 cache.go:56] Caching tarball of preloaded images
	I0913 23:26:23.164759   12032 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime docker
	I0913 23:26:23.166733   12032 out.go:97] Downloading Kubernetes v1.20.0 preload ...
	I0913 23:26:23.166760   12032 preload.go:236] getting checksum for preloaded-images-k8s-v18-v1.20.0-docker-overlay2-amd64.tar.lz4 ...
	I0913 23:26:23.271420   12032 download.go:107] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.20.0/preloaded-images-k8s-v18-v1.20.0-docker-overlay2-amd64.tar.lz4?checksum=md5:9a82241e9b8b4ad2b5cca73108f2c7a3 -> /home/jenkins/minikube-integration/19640-5233/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-docker-overlay2-amd64.tar.lz4
	I0913 23:26:36.233305   12032 preload.go:247] saving checksum for preloaded-images-k8s-v18-v1.20.0-docker-overlay2-amd64.tar.lz4 ...
	I0913 23:26:36.233410   12032 preload.go:254] verifying checksum of /home/jenkins/minikube-integration/19640-5233/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-docker-overlay2-amd64.tar.lz4 ...
	I0913 23:26:37.005591   12032 cache.go:59] Finished verifying existence of preloaded tar for v1.20.0 on docker
	I0913 23:26:37.005917   12032 profile.go:143] Saving config to /home/jenkins/minikube-integration/19640-5233/.minikube/profiles/download-only-521684/config.json ...
	I0913 23:26:37.005944   12032 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19640-5233/.minikube/profiles/download-only-521684/config.json: {Name:mk558f27975303d3c25510a2f15be4916f32f40e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0913 23:26:37.006121   12032 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime docker
	I0913 23:26:37.006326   12032 download.go:107] Downloading: https://dl.k8s.io/release/v1.20.0/bin/linux/amd64/kubectl?checksum=file:https://dl.k8s.io/release/v1.20.0/bin/linux/amd64/kubectl.sha256 -> /home/jenkins/minikube-integration/19640-5233/.minikube/cache/linux/amd64/v1.20.0/kubectl
	
	
	* The control-plane node download-only-521684 host does not exist
	  To start a cluster, run: "minikube start -p download-only-521684"

-- /stdout --
aaa_download_only_test.go:185: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.20.0/LogsDuration (0.06s)

TestDownloadOnly/v1.20.0/DeleteAll (0.2s)

=== RUN   TestDownloadOnly/v1.20.0/DeleteAll
aaa_download_only_test.go:197: (dbg) Run:  out/minikube-linux-amd64 delete --all
--- PASS: TestDownloadOnly/v1.20.0/DeleteAll (0.20s)

TestDownloadOnly/v1.20.0/DeleteAlwaysSucceeds (0.13s)

=== RUN   TestDownloadOnly/v1.20.0/DeleteAlwaysSucceeds
aaa_download_only_test.go:208: (dbg) Run:  out/minikube-linux-amd64 delete -p download-only-521684
--- PASS: TestDownloadOnly/v1.20.0/DeleteAlwaysSucceeds (0.13s)

TestDownloadOnly/v1.31.1/json-events (10.97s)

=== RUN   TestDownloadOnly/v1.31.1/json-events
aaa_download_only_test.go:81: (dbg) Run:  out/minikube-linux-amd64 start -o=json --download-only -p download-only-849014 --force --alsologtostderr --kubernetes-version=v1.31.1 --container-runtime=docker --driver=docker  --container-runtime=docker
aaa_download_only_test.go:81: (dbg) Done: out/minikube-linux-amd64 start -o=json --download-only -p download-only-849014 --force --alsologtostderr --kubernetes-version=v1.31.1 --container-runtime=docker --driver=docker  --container-runtime=docker: (10.972249078s)
--- PASS: TestDownloadOnly/v1.31.1/json-events (10.97s)

TestDownloadOnly/v1.31.1/preload-exists (0s)

=== RUN   TestDownloadOnly/v1.31.1/preload-exists
--- PASS: TestDownloadOnly/v1.31.1/preload-exists (0.00s)

TestDownloadOnly/v1.31.1/LogsDuration (0.06s)

=== RUN   TestDownloadOnly/v1.31.1/LogsDuration
aaa_download_only_test.go:184: (dbg) Run:  out/minikube-linux-amd64 logs -p download-only-849014
aaa_download_only_test.go:184: (dbg) Non-zero exit: out/minikube-linux-amd64 logs -p download-only-849014: exit status 85 (55.266766ms)

-- stdout --
	
	==> Audit <==
	|---------|--------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| Command |              Args              |       Profile        |  User   | Version |     Start Time      |      End Time       |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| start   | -o=json --download-only        | download-only-521684 | jenkins | v1.34.0 | 13 Sep 24 23:26 UTC |                     |
	|         | -p download-only-521684        |                      |         |         |                     |                     |
	|         | --force --alsologtostderr      |                      |         |         |                     |                     |
	|         | --kubernetes-version=v1.20.0   |                      |         |         |                     |                     |
	|         | --container-runtime=docker     |                      |         |         |                     |                     |
	|         | --driver=docker                |                      |         |         |                     |                     |
	|         | --container-runtime=docker     |                      |         |         |                     |                     |
	| delete  | --all                          | minikube             | jenkins | v1.34.0 | 13 Sep 24 23:26 UTC | 13 Sep 24 23:26 UTC |
	| delete  | -p download-only-521684        | download-only-521684 | jenkins | v1.34.0 | 13 Sep 24 23:26 UTC | 13 Sep 24 23:26 UTC |
	| start   | -o=json --download-only        | download-only-849014 | jenkins | v1.34.0 | 13 Sep 24 23:26 UTC |                     |
	|         | -p download-only-849014        |                      |         |         |                     |                     |
	|         | --force --alsologtostderr      |                      |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.1   |                      |         |         |                     |                     |
	|         | --container-runtime=docker     |                      |         |         |                     |                     |
	|         | --driver=docker                |                      |         |         |                     |                     |
	|         | --container-runtime=docker     |                      |         |         |                     |                     |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/09/13 23:26:40
	Running on machine: ubuntu-20-agent
	Binary: Built with gc go1.23.0 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0913 23:26:40.375059   12418 out.go:345] Setting OutFile to fd 1 ...
	I0913 23:26:40.375300   12418 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0913 23:26:40.375308   12418 out.go:358] Setting ErrFile to fd 2...
	I0913 23:26:40.375312   12418 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0913 23:26:40.375489   12418 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19640-5233/.minikube/bin
	I0913 23:26:40.376043   12418 out.go:352] Setting JSON to true
	I0913 23:26:40.376859   12418 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent","uptime":546,"bootTime":1726269454,"procs":173,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1068-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0913 23:26:40.376954   12418 start.go:139] virtualization: kvm guest
	I0913 23:26:40.379022   12418 out.go:97] [download-only-849014] minikube v1.34.0 on Ubuntu 20.04 (kvm/amd64)
	I0913 23:26:40.379168   12418 notify.go:220] Checking for updates...
	I0913 23:26:40.380386   12418 out.go:169] MINIKUBE_LOCATION=19640
	I0913 23:26:40.381773   12418 out.go:169] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0913 23:26:40.383191   12418 out.go:169] KUBECONFIG=/home/jenkins/minikube-integration/19640-5233/kubeconfig
	I0913 23:26:40.384535   12418 out.go:169] MINIKUBE_HOME=/home/jenkins/minikube-integration/19640-5233/.minikube
	I0913 23:26:40.385778   12418 out.go:169] MINIKUBE_BIN=out/minikube-linux-amd64
	W0913 23:26:40.387973   12418 out.go:321] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I0913 23:26:40.388186   12418 driver.go:394] Setting default libvirt URI to qemu:///system
	I0913 23:26:40.410674   12418 docker.go:123] docker version: linux-27.2.1:Docker Engine - Community
	I0913 23:26:40.410739   12418 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0913 23:26:40.458059   12418 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:30 OomKillDisable:true NGoroutines:45 SystemTime:2024-09-13 23:26:40.449373303 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1068-gcp OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:x86
_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33647931392 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent Labels:[] ExperimentalBuild:false ServerVersion:27.2.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:7f7fdf5fed64eb6a7caf99b3e12efcf9d60e311c Expected:7f7fdf5fed64eb6a7caf99b3e12efcf9d60e311c} RuncCommit:{ID:v1.1.14-0-g2c9f560 Expected:v1.1.14-0-g2c9f560} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerError
s:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.16.2] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.29.2] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I0913 23:26:40.458163   12418 docker.go:318] overlay module found
	I0913 23:26:40.459803   12418 out.go:97] Using the docker driver based on user configuration
	I0913 23:26:40.459825   12418 start.go:297] selected driver: docker
	I0913 23:26:40.459830   12418 start.go:901] validating driver "docker" against <nil>
	I0913 23:26:40.459904   12418 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0913 23:26:40.507038   12418 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:30 OomKillDisable:true NGoroutines:45 SystemTime:2024-09-13 23:26:40.498656115 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1068-gcp OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:x86
_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33647931392 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent Labels:[] ExperimentalBuild:false ServerVersion:27.2.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:7f7fdf5fed64eb6a7caf99b3e12efcf9d60e311c Expected:7f7fdf5fed64eb6a7caf99b3e12efcf9d60e311c} RuncCommit:{ID:v1.1.14-0-g2c9f560 Expected:v1.1.14-0-g2c9f560} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerError
s:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.16.2] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.29.2] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I0913 23:26:40.507195   12418 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0913 23:26:40.507697   12418 start_flags.go:393] Using suggested 8000MB memory alloc based on sys=32089MB, container=32089MB
	I0913 23:26:40.507860   12418 start_flags.go:929] Wait components to verify : map[apiserver:true system_pods:true]
	I0913 23:26:40.509629   12418 out.go:169] Using Docker driver with root privileges
	I0913 23:26:40.510669   12418 cni.go:84] Creating CNI manager for ""
	I0913 23:26:40.510730   12418 cni.go:158] "docker" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0913 23:26:40.510742   12418 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0913 23:26:40.510810   12418 start.go:340] cluster config:
	{Name:download-only-849014 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726243947-19640@sha256:bb1287c9c0ec51ba7d8272f0f8073d6e9758ad79ff87c787fdce1c3513743243 Memory:8000 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:download-only-849014 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local Co
ntainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0913 23:26:40.511988   12418 out.go:97] Starting "download-only-849014" primary control-plane node in "download-only-849014" cluster
	I0913 23:26:40.512004   12418 cache.go:121] Beginning downloading kic base image for docker with docker
	I0913 23:26:40.513249   12418 out.go:97] Pulling base image v0.0.45-1726243947-19640 ...
	I0913 23:26:40.513281   12418 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime docker
	I0913 23:26:40.513401   12418 image.go:79] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726243947-19640@sha256:bb1287c9c0ec51ba7d8272f0f8073d6e9758ad79ff87c787fdce1c3513743243 in local docker daemon
	I0913 23:26:40.528706   12418 cache.go:149] Downloading gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726243947-19640@sha256:bb1287c9c0ec51ba7d8272f0f8073d6e9758ad79ff87c787fdce1c3513743243 to local cache
	I0913 23:26:40.528857   12418 image.go:63] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726243947-19640@sha256:bb1287c9c0ec51ba7d8272f0f8073d6e9758ad79ff87c787fdce1c3513743243 in local cache directory
	I0913 23:26:40.528877   12418 image.go:66] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726243947-19640@sha256:bb1287c9c0ec51ba7d8272f0f8073d6e9758ad79ff87c787fdce1c3513743243 in local cache directory, skipping pull
	I0913 23:26:40.528883   12418 image.go:135] gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726243947-19640@sha256:bb1287c9c0ec51ba7d8272f0f8073d6e9758ad79ff87c787fdce1c3513743243 exists in cache, skipping pull
	I0913 23:26:40.528895   12418 cache.go:152] successfully saved gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726243947-19640@sha256:bb1287c9c0ec51ba7d8272f0f8073d6e9758ad79ff87c787fdce1c3513743243 as a tarball
	I0913 23:26:40.626502   12418 preload.go:118] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.31.1/preloaded-images-k8s-v18-v1.31.1-docker-overlay2-amd64.tar.lz4
	I0913 23:26:40.626547   12418 cache.go:56] Caching tarball of preloaded images
	I0913 23:26:40.626705   12418 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime docker
	I0913 23:26:40.628630   12418 out.go:97] Downloading Kubernetes v1.31.1 preload ...
	I0913 23:26:40.628649   12418 preload.go:236] getting checksum for preloaded-images-k8s-v18-v1.31.1-docker-overlay2-amd64.tar.lz4 ...
	I0913 23:26:40.734508   12418 download.go:107] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.31.1/preloaded-images-k8s-v18-v1.31.1-docker-overlay2-amd64.tar.lz4?checksum=md5:42e9a173dd5f0c45ed1a890dd06aec5a -> /home/jenkins/minikube-integration/19640-5233/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-docker-overlay2-amd64.tar.lz4
	I0913 23:26:49.640236   12418 preload.go:247] saving checksum for preloaded-images-k8s-v18-v1.31.1-docker-overlay2-amd64.tar.lz4 ...
	I0913 23:26:49.640339   12418 preload.go:254] verifying checksum of /home/jenkins/minikube-integration/19640-5233/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-docker-overlay2-amd64.tar.lz4 ...
	I0913 23:26:50.288580   12418 cache.go:59] Finished verifying existence of preloaded tar for v1.31.1 on docker
	I0913 23:26:50.288907   12418 profile.go:143] Saving config to /home/jenkins/minikube-integration/19640-5233/.minikube/profiles/download-only-849014/config.json ...
	I0913 23:26:50.288933   12418 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19640-5233/.minikube/profiles/download-only-849014/config.json: {Name:mk2f0c344b043a0cf677ece678c4011881ffdf21 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0913 23:26:50.289087   12418 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime docker
	I0913 23:26:50.289236   12418 download.go:107] Downloading: https://dl.k8s.io/release/v1.31.1/bin/linux/amd64/kubectl?checksum=file:https://dl.k8s.io/release/v1.31.1/bin/linux/amd64/kubectl.sha256 -> /home/jenkins/minikube-integration/19640-5233/.minikube/cache/linux/amd64/v1.31.1/kubectl
	
	
	* The control-plane node download-only-849014 host does not exist
	  To start a cluster, run: "minikube start -p download-only-849014"

-- /stdout --
aaa_download_only_test.go:185: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.31.1/LogsDuration (0.06s)

TestDownloadOnly/v1.31.1/DeleteAll (0.2s)

=== RUN   TestDownloadOnly/v1.31.1/DeleteAll
aaa_download_only_test.go:197: (dbg) Run:  out/minikube-linux-amd64 delete --all
--- PASS: TestDownloadOnly/v1.31.1/DeleteAll (0.20s)

TestDownloadOnly/v1.31.1/DeleteAlwaysSucceeds (0.13s)

=== RUN   TestDownloadOnly/v1.31.1/DeleteAlwaysSucceeds
aaa_download_only_test.go:208: (dbg) Run:  out/minikube-linux-amd64 delete -p download-only-849014
--- PASS: TestDownloadOnly/v1.31.1/DeleteAlwaysSucceeds (0.13s)

TestDownloadOnlyKic (0.96s)

=== RUN   TestDownloadOnlyKic
aaa_download_only_test.go:232: (dbg) Run:  out/minikube-linux-amd64 start --download-only -p download-docker-398633 --alsologtostderr --driver=docker  --container-runtime=docker
helpers_test.go:175: Cleaning up "download-docker-398633" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p download-docker-398633
--- PASS: TestDownloadOnlyKic (0.96s)

TestBinaryMirror (0.74s)

=== RUN   TestBinaryMirror
aaa_download_only_test.go:314: (dbg) Run:  out/minikube-linux-amd64 start --download-only -p binary-mirror-717802 --alsologtostderr --binary-mirror http://127.0.0.1:32877 --driver=docker  --container-runtime=docker
helpers_test.go:175: Cleaning up "binary-mirror-717802" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p binary-mirror-717802
--- PASS: TestBinaryMirror (0.74s)

TestOffline (76.12s)

=== RUN   TestOffline
=== PAUSE TestOffline

=== CONT  TestOffline
aab_offline_test.go:55: (dbg) Run:  out/minikube-linux-amd64 start -p offline-docker-924434 --alsologtostderr -v=1 --memory=2048 --wait=true --driver=docker  --container-runtime=docker
aab_offline_test.go:55: (dbg) Done: out/minikube-linux-amd64 start -p offline-docker-924434 --alsologtostderr -v=1 --memory=2048 --wait=true --driver=docker  --container-runtime=docker: (1m13.930734504s)
helpers_test.go:175: Cleaning up "offline-docker-924434" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p offline-docker-924434
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p offline-docker-924434: (2.191091954s)
--- PASS: TestOffline (76.12s)

TestAddons/PreSetup/EnablingAddonOnNonExistingCluster (0.05s)

=== RUN   TestAddons/PreSetup/EnablingAddonOnNonExistingCluster
=== PAUSE TestAddons/PreSetup/EnablingAddonOnNonExistingCluster

=== CONT  TestAddons/PreSetup/EnablingAddonOnNonExistingCluster
addons_test.go:1037: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p addons-794116
addons_test.go:1037: (dbg) Non-zero exit: out/minikube-linux-amd64 addons enable dashboard -p addons-794116: exit status 85 (51.772917ms)

-- stdout --
	* Profile "addons-794116" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p addons-794116"

-- /stdout --
--- PASS: TestAddons/PreSetup/EnablingAddonOnNonExistingCluster (0.05s)

TestAddons/PreSetup/DisablingAddonOnNonExistingCluster (0.05s)

=== RUN   TestAddons/PreSetup/DisablingAddonOnNonExistingCluster
=== PAUSE TestAddons/PreSetup/DisablingAddonOnNonExistingCluster

=== CONT  TestAddons/PreSetup/DisablingAddonOnNonExistingCluster
addons_test.go:1048: (dbg) Run:  out/minikube-linux-amd64 addons disable dashboard -p addons-794116
addons_test.go:1048: (dbg) Non-zero exit: out/minikube-linux-amd64 addons disable dashboard -p addons-794116: exit status 85 (52.914531ms)

-- stdout --
	* Profile "addons-794116" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p addons-794116"

-- /stdout --
--- PASS: TestAddons/PreSetup/DisablingAddonOnNonExistingCluster (0.05s)

TestAddons/Setup (212.98s)

=== RUN   TestAddons/Setup
addons_test.go:110: (dbg) Run:  out/minikube-linux-amd64 start -p addons-794116 --wait=true --memory=4000 --alsologtostderr --addons=registry --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=storage-provisioner-rancher --addons=nvidia-device-plugin --addons=yakd --addons=volcano --driver=docker  --container-runtime=docker --addons=ingress --addons=ingress-dns --addons=helm-tiller
addons_test.go:110: (dbg) Done: out/minikube-linux-amd64 start -p addons-794116 --wait=true --memory=4000 --alsologtostderr --addons=registry --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=storage-provisioner-rancher --addons=nvidia-device-plugin --addons=yakd --addons=volcano --driver=docker  --container-runtime=docker --addons=ingress --addons=ingress-dns --addons=helm-tiller: (3m32.980656684s)
--- PASS: TestAddons/Setup (212.98s)

TestAddons/serial/Volcano (38.69s)

=== RUN   TestAddons/serial/Volcano
addons_test.go:897: volcano-scheduler stabilized in 9.347079ms
addons_test.go:905: volcano-admission stabilized in 9.395985ms
addons_test.go:913: volcano-controller stabilized in 9.541044ms
addons_test.go:919: (dbg) TestAddons/serial/Volcano: waiting 6m0s for pods matching "app=volcano-scheduler" in namespace "volcano-system" ...
helpers_test.go:344: "volcano-scheduler-576bc46687-nksxk" [1a88fcc3-1ccd-4abc-a5c5-f7f9b19343d8] Running
addons_test.go:919: (dbg) TestAddons/serial/Volcano: app=volcano-scheduler healthy within 5.005913752s
addons_test.go:923: (dbg) TestAddons/serial/Volcano: waiting 6m0s for pods matching "app=volcano-admission" in namespace "volcano-system" ...
helpers_test.go:344: "volcano-admission-77d7d48b68-g269c" [7fe51cfe-ba14-4561-b9c6-47032cbe425e] Running
addons_test.go:923: (dbg) TestAddons/serial/Volcano: app=volcano-admission healthy within 5.003552167s
addons_test.go:927: (dbg) TestAddons/serial/Volcano: waiting 6m0s for pods matching "app=volcano-controller" in namespace "volcano-system" ...
helpers_test.go:344: "volcano-controllers-56675bb4d5-9kmcv" [bc2166fa-97c6-478b-871d-f0f70632c5de] Running
addons_test.go:927: (dbg) TestAddons/serial/Volcano: app=volcano-controller healthy within 5.003582524s
addons_test.go:932: (dbg) Run:  kubectl --context addons-794116 delete -n volcano-system job volcano-admission-init
addons_test.go:938: (dbg) Run:  kubectl --context addons-794116 create -f testdata/vcjob.yaml
addons_test.go:946: (dbg) Run:  kubectl --context addons-794116 get vcjob -n my-volcano
addons_test.go:964: (dbg) TestAddons/serial/Volcano: waiting 3m0s for pods matching "volcano.sh/job-name=test-job" in namespace "my-volcano" ...
helpers_test.go:344: "test-job-nginx-0" [70bb45b7-8157-4a90-8539-5f746a0025e4] Pending
helpers_test.go:344: "test-job-nginx-0" [70bb45b7-8157-4a90-8539-5f746a0025e4] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
helpers_test.go:344: "test-job-nginx-0" [70bb45b7-8157-4a90-8539-5f746a0025e4] Running
addons_test.go:964: (dbg) TestAddons/serial/Volcano: volcano.sh/job-name=test-job healthy within 13.003527249s
addons_test.go:968: (dbg) Run:  out/minikube-linux-amd64 -p addons-794116 addons disable volcano --alsologtostderr -v=1
addons_test.go:968: (dbg) Done: out/minikube-linux-amd64 -p addons-794116 addons disable volcano --alsologtostderr -v=1: (10.341496032s)
--- PASS: TestAddons/serial/Volcano (38.69s)

TestAddons/serial/GCPAuth/Namespaces (0.11s)

=== RUN   TestAddons/serial/GCPAuth/Namespaces
addons_test.go:656: (dbg) Run:  kubectl --context addons-794116 create ns new-namespace
addons_test.go:670: (dbg) Run:  kubectl --context addons-794116 get secret gcp-auth -n new-namespace
--- PASS: TestAddons/serial/GCPAuth/Namespaces (0.11s)

TestAddons/parallel/Ingress (19.64s)

=== RUN   TestAddons/parallel/Ingress
=== PAUSE TestAddons/parallel/Ingress

=== CONT  TestAddons/parallel/Ingress
addons_test.go:209: (dbg) Run:  kubectl --context addons-794116 wait --for=condition=ready --namespace=ingress-nginx pod --selector=app.kubernetes.io/component=controller --timeout=90s
addons_test.go:234: (dbg) Run:  kubectl --context addons-794116 replace --force -f testdata/nginx-ingress-v1.yaml
addons_test.go:247: (dbg) Run:  kubectl --context addons-794116 replace --force -f testdata/nginx-pod-svc.yaml
addons_test.go:252: (dbg) TestAddons/parallel/Ingress: waiting 8m0s for pods matching "run=nginx" in namespace "default" ...
helpers_test.go:344: "nginx" [dc52764e-e875-4384-8cf1-f9d4590e636c] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
helpers_test.go:344: "nginx" [dc52764e-e875-4384-8cf1-f9d4590e636c] Running
addons_test.go:252: (dbg) TestAddons/parallel/Ingress: run=nginx healthy within 10.00343374s
addons_test.go:264: (dbg) Run:  out/minikube-linux-amd64 -p addons-794116 ssh "curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'"
addons_test.go:288: (dbg) Run:  kubectl --context addons-794116 replace --force -f testdata/ingress-dns-example-v1.yaml
addons_test.go:293: (dbg) Run:  out/minikube-linux-amd64 -p addons-794116 ip
addons_test.go:299: (dbg) Run:  nslookup hello-john.test 192.168.49.2
addons_test.go:308: (dbg) Run:  out/minikube-linux-amd64 -p addons-794116 addons disable ingress-dns --alsologtostderr -v=1
addons_test.go:313: (dbg) Run:  out/minikube-linux-amd64 -p addons-794116 addons disable ingress --alsologtostderr -v=1
addons_test.go:313: (dbg) Done: out/minikube-linux-amd64 -p addons-794116 addons disable ingress --alsologtostderr -v=1: (7.655615775s)
--- PASS: TestAddons/parallel/Ingress (19.64s)

TestAddons/parallel/InspektorGadget (10.8s)

=== RUN   TestAddons/parallel/InspektorGadget
=== PAUSE TestAddons/parallel/InspektorGadget

=== CONT  TestAddons/parallel/InspektorGadget
addons_test.go:848: (dbg) TestAddons/parallel/InspektorGadget: waiting 8m0s for pods matching "k8s-app=gadget" in namespace "gadget" ...
helpers_test.go:344: "gadget-drwdl" [027b2376-ad97-49bc-bf43-49c0424e8b0e] Running / Ready:ContainersNotReady (containers with unready status: [gadget]) / ContainersReady:ContainersNotReady (containers with unready status: [gadget])
addons_test.go:848: (dbg) TestAddons/parallel/InspektorGadget: k8s-app=gadget healthy within 5.046257738s
addons_test.go:851: (dbg) Run:  out/minikube-linux-amd64 addons disable inspektor-gadget -p addons-794116
addons_test.go:851: (dbg) Done: out/minikube-linux-amd64 addons disable inspektor-gadget -p addons-794116: (5.756047906s)
--- PASS: TestAddons/parallel/InspektorGadget (10.80s)

TestAddons/parallel/MetricsServer (5.57s)

=== RUN   TestAddons/parallel/MetricsServer
=== PAUSE TestAddons/parallel/MetricsServer

=== CONT  TestAddons/parallel/MetricsServer
addons_test.go:409: metrics-server stabilized in 2.285417ms
addons_test.go:411: (dbg) TestAddons/parallel/MetricsServer: waiting 6m0s for pods matching "k8s-app=metrics-server" in namespace "kube-system" ...
helpers_test.go:344: "metrics-server-84c5f94fbc-nvvcp" [0d109fff-d448-40b7-8f31-d74ccc5dc0a2] Running
addons_test.go:411: (dbg) TestAddons/parallel/MetricsServer: k8s-app=metrics-server healthy within 5.003346453s
addons_test.go:417: (dbg) Run:  kubectl --context addons-794116 top pods -n kube-system
addons_test.go:434: (dbg) Run:  out/minikube-linux-amd64 -p addons-794116 addons disable metrics-server --alsologtostderr -v=1
--- PASS: TestAddons/parallel/MetricsServer (5.57s)

TestAddons/parallel/HelmTiller (9.78s)

=== RUN   TestAddons/parallel/HelmTiller
=== PAUSE TestAddons/parallel/HelmTiller

=== CONT  TestAddons/parallel/HelmTiller
addons_test.go:458: tiller-deploy stabilized in 1.82846ms
addons_test.go:460: (dbg) TestAddons/parallel/HelmTiller: waiting 6m0s for pods matching "app=helm" in namespace "kube-system" ...
helpers_test.go:344: "tiller-deploy-b48cc5f79-cx8kt" [ac846881-5d31-4bc5-8680-f2742edc0d2f] Running
addons_test.go:460: (dbg) TestAddons/parallel/HelmTiller: app=helm healthy within 5.003251339s
addons_test.go:475: (dbg) Run:  kubectl --context addons-794116 run --rm helm-test --restart=Never --image=docker.io/alpine/helm:2.16.3 -it --namespace=kube-system -- version
addons_test.go:475: (dbg) Done: kubectl --context addons-794116 run --rm helm-test --restart=Never --image=docker.io/alpine/helm:2.16.3 -it --namespace=kube-system -- version: (4.333059476s)
addons_test.go:492: (dbg) Run:  out/minikube-linux-amd64 -p addons-794116 addons disable helm-tiller --alsologtostderr -v=1
--- PASS: TestAddons/parallel/HelmTiller (9.78s)

TestAddons/parallel/CSI (46.46s)

=== RUN   TestAddons/parallel/CSI
=== PAUSE TestAddons/parallel/CSI

=== CONT  TestAddons/parallel/CSI
addons_test.go:567: csi-hostpath-driver pods stabilized in 3.63407ms
addons_test.go:570: (dbg) Run:  kubectl --context addons-794116 create -f testdata/csi-hostpath-driver/pvc.yaml
addons_test.go:575: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pvc "hpvc" in namespace "default" ...
helpers_test.go:394: (dbg) Run:  kubectl --context addons-794116 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-794116 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-794116 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-794116 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-794116 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-794116 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-794116 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-794116 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-794116 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-794116 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-794116 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-794116 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-794116 get pvc hpvc -o jsonpath={.status.phase} -n default
addons_test.go:580: (dbg) Run:  kubectl --context addons-794116 create -f testdata/csi-hostpath-driver/pv-pod.yaml
addons_test.go:585: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pods matching "app=task-pv-pod" in namespace "default" ...
helpers_test.go:344: "task-pv-pod" [d927f46f-3c85-4276-a097-d43f57a8541b] Pending
helpers_test.go:344: "task-pv-pod" [d927f46f-3c85-4276-a097-d43f57a8541b] Pending / Ready:ContainersNotReady (containers with unready status: [task-pv-container]) / ContainersReady:ContainersNotReady (containers with unready status: [task-pv-container])
helpers_test.go:344: "task-pv-pod" [d927f46f-3c85-4276-a097-d43f57a8541b] Running
addons_test.go:585: (dbg) TestAddons/parallel/CSI: app=task-pv-pod healthy within 9.003148947s
addons_test.go:590: (dbg) Run:  kubectl --context addons-794116 create -f testdata/csi-hostpath-driver/snapshot.yaml
addons_test.go:595: (dbg) TestAddons/parallel/CSI: waiting 6m0s for volume snapshot "new-snapshot-demo" in namespace "default" ...
helpers_test.go:419: (dbg) Run:  kubectl --context addons-794116 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
helpers_test.go:419: (dbg) Run:  kubectl --context addons-794116 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
addons_test.go:600: (dbg) Run:  kubectl --context addons-794116 delete pod task-pv-pod
addons_test.go:600: (dbg) Done: kubectl --context addons-794116 delete pod task-pv-pod: (1.177609056s)
addons_test.go:606: (dbg) Run:  kubectl --context addons-794116 delete pvc hpvc
addons_test.go:612: (dbg) Run:  kubectl --context addons-794116 create -f testdata/csi-hostpath-driver/pvc-restore.yaml
addons_test.go:617: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pvc "hpvc-restore" in namespace "default" ...
helpers_test.go:394: (dbg) Run:  kubectl --context addons-794116 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-794116 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-794116 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-794116 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-794116 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-794116 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-794116 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
addons_test.go:622: (dbg) Run:  kubectl --context addons-794116 create -f testdata/csi-hostpath-driver/pv-pod-restore.yaml
addons_test.go:627: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pods matching "app=task-pv-pod-restore" in namespace "default" ...
helpers_test.go:344: "task-pv-pod-restore" [3771766c-f3fd-4f08-bfe9-576510737196] Pending
helpers_test.go:344: "task-pv-pod-restore" [3771766c-f3fd-4f08-bfe9-576510737196] Pending / Ready:ContainersNotReady (containers with unready status: [task-pv-container]) / ContainersReady:ContainersNotReady (containers with unready status: [task-pv-container])
helpers_test.go:344: "task-pv-pod-restore" [3771766c-f3fd-4f08-bfe9-576510737196] Running
addons_test.go:627: (dbg) TestAddons/parallel/CSI: app=task-pv-pod-restore healthy within 8.003525173s
addons_test.go:632: (dbg) Run:  kubectl --context addons-794116 delete pod task-pv-pod-restore
addons_test.go:636: (dbg) Run:  kubectl --context addons-794116 delete pvc hpvc-restore
addons_test.go:640: (dbg) Run:  kubectl --context addons-794116 delete volumesnapshot new-snapshot-demo
addons_test.go:644: (dbg) Run:  out/minikube-linux-amd64 -p addons-794116 addons disable csi-hostpath-driver --alsologtostderr -v=1
addons_test.go:644: (dbg) Done: out/minikube-linux-amd64 -p addons-794116 addons disable csi-hostpath-driver --alsologtostderr -v=1: (6.502487278s)
addons_test.go:648: (dbg) Run:  out/minikube-linux-amd64 -p addons-794116 addons disable volumesnapshots --alsologtostderr -v=1
--- PASS: TestAddons/parallel/CSI (46.46s)

TestAddons/parallel/Headlamp (18.76s)

=== RUN   TestAddons/parallel/Headlamp
=== PAUSE TestAddons/parallel/Headlamp

=== CONT  TestAddons/parallel/Headlamp
addons_test.go:830: (dbg) Run:  out/minikube-linux-amd64 addons enable headlamp -p addons-794116 --alsologtostderr -v=1
addons_test.go:830: (dbg) Done: out/minikube-linux-amd64 addons enable headlamp -p addons-794116 --alsologtostderr -v=1: (1.007290995s)
addons_test.go:835: (dbg) TestAddons/parallel/Headlamp: waiting 8m0s for pods matching "app.kubernetes.io/name=headlamp" in namespace "headlamp" ...
helpers_test.go:344: "headlamp-57fb76fcdb-ld5bp" [775fd877-8343-4d6a-83e6-08e41131892e] Pending / Ready:ContainersNotReady (containers with unready status: [headlamp]) / ContainersReady:ContainersNotReady (containers with unready status: [headlamp])
helpers_test.go:344: "headlamp-57fb76fcdb-ld5bp" [775fd877-8343-4d6a-83e6-08e41131892e] Running
addons_test.go:835: (dbg) TestAddons/parallel/Headlamp: app.kubernetes.io/name=headlamp healthy within 12.003370133s
addons_test.go:839: (dbg) Run:  out/minikube-linux-amd64 -p addons-794116 addons disable headlamp --alsologtostderr -v=1
addons_test.go:839: (dbg) Done: out/minikube-linux-amd64 -p addons-794116 addons disable headlamp --alsologtostderr -v=1: (5.751665887s)
--- PASS: TestAddons/parallel/Headlamp (18.76s)

TestAddons/parallel/CloudSpanner (5.46s)

=== RUN   TestAddons/parallel/CloudSpanner
=== PAUSE TestAddons/parallel/CloudSpanner

=== CONT  TestAddons/parallel/CloudSpanner
addons_test.go:867: (dbg) TestAddons/parallel/CloudSpanner: waiting 6m0s for pods matching "app=cloud-spanner-emulator" in namespace "default" ...
helpers_test.go:344: "cloud-spanner-emulator-769b77f747-8fmbl" [a09067ca-0511-488f-8d49-76db310a683a] Running
addons_test.go:867: (dbg) TestAddons/parallel/CloudSpanner: app=cloud-spanner-emulator healthy within 5.003388438s
addons_test.go:870: (dbg) Run:  out/minikube-linux-amd64 addons disable cloud-spanner -p addons-794116
--- PASS: TestAddons/parallel/CloudSpanner (5.46s)

TestAddons/parallel/LocalPath (53.25s)

=== RUN   TestAddons/parallel/LocalPath
=== PAUSE TestAddons/parallel/LocalPath

=== CONT  TestAddons/parallel/LocalPath
addons_test.go:982: (dbg) Run:  kubectl --context addons-794116 apply -f testdata/storage-provisioner-rancher/pvc.yaml
addons_test.go:988: (dbg) Run:  kubectl --context addons-794116 apply -f testdata/storage-provisioner-rancher/pod.yaml
addons_test.go:992: (dbg) TestAddons/parallel/LocalPath: waiting 5m0s for pvc "test-pvc" in namespace "default" ...
helpers_test.go:394: (dbg) Run:  kubectl --context addons-794116 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-794116 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-794116 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-794116 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-794116 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-794116 get pvc test-pvc -o jsonpath={.status.phase} -n default
addons_test.go:995: (dbg) TestAddons/parallel/LocalPath: waiting 3m0s for pods matching "run=test-local-path" in namespace "default" ...
helpers_test.go:344: "test-local-path" [9c16abce-ca4c-4042-b10d-fc9e9213b80b] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:344: "test-local-path" [9c16abce-ca4c-4042-b10d-fc9e9213b80b] Pending / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
helpers_test.go:344: "test-local-path" [9c16abce-ca4c-4042-b10d-fc9e9213b80b] Succeeded / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
addons_test.go:995: (dbg) TestAddons/parallel/LocalPath: run=test-local-path healthy within 4.004112482s
addons_test.go:1000: (dbg) Run:  kubectl --context addons-794116 get pvc test-pvc -o=json
addons_test.go:1009: (dbg) Run:  out/minikube-linux-amd64 -p addons-794116 ssh "cat /opt/local-path-provisioner/pvc-6b1450aa-8425-40e9-a121-ba8dd1de215e_default_test-pvc/file1"
addons_test.go:1021: (dbg) Run:  kubectl --context addons-794116 delete pod test-local-path
addons_test.go:1025: (dbg) Run:  kubectl --context addons-794116 delete pvc test-pvc
addons_test.go:1029: (dbg) Run:  out/minikube-linux-amd64 -p addons-794116 addons disable storage-provisioner-rancher --alsologtostderr -v=1
addons_test.go:1029: (dbg) Done: out/minikube-linux-amd64 -p addons-794116 addons disable storage-provisioner-rancher --alsologtostderr -v=1: (43.402282919s)
--- PASS: TestAddons/parallel/LocalPath (53.25s)

TestAddons/parallel/NvidiaDevicePlugin (5.41s)

=== RUN   TestAddons/parallel/NvidiaDevicePlugin
=== PAUSE TestAddons/parallel/NvidiaDevicePlugin

=== CONT  TestAddons/parallel/NvidiaDevicePlugin
addons_test.go:1061: (dbg) TestAddons/parallel/NvidiaDevicePlugin: waiting 6m0s for pods matching "name=nvidia-device-plugin-ds" in namespace "kube-system" ...
helpers_test.go:344: "nvidia-device-plugin-daemonset-4bdd4" [02179234-f14e-40d1-ad09-9dfc38705284] Running
addons_test.go:1061: (dbg) TestAddons/parallel/NvidiaDevicePlugin: name=nvidia-device-plugin-ds healthy within 5.004156782s
addons_test.go:1064: (dbg) Run:  out/minikube-linux-amd64 addons disable nvidia-device-plugin -p addons-794116
--- PASS: TestAddons/parallel/NvidiaDevicePlugin (5.41s)

TestAddons/parallel/Yakd (10.64s)

=== RUN   TestAddons/parallel/Yakd
=== PAUSE TestAddons/parallel/Yakd

=== CONT  TestAddons/parallel/Yakd
addons_test.go:1072: (dbg) TestAddons/parallel/Yakd: waiting 2m0s for pods matching "app.kubernetes.io/name=yakd-dashboard" in namespace "yakd-dashboard" ...
helpers_test.go:344: "yakd-dashboard-67d98fc6b-kfdmz" [0551b545-52dd-47fa-87d6-50502497a6da] Running
addons_test.go:1072: (dbg) TestAddons/parallel/Yakd: app.kubernetes.io/name=yakd-dashboard healthy within 5.004004511s
addons_test.go:1076: (dbg) Run:  out/minikube-linux-amd64 -p addons-794116 addons disable yakd --alsologtostderr -v=1
addons_test.go:1076: (dbg) Done: out/minikube-linux-amd64 -p addons-794116 addons disable yakd --alsologtostderr -v=1: (5.633520752s)
--- PASS: TestAddons/parallel/Yakd (10.64s)

TestAddons/StoppedEnableDisable (5.87s)

=== RUN   TestAddons/StoppedEnableDisable
addons_test.go:174: (dbg) Run:  out/minikube-linux-amd64 stop -p addons-794116
addons_test.go:174: (dbg) Done: out/minikube-linux-amd64 stop -p addons-794116: (5.632808915s)
addons_test.go:178: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p addons-794116
addons_test.go:182: (dbg) Run:  out/minikube-linux-amd64 addons disable dashboard -p addons-794116
addons_test.go:187: (dbg) Run:  out/minikube-linux-amd64 addons disable gvisor -p addons-794116
--- PASS: TestAddons/StoppedEnableDisable (5.87s)

TestCertOptions (32.18s)

=== RUN   TestCertOptions
=== PAUSE TestCertOptions

=== CONT  TestCertOptions
cert_options_test.go:49: (dbg) Run:  out/minikube-linux-amd64 start -p cert-options-614656 --memory=2048 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=docker  --container-runtime=docker
cert_options_test.go:49: (dbg) Done: out/minikube-linux-amd64 start -p cert-options-614656 --memory=2048 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=docker  --container-runtime=docker: (29.424480416s)
cert_options_test.go:60: (dbg) Run:  out/minikube-linux-amd64 -p cert-options-614656 ssh "openssl x509 -text -noout -in /var/lib/minikube/certs/apiserver.crt"
cert_options_test.go:88: (dbg) Run:  kubectl --context cert-options-614656 config view
cert_options_test.go:100: (dbg) Run:  out/minikube-linux-amd64 ssh -p cert-options-614656 -- "sudo cat /etc/kubernetes/admin.conf"
helpers_test.go:175: Cleaning up "cert-options-614656" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p cert-options-614656
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p cert-options-614656: (2.107986245s)
--- PASS: TestCertOptions (32.18s)

TestCertExpiration (239.92s)

=== RUN   TestCertExpiration
=== PAUSE TestCertExpiration

=== CONT  TestCertExpiration
cert_options_test.go:123: (dbg) Run:  out/minikube-linux-amd64 start -p cert-expiration-047925 --memory=2048 --cert-expiration=3m --driver=docker  --container-runtime=docker
cert_options_test.go:123: (dbg) Done: out/minikube-linux-amd64 start -p cert-expiration-047925 --memory=2048 --cert-expiration=3m --driver=docker  --container-runtime=docker: (33.021593234s)
cert_options_test.go:131: (dbg) Run:  out/minikube-linux-amd64 start -p cert-expiration-047925 --memory=2048 --cert-expiration=8760h --driver=docker  --container-runtime=docker
cert_options_test.go:131: (dbg) Done: out/minikube-linux-amd64 start -p cert-expiration-047925 --memory=2048 --cert-expiration=8760h --driver=docker  --container-runtime=docker: (24.566192032s)
helpers_test.go:175: Cleaning up "cert-expiration-047925" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p cert-expiration-047925
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p cert-expiration-047925: (2.329371869s)
--- PASS: TestCertExpiration (239.92s)

TestDockerFlags (26.97s)

=== RUN   TestDockerFlags
=== PAUSE TestDockerFlags

=== CONT  TestDockerFlags
docker_test.go:51: (dbg) Run:  out/minikube-linux-amd64 start -p docker-flags-728308 --cache-images=false --memory=2048 --install-addons=false --wait=false --docker-env=FOO=BAR --docker-env=BAZ=BAT --docker-opt=debug --docker-opt=icc=true --alsologtostderr -v=5 --driver=docker  --container-runtime=docker
docker_test.go:51: (dbg) Done: out/minikube-linux-amd64 start -p docker-flags-728308 --cache-images=false --memory=2048 --install-addons=false --wait=false --docker-env=FOO=BAR --docker-env=BAZ=BAT --docker-opt=debug --docker-opt=icc=true --alsologtostderr -v=5 --driver=docker  --container-runtime=docker: (24.26188729s)
docker_test.go:56: (dbg) Run:  out/minikube-linux-amd64 -p docker-flags-728308 ssh "sudo systemctl show docker --property=Environment --no-pager"
docker_test.go:67: (dbg) Run:  out/minikube-linux-amd64 -p docker-flags-728308 ssh "sudo systemctl show docker --property=ExecStart --no-pager"
helpers_test.go:175: Cleaning up "docker-flags-728308" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p docker-flags-728308
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p docker-flags-728308: (2.138882895s)
--- PASS: TestDockerFlags (26.97s)

                                                
                                    
TestForceSystemdFlag (38.68s)

=== RUN   TestForceSystemdFlag
=== PAUSE TestForceSystemdFlag
=== CONT  TestForceSystemdFlag
docker_test.go:91: (dbg) Run:  out/minikube-linux-amd64 start -p force-systemd-flag-965353 --memory=2048 --force-systemd --alsologtostderr -v=5 --driver=docker  --container-runtime=docker
docker_test.go:91: (dbg) Done: out/minikube-linux-amd64 start -p force-systemd-flag-965353 --memory=2048 --force-systemd --alsologtostderr -v=5 --driver=docker  --container-runtime=docker: (35.888390222s)
docker_test.go:110: (dbg) Run:  out/minikube-linux-amd64 -p force-systemd-flag-965353 ssh "docker info --format {{.CgroupDriver}}"
helpers_test.go:175: Cleaning up "force-systemd-flag-965353" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p force-systemd-flag-965353
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p force-systemd-flag-965353: (2.268245248s)
--- PASS: TestForceSystemdFlag (38.68s)

                                                
                                    
TestForceSystemdEnv (31.05s)

=== RUN   TestForceSystemdEnv
=== PAUSE TestForceSystemdEnv
=== CONT  TestForceSystemdEnv
docker_test.go:155: (dbg) Run:  out/minikube-linux-amd64 start -p force-systemd-env-106962 --memory=2048 --alsologtostderr -v=5 --driver=docker  --container-runtime=docker
docker_test.go:155: (dbg) Done: out/minikube-linux-amd64 start -p force-systemd-env-106962 --memory=2048 --alsologtostderr -v=5 --driver=docker  --container-runtime=docker: (28.650676278s)
docker_test.go:110: (dbg) Run:  out/minikube-linux-amd64 -p force-systemd-env-106962 ssh "docker info --format {{.CgroupDriver}}"
helpers_test.go:175: Cleaning up "force-systemd-env-106962" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p force-systemd-env-106962
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p force-systemd-env-106962: (2.078754992s)
--- PASS: TestForceSystemdEnv (31.05s)

                                                
                                    
TestKVMDriverInstallOrUpdate (4.74s)

=== RUN   TestKVMDriverInstallOrUpdate
=== PAUSE TestKVMDriverInstallOrUpdate
=== CONT  TestKVMDriverInstallOrUpdate
--- PASS: TestKVMDriverInstallOrUpdate (4.74s)

                                                
                                    
TestErrorSpam/setup (21.21s)

=== RUN   TestErrorSpam/setup
error_spam_test.go:81: (dbg) Run:  out/minikube-linux-amd64 start -p nospam-044142 -n=1 --memory=2250 --wait=false --log_dir=/tmp/nospam-044142 --driver=docker  --container-runtime=docker
error_spam_test.go:81: (dbg) Done: out/minikube-linux-amd64 start -p nospam-044142 -n=1 --memory=2250 --wait=false --log_dir=/tmp/nospam-044142 --driver=docker  --container-runtime=docker: (21.208136509s)
--- PASS: TestErrorSpam/setup (21.21s)

                                                
                                    
TestErrorSpam/start (0.57s)

=== RUN   TestErrorSpam/start
error_spam_test.go:216: Cleaning up 1 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-044142 --log_dir /tmp/nospam-044142 start --dry-run
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-044142 --log_dir /tmp/nospam-044142 start --dry-run
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-amd64 -p nospam-044142 --log_dir /tmp/nospam-044142 start --dry-run
--- PASS: TestErrorSpam/start (0.57s)

                                                
                                    
TestErrorSpam/status (0.88s)

=== RUN   TestErrorSpam/status
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-044142 --log_dir /tmp/nospam-044142 status
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-044142 --log_dir /tmp/nospam-044142 status
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-amd64 -p nospam-044142 --log_dir /tmp/nospam-044142 status
--- PASS: TestErrorSpam/status (0.88s)

                                                
                                    
TestErrorSpam/pause (1.15s)

=== RUN   TestErrorSpam/pause
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-044142 --log_dir /tmp/nospam-044142 pause
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-044142 --log_dir /tmp/nospam-044142 pause
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-amd64 -p nospam-044142 --log_dir /tmp/nospam-044142 pause
--- PASS: TestErrorSpam/pause (1.15s)

                                                
                                    
TestErrorSpam/unpause (1.39s)

=== RUN   TestErrorSpam/unpause
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-044142 --log_dir /tmp/nospam-044142 unpause
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-044142 --log_dir /tmp/nospam-044142 unpause
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-amd64 -p nospam-044142 --log_dir /tmp/nospam-044142 unpause
--- PASS: TestErrorSpam/unpause (1.39s)

                                                
                                    
TestErrorSpam/stop (10.83s)

=== RUN   TestErrorSpam/stop
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-044142 --log_dir /tmp/nospam-044142 stop
error_spam_test.go:159: (dbg) Done: out/minikube-linux-amd64 -p nospam-044142 --log_dir /tmp/nospam-044142 stop: (10.651701572s)
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-044142 --log_dir /tmp/nospam-044142 stop
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-amd64 -p nospam-044142 --log_dir /tmp/nospam-044142 stop
--- PASS: TestErrorSpam/stop (10.83s)

                                                
                                    
TestFunctional/serial/CopySyncFile (0s)

=== RUN   TestFunctional/serial/CopySyncFile
functional_test.go:1855: local sync path: /home/jenkins/minikube-integration/19640-5233/.minikube/files/etc/test/nested/copy/12020/hosts
--- PASS: TestFunctional/serial/CopySyncFile (0.00s)

                                                
                                    
TestFunctional/serial/StartWithProxy (33.57s)

=== RUN   TestFunctional/serial/StartWithProxy
functional_test.go:2234: (dbg) Run:  out/minikube-linux-amd64 start -p functional-657132 --memory=4000 --apiserver-port=8441 --wait=all --driver=docker  --container-runtime=docker
functional_test.go:2234: (dbg) Done: out/minikube-linux-amd64 start -p functional-657132 --memory=4000 --apiserver-port=8441 --wait=all --driver=docker  --container-runtime=docker: (33.567235535s)
--- PASS: TestFunctional/serial/StartWithProxy (33.57s)

                                                
                                    
TestFunctional/serial/AuditLog (0s)

=== RUN   TestFunctional/serial/AuditLog
--- PASS: TestFunctional/serial/AuditLog (0.00s)

                                                
                                    
TestFunctional/serial/SoftStart (32.13s)

=== RUN   TestFunctional/serial/SoftStart
functional_test.go:659: (dbg) Run:  out/minikube-linux-amd64 start -p functional-657132 --alsologtostderr -v=8
functional_test.go:659: (dbg) Done: out/minikube-linux-amd64 start -p functional-657132 --alsologtostderr -v=8: (32.130153192s)
functional_test.go:663: soft start took 32.130926172s for "functional-657132" cluster.
--- PASS: TestFunctional/serial/SoftStart (32.13s)

                                                
                                    
TestFunctional/serial/KubeContext (0.05s)

=== RUN   TestFunctional/serial/KubeContext
functional_test.go:681: (dbg) Run:  kubectl config current-context
--- PASS: TestFunctional/serial/KubeContext (0.05s)

                                                
                                    
TestFunctional/serial/KubectlGetPods (0.06s)

=== RUN   TestFunctional/serial/KubectlGetPods
functional_test.go:696: (dbg) Run:  kubectl --context functional-657132 get po -A
--- PASS: TestFunctional/serial/KubectlGetPods (0.06s)

                                                
                                    
TestFunctional/serial/CacheCmd/cache/add_remote (2.3s)

=== RUN   TestFunctional/serial/CacheCmd/cache/add_remote
functional_test.go:1049: (dbg) Run:  out/minikube-linux-amd64 -p functional-657132 cache add registry.k8s.io/pause:3.1
functional_test.go:1049: (dbg) Run:  out/minikube-linux-amd64 -p functional-657132 cache add registry.k8s.io/pause:3.3
functional_test.go:1049: (dbg) Run:  out/minikube-linux-amd64 -p functional-657132 cache add registry.k8s.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/add_remote (2.30s)

                                                
                                    
TestFunctional/serial/CacheCmd/cache/add_local (1.43s)

=== RUN   TestFunctional/serial/CacheCmd/cache/add_local
functional_test.go:1077: (dbg) Run:  docker build -t minikube-local-cache-test:functional-657132 /tmp/TestFunctionalserialCacheCmdcacheadd_local1963059864/001
functional_test.go:1089: (dbg) Run:  out/minikube-linux-amd64 -p functional-657132 cache add minikube-local-cache-test:functional-657132
functional_test.go:1089: (dbg) Done: out/minikube-linux-amd64 -p functional-657132 cache add minikube-local-cache-test:functional-657132: (1.101465911s)
functional_test.go:1094: (dbg) Run:  out/minikube-linux-amd64 -p functional-657132 cache delete minikube-local-cache-test:functional-657132
functional_test.go:1083: (dbg) Run:  docker rmi minikube-local-cache-test:functional-657132
--- PASS: TestFunctional/serial/CacheCmd/cache/add_local (1.43s)

                                                
                                    
TestFunctional/serial/CacheCmd/cache/CacheDelete (0.05s)

=== RUN   TestFunctional/serial/CacheCmd/cache/CacheDelete
functional_test.go:1102: (dbg) Run:  out/minikube-linux-amd64 cache delete registry.k8s.io/pause:3.3
--- PASS: TestFunctional/serial/CacheCmd/cache/CacheDelete (0.05s)

                                                
                                    
TestFunctional/serial/CacheCmd/cache/list (0.05s)

=== RUN   TestFunctional/serial/CacheCmd/cache/list
functional_test.go:1110: (dbg) Run:  out/minikube-linux-amd64 cache list
--- PASS: TestFunctional/serial/CacheCmd/cache/list (0.05s)

                                                
                                    
TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node (0.27s)

=== RUN   TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node
functional_test.go:1124: (dbg) Run:  out/minikube-linux-amd64 -p functional-657132 ssh sudo crictl images
--- PASS: TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node (0.27s)

                                                
                                    
TestFunctional/serial/CacheCmd/cache/cache_reload (1.27s)

=== RUN   TestFunctional/serial/CacheCmd/cache/cache_reload
functional_test.go:1147: (dbg) Run:  out/minikube-linux-amd64 -p functional-657132 ssh sudo docker rmi registry.k8s.io/pause:latest
functional_test.go:1153: (dbg) Run:  out/minikube-linux-amd64 -p functional-657132 ssh sudo crictl inspecti registry.k8s.io/pause:latest
functional_test.go:1153: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-657132 ssh sudo crictl inspecti registry.k8s.io/pause:latest: exit status 1 (267.395428ms)
-- stdout --
	FATA[0000] no such image "registry.k8s.io/pause:latest" present 
-- /stdout --
** stderr ** 
	ssh: Process exited with status 1
** /stderr **
functional_test.go:1158: (dbg) Run:  out/minikube-linux-amd64 -p functional-657132 cache reload
functional_test.go:1163: (dbg) Run:  out/minikube-linux-amd64 -p functional-657132 ssh sudo crictl inspecti registry.k8s.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/cache_reload (1.27s)

                                                
                                    
TestFunctional/serial/CacheCmd/cache/delete (0.09s)

=== RUN   TestFunctional/serial/CacheCmd/cache/delete
functional_test.go:1172: (dbg) Run:  out/minikube-linux-amd64 cache delete registry.k8s.io/pause:3.1
functional_test.go:1172: (dbg) Run:  out/minikube-linux-amd64 cache delete registry.k8s.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/delete (0.09s)

                                                
                                    
TestFunctional/serial/MinikubeKubectlCmd (0.11s)

=== RUN   TestFunctional/serial/MinikubeKubectlCmd
functional_test.go:716: (dbg) Run:  out/minikube-linux-amd64 -p functional-657132 kubectl -- --context functional-657132 get pods
--- PASS: TestFunctional/serial/MinikubeKubectlCmd (0.11s)

                                                
                                    
TestFunctional/serial/MinikubeKubectlCmdDirectly (0.1s)

=== RUN   TestFunctional/serial/MinikubeKubectlCmdDirectly
functional_test.go:741: (dbg) Run:  out/kubectl --context functional-657132 get pods
--- PASS: TestFunctional/serial/MinikubeKubectlCmdDirectly (0.10s)

                                                
                                    
TestFunctional/serial/ExtraConfig (40.24s)

=== RUN   TestFunctional/serial/ExtraConfig
functional_test.go:757: (dbg) Run:  out/minikube-linux-amd64 start -p functional-657132 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all
functional_test.go:757: (dbg) Done: out/minikube-linux-amd64 start -p functional-657132 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all: (40.240904144s)
functional_test.go:761: restart took 40.241019287s for "functional-657132" cluster.
--- PASS: TestFunctional/serial/ExtraConfig (40.24s)

                                                
                                    
TestFunctional/serial/ComponentHealth (0.07s)

=== RUN   TestFunctional/serial/ComponentHealth
functional_test.go:810: (dbg) Run:  kubectl --context functional-657132 get po -l tier=control-plane -n kube-system -o=json
functional_test.go:825: etcd phase: Running
functional_test.go:835: etcd status: Ready
functional_test.go:825: kube-apiserver phase: Running
functional_test.go:835: kube-apiserver status: Ready
functional_test.go:825: kube-controller-manager phase: Running
functional_test.go:835: kube-controller-manager status: Ready
functional_test.go:825: kube-scheduler phase: Running
functional_test.go:835: kube-scheduler status: Ready
--- PASS: TestFunctional/serial/ComponentHealth (0.07s)

                                                
                                    
TestFunctional/serial/LogsCmd (0.96s)

=== RUN   TestFunctional/serial/LogsCmd
functional_test.go:1236: (dbg) Run:  out/minikube-linux-amd64 -p functional-657132 logs
--- PASS: TestFunctional/serial/LogsCmd (0.96s)

                                                
                                    
TestFunctional/serial/LogsFileCmd (1.01s)

=== RUN   TestFunctional/serial/LogsFileCmd
functional_test.go:1250: (dbg) Run:  out/minikube-linux-amd64 -p functional-657132 logs --file /tmp/TestFunctionalserialLogsFileCmd1273070164/001/logs.txt
functional_test.go:1250: (dbg) Done: out/minikube-linux-amd64 -p functional-657132 logs --file /tmp/TestFunctionalserialLogsFileCmd1273070164/001/logs.txt: (1.006372164s)
--- PASS: TestFunctional/serial/LogsFileCmd (1.01s)

                                                
                                    
TestFunctional/serial/InvalidService (4.75s)

=== RUN   TestFunctional/serial/InvalidService
functional_test.go:2321: (dbg) Run:  kubectl --context functional-657132 apply -f testdata/invalidsvc.yaml
functional_test.go:2335: (dbg) Run:  out/minikube-linux-amd64 service invalid-svc -p functional-657132
functional_test.go:2335: (dbg) Non-zero exit: out/minikube-linux-amd64 service invalid-svc -p functional-657132: exit status 115 (327.987535ms)
-- stdout --
	|-----------|-------------|-------------|---------------------------|
	| NAMESPACE |    NAME     | TARGET PORT |            URL            |
	|-----------|-------------|-------------|---------------------------|
	| default   | invalid-svc |          80 | http://192.168.49.2:30596 |
	|-----------|-------------|-------------|---------------------------|
-- /stdout --
** stderr ** 
	X Exiting due to SVC_UNREACHABLE: service not available: no running pod for service invalid-svc found
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_service_96b204199e3191fa1740d4430b018a3c8028d52d_0.log                 │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
** /stderr **
functional_test.go:2327: (dbg) Run:  kubectl --context functional-657132 delete -f testdata/invalidsvc.yaml
functional_test.go:2327: (dbg) Done: kubectl --context functional-657132 delete -f testdata/invalidsvc.yaml: (1.247593401s)
--- PASS: TestFunctional/serial/InvalidService (4.75s)

                                                
                                    
TestFunctional/parallel/ConfigCmd (0.36s)

=== RUN   TestFunctional/parallel/ConfigCmd
=== PAUSE TestFunctional/parallel/ConfigCmd
=== CONT  TestFunctional/parallel/ConfigCmd
functional_test.go:1199: (dbg) Run:  out/minikube-linux-amd64 -p functional-657132 config unset cpus
functional_test.go:1199: (dbg) Run:  out/minikube-linux-amd64 -p functional-657132 config get cpus
functional_test.go:1199: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-657132 config get cpus: exit status 14 (73.554954ms)
** stderr ** 
	Error: specified key could not be found in config
** /stderr **
functional_test.go:1199: (dbg) Run:  out/minikube-linux-amd64 -p functional-657132 config set cpus 2
functional_test.go:1199: (dbg) Run:  out/minikube-linux-amd64 -p functional-657132 config get cpus
functional_test.go:1199: (dbg) Run:  out/minikube-linux-amd64 -p functional-657132 config unset cpus
functional_test.go:1199: (dbg) Run:  out/minikube-linux-amd64 -p functional-657132 config get cpus
functional_test.go:1199: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-657132 config get cpus: exit status 14 (53.622044ms)
** stderr ** 
	Error: specified key could not be found in config
** /stderr **
--- PASS: TestFunctional/parallel/ConfigCmd (0.36s)

                                                
                                    
TestFunctional/parallel/DashboardCmd (13.95s)

=== RUN   TestFunctional/parallel/DashboardCmd
=== PAUSE TestFunctional/parallel/DashboardCmd
=== CONT  TestFunctional/parallel/DashboardCmd
functional_test.go:905: (dbg) daemon: [out/minikube-linux-amd64 dashboard --url --port 36195 -p functional-657132 --alsologtostderr -v=1]
functional_test.go:910: (dbg) stopping [out/minikube-linux-amd64 dashboard --url --port 36195 -p functional-657132 --alsologtostderr -v=1] ...
helpers_test.go:508: unable to kill pid 63106: os: process already finished
--- PASS: TestFunctional/parallel/DashboardCmd (13.95s)

                                                
                                    
TestFunctional/parallel/DryRun (0.49s)

=== RUN   TestFunctional/parallel/DryRun
=== PAUSE TestFunctional/parallel/DryRun
=== CONT  TestFunctional/parallel/DryRun
functional_test.go:974: (dbg) Run:  out/minikube-linux-amd64 start -p functional-657132 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=docker
functional_test.go:974: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p functional-657132 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=docker: exit status 23 (208.131961ms)
-- stdout --
	* [functional-657132] minikube v1.34.0 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=19640
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/19640-5233/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/19640-5233/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the docker driver based on existing profile
-- /stdout --
** stderr ** 
	I0913 23:43:07.002209   62050 out.go:345] Setting OutFile to fd 1 ...
	I0913 23:43:07.002395   62050 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0913 23:43:07.002407   62050 out.go:358] Setting ErrFile to fd 2...
	I0913 23:43:07.002414   62050 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0913 23:43:07.002743   62050 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19640-5233/.minikube/bin
	I0913 23:43:07.003485   62050 out.go:352] Setting JSON to false
	I0913 23:43:07.004829   62050 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent","uptime":1533,"bootTime":1726269454,"procs":321,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1068-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0913 23:43:07.004923   62050 start.go:139] virtualization: kvm guest
	I0913 23:43:07.007987   62050 out.go:177] * [functional-657132] minikube v1.34.0 on Ubuntu 20.04 (kvm/amd64)
	I0913 23:43:07.010013   62050 out.go:177]   - MINIKUBE_LOCATION=19640
	I0913 23:43:07.010012   62050 notify.go:220] Checking for updates...
	I0913 23:43:07.011743   62050 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0913 23:43:07.013418   62050 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19640-5233/kubeconfig
	I0913 23:43:07.014995   62050 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19640-5233/.minikube
	I0913 23:43:07.016670   62050 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0913 23:43:07.018164   62050 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0913 23:43:07.020196   62050 config.go:182] Loaded profile config "functional-657132": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I0913 23:43:07.020958   62050 driver.go:394] Setting default libvirt URI to qemu:///system
	I0913 23:43:07.050492   62050 docker.go:123] docker version: linux-27.2.1:Docker Engine - Community
	I0913 23:43:07.050633   62050 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0913 23:43:07.124330   62050 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:2 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:33 OomKillDisable:true NGoroutines:53 SystemTime:2024-09-13 23:43:07.113252107 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1068-gcp OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33647931392 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent Labels:[] ExperimentalBuild:false ServerVersion:27.2.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:7f7fdf5fed64eb6a7caf99b3e12efcf9d60e311c Expected:7f7fdf5fed64eb6a7caf99b3e12efcf9d60e311c} RuncCommit:{ID:v1.1.14-0-g2c9f560 Expected:v1.1.14-0-g2c9f560} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.16.2] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.29.2] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I0913 23:43:07.124476   62050 docker.go:318] overlay module found
	I0913 23:43:07.127605   62050 out.go:177] * Using the docker driver based on existing profile
	I0913 23:43:07.128951   62050 start.go:297] selected driver: docker
	I0913 23:43:07.128973   62050 start.go:901] validating driver "docker" against &{Name:functional-657132 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726243947-19640@sha256:bb1287c9c0ec51ba7d8272f0f8073d6e9758ad79ff87c787fdce1c3513743243 Memory:4000 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:functional-657132 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0913 23:43:07.129106   62050 start.go:912] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0913 23:43:07.131777   62050 out.go:201] 
	W0913 23:43:07.133282   62050 out.go:270] X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	I0913 23:43:07.134914   62050 out.go:201] 

** /stderr **
functional_test.go:991: (dbg) Run:  out/minikube-linux-amd64 start -p functional-657132 --dry-run --alsologtostderr -v=1 --driver=docker  --container-runtime=docker
--- PASS: TestFunctional/parallel/DryRun (0.49s)

TestFunctional/parallel/InternationalLanguage (0.22s)

=== RUN   TestFunctional/parallel/InternationalLanguage
=== PAUSE TestFunctional/parallel/InternationalLanguage

=== CONT  TestFunctional/parallel/InternationalLanguage
functional_test.go:1020: (dbg) Run:  out/minikube-linux-amd64 start -p functional-657132 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=docker
functional_test.go:1020: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p functional-657132 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=docker: exit status 23 (217.254315ms)

-- stdout --
	* [functional-657132] minikube v1.34.0 sur Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=19640
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/19640-5233/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/19640-5233/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Utilisation du pilote docker basé sur le profil existant
	
	

-- /stdout --
** stderr ** 
	I0913 23:43:07.244101   62207 out.go:345] Setting OutFile to fd 1 ...
	I0913 23:43:07.244346   62207 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0913 23:43:07.244389   62207 out.go:358] Setting ErrFile to fd 2...
	I0913 23:43:07.244410   62207 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0913 23:43:07.244922   62207 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19640-5233/.minikube/bin
	I0913 23:43:07.247984   62207 out.go:352] Setting JSON to false
	I0913 23:43:07.249135   62207 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent","uptime":1533,"bootTime":1726269454,"procs":323,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1068-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0913 23:43:07.249211   62207 start.go:139] virtualization: kvm guest
	I0913 23:43:07.252034   62207 out.go:177] * [functional-657132] minikube v1.34.0 sur Ubuntu 20.04 (kvm/amd64)
	I0913 23:43:07.253836   62207 notify.go:220] Checking for updates...
	I0913 23:43:07.255993   62207 out.go:177]   - MINIKUBE_LOCATION=19640
	I0913 23:43:07.257332   62207 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0913 23:43:07.258782   62207 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19640-5233/kubeconfig
	I0913 23:43:07.260164   62207 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19640-5233/.minikube
	I0913 23:43:07.261650   62207 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0913 23:43:07.263092   62207 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0913 23:43:07.265511   62207 config.go:182] Loaded profile config "functional-657132": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I0913 23:43:07.267092   62207 driver.go:394] Setting default libvirt URI to qemu:///system
	I0913 23:43:07.307808   62207 docker.go:123] docker version: linux-27.2.1:Docker Engine - Community
	I0913 23:43:07.307903   62207 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0913 23:43:07.379971   62207 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:2 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:33 OomKillDisable:true NGoroutines:53 SystemTime:2024-09-13 23:43:07.368315409 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1068-gcp OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33647931392 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent Labels:[] ExperimentalBuild:false ServerVersion:27.2.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:7f7fdf5fed64eb6a7caf99b3e12efcf9d60e311c Expected:7f7fdf5fed64eb6a7caf99b3e12efcf9d60e311c} RuncCommit:{ID:v1.1.14-0-g2c9f560 Expected:v1.1.14-0-g2c9f560} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.16.2] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.29.2] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I0913 23:43:07.380094   62207 docker.go:318] overlay module found
	I0913 23:43:07.383399   62207 out.go:177] * Utilisation du pilote docker basé sur le profil existant
	I0913 23:43:07.384968   62207 start.go:297] selected driver: docker
	I0913 23:43:07.384989   62207 start.go:901] validating driver "docker" against &{Name:functional-657132 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726243947-19640@sha256:bb1287c9c0ec51ba7d8272f0f8073d6e9758ad79ff87c787fdce1c3513743243 Memory:4000 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:functional-657132 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0913 23:43:07.385105   62207 start.go:912] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0913 23:43:07.387876   62207 out.go:201] 
	W0913 23:43:07.389874   62207 out.go:270] X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	I0913 23:43:07.391339   62207 out.go:201] 

** /stderr **
--- PASS: TestFunctional/parallel/InternationalLanguage (0.22s)

TestFunctional/parallel/StatusCmd (1.11s)

=== RUN   TestFunctional/parallel/StatusCmd
=== PAUSE TestFunctional/parallel/StatusCmd

=== CONT  TestFunctional/parallel/StatusCmd
functional_test.go:854: (dbg) Run:  out/minikube-linux-amd64 -p functional-657132 status
functional_test.go:860: (dbg) Run:  out/minikube-linux-amd64 -p functional-657132 status -f host:{{.Host}},kublet:{{.Kubelet}},apiserver:{{.APIServer}},kubeconfig:{{.Kubeconfig}}
functional_test.go:872: (dbg) Run:  out/minikube-linux-amd64 -p functional-657132 status -o json
--- PASS: TestFunctional/parallel/StatusCmd (1.11s)

TestFunctional/parallel/ServiceCmdConnect (9.73s)

=== RUN   TestFunctional/parallel/ServiceCmdConnect
=== PAUSE TestFunctional/parallel/ServiceCmdConnect

=== CONT  TestFunctional/parallel/ServiceCmdConnect
functional_test.go:1629: (dbg) Run:  kubectl --context functional-657132 create deployment hello-node-connect --image=registry.k8s.io/echoserver:1.8
functional_test.go:1635: (dbg) Run:  kubectl --context functional-657132 expose deployment hello-node-connect --type=NodePort --port=8080
functional_test.go:1640: (dbg) TestFunctional/parallel/ServiceCmdConnect: waiting 10m0s for pods matching "app=hello-node-connect" in namespace "default" ...
helpers_test.go:344: "hello-node-connect-67bdd5bbb4-tdwqz" [ac24ceba-16d2-4007-982f-e111d4eaf012] Pending / Ready:ContainersNotReady (containers with unready status: [echoserver]) / ContainersReady:ContainersNotReady (containers with unready status: [echoserver])
helpers_test.go:344: "hello-node-connect-67bdd5bbb4-tdwqz" [ac24ceba-16d2-4007-982f-e111d4eaf012] Running
functional_test.go:1640: (dbg) TestFunctional/parallel/ServiceCmdConnect: app=hello-node-connect healthy within 9.003813549s
functional_test.go:1649: (dbg) Run:  out/minikube-linux-amd64 -p functional-657132 service hello-node-connect --url
functional_test.go:1655: found endpoint for hello-node-connect: http://192.168.49.2:31791
functional_test.go:1675: http://192.168.49.2:31791: success! body:

Hostname: hello-node-connect-67bdd5bbb4-tdwqz

Pod Information:
	-no pod information available-

Server values:
	server_version=nginx: 1.13.3 - lua: 10008

Request Information:
	client_address=10.244.0.1
	method=GET
	real path=/
	query=
	request_version=1.1
	request_uri=http://192.168.49.2:8080/

Request Headers:
	accept-encoding=gzip
	host=192.168.49.2:31791
	user-agent=Go-http-client/1.1

Request Body:
	-no body in request-

--- PASS: TestFunctional/parallel/ServiceCmdConnect (9.73s)

TestFunctional/parallel/AddonsCmd (0.14s)

=== RUN   TestFunctional/parallel/AddonsCmd
=== PAUSE TestFunctional/parallel/AddonsCmd

=== CONT  TestFunctional/parallel/AddonsCmd
functional_test.go:1690: (dbg) Run:  out/minikube-linux-amd64 -p functional-657132 addons list
functional_test.go:1702: (dbg) Run:  out/minikube-linux-amd64 -p functional-657132 addons list -o json
--- PASS: TestFunctional/parallel/AddonsCmd (0.14s)

TestFunctional/parallel/PersistentVolumeClaim (36.5s)

=== RUN   TestFunctional/parallel/PersistentVolumeClaim
=== PAUSE TestFunctional/parallel/PersistentVolumeClaim

=== CONT  TestFunctional/parallel/PersistentVolumeClaim
functional_test_pvc_test.go:44: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 4m0s for pods matching "integration-test=storage-provisioner" in namespace "kube-system" ...
helpers_test.go:344: "storage-provisioner" [4990dc0a-d8d7-4ad1-bc49-1f463b08b8cb] Running
functional_test_pvc_test.go:44: (dbg) TestFunctional/parallel/PersistentVolumeClaim: integration-test=storage-provisioner healthy within 6.07167757s
functional_test_pvc_test.go:49: (dbg) Run:  kubectl --context functional-657132 get storageclass -o=json
functional_test_pvc_test.go:69: (dbg) Run:  kubectl --context functional-657132 apply -f testdata/storage-provisioner/pvc.yaml
functional_test_pvc_test.go:76: (dbg) Run:  kubectl --context functional-657132 get pvc myclaim -o=json
functional_test_pvc_test.go:125: (dbg) Run:  kubectl --context functional-657132 apply -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 3m0s for pods matching "test=storage-provisioner" in namespace "default" ...
helpers_test.go:344: "sp-pod" [943c0d8e-9c3a-4911-b5fa-889032573d6c] Pending
helpers_test.go:344: "sp-pod" [943c0d8e-9c3a-4911-b5fa-889032573d6c] Pending / Ready:ContainersNotReady (containers with unready status: [myfrontend]) / ContainersReady:ContainersNotReady (containers with unready status: [myfrontend])
helpers_test.go:344: "sp-pod" [943c0d8e-9c3a-4911-b5fa-889032573d6c] Running
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: test=storage-provisioner healthy within 22.00332563s
functional_test_pvc_test.go:100: (dbg) Run:  kubectl --context functional-657132 exec sp-pod -- touch /tmp/mount/foo
functional_test_pvc_test.go:106: (dbg) Run:  kubectl --context functional-657132 delete -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:125: (dbg) Run:  kubectl --context functional-657132 apply -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 3m0s for pods matching "test=storage-provisioner" in namespace "default" ...
helpers_test.go:344: "sp-pod" [11e7d9e0-1ed3-471c-a163-52f1f9f5230a] Pending
helpers_test.go:344: "sp-pod" [11e7d9e0-1ed3-471c-a163-52f1f9f5230a] Pending / Ready:ContainersNotReady (containers with unready status: [myfrontend]) / ContainersReady:ContainersNotReady (containers with unready status: [myfrontend])
helpers_test.go:344: "sp-pod" [11e7d9e0-1ed3-471c-a163-52f1f9f5230a] Running
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: test=storage-provisioner healthy within 7.003306553s
functional_test_pvc_test.go:114: (dbg) Run:  kubectl --context functional-657132 exec sp-pod -- ls /tmp/mount
--- PASS: TestFunctional/parallel/PersistentVolumeClaim (36.50s)

TestFunctional/parallel/SSHCmd (0.63s)

=== RUN   TestFunctional/parallel/SSHCmd
=== PAUSE TestFunctional/parallel/SSHCmd

=== CONT  TestFunctional/parallel/SSHCmd
functional_test.go:1725: (dbg) Run:  out/minikube-linux-amd64 -p functional-657132 ssh "echo hello"
functional_test.go:1742: (dbg) Run:  out/minikube-linux-amd64 -p functional-657132 ssh "cat /etc/hostname"
--- PASS: TestFunctional/parallel/SSHCmd (0.63s)

TestFunctional/parallel/CpCmd (2.03s)

=== RUN   TestFunctional/parallel/CpCmd
=== PAUSE TestFunctional/parallel/CpCmd

=== CONT  TestFunctional/parallel/CpCmd
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p functional-657132 cp testdata/cp-test.txt /home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p functional-657132 ssh -n functional-657132 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p functional-657132 cp functional-657132:/home/docker/cp-test.txt /tmp/TestFunctionalparallelCpCmd380910353/001/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p functional-657132 ssh -n functional-657132 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p functional-657132 cp testdata/cp-test.txt /tmp/does/not/exist/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p functional-657132 ssh -n functional-657132 "sudo cat /tmp/does/not/exist/cp-test.txt"
--- PASS: TestFunctional/parallel/CpCmd (2.03s)

TestFunctional/parallel/MySQL (25.42s)

=== RUN   TestFunctional/parallel/MySQL
=== PAUSE TestFunctional/parallel/MySQL

=== CONT  TestFunctional/parallel/MySQL
functional_test.go:1793: (dbg) Run:  kubectl --context functional-657132 replace --force -f testdata/mysql.yaml
functional_test.go:1799: (dbg) TestFunctional/parallel/MySQL: waiting 10m0s for pods matching "app=mysql" in namespace "default" ...
helpers_test.go:344: "mysql-6cdb49bbb-564dg" [c9d85f0a-e3d2-4f14-9470-ab8096f0a235] Pending / Ready:ContainersNotReady (containers with unready status: [mysql]) / ContainersReady:ContainersNotReady (containers with unready status: [mysql])
helpers_test.go:344: "mysql-6cdb49bbb-564dg" [c9d85f0a-e3d2-4f14-9470-ab8096f0a235] Running
functional_test.go:1799: (dbg) TestFunctional/parallel/MySQL: app=mysql healthy within 21.005162875s
functional_test.go:1807: (dbg) Run:  kubectl --context functional-657132 exec mysql-6cdb49bbb-564dg -- mysql -ppassword -e "show databases;"
functional_test.go:1807: (dbg) Non-zero exit: kubectl --context functional-657132 exec mysql-6cdb49bbb-564dg -- mysql -ppassword -e "show databases;": exit status 1 (223.146612ms)

** stderr ** 
	mysql: [Warning] Using a password on the command line interface can be insecure.
	ERROR 1045 (28000): Access denied for user 'root'@'localhost' (using password: YES)
	command terminated with exit code 1

** /stderr **
functional_test.go:1807: (dbg) Run:  kubectl --context functional-657132 exec mysql-6cdb49bbb-564dg -- mysql -ppassword -e "show databases;"
functional_test.go:1807: (dbg) Non-zero exit: kubectl --context functional-657132 exec mysql-6cdb49bbb-564dg -- mysql -ppassword -e "show databases;": exit status 1 (111.51486ms)

** stderr ** 
	mysql: [Warning] Using a password on the command line interface can be insecure.
	ERROR 2002 (HY000): Can't connect to local MySQL server through socket '/var/run/mysqld/mysqld.sock' (2)
	command terminated with exit code 1

** /stderr **
functional_test.go:1807: (dbg) Run:  kubectl --context functional-657132 exec mysql-6cdb49bbb-564dg -- mysql -ppassword -e "show databases;"
functional_test.go:1807: (dbg) Non-zero exit: kubectl --context functional-657132 exec mysql-6cdb49bbb-564dg -- mysql -ppassword -e "show databases;": exit status 1 (108.193279ms)

** stderr ** 
	mysql: [Warning] Using a password on the command line interface can be insecure.
	ERROR 2002 (HY000): Can't connect to local MySQL server through socket '/var/run/mysqld/mysqld.sock' (2)
	command terminated with exit code 1

** /stderr **
functional_test.go:1807: (dbg) Run:  kubectl --context functional-657132 exec mysql-6cdb49bbb-564dg -- mysql -ppassword -e "show databases;"
--- PASS: TestFunctional/parallel/MySQL (25.42s)

TestFunctional/parallel/FileSync (0.32s)

=== RUN   TestFunctional/parallel/FileSync
=== PAUSE TestFunctional/parallel/FileSync

=== CONT  TestFunctional/parallel/FileSync
functional_test.go:1929: Checking for existence of /etc/test/nested/copy/12020/hosts within VM
functional_test.go:1931: (dbg) Run:  out/minikube-linux-amd64 -p functional-657132 ssh "sudo cat /etc/test/nested/copy/12020/hosts"
functional_test.go:1936: file sync test content: Test file for checking file sync process
--- PASS: TestFunctional/parallel/FileSync (0.32s)

TestFunctional/parallel/CertSync (2.12s)

=== RUN   TestFunctional/parallel/CertSync
=== PAUSE TestFunctional/parallel/CertSync

=== CONT  TestFunctional/parallel/CertSync
functional_test.go:1972: Checking for existence of /etc/ssl/certs/12020.pem within VM
functional_test.go:1973: (dbg) Run:  out/minikube-linux-amd64 -p functional-657132 ssh "sudo cat /etc/ssl/certs/12020.pem"
functional_test.go:1972: Checking for existence of /usr/share/ca-certificates/12020.pem within VM
functional_test.go:1973: (dbg) Run:  out/minikube-linux-amd64 -p functional-657132 ssh "sudo cat /usr/share/ca-certificates/12020.pem"
functional_test.go:1972: Checking for existence of /etc/ssl/certs/51391683.0 within VM
functional_test.go:1973: (dbg) Run:  out/minikube-linux-amd64 -p functional-657132 ssh "sudo cat /etc/ssl/certs/51391683.0"
functional_test.go:1999: Checking for existence of /etc/ssl/certs/120202.pem within VM
functional_test.go:2000: (dbg) Run:  out/minikube-linux-amd64 -p functional-657132 ssh "sudo cat /etc/ssl/certs/120202.pem"
functional_test.go:1999: Checking for existence of /usr/share/ca-certificates/120202.pem within VM
functional_test.go:2000: (dbg) Run:  out/minikube-linux-amd64 -p functional-657132 ssh "sudo cat /usr/share/ca-certificates/120202.pem"
functional_test.go:1999: Checking for existence of /etc/ssl/certs/3ec20f2e.0 within VM
functional_test.go:2000: (dbg) Run:  out/minikube-linux-amd64 -p functional-657132 ssh "sudo cat /etc/ssl/certs/3ec20f2e.0"
--- PASS: TestFunctional/parallel/CertSync (2.12s)

TestFunctional/parallel/NodeLabels (0.08s)

=== RUN   TestFunctional/parallel/NodeLabels
=== PAUSE TestFunctional/parallel/NodeLabels

=== CONT  TestFunctional/parallel/NodeLabels
functional_test.go:219: (dbg) Run:  kubectl --context functional-657132 get nodes --output=go-template "--template='{{range $k, $v := (index .items 0).metadata.labels}}{{$k}} {{end}}'"
--- PASS: TestFunctional/parallel/NodeLabels (0.08s)

TestFunctional/parallel/NonActiveRuntimeDisabled (0.34s)

=== RUN   TestFunctional/parallel/NonActiveRuntimeDisabled
=== PAUSE TestFunctional/parallel/NonActiveRuntimeDisabled

=== CONT  TestFunctional/parallel/NonActiveRuntimeDisabled
functional_test.go:2027: (dbg) Run:  out/minikube-linux-amd64 -p functional-657132 ssh "sudo systemctl is-active crio"
functional_test.go:2027: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-657132 ssh "sudo systemctl is-active crio": exit status 1 (336.503406ms)

-- stdout --
	inactive

-- /stdout --
** stderr ** 
	ssh: Process exited with status 3

** /stderr **
--- PASS: TestFunctional/parallel/NonActiveRuntimeDisabled (0.34s)

TestFunctional/parallel/License (0.75s)

=== RUN   TestFunctional/parallel/License
=== PAUSE TestFunctional/parallel/License

=== CONT  TestFunctional/parallel/License
functional_test.go:2288: (dbg) Run:  out/minikube-linux-amd64 license
--- PASS: TestFunctional/parallel/License (0.75s)

TestFunctional/parallel/ServiceCmd/DeployApp (9.2s)

=== RUN   TestFunctional/parallel/ServiceCmd/DeployApp
functional_test.go:1439: (dbg) Run:  kubectl --context functional-657132 create deployment hello-node --image=registry.k8s.io/echoserver:1.8
functional_test.go:1445: (dbg) Run:  kubectl --context functional-657132 expose deployment hello-node --type=NodePort --port=8080
functional_test.go:1450: (dbg) TestFunctional/parallel/ServiceCmd/DeployApp: waiting 10m0s for pods matching "app=hello-node" in namespace "default" ...
helpers_test.go:344: "hello-node-6b9f76b5c7-djc8f" [61dc97ea-4642-40f4-bda0-04babd88cbef] Pending / Ready:ContainersNotReady (containers with unready status: [echoserver]) / ContainersReady:ContainersNotReady (containers with unready status: [echoserver])
helpers_test.go:344: "hello-node-6b9f76b5c7-djc8f" [61dc97ea-4642-40f4-bda0-04babd88cbef] Running
functional_test.go:1450: (dbg) TestFunctional/parallel/ServiceCmd/DeployApp: app=hello-node healthy within 9.004321792s
--- PASS: TestFunctional/parallel/ServiceCmd/DeployApp (9.20s)

TestFunctional/parallel/ProfileCmd/profile_not_create (0.43s)

=== RUN   TestFunctional/parallel/ProfileCmd/profile_not_create
functional_test.go:1270: (dbg) Run:  out/minikube-linux-amd64 profile lis
functional_test.go:1275: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestFunctional/parallel/ProfileCmd/profile_not_create (0.43s)

TestFunctional/parallel/ProfileCmd/profile_list (0.42s)

=== RUN   TestFunctional/parallel/ProfileCmd/profile_list
functional_test.go:1310: (dbg) Run:  out/minikube-linux-amd64 profile list
functional_test.go:1315: Took "350.770782ms" to run "out/minikube-linux-amd64 profile list"
functional_test.go:1324: (dbg) Run:  out/minikube-linux-amd64 profile list -l
functional_test.go:1329: Took "72.209985ms" to run "out/minikube-linux-amd64 profile list -l"
--- PASS: TestFunctional/parallel/ProfileCmd/profile_list (0.42s)

TestFunctional/parallel/MountCmd/any-port (7.98s)

=== RUN   TestFunctional/parallel/MountCmd/any-port
functional_test_mount_test.go:73: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-657132 /tmp/TestFunctionalparallelMountCmdany-port4127781575/001:/mount-9p --alsologtostderr -v=1]
functional_test_mount_test.go:107: wrote "test-1726270986255972757" to /tmp/TestFunctionalparallelMountCmdany-port4127781575/001/created-by-test
functional_test_mount_test.go:107: wrote "test-1726270986255972757" to /tmp/TestFunctionalparallelMountCmdany-port4127781575/001/created-by-test-removed-by-pod
functional_test_mount_test.go:107: wrote "test-1726270986255972757" to /tmp/TestFunctionalparallelMountCmdany-port4127781575/001/test-1726270986255972757
functional_test_mount_test.go:115: (dbg) Run:  out/minikube-linux-amd64 -p functional-657132 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:115: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-657132 ssh "findmnt -T /mount-9p | grep 9p": exit status 1 (396.949676ms)

** stderr ** 
	ssh: Process exited with status 1

** /stderr **
functional_test_mount_test.go:115: (dbg) Run:  out/minikube-linux-amd64 -p functional-657132 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:129: (dbg) Run:  out/minikube-linux-amd64 -p functional-657132 ssh -- ls -la /mount-9p
functional_test_mount_test.go:133: guest mount directory contents
total 2
-rw-r--r-- 1 docker docker 24 Sep 13 23:43 created-by-test
-rw-r--r-- 1 docker docker 24 Sep 13 23:43 created-by-test-removed-by-pod
-rw-r--r-- 1 docker docker 24 Sep 13 23:43 test-1726270986255972757
functional_test_mount_test.go:137: (dbg) Run:  out/minikube-linux-amd64 -p functional-657132 ssh cat /mount-9p/test-1726270986255972757
functional_test_mount_test.go:148: (dbg) Run:  kubectl --context functional-657132 replace --force -f testdata/busybox-mount-test.yaml
functional_test_mount_test.go:153: (dbg) TestFunctional/parallel/MountCmd/any-port: waiting 4m0s for pods matching "integration-test=busybox-mount" in namespace "default" ...
helpers_test.go:344: "busybox-mount" [ef82654d-1f5d-483a-bd8d-27aebbc425a9] Pending
helpers_test.go:344: "busybox-mount" [ef82654d-1f5d-483a-bd8d-27aebbc425a9] Pending / Ready:ContainersNotReady (containers with unready status: [mount-munger]) / ContainersReady:ContainersNotReady (containers with unready status: [mount-munger])
helpers_test.go:344: "busybox-mount" [ef82654d-1f5d-483a-bd8d-27aebbc425a9] Pending / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
helpers_test.go:344: "busybox-mount" [ef82654d-1f5d-483a-bd8d-27aebbc425a9] Succeeded / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
functional_test_mount_test.go:153: (dbg) TestFunctional/parallel/MountCmd/any-port: integration-test=busybox-mount healthy within 5.003162741s
functional_test_mount_test.go:169: (dbg) Run:  kubectl --context functional-657132 logs busybox-mount
functional_test_mount_test.go:181: (dbg) Run:  out/minikube-linux-amd64 -p functional-657132 ssh stat /mount-9p/created-by-test
functional_test_mount_test.go:181: (dbg) Run:  out/minikube-linux-amd64 -p functional-657132 ssh stat /mount-9p/created-by-pod
functional_test_mount_test.go:90: (dbg) Run:  out/minikube-linux-amd64 -p functional-657132 ssh "sudo umount -f /mount-9p"
functional_test_mount_test.go:94: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-657132 /tmp/TestFunctionalparallelMountCmdany-port4127781575/001:/mount-9p --alsologtostderr -v=1] ...
--- PASS: TestFunctional/parallel/MountCmd/any-port (7.98s)

TestFunctional/parallel/ProfileCmd/profile_json_output (0.57s)

=== RUN   TestFunctional/parallel/ProfileCmd/profile_json_output
functional_test.go:1361: (dbg) Run:  out/minikube-linux-amd64 profile list -o json
functional_test.go:1366: Took "508.00832ms" to run "out/minikube-linux-amd64 profile list -o json"
functional_test.go:1374: (dbg) Run:  out/minikube-linux-amd64 profile list -o json --light
functional_test.go:1379: Took "59.193038ms" to run "out/minikube-linux-amd64 profile list -o json --light"
--- PASS: TestFunctional/parallel/ProfileCmd/profile_json_output (0.57s)

TestFunctional/parallel/Version/short (0.07s)

=== RUN   TestFunctional/parallel/Version/short
=== PAUSE TestFunctional/parallel/Version/short

=== CONT  TestFunctional/parallel/Version/short
functional_test.go:2256: (dbg) Run:  out/minikube-linux-amd64 -p functional-657132 version --short
--- PASS: TestFunctional/parallel/Version/short (0.07s)

TestFunctional/parallel/Version/components (0.55s)

=== RUN   TestFunctional/parallel/Version/components
=== PAUSE TestFunctional/parallel/Version/components

=== CONT  TestFunctional/parallel/Version/components
functional_test.go:2270: (dbg) Run:  out/minikube-linux-amd64 -p functional-657132 version -o=json --components
--- PASS: TestFunctional/parallel/Version/components (0.55s)

TestFunctional/parallel/ImageCommands/ImageListShort (0.22s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageListShort
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListShort

=== CONT  TestFunctional/parallel/ImageCommands/ImageListShort
functional_test.go:261: (dbg) Run:  out/minikube-linux-amd64 -p functional-657132 image ls --format short --alsologtostderr
functional_test.go:266: (dbg) Stdout: out/minikube-linux-amd64 -p functional-657132 image ls --format short --alsologtostderr:
registry.k8s.io/pause:latest
registry.k8s.io/pause:3.3
registry.k8s.io/pause:3.10
registry.k8s.io/pause:3.1
registry.k8s.io/kube-scheduler:v1.31.1
registry.k8s.io/kube-proxy:v1.31.1
registry.k8s.io/kube-controller-manager:v1.31.1
registry.k8s.io/kube-apiserver:v1.31.1
registry.k8s.io/etcd:3.5.15-0
registry.k8s.io/echoserver:1.8
registry.k8s.io/coredns/coredns:v1.11.3
gcr.io/k8s-minikube/storage-provisioner:v5
gcr.io/k8s-minikube/busybox:1.28.4-glibc
docker.io/library/mysql:5.7
docker.io/library/minikube-local-cache-test:functional-657132
docker.io/kubernetesui/metrics-scraper:<none>
docker.io/kubernetesui/dashboard:<none>
docker.io/kicbase/echo-server:functional-657132
functional_test.go:269: (dbg) Stderr: out/minikube-linux-amd64 -p functional-657132 image ls --format short --alsologtostderr:
I0913 23:43:32.706967   68780 out.go:345] Setting OutFile to fd 1 ...
I0913 23:43:32.707548   68780 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0913 23:43:32.707568   68780 out.go:358] Setting ErrFile to fd 2...
I0913 23:43:32.707577   68780 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0913 23:43:32.708055   68780 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19640-5233/.minikube/bin
I0913 23:43:32.709187   68780 config.go:182] Loaded profile config "functional-657132": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.31.1
I0913 23:43:32.709401   68780 config.go:182] Loaded profile config "functional-657132": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.31.1
I0913 23:43:32.709898   68780 cli_runner.go:164] Run: docker container inspect functional-657132 --format={{.State.Status}}
I0913 23:43:32.727908   68780 ssh_runner.go:195] Run: systemctl --version
I0913 23:43:32.727954   68780 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-657132
I0913 23:43:32.746010   68780 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32778 SSHKeyPath:/home/jenkins/minikube-integration/19640-5233/.minikube/machines/functional-657132/id_rsa Username:docker}
I0913 23:43:32.846853   68780 ssh_runner.go:195] Run: docker images --no-trunc --format "{{json .}}"
--- PASS: TestFunctional/parallel/ImageCommands/ImageListShort (0.22s)

TestFunctional/parallel/ImageCommands/ImageListTable (0.21s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageListTable
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListTable

=== CONT  TestFunctional/parallel/ImageCommands/ImageListTable
functional_test.go:261: (dbg) Run:  out/minikube-linux-amd64 -p functional-657132 image ls --format table --alsologtostderr
functional_test.go:266: (dbg) Stdout: out/minikube-linux-amd64 -p functional-657132 image ls --format table --alsologtostderr:
|---------------------------------------------|-------------------|---------------|--------|
|                    Image                    |        Tag        |   Image ID    |  Size  |
|---------------------------------------------|-------------------|---------------|--------|
| registry.k8s.io/etcd                        | 3.5.15-0          | 2e96e5913fc06 | 148MB  |
| registry.k8s.io/pause                       | 3.10              | 873ed75102791 | 736kB  |
| docker.io/kubernetesui/metrics-scraper      | <none>            | 115053965e86b | 43.8MB |
| gcr.io/k8s-minikube/busybox                 | 1.28.4-glibc      | 56cc512116c8f | 4.4MB  |
| registry.k8s.io/kube-scheduler              | v1.31.1           | 9aa1fad941575 | 67.4MB |
| docker.io/library/nginx                     | alpine            | c7b4f26a7d93f | 43.2MB |
| registry.k8s.io/coredns/coredns             | v1.11.3           | c69fa2e9cbf5f | 61.8MB |
| docker.io/kicbase/echo-server               | functional-657132 | 9056ab77afb8e | 4.94MB |
| gcr.io/k8s-minikube/storage-provisioner     | v5                | 6e38f40d628db | 31.5MB |
| registry.k8s.io/pause                       | latest            | 350b164e7ae1d | 240kB  |
| registry.k8s.io/kube-controller-manager     | v1.31.1           | 175ffd71cce3d | 88.4MB |
| docker.io/library/mysql                     | 5.7               | 5107333e08a87 | 501MB  |
| docker.io/kubernetesui/dashboard            | <none>            | 07655ddf2eebe | 246MB  |
| registry.k8s.io/pause                       | 3.1               | da86e6ba6ca19 | 742kB  |
| registry.k8s.io/kube-apiserver              | v1.31.1           | 6bab7719df100 | 94.2MB |
| registry.k8s.io/kube-proxy                  | v1.31.1           | 60c005f310ff3 | 91.5MB |
| registry.k8s.io/pause                       | 3.3               | 0184c1613d929 | 683kB  |
| registry.k8s.io/echoserver                  | 1.8               | 82e4c8a736a4f | 95.4MB |
| docker.io/library/minikube-local-cache-test | functional-657132 | 163982ec65e6b | 30B    |
|---------------------------------------------|-------------------|---------------|--------|
functional_test.go:269: (dbg) Stderr: out/minikube-linux-amd64 -p functional-657132 image ls --format table --alsologtostderr:
I0913 23:43:33.199750   68879 out.go:345] Setting OutFile to fd 1 ...
I0913 23:43:33.199867   68879 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0913 23:43:33.199878   68879 out.go:358] Setting ErrFile to fd 2...
I0913 23:43:33.199884   68879 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0913 23:43:33.200170   68879 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19640-5233/.minikube/bin
I0913 23:43:33.200875   68879 config.go:182] Loaded profile config "functional-657132": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.31.1
I0913 23:43:33.200979   68879 config.go:182] Loaded profile config "functional-657132": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.31.1
I0913 23:43:33.201360   68879 cli_runner.go:164] Run: docker container inspect functional-657132 --format={{.State.Status}}
I0913 23:43:33.219157   68879 ssh_runner.go:195] Run: systemctl --version
I0913 23:43:33.219217   68879 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-657132
I0913 23:43:33.238545   68879 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32778 SSHKeyPath:/home/jenkins/minikube-integration/19640-5233/.minikube/machines/functional-657132/id_rsa Username:docker}
I0913 23:43:33.329857   68879 ssh_runner.go:195] Run: docker images --no-trunc --format "{{json .}}"
--- PASS: TestFunctional/parallel/ImageCommands/ImageListTable (0.21s)

TestFunctional/parallel/ImageCommands/ImageListJson (0.27s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageListJson
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListJson

=== CONT  TestFunctional/parallel/ImageCommands/ImageListJson
functional_test.go:261: (dbg) Run:  out/minikube-linux-amd64 -p functional-657132 image ls --format json --alsologtostderr
functional_test.go:266: (dbg) Stdout: out/minikube-linux-amd64 -p functional-657132 image ls --format json --alsologtostderr:
[{"id":"873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136","repoDigests":[],"repoTags":["registry.k8s.io/pause:3.10"],"size":"736000"},{"id":"82e4c8a736a4fcf22b5ef9f6a4ff6207064c7187d7694bf97bd561605a538410","repoDigests":[],"repoTags":["registry.k8s.io/echoserver:1.8"],"size":"95400000"},{"id":"350b164e7ae1dcddeffadd65c76226c9b6dc5553f5179153fb0e36b78f2a5e06","repoDigests":[],"repoTags":["registry.k8s.io/pause:latest"],"size":"240000"},{"id":"175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1","repoDigests":[],"repoTags":["registry.k8s.io/kube-controller-manager:v1.31.1"],"size":"88400000"},{"id":"2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4","repoDigests":[],"repoTags":["registry.k8s.io/etcd:3.5.15-0"],"size":"148000000"},{"id":"115053965e86b2df4d78af78d7951b8644839d20a03820c6df59a261103315f7","repoDigests":[],"repoTags":["docker.io/kubernetesui/metrics-scraper:\u003cnone\u003e"],"size":"43800000"},{"id":"56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c","repoDigests":[],"repoTags":["gcr.io/k8s-minikube/busybox:1.28.4-glibc"],"size":"4400000"},{"id":"da86e6ba6ca197bf6bc5e9d900febd906b133eaa4750e6bed647b0fbe50ed43e","repoDigests":[],"repoTags":["registry.k8s.io/pause:3.1"],"size":"742000"},{"id":"c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6","repoDigests":[],"repoTags":["registry.k8s.io/coredns/coredns:v1.11.3"],"size":"61800000"},{"id":"9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b","repoDigests":[],"repoTags":["registry.k8s.io/kube-scheduler:v1.31.1"],"size":"67400000"},{"id":"60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561","repoDigests":[],"repoTags":["registry.k8s.io/kube-proxy:v1.31.1"],"size":"91500000"},{"id":"6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee","repoDigests":[],"repoTags":["registry.k8s.io/kube-apiserver:v1.31.1"],"size":"94200000"},{"id":"5107333e08a87b836d48ff7528b1e84b9c86781cc9f1748bbc1b8c42a870d933","repoDigests":[],"repoTags":["docker.io/library/mysql:5.7"],"size":"501000000"},{"id":"07655ddf2eebe5d250f7a72c25f638b27126805d61779741b4e62e69ba080558","repoDigests":[],"repoTags":["docker.io/kubernetesui/dashboard:\u003cnone\u003e"],"size":"246000000"},{"id":"9056ab77afb8e18e04303f11000a9d31b3f16b74c59475b899ae1b342d328d30","repoDigests":[],"repoTags":["docker.io/kicbase/echo-server:functional-657132"],"size":"4940000"},{"id":"6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562","repoDigests":[],"repoTags":["gcr.io/k8s-minikube/storage-provisioner:v5"],"size":"31500000"},{"id":"0184c1613d92931126feb4c548e5da11015513b9e4c104e7305ee8b53b50a9da","repoDigests":[],"repoTags":["registry.k8s.io/pause:3.3"],"size":"683000"},{"id":"163982ec65e6bf85679e4bd7e8efd90954f8ff6feb827fa64449d967d1dac930","repoDigests":[],"repoTags":["docker.io/library/minikube-local-cache-test:functional-657132"],"size":"30"}]
functional_test.go:269: (dbg) Stderr: out/minikube-linux-amd64 -p functional-657132 image ls --format json --alsologtostderr:
I0913 23:43:32.922433   68828 out.go:345] Setting OutFile to fd 1 ...
I0913 23:43:32.922677   68828 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0913 23:43:32.922686   68828 out.go:358] Setting ErrFile to fd 2...
I0913 23:43:32.922690   68828 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0913 23:43:32.922867   68828 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19640-5233/.minikube/bin
I0913 23:43:32.923413   68828 config.go:182] Loaded profile config "functional-657132": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.31.1
I0913 23:43:32.923515   68828 config.go:182] Loaded profile config "functional-657132": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.31.1
I0913 23:43:32.923857   68828 cli_runner.go:164] Run: docker container inspect functional-657132 --format={{.State.Status}}
I0913 23:43:32.940662   68828 ssh_runner.go:195] Run: systemctl --version
I0913 23:43:32.940702   68828 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-657132
I0913 23:43:32.966612   68828 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32778 SSHKeyPath:/home/jenkins/minikube-integration/19640-5233/.minikube/machines/functional-657132/id_rsa Username:docker}
I0913 23:43:33.118618   68828 ssh_runner.go:195] Run: docker images --no-trunc --format "{{json .}}"
--- PASS: TestFunctional/parallel/ImageCommands/ImageListJson (0.27s)

TestFunctional/parallel/ImageCommands/ImageListYaml (0.2s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageListYaml
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListYaml

=== CONT  TestFunctional/parallel/ImageCommands/ImageListYaml
functional_test.go:261: (dbg) Run:  out/minikube-linux-amd64 -p functional-657132 image ls --format yaml --alsologtostderr
functional_test.go:266: (dbg) Stdout: out/minikube-linux-amd64 -p functional-657132 image ls --format yaml --alsologtostderr:
- id: 163982ec65e6bf85679e4bd7e8efd90954f8ff6feb827fa64449d967d1dac930
repoDigests: []
repoTags:
- docker.io/library/minikube-local-cache-test:functional-657132
size: "30"
- id: 9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b
repoDigests: []
repoTags:
- registry.k8s.io/kube-scheduler:v1.31.1
size: "67400000"
- id: 115053965e86b2df4d78af78d7951b8644839d20a03820c6df59a261103315f7
repoDigests: []
repoTags:
- docker.io/kubernetesui/metrics-scraper:<none>
size: "43800000"
- id: 6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee
repoDigests: []
repoTags:
- registry.k8s.io/kube-apiserver:v1.31.1
size: "94200000"
- id: 175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1
repoDigests: []
repoTags:
- registry.k8s.io/kube-controller-manager:v1.31.1
size: "88400000"
- id: 2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4
repoDigests: []
repoTags:
- registry.k8s.io/etcd:3.5.15-0
size: "148000000"
- id: 873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136
repoDigests: []
repoTags:
- registry.k8s.io/pause:3.10
size: "736000"
- id: 07655ddf2eebe5d250f7a72c25f638b27126805d61779741b4e62e69ba080558
repoDigests: []
repoTags:
- docker.io/kubernetesui/dashboard:<none>
size: "246000000"
- id: da86e6ba6ca197bf6bc5e9d900febd906b133eaa4750e6bed647b0fbe50ed43e
repoDigests: []
repoTags:
- registry.k8s.io/pause:3.1
size: "742000"
- id: 5107333e08a87b836d48ff7528b1e84b9c86781cc9f1748bbc1b8c42a870d933
repoDigests: []
repoTags:
- docker.io/library/mysql:5.7
size: "501000000"
- id: 6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562
repoDigests: []
repoTags:
- gcr.io/k8s-minikube/storage-provisioner:v5
size: "31500000"
- id: 82e4c8a736a4fcf22b5ef9f6a4ff6207064c7187d7694bf97bd561605a538410
repoDigests: []
repoTags:
- registry.k8s.io/echoserver:1.8
size: "95400000"
- id: 350b164e7ae1dcddeffadd65c76226c9b6dc5553f5179153fb0e36b78f2a5e06
repoDigests: []
repoTags:
- registry.k8s.io/pause:latest
size: "240000"
- id: 60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561
repoDigests: []
repoTags:
- registry.k8s.io/kube-proxy:v1.31.1
size: "91500000"
- id: c7b4f26a7d93f4f1f276c51adb03ef0df54a82de89f254a9aec5c18bf0e45ee9
repoDigests: []
repoTags:
- docker.io/library/nginx:alpine
size: "43200000"
- id: c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6
repoDigests: []
repoTags:
- registry.k8s.io/coredns/coredns:v1.11.3
size: "61800000"
- id: 9056ab77afb8e18e04303f11000a9d31b3f16b74c59475b899ae1b342d328d30
repoDigests: []
repoTags:
- docker.io/kicbase/echo-server:functional-657132
size: "4940000"
- id: 0184c1613d92931126feb4c548e5da11015513b9e4c104e7305ee8b53b50a9da
repoDigests: []
repoTags:
- registry.k8s.io/pause:3.3
size: "683000"
- id: 56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c
repoDigests: []
repoTags:
- gcr.io/k8s-minikube/busybox:1.28.4-glibc
size: "4400000"

functional_test.go:269: (dbg) Stderr: out/minikube-linux-amd64 -p functional-657132 image ls --format yaml --alsologtostderr:
I0913 23:43:33.407118   68946 out.go:345] Setting OutFile to fd 1 ...
I0913 23:43:33.407359   68946 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0913 23:43:33.407367   68946 out.go:358] Setting ErrFile to fd 2...
I0913 23:43:33.407371   68946 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0913 23:43:33.407550   68946 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19640-5233/.minikube/bin
I0913 23:43:33.408212   68946 config.go:182] Loaded profile config "functional-657132": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.31.1
I0913 23:43:33.408354   68946 config.go:182] Loaded profile config "functional-657132": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.31.1
I0913 23:43:33.408879   68946 cli_runner.go:164] Run: docker container inspect functional-657132 --format={{.State.Status}}
I0913 23:43:33.427832   68946 ssh_runner.go:195] Run: systemctl --version
I0913 23:43:33.427880   68946 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-657132
I0913 23:43:33.445669   68946 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32778 SSHKeyPath:/home/jenkins/minikube-integration/19640-5233/.minikube/machines/functional-657132/id_rsa Username:docker}
I0913 23:43:33.537884   68946 ssh_runner.go:195] Run: docker images --no-trunc --format "{{json .}}"
--- PASS: TestFunctional/parallel/ImageCommands/ImageListYaml (0.20s)

TestFunctional/parallel/ImageCommands/ImageBuild (4.41s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageBuild
=== PAUSE TestFunctional/parallel/ImageCommands/ImageBuild

=== CONT  TestFunctional/parallel/ImageCommands/ImageBuild
functional_test.go:308: (dbg) Run:  out/minikube-linux-amd64 -p functional-657132 ssh pgrep buildkitd
functional_test.go:308: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-657132 ssh pgrep buildkitd: exit status 1 (247.437606ms)

** stderr ** 
	ssh: Process exited with status 1

** /stderr **
functional_test.go:315: (dbg) Run:  out/minikube-linux-amd64 -p functional-657132 image build -t localhost/my-image:functional-657132 testdata/build --alsologtostderr
functional_test.go:315: (dbg) Done: out/minikube-linux-amd64 -p functional-657132 image build -t localhost/my-image:functional-657132 testdata/build --alsologtostderr: (3.862571191s)
functional_test.go:323: (dbg) Stderr: out/minikube-linux-amd64 -p functional-657132 image build -t localhost/my-image:functional-657132 testdata/build --alsologtostderr:
I0913 23:43:33.854648   69157 out.go:345] Setting OutFile to fd 1 ...
I0913 23:43:33.854940   69157 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0913 23:43:33.854950   69157 out.go:358] Setting ErrFile to fd 2...
I0913 23:43:33.854954   69157 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0913 23:43:33.855178   69157 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19640-5233/.minikube/bin
I0913 23:43:33.855798   69157 config.go:182] Loaded profile config "functional-657132": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.31.1
I0913 23:43:33.856324   69157 config.go:182] Loaded profile config "functional-657132": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.31.1
I0913 23:43:33.856753   69157 cli_runner.go:164] Run: docker container inspect functional-657132 --format={{.State.Status}}
I0913 23:43:33.874131   69157 ssh_runner.go:195] Run: systemctl --version
I0913 23:43:33.874189   69157 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-657132
I0913 23:43:33.890608   69157 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32778 SSHKeyPath:/home/jenkins/minikube-integration/19640-5233/.minikube/machines/functional-657132/id_rsa Username:docker}
I0913 23:43:33.985727   69157 build_images.go:161] Building image from path: /tmp/build.1120104810.tar
I0913 23:43:33.985786   69157 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/build
I0913 23:43:33.994508   69157 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/build/build.1120104810.tar
I0913 23:43:33.997977   69157 ssh_runner.go:352] existence check for /var/lib/minikube/build/build.1120104810.tar: stat -c "%s %y" /var/lib/minikube/build/build.1120104810.tar: Process exited with status 1
stdout:

stderr:
stat: cannot statx '/var/lib/minikube/build/build.1120104810.tar': No such file or directory
I0913 23:43:33.998007   69157 ssh_runner.go:362] scp /tmp/build.1120104810.tar --> /var/lib/minikube/build/build.1120104810.tar (3072 bytes)
I0913 23:43:34.020150   69157 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/build/build.1120104810
I0913 23:43:34.028223   69157 ssh_runner.go:195] Run: sudo tar -C /var/lib/minikube/build/build.1120104810 -xf /var/lib/minikube/build/build.1120104810.tar
I0913 23:43:34.037008   69157 docker.go:360] Building image: /var/lib/minikube/build/build.1120104810
I0913 23:43:34.037078   69157 ssh_runner.go:195] Run: docker build -t localhost/my-image:functional-657132 /var/lib/minikube/build/build.1120104810
#0 building with "default" instance using docker driver

#1 [internal] load build definition from Dockerfile
#1 transferring dockerfile: 97B done
#1 DONE 0.0s

#2 [internal] load metadata for gcr.io/k8s-minikube/busybox:latest
#2 DONE 1.6s

#3 [internal] load .dockerignore
#3 transferring context: 2B done
#3 DONE 0.0s

#4 [internal] load build context
#4 transferring context: 62B done
#4 DONE 0.0s

#5 [1/3] FROM gcr.io/k8s-minikube/busybox:latest@sha256:ca5ae90100d50772da31f3b5016209e25ad61972404e2ccd83d44f10dee7e79b
#5 resolve gcr.io/k8s-minikube/busybox:latest@sha256:ca5ae90100d50772da31f3b5016209e25ad61972404e2ccd83d44f10dee7e79b 0.0s done
#5 sha256:ca5ae90100d50772da31f3b5016209e25ad61972404e2ccd83d44f10dee7e79b 770B / 770B done
#5 sha256:62ffc2ed7554e4c6d360bce40bbcf196573dd27c4ce080641a2c59867e732dee 527B / 527B done
#5 sha256:beae173ccac6ad749f76713cf4440fe3d21d1043fe616dfbe30775815d1d0f6a 1.46kB / 1.46kB done
#5 sha256:5cc84ad355aaa64f46ea9c7bbcc319a9d808ab15088a27209c9e70ef86e5a2aa 0B / 772.79kB 0.1s
#5 sha256:5cc84ad355aaa64f46ea9c7bbcc319a9d808ab15088a27209c9e70ef86e5a2aa 772.79kB / 772.79kB 0.7s done
#5 extracting sha256:5cc84ad355aaa64f46ea9c7bbcc319a9d808ab15088a27209c9e70ef86e5a2aa 0.0s done
#5 DONE 0.9s

#6 [2/3] RUN true
#6 DONE 0.8s

#7 [3/3] ADD content.txt /
#7 DONE 0.0s

#8 exporting to image
#8 exporting layers 0.0s done
#8 writing image sha256:c69567443f8b603b787217a019cfc42f54dd0fb95ba38598ea25c006b55c92a4 done
#8 naming to localhost/my-image:functional-657132 done
#8 DONE 0.1s
I0913 23:43:37.646052   69157 ssh_runner.go:235] Completed: docker build -t localhost/my-image:functional-657132 /var/lib/minikube/build/build.1120104810: (3.608950174s)
I0913 23:43:37.646127   69157 ssh_runner.go:195] Run: sudo rm -rf /var/lib/minikube/build/build.1120104810
I0913 23:43:37.658133   69157 ssh_runner.go:195] Run: sudo rm -f /var/lib/minikube/build/build.1120104810.tar
I0913 23:43:37.669202   69157 build_images.go:217] Built localhost/my-image:functional-657132 from /tmp/build.1120104810.tar
I0913 23:43:37.669243   69157 build_images.go:133] succeeded building to: functional-657132
I0913 23:43:37.669249   69157 build_images.go:134] failed building to: 
functional_test.go:451: (dbg) Run:  out/minikube-linux-amd64 -p functional-657132 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageBuild (4.41s)

TestFunctional/parallel/ImageCommands/Setup (1.97s)

=== RUN   TestFunctional/parallel/ImageCommands/Setup
functional_test.go:342: (dbg) Run:  docker pull kicbase/echo-server:1.0
functional_test.go:342: (dbg) Done: docker pull kicbase/echo-server:1.0: (1.946500589s)
functional_test.go:347: (dbg) Run:  docker tag kicbase/echo-server:1.0 kicbase/echo-server:functional-657132
--- PASS: TestFunctional/parallel/ImageCommands/Setup (1.97s)

TestFunctional/parallel/ImageCommands/ImageLoadDaemon (0.92s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageLoadDaemon
functional_test.go:355: (dbg) Run:  out/minikube-linux-amd64 -p functional-657132 image load --daemon kicbase/echo-server:functional-657132 --alsologtostderr
functional_test.go:451: (dbg) Run:  out/minikube-linux-amd64 -p functional-657132 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageLoadDaemon (0.92s)

TestFunctional/parallel/ImageCommands/ImageReloadDaemon (0.76s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageReloadDaemon
functional_test.go:365: (dbg) Run:  out/minikube-linux-amd64 -p functional-657132 image load --daemon kicbase/echo-server:functional-657132 --alsologtostderr
functional_test.go:451: (dbg) Run:  out/minikube-linux-amd64 -p functional-657132 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageReloadDaemon (0.76s)

TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon (1.72s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon
functional_test.go:235: (dbg) Run:  docker pull kicbase/echo-server:latest
functional_test.go:240: (dbg) Run:  docker tag kicbase/echo-server:latest kicbase/echo-server:functional-657132
functional_test.go:245: (dbg) Run:  out/minikube-linux-amd64 -p functional-657132 image load --daemon kicbase/echo-server:functional-657132 --alsologtostderr
functional_test.go:451: (dbg) Run:  out/minikube-linux-amd64 -p functional-657132 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon (1.72s)

TestFunctional/parallel/ImageCommands/ImageSaveToFile (0.31s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageSaveToFile
functional_test.go:380: (dbg) Run:  out/minikube-linux-amd64 -p functional-657132 image save kicbase/echo-server:functional-657132 /home/jenkins/workspace/Docker_Linux_integration/echo-server-save.tar --alsologtostderr
--- PASS: TestFunctional/parallel/ImageCommands/ImageSaveToFile (0.31s)

TestFunctional/parallel/ImageCommands/ImageRemove (0.41s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageRemove
functional_test.go:392: (dbg) Run:  out/minikube-linux-amd64 -p functional-657132 image rm kicbase/echo-server:functional-657132 --alsologtostderr
functional_test.go:451: (dbg) Run:  out/minikube-linux-amd64 -p functional-657132 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageRemove (0.41s)

TestFunctional/parallel/ImageCommands/ImageLoadFromFile (0.66s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageLoadFromFile
functional_test.go:409: (dbg) Run:  out/minikube-linux-amd64 -p functional-657132 image load /home/jenkins/workspace/Docker_Linux_integration/echo-server-save.tar --alsologtostderr
functional_test.go:451: (dbg) Run:  out/minikube-linux-amd64 -p functional-657132 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageLoadFromFile (0.66s)

TestFunctional/parallel/MountCmd/specific-port (2.24s)

=== RUN   TestFunctional/parallel/MountCmd/specific-port
functional_test_mount_test.go:213: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-657132 /tmp/TestFunctionalparallelMountCmdspecific-port2576858163/001:/mount-9p --alsologtostderr -v=1 --port 46464]
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-linux-amd64 -p functional-657132 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:243: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-657132 ssh "findmnt -T /mount-9p | grep 9p": exit status 1 (326.611943ms)

** stderr ** 
	ssh: Process exited with status 1

** /stderr **
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-linux-amd64 -p functional-657132 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:257: (dbg) Run:  out/minikube-linux-amd64 -p functional-657132 ssh -- ls -la /mount-9p
functional_test_mount_test.go:261: guest mount directory contents
total 0
functional_test_mount_test.go:263: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-657132 /tmp/TestFunctionalparallelMountCmdspecific-port2576858163/001:/mount-9p --alsologtostderr -v=1 --port 46464] ...
functional_test_mount_test.go:264: reading mount text
functional_test_mount_test.go:278: done reading mount text
functional_test_mount_test.go:230: (dbg) Run:  out/minikube-linux-amd64 -p functional-657132 ssh "sudo umount -f /mount-9p"
functional_test_mount_test.go:230: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-657132 ssh "sudo umount -f /mount-9p": exit status 1 (371.200355ms)

-- stdout --
	umount: /mount-9p: not mounted.

-- /stdout --
** stderr ** 
	ssh: Process exited with status 32

** /stderr **
functional_test_mount_test.go:232: "out/minikube-linux-amd64 -p functional-657132 ssh \"sudo umount -f /mount-9p\"": exit status 1
functional_test_mount_test.go:234: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-657132 /tmp/TestFunctionalparallelMountCmdspecific-port2576858163/001:/mount-9p --alsologtostderr -v=1 --port 46464] ...
--- PASS: TestFunctional/parallel/MountCmd/specific-port (2.24s)

TestFunctional/parallel/ServiceCmd/List (0.6s)

=== RUN   TestFunctional/parallel/ServiceCmd/List
functional_test.go:1459: (dbg) Run:  out/minikube-linux-amd64 -p functional-657132 service list
--- PASS: TestFunctional/parallel/ServiceCmd/List (0.60s)

TestFunctional/parallel/ImageCommands/ImageSaveDaemon (0.48s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageSaveDaemon
functional_test.go:419: (dbg) Run:  docker rmi kicbase/echo-server:functional-657132
functional_test.go:424: (dbg) Run:  out/minikube-linux-amd64 -p functional-657132 image save --daemon kicbase/echo-server:functional-657132 --alsologtostderr
functional_test.go:432: (dbg) Run:  docker image inspect kicbase/echo-server:functional-657132
--- PASS: TestFunctional/parallel/ImageCommands/ImageSaveDaemon (0.48s)

TestFunctional/parallel/ServiceCmd/JSONOutput (0.56s)

=== RUN   TestFunctional/parallel/ServiceCmd/JSONOutput
functional_test.go:1489: (dbg) Run:  out/minikube-linux-amd64 -p functional-657132 service list -o json
functional_test.go:1494: Took "557.740554ms" to run "out/minikube-linux-amd64 -p functional-657132 service list -o json"
--- PASS: TestFunctional/parallel/ServiceCmd/JSONOutput (0.56s)

TestFunctional/parallel/ServiceCmd/HTTPS (0.45s)

=== RUN   TestFunctional/parallel/ServiceCmd/HTTPS
functional_test.go:1509: (dbg) Run:  out/minikube-linux-amd64 -p functional-657132 service --namespace=default --https --url hello-node
functional_test.go:1522: found endpoint: https://192.168.49.2:31402
--- PASS: TestFunctional/parallel/ServiceCmd/HTTPS (0.45s)

TestFunctional/parallel/ServiceCmd/Format (0.45s)

=== RUN   TestFunctional/parallel/ServiceCmd/Format
functional_test.go:1540: (dbg) Run:  out/minikube-linux-amd64 -p functional-657132 service hello-node --url --format={{.IP}}
--- PASS: TestFunctional/parallel/ServiceCmd/Format (0.45s)

TestFunctional/parallel/ServiceCmd/URL (0.51s)

=== RUN   TestFunctional/parallel/ServiceCmd/URL
functional_test.go:1559: (dbg) Run:  out/minikube-linux-amd64 -p functional-657132 service hello-node --url
functional_test.go:1565: found endpoint for hello-node: http://192.168.49.2:31402
--- PASS: TestFunctional/parallel/ServiceCmd/URL (0.51s)

TestFunctional/parallel/MountCmd/VerifyCleanup (1.21s)

=== RUN   TestFunctional/parallel/MountCmd/VerifyCleanup
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-657132 /tmp/TestFunctionalparallelMountCmdVerifyCleanup367547274/001:/mount1 --alsologtostderr -v=1]
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-657132 /tmp/TestFunctionalparallelMountCmdVerifyCleanup367547274/001:/mount2 --alsologtostderr -v=1]
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-657132 /tmp/TestFunctionalparallelMountCmdVerifyCleanup367547274/001:/mount3 --alsologtostderr -v=1]
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-amd64 -p functional-657132 ssh "findmnt -T" /mount1
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-amd64 -p functional-657132 ssh "findmnt -T" /mount2
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-amd64 -p functional-657132 ssh "findmnt -T" /mount3
functional_test_mount_test.go:370: (dbg) Run:  out/minikube-linux-amd64 mount -p functional-657132 --kill=true
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-657132 /tmp/TestFunctionalparallelMountCmdVerifyCleanup367547274/001:/mount1 --alsologtostderr -v=1] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-657132 /tmp/TestFunctionalparallelMountCmdVerifyCleanup367547274/001:/mount2 --alsologtostderr -v=1] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-657132 /tmp/TestFunctionalparallelMountCmdVerifyCleanup367547274/001:/mount3 --alsologtostderr -v=1] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
--- PASS: TestFunctional/parallel/MountCmd/VerifyCleanup (1.21s)

TestFunctional/parallel/DockerEnv/bash (1.27s)

=== RUN   TestFunctional/parallel/DockerEnv/bash
functional_test.go:499: (dbg) Run:  /bin/bash -c "eval $(out/minikube-linux-amd64 -p functional-657132 docker-env) && out/minikube-linux-amd64 status -p functional-657132"
functional_test.go:522: (dbg) Run:  /bin/bash -c "eval $(out/minikube-linux-amd64 -p functional-657132 docker-env) && docker images"
--- PASS: TestFunctional/parallel/DockerEnv/bash (1.27s)

TestFunctional/parallel/UpdateContextCmd/no_changes (0.13s)

=== RUN   TestFunctional/parallel/UpdateContextCmd/no_changes
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_changes

=== CONT  TestFunctional/parallel/UpdateContextCmd/no_changes
functional_test.go:2119: (dbg) Run:  out/minikube-linux-amd64 -p functional-657132 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_changes (0.13s)

TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster (0.13s)

=== RUN   TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster

=== CONT  TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
functional_test.go:2119: (dbg) Run:  out/minikube-linux-amd64 -p functional-657132 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster (0.13s)

TestFunctional/parallel/UpdateContextCmd/no_clusters (0.25s)

=== RUN   TestFunctional/parallel/UpdateContextCmd/no_clusters
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_clusters

=== CONT  TestFunctional/parallel/UpdateContextCmd/no_clusters
functional_test.go:2119: (dbg) Run:  out/minikube-linux-amd64 -p functional-657132 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_clusters (0.25s)

TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel (0.48s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel
functional_test_tunnel_test.go:154: (dbg) daemon: [out/minikube-linux-amd64 -p functional-657132 tunnel --alsologtostderr]
functional_test_tunnel_test.go:154: (dbg) daemon: [out/minikube-linux-amd64 -p functional-657132 tunnel --alsologtostderr]
functional_test_tunnel_test.go:194: (dbg) stopping [out/minikube-linux-amd64 -p functional-657132 tunnel --alsologtostderr] ...
helpers_test.go:508: unable to kill pid 67522: os: process already finished
helpers_test.go:502: unable to terminate pid 67219: os: process already finished
functional_test_tunnel_test.go:194: (dbg) stopping [out/minikube-linux-amd64 -p functional-657132 tunnel --alsologtostderr] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
--- PASS: TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel (0.48s)

TestFunctional/parallel/TunnelCmd/serial/StartTunnel (0s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/StartTunnel
functional_test_tunnel_test.go:129: (dbg) daemon: [out/minikube-linux-amd64 -p functional-657132 tunnel --alsologtostderr]
--- PASS: TestFunctional/parallel/TunnelCmd/serial/StartTunnel (0.00s)

TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup (20.2s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup
functional_test_tunnel_test.go:212: (dbg) Run:  kubectl --context functional-657132 apply -f testdata/testsvc.yaml
functional_test_tunnel_test.go:216: (dbg) TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup: waiting 4m0s for pods matching "run=nginx-svc" in namespace "default" ...
helpers_test.go:344: "nginx-svc" [fa246c53-b5ab-4d13-9498-0c5086af6f4e] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
2024/09/13 23:43:21 [DEBUG] GET http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/
helpers_test.go:344: "nginx-svc" [fa246c53-b5ab-4d13-9498-0c5086af6f4e] Running
functional_test_tunnel_test.go:216: (dbg) TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup: run=nginx-svc healthy within 20.003743932s
--- PASS: TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup (20.20s)

TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP (0.06s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP
functional_test_tunnel_test.go:234: (dbg) Run:  kubectl --context functional-657132 get svc nginx-svc -o jsonpath={.status.loadBalancer.ingress[0].ip}
--- PASS: TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP (0.06s)

TestFunctional/parallel/TunnelCmd/serial/AccessDirect (0s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/AccessDirect
functional_test_tunnel_test.go:299: tunnel at http://10.111.7.128 is working!
--- PASS: TestFunctional/parallel/TunnelCmd/serial/AccessDirect (0.00s)

TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel (0.11s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel
functional_test_tunnel_test.go:434: (dbg) stopping [out/minikube-linux-amd64 -p functional-657132 tunnel --alsologtostderr] ...
--- PASS: TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel (0.11s)

TestFunctional/delete_echo-server_images (0.04s)

=== RUN   TestFunctional/delete_echo-server_images
functional_test.go:190: (dbg) Run:  docker rmi -f kicbase/echo-server:1.0
functional_test.go:190: (dbg) Run:  docker rmi -f kicbase/echo-server:functional-657132
--- PASS: TestFunctional/delete_echo-server_images (0.04s)

TestFunctional/delete_my-image_image (0.02s)

=== RUN   TestFunctional/delete_my-image_image
functional_test.go:198: (dbg) Run:  docker rmi -f localhost/my-image:functional-657132
--- PASS: TestFunctional/delete_my-image_image (0.02s)

TestFunctional/delete_minikube_cached_images (0.02s)

=== RUN   TestFunctional/delete_minikube_cached_images
functional_test.go:206: (dbg) Run:  docker rmi -f minikube-local-cache-test:functional-657132
--- PASS: TestFunctional/delete_minikube_cached_images (0.02s)

TestMultiControlPlane/serial/StartCluster (99.65s)

=== RUN   TestMultiControlPlane/serial/StartCluster
ha_test.go:101: (dbg) Run:  out/minikube-linux-amd64 start -p ha-478975 --wait=true --memory=2200 --ha -v=7 --alsologtostderr --driver=docker  --container-runtime=docker
E0913 23:45:26.674703   12020 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19640-5233/.minikube/profiles/addons-794116/client.crt: no such file or directory" logger="UnhandledError"
E0913 23:45:26.681734   12020 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19640-5233/.minikube/profiles/addons-794116/client.crt: no such file or directory" logger="UnhandledError"
E0913 23:45:26.693150   12020 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19640-5233/.minikube/profiles/addons-794116/client.crt: no such file or directory" logger="UnhandledError"
E0913 23:45:26.714552   12020 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19640-5233/.minikube/profiles/addons-794116/client.crt: no such file or directory" logger="UnhandledError"
E0913 23:45:26.755961   12020 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19640-5233/.minikube/profiles/addons-794116/client.crt: no such file or directory" logger="UnhandledError"
E0913 23:45:26.837365   12020 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19640-5233/.minikube/profiles/addons-794116/client.crt: no such file or directory" logger="UnhandledError"
E0913 23:45:26.999665   12020 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19640-5233/.minikube/profiles/addons-794116/client.crt: no such file or directory" logger="UnhandledError"
E0913 23:45:27.321340   12020 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19640-5233/.minikube/profiles/addons-794116/client.crt: no such file or directory" logger="UnhandledError"
E0913 23:45:27.963490   12020 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19640-5233/.minikube/profiles/addons-794116/client.crt: no such file or directory" logger="UnhandledError"
E0913 23:45:29.245682   12020 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19640-5233/.minikube/profiles/addons-794116/client.crt: no such file or directory" logger="UnhandledError"
E0913 23:45:31.807631   12020 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19640-5233/.minikube/profiles/addons-794116/client.crt: no such file or directory" logger="UnhandledError"
ha_test.go:101: (dbg) Done: out/minikube-linux-amd64 start -p ha-478975 --wait=true --memory=2200 --ha -v=7 --alsologtostderr --driver=docker  --container-runtime=docker: (1m38.954659546s)
ha_test.go:107: (dbg) Run:  out/minikube-linux-amd64 -p ha-478975 status -v=7 --alsologtostderr
--- PASS: TestMultiControlPlane/serial/StartCluster (99.65s)

TestMultiControlPlane/serial/DeployApp (5.69s)

=== RUN   TestMultiControlPlane/serial/DeployApp
ha_test.go:128: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-478975 -- apply -f ./testdata/ha/ha-pod-dns-test.yaml
E0913 23:45:36.929743   12020 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19640-5233/.minikube/profiles/addons-794116/client.crt: no such file or directory" logger="UnhandledError"
ha_test.go:133: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-478975 -- rollout status deployment/busybox
ha_test.go:133: (dbg) Done: out/minikube-linux-amd64 kubectl -p ha-478975 -- rollout status deployment/busybox: (3.797202302s)
ha_test.go:140: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-478975 -- get pods -o jsonpath='{.items[*].status.podIP}'
ha_test.go:163: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-478975 -- get pods -o jsonpath='{.items[*].metadata.name}'
ha_test.go:171: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-478975 -- exec busybox-7dff88458-fsmmd -- nslookup kubernetes.io
ha_test.go:171: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-478975 -- exec busybox-7dff88458-l7mkg -- nslookup kubernetes.io
ha_test.go:171: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-478975 -- exec busybox-7dff88458-rnrld -- nslookup kubernetes.io
ha_test.go:181: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-478975 -- exec busybox-7dff88458-fsmmd -- nslookup kubernetes.default
ha_test.go:181: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-478975 -- exec busybox-7dff88458-l7mkg -- nslookup kubernetes.default
ha_test.go:181: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-478975 -- exec busybox-7dff88458-rnrld -- nslookup kubernetes.default
ha_test.go:189: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-478975 -- exec busybox-7dff88458-fsmmd -- nslookup kubernetes.default.svc.cluster.local
ha_test.go:189: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-478975 -- exec busybox-7dff88458-l7mkg -- nslookup kubernetes.default.svc.cluster.local
ha_test.go:189: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-478975 -- exec busybox-7dff88458-rnrld -- nslookup kubernetes.default.svc.cluster.local
--- PASS: TestMultiControlPlane/serial/DeployApp (5.69s)

                                                
                                    
TestMultiControlPlane/serial/PingHostFromPods (1.06s)

=== RUN   TestMultiControlPlane/serial/PingHostFromPods
ha_test.go:199: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-478975 -- get pods -o jsonpath='{.items[*].metadata.name}'
ha_test.go:207: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-478975 -- exec busybox-7dff88458-fsmmd -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
ha_test.go:218: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-478975 -- exec busybox-7dff88458-fsmmd -- sh -c "ping -c 1 192.168.49.1"
ha_test.go:207: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-478975 -- exec busybox-7dff88458-l7mkg -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
ha_test.go:218: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-478975 -- exec busybox-7dff88458-l7mkg -- sh -c "ping -c 1 192.168.49.1"
ha_test.go:207: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-478975 -- exec busybox-7dff88458-rnrld -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
ha_test.go:218: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-478975 -- exec busybox-7dff88458-rnrld -- sh -c "ping -c 1 192.168.49.1"
--- PASS: TestMultiControlPlane/serial/PingHostFromPods (1.06s)
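The PingHostFromPods checks above extract the host gateway IP by running `nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3` inside each busybox pod. The pipeline can be exercised locally against sample output; the sample below mimics the old-style BusyBox nslookup format (illustrative only — the real output depends on the busybox image in use).

```shell
# Simulate the extraction pipeline the test runs inside the pod.
# Sample output is an illustrative stand-in for BusyBox nslookup.
sample='Server:    10.96.0.10
Address 1: 10.96.0.10 kube-dns.kube-system.svc.cluster.local

Name:      host.minikube.internal
Address 1: 192.168.49.1 host.minikube.internal'

# NR==5 selects the answer line; field 3 is the resolved IP.
host_ip=$(printf '%s\n' "$sample" | awk 'NR==5' | cut -d' ' -f3)
echo "$host_ip"
```

The resolved IP is then handed to `ping -c 1`, which is why the log shows `ping -c 1 192.168.49.1` immediately after each nslookup.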

                                                
                                    
TestMultiControlPlane/serial/AddWorkerNode (20.48s)

=== RUN   TestMultiControlPlane/serial/AddWorkerNode
ha_test.go:228: (dbg) Run:  out/minikube-linux-amd64 node add -p ha-478975 -v=7 --alsologtostderr
E0913 23:45:47.171920   12020 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19640-5233/.minikube/profiles/addons-794116/client.crt: no such file or directory" logger="UnhandledError"
ha_test.go:228: (dbg) Done: out/minikube-linux-amd64 node add -p ha-478975 -v=7 --alsologtostderr: (19.629547118s)
ha_test.go:234: (dbg) Run:  out/minikube-linux-amd64 -p ha-478975 status -v=7 --alsologtostderr
--- PASS: TestMultiControlPlane/serial/AddWorkerNode (20.48s)

                                                
                                    
TestMultiControlPlane/serial/NodeLabels (0.06s)

=== RUN   TestMultiControlPlane/serial/NodeLabels
ha_test.go:255: (dbg) Run:  kubectl --context ha-478975 get nodes -o "jsonpath=[{range .items[*]}{.metadata.labels},{end}]"
--- PASS: TestMultiControlPlane/serial/NodeLabels (0.06s)

                                                
                                    
TestMultiControlPlane/serial/HAppyAfterClusterStart (0.64s)

=== RUN   TestMultiControlPlane/serial/HAppyAfterClusterStart
ha_test.go:281: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiControlPlane/serial/HAppyAfterClusterStart (0.64s)

                                                
                                    
TestMultiControlPlane/serial/CopyFile (15.78s)

=== RUN   TestMultiControlPlane/serial/CopyFile
ha_test.go:326: (dbg) Run:  out/minikube-linux-amd64 -p ha-478975 status --output json -v=7 --alsologtostderr
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-478975 cp testdata/cp-test.txt ha-478975:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-478975 ssh -n ha-478975 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-478975 cp ha-478975:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile298958849/001/cp-test_ha-478975.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-478975 ssh -n ha-478975 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-478975 cp ha-478975:/home/docker/cp-test.txt ha-478975-m02:/home/docker/cp-test_ha-478975_ha-478975-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-478975 ssh -n ha-478975 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-478975 ssh -n ha-478975-m02 "sudo cat /home/docker/cp-test_ha-478975_ha-478975-m02.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-478975 cp ha-478975:/home/docker/cp-test.txt ha-478975-m03:/home/docker/cp-test_ha-478975_ha-478975-m03.txt
E0913 23:46:07.653367   12020 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19640-5233/.minikube/profiles/addons-794116/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-478975 ssh -n ha-478975 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-478975 ssh -n ha-478975-m03 "sudo cat /home/docker/cp-test_ha-478975_ha-478975-m03.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-478975 cp ha-478975:/home/docker/cp-test.txt ha-478975-m04:/home/docker/cp-test_ha-478975_ha-478975-m04.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-478975 ssh -n ha-478975 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-478975 ssh -n ha-478975-m04 "sudo cat /home/docker/cp-test_ha-478975_ha-478975-m04.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-478975 cp testdata/cp-test.txt ha-478975-m02:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-478975 ssh -n ha-478975-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-478975 cp ha-478975-m02:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile298958849/001/cp-test_ha-478975-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-478975 ssh -n ha-478975-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-478975 cp ha-478975-m02:/home/docker/cp-test.txt ha-478975:/home/docker/cp-test_ha-478975-m02_ha-478975.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-478975 ssh -n ha-478975-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-478975 ssh -n ha-478975 "sudo cat /home/docker/cp-test_ha-478975-m02_ha-478975.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-478975 cp ha-478975-m02:/home/docker/cp-test.txt ha-478975-m03:/home/docker/cp-test_ha-478975-m02_ha-478975-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-478975 ssh -n ha-478975-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-478975 ssh -n ha-478975-m03 "sudo cat /home/docker/cp-test_ha-478975-m02_ha-478975-m03.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-478975 cp ha-478975-m02:/home/docker/cp-test.txt ha-478975-m04:/home/docker/cp-test_ha-478975-m02_ha-478975-m04.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-478975 ssh -n ha-478975-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-478975 ssh -n ha-478975-m04 "sudo cat /home/docker/cp-test_ha-478975-m02_ha-478975-m04.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-478975 cp testdata/cp-test.txt ha-478975-m03:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-478975 ssh -n ha-478975-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-478975 cp ha-478975-m03:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile298958849/001/cp-test_ha-478975-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-478975 ssh -n ha-478975-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-478975 cp ha-478975-m03:/home/docker/cp-test.txt ha-478975:/home/docker/cp-test_ha-478975-m03_ha-478975.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-478975 ssh -n ha-478975-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-478975 ssh -n ha-478975 "sudo cat /home/docker/cp-test_ha-478975-m03_ha-478975.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-478975 cp ha-478975-m03:/home/docker/cp-test.txt ha-478975-m02:/home/docker/cp-test_ha-478975-m03_ha-478975-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-478975 ssh -n ha-478975-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-478975 ssh -n ha-478975-m02 "sudo cat /home/docker/cp-test_ha-478975-m03_ha-478975-m02.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-478975 cp ha-478975-m03:/home/docker/cp-test.txt ha-478975-m04:/home/docker/cp-test_ha-478975-m03_ha-478975-m04.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-478975 ssh -n ha-478975-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-478975 ssh -n ha-478975-m04 "sudo cat /home/docker/cp-test_ha-478975-m03_ha-478975-m04.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-478975 cp testdata/cp-test.txt ha-478975-m04:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-478975 ssh -n ha-478975-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-478975 cp ha-478975-m04:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile298958849/001/cp-test_ha-478975-m04.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-478975 ssh -n ha-478975-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-478975 cp ha-478975-m04:/home/docker/cp-test.txt ha-478975:/home/docker/cp-test_ha-478975-m04_ha-478975.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-478975 ssh -n ha-478975-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-478975 ssh -n ha-478975 "sudo cat /home/docker/cp-test_ha-478975-m04_ha-478975.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-478975 cp ha-478975-m04:/home/docker/cp-test.txt ha-478975-m02:/home/docker/cp-test_ha-478975-m04_ha-478975-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-478975 ssh -n ha-478975-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-478975 ssh -n ha-478975-m02 "sudo cat /home/docker/cp-test_ha-478975-m04_ha-478975-m02.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-478975 cp ha-478975-m04:/home/docker/cp-test.txt ha-478975-m03:/home/docker/cp-test_ha-478975-m04_ha-478975-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-478975 ssh -n ha-478975-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-478975 ssh -n ha-478975-m03 "sudo cat /home/docker/cp-test_ha-478975-m04_ha-478975-m03.txt"
--- PASS: TestMultiControlPlane/serial/CopyFile (15.78s)
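Each CopyFile step follows the same round-trip pattern: `minikube cp` a file onto a node, then `minikube ssh -n <node> "sudo cat ..."` to read it back and compare. A minimal local sketch of that verification loop, using temp files as stand-ins for the node filesystems (paths here are illustrative, not the test's actual paths):

```shell
# Local sketch of the cp-then-read-back check used by CopyFile.
set -eu
src=$(mktemp)            # stand-in for testdata/cp-test.txt
node_fs=$(mktemp -d)     # stand-in for a node's /home/docker
echo 'Test file for minikube cp command' > "$src"

cp "$src" "$node_fs/cp-test.txt"        # like: minikube cp host -> node
readback=$(cat "$node_fs/cp-test.txt")  # like: minikube ssh "sudo cat ..."
[ "$readback" = "$(cat "$src")" ] && result="contents match"
echo "$result"
```

The test simply repeats this for every source/destination node pair, which is why the log shows the same `cp`/`ssh -n ... cat` couplet sixteen times.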

                                                
                                    
TestMultiControlPlane/serial/StopSecondaryNode (11.35s)

=== RUN   TestMultiControlPlane/serial/StopSecondaryNode
ha_test.go:363: (dbg) Run:  out/minikube-linux-amd64 -p ha-478975 node stop m02 -v=7 --alsologtostderr
ha_test.go:363: (dbg) Done: out/minikube-linux-amd64 -p ha-478975 node stop m02 -v=7 --alsologtostderr: (10.695735035s)
ha_test.go:369: (dbg) Run:  out/minikube-linux-amd64 -p ha-478975 status -v=7 --alsologtostderr
ha_test.go:369: (dbg) Non-zero exit: out/minikube-linux-amd64 -p ha-478975 status -v=7 --alsologtostderr: exit status 7 (656.527365ms)
-- stdout --
	ha-478975
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-478975-m02
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	ha-478975-m03
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-478975-m04
	type: Worker
	host: Running
	kubelet: Running
	
-- /stdout --
** stderr ** 
	I0913 23:46:31.221658   97024 out.go:345] Setting OutFile to fd 1 ...
	I0913 23:46:31.221809   97024 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0913 23:46:31.221819   97024 out.go:358] Setting ErrFile to fd 2...
	I0913 23:46:31.221827   97024 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0913 23:46:31.222060   97024 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19640-5233/.minikube/bin
	I0913 23:46:31.222251   97024 out.go:352] Setting JSON to false
	I0913 23:46:31.222280   97024 mustload.go:65] Loading cluster: ha-478975
	I0913 23:46:31.222409   97024 notify.go:220] Checking for updates...
	I0913 23:46:31.222868   97024 config.go:182] Loaded profile config "ha-478975": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I0913 23:46:31.222890   97024 status.go:255] checking status of ha-478975 ...
	I0913 23:46:31.223391   97024 cli_runner.go:164] Run: docker container inspect ha-478975 --format={{.State.Status}}
	I0913 23:46:31.243063   97024 status.go:330] ha-478975 host status = "Running" (err=<nil>)
	I0913 23:46:31.243085   97024 host.go:66] Checking if "ha-478975" exists ...
	I0913 23:46:31.243355   97024 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-478975
	I0913 23:46:31.262245   97024 host.go:66] Checking if "ha-478975" exists ...
	I0913 23:46:31.262504   97024 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0913 23:46:31.262542   97024 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-478975
	I0913 23:46:31.282222   97024 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32783 SSHKeyPath:/home/jenkins/minikube-integration/19640-5233/.minikube/machines/ha-478975/id_rsa Username:docker}
	I0913 23:46:31.374615   97024 ssh_runner.go:195] Run: systemctl --version
	I0913 23:46:31.378613   97024 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0913 23:46:31.389317   97024 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0913 23:46:31.437062   97024 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:4 ContainersRunning:3 ContainersPaused:0 ContainersStopped:1 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:54 OomKillDisable:true NGoroutines:72 SystemTime:2024-09-13 23:46:31.427190281 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1068-gcp OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:x86
_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33647931392 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent Labels:[] ExperimentalBuild:false ServerVersion:27.2.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:7f7fdf5fed64eb6a7caf99b3e12efcf9d60e311c Expected:7f7fdf5fed64eb6a7caf99b3e12efcf9d60e311c} RuncCommit:{ID:v1.1.14-0-g2c9f560 Expected:v1.1.14-0-g2c9f560} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerError
s:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.16.2] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.29.2] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I0913 23:46:31.437703   97024 kubeconfig.go:125] found "ha-478975" server: "https://192.168.49.254:8443"
	I0913 23:46:31.437737   97024 api_server.go:166] Checking apiserver status ...
	I0913 23:46:31.437789   97024 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0913 23:46:31.448592   97024 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/2409/cgroup
	I0913 23:46:31.457479   97024 api_server.go:182] apiserver freezer: "12:freezer:/docker/7f1003da0f3b16dc28e404d75812d3d59ec27e00a463e6e890385622f107e832/kubepods/burstable/pod7c2c44190e291de318f792d779e3935c/16b408a0e882e585039b4ae64ac1332280f9e2cff281a47d99b24c332df5b373"
	I0913 23:46:31.457604   97024 ssh_runner.go:195] Run: sudo cat /sys/fs/cgroup/freezer/docker/7f1003da0f3b16dc28e404d75812d3d59ec27e00a463e6e890385622f107e832/kubepods/burstable/pod7c2c44190e291de318f792d779e3935c/16b408a0e882e585039b4ae64ac1332280f9e2cff281a47d99b24c332df5b373/freezer.state
	I0913 23:46:31.465536   97024 api_server.go:204] freezer state: "THAWED"
	I0913 23:46:31.465573   97024 api_server.go:253] Checking apiserver healthz at https://192.168.49.254:8443/healthz ...
	I0913 23:46:31.469223   97024 api_server.go:279] https://192.168.49.254:8443/healthz returned 200:
	ok
	I0913 23:46:31.469247   97024 status.go:422] ha-478975 apiserver status = Running (err=<nil>)
	I0913 23:46:31.469259   97024 status.go:257] ha-478975 status: &{Name:ha-478975 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0913 23:46:31.469277   97024 status.go:255] checking status of ha-478975-m02 ...
	I0913 23:46:31.469512   97024 cli_runner.go:164] Run: docker container inspect ha-478975-m02 --format={{.State.Status}}
	I0913 23:46:31.489334   97024 status.go:330] ha-478975-m02 host status = "Stopped" (err=<nil>)
	I0913 23:46:31.489388   97024 status.go:343] host is not running, skipping remaining checks
	I0913 23:46:31.489398   97024 status.go:257] ha-478975-m02 status: &{Name:ha-478975-m02 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0913 23:46:31.489424   97024 status.go:255] checking status of ha-478975-m03 ...
	I0913 23:46:31.489830   97024 cli_runner.go:164] Run: docker container inspect ha-478975-m03 --format={{.State.Status}}
	I0913 23:46:31.508257   97024 status.go:330] ha-478975-m03 host status = "Running" (err=<nil>)
	I0913 23:46:31.508281   97024 host.go:66] Checking if "ha-478975-m03" exists ...
	I0913 23:46:31.508591   97024 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-478975-m03
	I0913 23:46:31.526204   97024 host.go:66] Checking if "ha-478975-m03" exists ...
	I0913 23:46:31.526547   97024 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0913 23:46:31.526591   97024 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-478975-m03
	I0913 23:46:31.544537   97024 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32793 SSHKeyPath:/home/jenkins/minikube-integration/19640-5233/.minikube/machines/ha-478975-m03/id_rsa Username:docker}
	I0913 23:46:31.634426   97024 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0913 23:46:31.645139   97024 kubeconfig.go:125] found "ha-478975" server: "https://192.168.49.254:8443"
	I0913 23:46:31.645165   97024 api_server.go:166] Checking apiserver status ...
	I0913 23:46:31.645207   97024 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0913 23:46:31.655854   97024 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/2251/cgroup
	I0913 23:46:31.664929   97024 api_server.go:182] apiserver freezer: "12:freezer:/docker/6fc16b389b6ca421012ba0522b1aaeef8bdbe7c5ed63c75d843a50b0c31a6a14/kubepods/burstable/pod08af4ad257172d48d7e534d6b3e506b4/1ded8760f507f30e957c9919a834ad34fa9770ba7c3890dd21e44158631e15cd"
	I0913 23:46:31.664991   97024 ssh_runner.go:195] Run: sudo cat /sys/fs/cgroup/freezer/docker/6fc16b389b6ca421012ba0522b1aaeef8bdbe7c5ed63c75d843a50b0c31a6a14/kubepods/burstable/pod08af4ad257172d48d7e534d6b3e506b4/1ded8760f507f30e957c9919a834ad34fa9770ba7c3890dd21e44158631e15cd/freezer.state
	I0913 23:46:31.673207   97024 api_server.go:204] freezer state: "THAWED"
	I0913 23:46:31.673247   97024 api_server.go:253] Checking apiserver healthz at https://192.168.49.254:8443/healthz ...
	I0913 23:46:31.677724   97024 api_server.go:279] https://192.168.49.254:8443/healthz returned 200:
	ok
	I0913 23:46:31.677760   97024 status.go:422] ha-478975-m03 apiserver status = Running (err=<nil>)
	I0913 23:46:31.677769   97024 status.go:257] ha-478975-m03 status: &{Name:ha-478975-m03 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0913 23:46:31.677802   97024 status.go:255] checking status of ha-478975-m04 ...
	I0913 23:46:31.678058   97024 cli_runner.go:164] Run: docker container inspect ha-478975-m04 --format={{.State.Status}}
	I0913 23:46:31.695816   97024 status.go:330] ha-478975-m04 host status = "Running" (err=<nil>)
	I0913 23:46:31.695842   97024 host.go:66] Checking if "ha-478975-m04" exists ...
	I0913 23:46:31.696152   97024 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-478975-m04
	I0913 23:46:31.713707   97024 host.go:66] Checking if "ha-478975-m04" exists ...
	I0913 23:46:31.713964   97024 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0913 23:46:31.714001   97024 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-478975-m04
	I0913 23:46:31.732033   97024 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32798 SSHKeyPath:/home/jenkins/minikube-integration/19640-5233/.minikube/machines/ha-478975-m04/id_rsa Username:docker}
	I0913 23:46:31.822288   97024 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0913 23:46:31.833323   97024 status.go:257] ha-478975-m04 status: &{Name:ha-478975-m04 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}
** /stderr **
--- PASS: TestMultiControlPlane/serial/StopSecondaryNode (11.35s)
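The `api_server.go` lines in the status output above show how minikube checks the apiserver: it greps the `freezer` entry out of `/proc/<pid>/cgroup`, then reads `freezer.state` under that path and expects `THAWED` before probing `/healthz`. The path extraction can be sketched with plain parameter expansion; the sample cgroup line is illustrative (shortened IDs, not the real hashes from this run):

```shell
# Sketch of extracting the freezer cgroup path from a /proc/<pid>/cgroup
# line, as minikube status does before reading freezer.state.
line='12:freezer:/docker/7f1003da0f3b/kubepods/burstable/podabc/16b408a0'
cgroup_path=${line#*:freezer:}   # strip the "12:freezer:" prefix
echo "$cgroup_path"
```

A `THAWED` value in `<cgroup-root>$cgroup_path/freezer.state` means the apiserver process is not frozen, so the health probe at `https://192.168.49.254:8443/healthz` is meaningful.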

                                                
                                    
TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop (0.48s)

=== RUN   TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop
ha_test.go:390: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop (0.48s)

                                                
                                    
TestMultiControlPlane/serial/RestartSecondaryNode (33.84s)

=== RUN   TestMultiControlPlane/serial/RestartSecondaryNode
ha_test.go:420: (dbg) Run:  out/minikube-linux-amd64 -p ha-478975 node start m02 -v=7 --alsologtostderr
E0913 23:46:48.615663   12020 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19640-5233/.minikube/profiles/addons-794116/client.crt: no such file or directory" logger="UnhandledError"
ha_test.go:420: (dbg) Done: out/minikube-linux-amd64 -p ha-478975 node start m02 -v=7 --alsologtostderr: (32.920973242s)
ha_test.go:428: (dbg) Run:  out/minikube-linux-amd64 -p ha-478975 status -v=7 --alsologtostderr
ha_test.go:448: (dbg) Run:  kubectl get nodes
--- PASS: TestMultiControlPlane/serial/RestartSecondaryNode (33.84s)

                                                
                                    
TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart (16.26s)

=== RUN   TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart
ha_test.go:281: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
ha_test.go:281: (dbg) Done: out/minikube-linux-amd64 profile list --output json: (16.256669606s)
--- PASS: TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart (16.26s)

                                                
                                    
TestMultiControlPlane/serial/RestartClusterKeepsNodes (177.33s)

=== RUN   TestMultiControlPlane/serial/RestartClusterKeepsNodes
ha_test.go:456: (dbg) Run:  out/minikube-linux-amd64 node list -p ha-478975 -v=7 --alsologtostderr
ha_test.go:462: (dbg) Run:  out/minikube-linux-amd64 stop -p ha-478975 -v=7 --alsologtostderr
ha_test.go:462: (dbg) Done: out/minikube-linux-amd64 stop -p ha-478975 -v=7 --alsologtostderr: (33.837380678s)
ha_test.go:467: (dbg) Run:  out/minikube-linux-amd64 start -p ha-478975 --wait=true -v=7 --alsologtostderr
E0913 23:48:05.347281   12020 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19640-5233/.minikube/profiles/functional-657132/client.crt: no such file or directory" logger="UnhandledError"
E0913 23:48:05.353774   12020 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19640-5233/.minikube/profiles/functional-657132/client.crt: no such file or directory" logger="UnhandledError"
E0913 23:48:05.365208   12020 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19640-5233/.minikube/profiles/functional-657132/client.crt: no such file or directory" logger="UnhandledError"
E0913 23:48:05.386618   12020 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19640-5233/.minikube/profiles/functional-657132/client.crt: no such file or directory" logger="UnhandledError"
E0913 23:48:05.428023   12020 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19640-5233/.minikube/profiles/functional-657132/client.crt: no such file or directory" logger="UnhandledError"
E0913 23:48:05.509438   12020 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19640-5233/.minikube/profiles/functional-657132/client.crt: no such file or directory" logger="UnhandledError"
E0913 23:48:05.670952   12020 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19640-5233/.minikube/profiles/functional-657132/client.crt: no such file or directory" logger="UnhandledError"
E0913 23:48:05.992628   12020 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19640-5233/.minikube/profiles/functional-657132/client.crt: no such file or directory" logger="UnhandledError"
E0913 23:48:06.634649   12020 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19640-5233/.minikube/profiles/functional-657132/client.crt: no such file or directory" logger="UnhandledError"
E0913 23:48:07.916611   12020 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19640-5233/.minikube/profiles/functional-657132/client.crt: no such file or directory" logger="UnhandledError"
E0913 23:48:10.478736   12020 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19640-5233/.minikube/profiles/functional-657132/client.crt: no such file or directory" logger="UnhandledError"
E0913 23:48:10.537256   12020 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19640-5233/.minikube/profiles/addons-794116/client.crt: no such file or directory" logger="UnhandledError"
E0913 23:48:15.600609   12020 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19640-5233/.minikube/profiles/functional-657132/client.crt: no such file or directory" logger="UnhandledError"
E0913 23:48:25.842755   12020 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19640-5233/.minikube/profiles/functional-657132/client.crt: no such file or directory" logger="UnhandledError"
E0913 23:48:46.324116   12020 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19640-5233/.minikube/profiles/functional-657132/client.crt: no such file or directory" logger="UnhandledError"
E0913 23:49:27.285668   12020 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19640-5233/.minikube/profiles/functional-657132/client.crt: no such file or directory" logger="UnhandledError"
ha_test.go:467: (dbg) Done: out/minikube-linux-amd64 start -p ha-478975 --wait=true -v=7 --alsologtostderr: (2m23.374849998s)
ha_test.go:472: (dbg) Run:  out/minikube-linux-amd64 node list -p ha-478975
--- PASS: TestMultiControlPlane/serial/RestartClusterKeepsNodes (177.33s)

                                                
                                    
TestMultiControlPlane/serial/DeleteSecondaryNode (9.29s)

=== RUN   TestMultiControlPlane/serial/DeleteSecondaryNode
ha_test.go:487: (dbg) Run:  out/minikube-linux-amd64 -p ha-478975 node delete m03 -v=7 --alsologtostderr
E0913 23:50:26.674514   12020 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19640-5233/.minikube/profiles/addons-794116/client.crt: no such file or directory" logger="UnhandledError"
ha_test.go:487: (dbg) Done: out/minikube-linux-amd64 -p ha-478975 node delete m03 -v=7 --alsologtostderr: (8.544184393s)
ha_test.go:493: (dbg) Run:  out/minikube-linux-amd64 -p ha-478975 status -v=7 --alsologtostderr
ha_test.go:511: (dbg) Run:  kubectl get nodes
ha_test.go:519: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiControlPlane/serial/DeleteSecondaryNode (9.29s)

TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete (0.46s)
=== RUN   TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete
ha_test.go:390: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete (0.46s)

TestMultiControlPlane/serial/StopCluster (32.57s)
=== RUN   TestMultiControlPlane/serial/StopCluster
ha_test.go:531: (dbg) Run:  out/minikube-linux-amd64 -p ha-478975 stop -v=7 --alsologtostderr
E0913 23:50:49.207057   12020 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19640-5233/.minikube/profiles/functional-657132/client.crt: no such file or directory" logger="UnhandledError"
E0913 23:50:54.379608   12020 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19640-5233/.minikube/profiles/addons-794116/client.crt: no such file or directory" logger="UnhandledError"
ha_test.go:531: (dbg) Done: out/minikube-linux-amd64 -p ha-478975 stop -v=7 --alsologtostderr: (32.478372876s)
ha_test.go:537: (dbg) Run:  out/minikube-linux-amd64 -p ha-478975 status -v=7 --alsologtostderr
ha_test.go:537: (dbg) Non-zero exit: out/minikube-linux-amd64 -p ha-478975 status -v=7 --alsologtostderr: exit status 7 (95.491474ms)
-- stdout --
	ha-478975
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	ha-478975-m02
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	ha-478975-m04
	type: Worker
	host: Stopped
	kubelet: Stopped
-- /stdout --
** stderr ** 
	I0913 23:51:02.011183  126446 out.go:345] Setting OutFile to fd 1 ...
	I0913 23:51:02.011433  126446 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0913 23:51:02.011444  126446 out.go:358] Setting ErrFile to fd 2...
	I0913 23:51:02.011448  126446 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0913 23:51:02.011668  126446 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19640-5233/.minikube/bin
	I0913 23:51:02.011875  126446 out.go:352] Setting JSON to false
	I0913 23:51:02.011912  126446 mustload.go:65] Loading cluster: ha-478975
	I0913 23:51:02.012003  126446 notify.go:220] Checking for updates...
	I0913 23:51:02.012402  126446 config.go:182] Loaded profile config "ha-478975": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I0913 23:51:02.012419  126446 status.go:255] checking status of ha-478975 ...
	I0913 23:51:02.012828  126446 cli_runner.go:164] Run: docker container inspect ha-478975 --format={{.State.Status}}
	I0913 23:51:02.030078  126446 status.go:330] ha-478975 host status = "Stopped" (err=<nil>)
	I0913 23:51:02.030099  126446 status.go:343] host is not running, skipping remaining checks
	I0913 23:51:02.030105  126446 status.go:257] ha-478975 status: &{Name:ha-478975 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0913 23:51:02.030142  126446 status.go:255] checking status of ha-478975-m02 ...
	I0913 23:51:02.030394  126446 cli_runner.go:164] Run: docker container inspect ha-478975-m02 --format={{.State.Status}}
	I0913 23:51:02.046767  126446 status.go:330] ha-478975-m02 host status = "Stopped" (err=<nil>)
	I0913 23:51:02.046793  126446 status.go:343] host is not running, skipping remaining checks
	I0913 23:51:02.046801  126446 status.go:257] ha-478975-m02 status: &{Name:ha-478975-m02 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0913 23:51:02.046825  126446 status.go:255] checking status of ha-478975-m04 ...
	I0913 23:51:02.047077  126446 cli_runner.go:164] Run: docker container inspect ha-478975-m04 --format={{.State.Status}}
	I0913 23:51:02.064316  126446 status.go:330] ha-478975-m04 host status = "Stopped" (err=<nil>)
	I0913 23:51:02.064359  126446 status.go:343] host is not running, skipping remaining checks
	I0913 23:51:02.064367  126446 status.go:257] ha-478975-m04 status: &{Name:ha-478975-m04 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}
** /stderr **
--- PASS: TestMultiControlPlane/serial/StopCluster (32.57s)

TestMultiControlPlane/serial/RestartCluster (80.09s)
=== RUN   TestMultiControlPlane/serial/RestartCluster
ha_test.go:560: (dbg) Run:  out/minikube-linux-amd64 start -p ha-478975 --wait=true -v=7 --alsologtostderr --driver=docker  --container-runtime=docker
ha_test.go:560: (dbg) Done: out/minikube-linux-amd64 start -p ha-478975 --wait=true -v=7 --alsologtostderr --driver=docker  --container-runtime=docker: (1m19.330460713s)
ha_test.go:566: (dbg) Run:  out/minikube-linux-amd64 -p ha-478975 status -v=7 --alsologtostderr
ha_test.go:584: (dbg) Run:  kubectl get nodes
ha_test.go:592: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiControlPlane/serial/RestartCluster (80.09s)

TestMultiControlPlane/serial/DegradedAfterClusterRestart (0.47s)
=== RUN   TestMultiControlPlane/serial/DegradedAfterClusterRestart
ha_test.go:390: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiControlPlane/serial/DegradedAfterClusterRestart (0.47s)

TestMultiControlPlane/serial/AddSecondaryNode (38.65s)
=== RUN   TestMultiControlPlane/serial/AddSecondaryNode
ha_test.go:605: (dbg) Run:  out/minikube-linux-amd64 node add -p ha-478975 --control-plane -v=7 --alsologtostderr
ha_test.go:605: (dbg) Done: out/minikube-linux-amd64 node add -p ha-478975 --control-plane -v=7 --alsologtostderr: (37.803015651s)
ha_test.go:611: (dbg) Run:  out/minikube-linux-amd64 -p ha-478975 status -v=7 --alsologtostderr
--- PASS: TestMultiControlPlane/serial/AddSecondaryNode (38.65s)

TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd (0.65s)
=== RUN   TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd
ha_test.go:281: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd (0.65s)

TestImageBuild/serial/Setup (21.35s)
=== RUN   TestImageBuild/serial/Setup
image_test.go:69: (dbg) Run:  out/minikube-linux-amd64 start -p image-942182 --driver=docker  --container-runtime=docker
image_test.go:69: (dbg) Done: out/minikube-linux-amd64 start -p image-942182 --driver=docker  --container-runtime=docker: (21.354217976s)
--- PASS: TestImageBuild/serial/Setup (21.35s)

TestImageBuild/serial/NormalBuild (2.42s)
=== RUN   TestImageBuild/serial/NormalBuild
image_test.go:78: (dbg) Run:  out/minikube-linux-amd64 image build -t aaa:latest ./testdata/image-build/test-normal -p image-942182
E0913 23:53:33.048449   12020 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19640-5233/.minikube/profiles/functional-657132/client.crt: no such file or directory" logger="UnhandledError"
image_test.go:78: (dbg) Done: out/minikube-linux-amd64 image build -t aaa:latest ./testdata/image-build/test-normal -p image-942182: (2.419155171s)
--- PASS: TestImageBuild/serial/NormalBuild (2.42s)

TestImageBuild/serial/BuildWithBuildArg (0.93s)
=== RUN   TestImageBuild/serial/BuildWithBuildArg
image_test.go:99: (dbg) Run:  out/minikube-linux-amd64 image build -t aaa:latest --build-opt=build-arg=ENV_A=test_env_str --build-opt=no-cache ./testdata/image-build/test-arg -p image-942182
--- PASS: TestImageBuild/serial/BuildWithBuildArg (0.93s)

TestImageBuild/serial/BuildWithDockerIgnore (0.77s)
=== RUN   TestImageBuild/serial/BuildWithDockerIgnore
image_test.go:133: (dbg) Run:  out/minikube-linux-amd64 image build -t aaa:latest ./testdata/image-build/test-normal --build-opt=no-cache -p image-942182
--- PASS: TestImageBuild/serial/BuildWithDockerIgnore (0.77s)

TestImageBuild/serial/BuildWithSpecifiedDockerfile (0.75s)
=== RUN   TestImageBuild/serial/BuildWithSpecifiedDockerfile
image_test.go:88: (dbg) Run:  out/minikube-linux-amd64 image build -t aaa:latest -f inner/Dockerfile ./testdata/image-build/test-f -p image-942182
--- PASS: TestImageBuild/serial/BuildWithSpecifiedDockerfile (0.75s)

TestJSONOutput/start/Command (65.68s)
=== RUN   TestJSONOutput/start/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-amd64 start -p json-output-563551 --output=json --user=testUser --memory=2200 --wait=true --driver=docker  --container-runtime=docker
json_output_test.go:63: (dbg) Done: out/minikube-linux-amd64 start -p json-output-563551 --output=json --user=testUser --memory=2200 --wait=true --driver=docker  --container-runtime=docker: (1m5.678265142s)
--- PASS: TestJSONOutput/start/Command (65.68s)

TestJSONOutput/start/Audit (0s)
=== RUN   TestJSONOutput/start/Audit
--- PASS: TestJSONOutput/start/Audit (0.00s)

TestJSONOutput/start/parallel/DistinctCurrentSteps (0s)
=== RUN   TestJSONOutput/start/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/start/parallel/DistinctCurrentSteps
=== CONT  TestJSONOutput/start/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/start/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/start/parallel/IncreasingCurrentSteps (0s)
=== RUN   TestJSONOutput/start/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/start/parallel/IncreasingCurrentSteps
=== CONT  TestJSONOutput/start/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/start/parallel/IncreasingCurrentSteps (0.00s)

TestJSONOutput/pause/Command (0.51s)
=== RUN   TestJSONOutput/pause/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-amd64 pause -p json-output-563551 --output=json --user=testUser
--- PASS: TestJSONOutput/pause/Command (0.51s)

TestJSONOutput/pause/Audit (0s)
=== RUN   TestJSONOutput/pause/Audit
--- PASS: TestJSONOutput/pause/Audit (0.00s)

TestJSONOutput/pause/parallel/DistinctCurrentSteps (0s)
=== RUN   TestJSONOutput/pause/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/pause/parallel/DistinctCurrentSteps
=== CONT  TestJSONOutput/pause/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/pause/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/pause/parallel/IncreasingCurrentSteps (0s)
=== RUN   TestJSONOutput/pause/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/pause/parallel/IncreasingCurrentSteps
=== CONT  TestJSONOutput/pause/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/pause/parallel/IncreasingCurrentSteps (0.00s)

TestJSONOutput/unpause/Command (0.44s)
=== RUN   TestJSONOutput/unpause/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-amd64 unpause -p json-output-563551 --output=json --user=testUser
--- PASS: TestJSONOutput/unpause/Command (0.44s)

TestJSONOutput/unpause/Audit (0s)
=== RUN   TestJSONOutput/unpause/Audit
--- PASS: TestJSONOutput/unpause/Audit (0.00s)

TestJSONOutput/unpause/parallel/DistinctCurrentSteps (0s)
=== RUN   TestJSONOutput/unpause/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/unpause/parallel/DistinctCurrentSteps
=== CONT  TestJSONOutput/unpause/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/unpause/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/unpause/parallel/IncreasingCurrentSteps (0s)
=== RUN   TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
=== CONT  TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/unpause/parallel/IncreasingCurrentSteps (0.00s)

TestJSONOutput/stop/Command (10.71s)
=== RUN   TestJSONOutput/stop/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-amd64 stop -p json-output-563551 --output=json --user=testUser
json_output_test.go:63: (dbg) Done: out/minikube-linux-amd64 stop -p json-output-563551 --output=json --user=testUser: (10.709464017s)
--- PASS: TestJSONOutput/stop/Command (10.71s)

TestJSONOutput/stop/Audit (0s)
=== RUN   TestJSONOutput/stop/Audit
--- PASS: TestJSONOutput/stop/Audit (0.00s)

TestJSONOutput/stop/parallel/DistinctCurrentSteps (0s)
=== RUN   TestJSONOutput/stop/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/stop/parallel/DistinctCurrentSteps
=== CONT  TestJSONOutput/stop/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/stop/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/stop/parallel/IncreasingCurrentSteps (0s)
=== RUN   TestJSONOutput/stop/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/stop/parallel/IncreasingCurrentSteps
=== CONT  TestJSONOutput/stop/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/stop/parallel/IncreasingCurrentSteps (0.00s)

TestErrorJSONOutput (0.2s)
=== RUN   TestErrorJSONOutput
json_output_test.go:160: (dbg) Run:  out/minikube-linux-amd64 start -p json-output-error-769726 --memory=2200 --output=json --wait=true --driver=fail
json_output_test.go:160: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p json-output-error-769726 --memory=2200 --output=json --wait=true --driver=fail: exit status 56 (63.246253ms)
-- stdout --
	{"specversion":"1.0","id":"200eb676-5573-4a45-a969-0601ee1b7535","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"0","message":"[json-output-error-769726] minikube v1.34.0 on Ubuntu 20.04 (kvm/amd64)","name":"Initial Minikube Setup","totalsteps":"19"}}
	{"specversion":"1.0","id":"985fbf1c-eb5c-4339-936c-f8375a4e6d22","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_LOCATION=19640"}}
	{"specversion":"1.0","id":"3041ae3a-f566-450e-ba6c-d06b2966d0db","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true"}}
	{"specversion":"1.0","id":"833173a4-4776-4e17-b313-a57452329ce3","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"KUBECONFIG=/home/jenkins/minikube-integration/19640-5233/kubeconfig"}}
	{"specversion":"1.0","id":"fb72bba3-1608-4336-9244-879431f66611","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_HOME=/home/jenkins/minikube-integration/19640-5233/.minikube"}}
	{"specversion":"1.0","id":"e2fd13c2-d6a8-4f90-8217-267b8f92e122","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_BIN=out/minikube-linux-amd64"}}
	{"specversion":"1.0","id":"268d2dbc-52c1-4b01-97b4-02a2ef4854ec","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_FORCE_SYSTEMD="}}
	{"specversion":"1.0","id":"176fc671-3fbb-46af-b04f-24f40096ee21","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"advice":"","exitcode":"56","issues":"","message":"The driver 'fail' is not supported on linux/amd64","name":"DRV_UNSUPPORTED_OS","url":""}}
-- /stdout --
helpers_test.go:175: Cleaning up "json-output-error-769726" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p json-output-error-769726
--- PASS: TestErrorJSONOutput (0.20s)

TestKicCustomNetwork/create_custom_network (24.53s)
=== RUN   TestKicCustomNetwork/create_custom_network
kic_custom_network_test.go:57: (dbg) Run:  out/minikube-linux-amd64 start -p docker-network-887844 --network=
kic_custom_network_test.go:57: (dbg) Done: out/minikube-linux-amd64 start -p docker-network-887844 --network=: (22.451786006s)
kic_custom_network_test.go:150: (dbg) Run:  docker network ls --format {{.Name}}
helpers_test.go:175: Cleaning up "docker-network-887844" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p docker-network-887844
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p docker-network-887844: (2.053712236s)
--- PASS: TestKicCustomNetwork/create_custom_network (24.53s)

TestKicCustomNetwork/use_default_bridge_network (24.09s)
=== RUN   TestKicCustomNetwork/use_default_bridge_network
kic_custom_network_test.go:57: (dbg) Run:  out/minikube-linux-amd64 start -p docker-network-481056 --network=bridge
E0913 23:55:26.674611   12020 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19640-5233/.minikube/profiles/addons-794116/client.crt: no such file or directory" logger="UnhandledError"
kic_custom_network_test.go:57: (dbg) Done: out/minikube-linux-amd64 start -p docker-network-481056 --network=bridge: (22.223328962s)
kic_custom_network_test.go:150: (dbg) Run:  docker network ls --format {{.Name}}
helpers_test.go:175: Cleaning up "docker-network-481056" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p docker-network-481056
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p docker-network-481056: (1.850467648s)
--- PASS: TestKicCustomNetwork/use_default_bridge_network (24.09s)

TestKicExistingNetwork (23.45s)
=== RUN   TestKicExistingNetwork
kic_custom_network_test.go:150: (dbg) Run:  docker network ls --format {{.Name}}
kic_custom_network_test.go:93: (dbg) Run:  out/minikube-linux-amd64 start -p existing-network-288666 --network=existing-network
kic_custom_network_test.go:93: (dbg) Done: out/minikube-linux-amd64 start -p existing-network-288666 --network=existing-network: (21.47005168s)
helpers_test.go:175: Cleaning up "existing-network-288666" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p existing-network-288666
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p existing-network-288666: (1.834125275s)
--- PASS: TestKicExistingNetwork (23.45s)

TestKicCustomSubnet (25.63s)
=== RUN   TestKicCustomSubnet
kic_custom_network_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p custom-subnet-431657 --subnet=192.168.60.0/24
kic_custom_network_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p custom-subnet-431657 --subnet=192.168.60.0/24: (23.580995264s)
kic_custom_network_test.go:161: (dbg) Run:  docker network inspect custom-subnet-431657 --format "{{(index .IPAM.Config 0).Subnet}}"
helpers_test.go:175: Cleaning up "custom-subnet-431657" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p custom-subnet-431657
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p custom-subnet-431657: (2.026410868s)
--- PASS: TestKicCustomSubnet (25.63s)

TestKicStaticIP (23.7s)
=== RUN   TestKicStaticIP
kic_custom_network_test.go:132: (dbg) Run:  out/minikube-linux-amd64 start -p static-ip-609583 --static-ip=192.168.200.200
kic_custom_network_test.go:132: (dbg) Done: out/minikube-linux-amd64 start -p static-ip-609583 --static-ip=192.168.200.200: (21.633714272s)
kic_custom_network_test.go:138: (dbg) Run:  out/minikube-linux-amd64 -p static-ip-609583 ip
helpers_test.go:175: Cleaning up "static-ip-609583" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p static-ip-609583
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p static-ip-609583: (1.947235285s)
--- PASS: TestKicStaticIP (23.70s)

TestMainNoArgs (0.04s)
=== RUN   TestMainNoArgs
main_test.go:68: (dbg) Run:  out/minikube-linux-amd64
--- PASS: TestMainNoArgs (0.04s)

TestMinikubeProfile (54.27s)
=== RUN   TestMinikubeProfile
minikube_profile_test.go:44: (dbg) Run:  out/minikube-linux-amd64 start -p first-153603 --driver=docker  --container-runtime=docker
minikube_profile_test.go:44: (dbg) Done: out/minikube-linux-amd64 start -p first-153603 --driver=docker  --container-runtime=docker: (24.314603333s)
minikube_profile_test.go:44: (dbg) Run:  out/minikube-linux-amd64 start -p second-162517 --driver=docker  --container-runtime=docker
minikube_profile_test.go:44: (dbg) Done: out/minikube-linux-amd64 start -p second-162517 --driver=docker  --container-runtime=docker: (24.769636569s)
minikube_profile_test.go:51: (dbg) Run:  out/minikube-linux-amd64 profile first-153603
minikube_profile_test.go:55: (dbg) Run:  out/minikube-linux-amd64 profile list -ojson
minikube_profile_test.go:51: (dbg) Run:  out/minikube-linux-amd64 profile second-162517
minikube_profile_test.go:55: (dbg) Run:  out/minikube-linux-amd64 profile list -ojson
helpers_test.go:175: Cleaning up "second-162517" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p second-162517
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p second-162517: (2.067070959s)
helpers_test.go:175: Cleaning up "first-153603" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p first-153603
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p first-153603: (2.056247156s)
--- PASS: TestMinikubeProfile (54.27s)

TestMountStart/serial/StartWithMountFirst (7.81s)
=== RUN   TestMountStart/serial/StartWithMountFirst
mount_start_test.go:98: (dbg) Run:  out/minikube-linux-amd64 start -p mount-start-1-415025 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=docker  --container-runtime=docker
mount_start_test.go:98: (dbg) Done: out/minikube-linux-amd64 start -p mount-start-1-415025 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=docker  --container-runtime=docker: (6.806822493s)
--- PASS: TestMountStart/serial/StartWithMountFirst (7.81s)

TestMountStart/serial/VerifyMountFirst (0.24s)
=== RUN   TestMountStart/serial/VerifyMountFirst
mount_start_test.go:114: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-1-415025 ssh -- ls /minikube-host
--- PASS: TestMountStart/serial/VerifyMountFirst (0.24s)

TestMountStart/serial/StartWithMountSecond (7.17s)
=== RUN   TestMountStart/serial/StartWithMountSecond
mount_start_test.go:98: (dbg) Run:  out/minikube-linux-amd64 start -p mount-start-2-426443 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46465 --mount-uid 0 --no-kubernetes --driver=docker  --container-runtime=docker
E0913 23:58:05.347909   12020 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19640-5233/.minikube/profiles/functional-657132/client.crt: no such file or directory" logger="UnhandledError"
mount_start_test.go:98: (dbg) Done: out/minikube-linux-amd64 start -p mount-start-2-426443 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46465 --mount-uid 0 --no-kubernetes --driver=docker  --container-runtime=docker: (6.171316235s)
--- PASS: TestMountStart/serial/StartWithMountSecond (7.17s)

TestMountStart/serial/VerifyMountSecond (0.25s)

=== RUN   TestMountStart/serial/VerifyMountSecond
mount_start_test.go:114: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-426443 ssh -- ls /minikube-host
--- PASS: TestMountStart/serial/VerifyMountSecond (0.25s)

TestMountStart/serial/DeleteFirst (1.47s)

=== RUN   TestMountStart/serial/DeleteFirst
pause_test.go:132: (dbg) Run:  out/minikube-linux-amd64 delete -p mount-start-1-415025 --alsologtostderr -v=5
pause_test.go:132: (dbg) Done: out/minikube-linux-amd64 delete -p mount-start-1-415025 --alsologtostderr -v=5: (1.467621688s)
--- PASS: TestMountStart/serial/DeleteFirst (1.47s)

TestMountStart/serial/VerifyMountPostDelete (0.25s)

=== RUN   TestMountStart/serial/VerifyMountPostDelete
mount_start_test.go:114: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-426443 ssh -- ls /minikube-host
--- PASS: TestMountStart/serial/VerifyMountPostDelete (0.25s)

TestMountStart/serial/Stop (1.17s)

=== RUN   TestMountStart/serial/Stop
mount_start_test.go:155: (dbg) Run:  out/minikube-linux-amd64 stop -p mount-start-2-426443
mount_start_test.go:155: (dbg) Done: out/minikube-linux-amd64 stop -p mount-start-2-426443: (1.173816236s)
--- PASS: TestMountStart/serial/Stop (1.17s)

TestMountStart/serial/RestartStopped (8.73s)

=== RUN   TestMountStart/serial/RestartStopped
mount_start_test.go:166: (dbg) Run:  out/minikube-linux-amd64 start -p mount-start-2-426443
mount_start_test.go:166: (dbg) Done: out/minikube-linux-amd64 start -p mount-start-2-426443: (7.724993527s)
--- PASS: TestMountStart/serial/RestartStopped (8.73s)

TestMountStart/serial/VerifyMountPostStop (0.24s)

=== RUN   TestMountStart/serial/VerifyMountPostStop
mount_start_test.go:114: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-426443 ssh -- ls /minikube-host
--- PASS: TestMountStart/serial/VerifyMountPostStop (0.24s)

TestMultiNode/serial/FreshStart2Nodes (55.45s)

=== RUN   TestMultiNode/serial/FreshStart2Nodes
multinode_test.go:96: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-136805 --wait=true --memory=2200 --nodes=2 -v=8 --alsologtostderr --driver=docker  --container-runtime=docker
multinode_test.go:96: (dbg) Done: out/minikube-linux-amd64 start -p multinode-136805 --wait=true --memory=2200 --nodes=2 -v=8 --alsologtostderr --driver=docker  --container-runtime=docker: (54.990278312s)
multinode_test.go:102: (dbg) Run:  out/minikube-linux-amd64 -p multinode-136805 status --alsologtostderr
--- PASS: TestMultiNode/serial/FreshStart2Nodes (55.45s)

TestMultiNode/serial/DeployApp2Nodes (44.12s)

=== RUN   TestMultiNode/serial/DeployApp2Nodes
multinode_test.go:493: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-136805 -- apply -f ./testdata/multinodes/multinode-pod-dns-test.yaml
multinode_test.go:498: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-136805 -- rollout status deployment/busybox
multinode_test.go:498: (dbg) Done: out/minikube-linux-amd64 kubectl -p multinode-136805 -- rollout status deployment/busybox: (3.262115653s)
multinode_test.go:505: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-136805 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:514: expected 2 Pod IPs but got 1 (may be temporary), output: "\n-- stdout --\n\t'10.244.0.4'\n\n-- /stdout --"
multinode_test.go:505: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-136805 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:514: expected 2 Pod IPs but got 1 (may be temporary), output: "\n-- stdout --\n\t'10.244.0.4'\n\n-- /stdout --"
multinode_test.go:505: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-136805 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:514: expected 2 Pod IPs but got 1 (may be temporary), output: "\n-- stdout --\n\t'10.244.0.4'\n\n-- /stdout --"
multinode_test.go:505: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-136805 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:514: expected 2 Pod IPs but got 1 (may be temporary), output: "\n-- stdout --\n\t'10.244.0.4'\n\n-- /stdout --"
multinode_test.go:505: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-136805 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:514: expected 2 Pod IPs but got 1 (may be temporary), output: "\n-- stdout --\n\t'10.244.0.4'\n\n-- /stdout --"
multinode_test.go:505: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-136805 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:514: expected 2 Pod IPs but got 1 (may be temporary), output: "\n-- stdout --\n\t'10.244.0.4'\n\n-- /stdout --"
multinode_test.go:505: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-136805 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:514: expected 2 Pod IPs but got 1 (may be temporary), output: "\n-- stdout --\n\t'10.244.0.4'\n\n-- /stdout --"
multinode_test.go:505: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-136805 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:528: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-136805 -- get pods -o jsonpath='{.items[*].metadata.name}'
multinode_test.go:536: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-136805 -- exec busybox-7dff88458-757md -- nslookup kubernetes.io
multinode_test.go:536: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-136805 -- exec busybox-7dff88458-v9m98 -- nslookup kubernetes.io
multinode_test.go:546: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-136805 -- exec busybox-7dff88458-757md -- nslookup kubernetes.default
multinode_test.go:546: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-136805 -- exec busybox-7dff88458-v9m98 -- nslookup kubernetes.default
multinode_test.go:554: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-136805 -- exec busybox-7dff88458-757md -- nslookup kubernetes.default.svc.cluster.local
multinode_test.go:554: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-136805 -- exec busybox-7dff88458-v9m98 -- nslookup kubernetes.default.svc.cluster.local
--- PASS: TestMultiNode/serial/DeployApp2Nodes (44.12s)

TestMultiNode/serial/PingHostFrom2Pods (0.76s)

=== RUN   TestMultiNode/serial/PingHostFrom2Pods
multinode_test.go:564: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-136805 -- get pods -o jsonpath='{.items[*].metadata.name}'
multinode_test.go:572: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-136805 -- exec busybox-7dff88458-757md -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
multinode_test.go:583: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-136805 -- exec busybox-7dff88458-757md -- sh -c "ping -c 1 192.168.67.1"
multinode_test.go:572: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-136805 -- exec busybox-7dff88458-v9m98 -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
multinode_test.go:583: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-136805 -- exec busybox-7dff88458-v9m98 -- sh -c "ping -c 1 192.168.67.1"
--- PASS: TestMultiNode/serial/PingHostFrom2Pods (0.76s)

TestMultiNode/serial/AddNode (18.99s)

=== RUN   TestMultiNode/serial/AddNode
multinode_test.go:121: (dbg) Run:  out/minikube-linux-amd64 node add -p multinode-136805 -v 3 --alsologtostderr
multinode_test.go:121: (dbg) Done: out/minikube-linux-amd64 node add -p multinode-136805 -v 3 --alsologtostderr: (18.27066993s)
multinode_test.go:127: (dbg) Run:  out/minikube-linux-amd64 -p multinode-136805 status --alsologtostderr
--- PASS: TestMultiNode/serial/AddNode (18.99s)

TestMultiNode/serial/MultiNodeLabels (0.08s)

=== RUN   TestMultiNode/serial/MultiNodeLabels
multinode_test.go:221: (dbg) Run:  kubectl --context multinode-136805 get nodes -o "jsonpath=[{range .items[*]}{.metadata.labels},{end}]"
--- PASS: TestMultiNode/serial/MultiNodeLabels (0.08s)

TestMultiNode/serial/ProfileList (0.36s)

=== RUN   TestMultiNode/serial/ProfileList
multinode_test.go:143: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiNode/serial/ProfileList (0.36s)

TestMultiNode/serial/CopyFile (9.75s)

=== RUN   TestMultiNode/serial/CopyFile
multinode_test.go:184: (dbg) Run:  out/minikube-linux-amd64 -p multinode-136805 status --output json --alsologtostderr
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-136805 cp testdata/cp-test.txt multinode-136805:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-136805 ssh -n multinode-136805 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-136805 cp multinode-136805:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile2947932283/001/cp-test_multinode-136805.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-136805 ssh -n multinode-136805 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-136805 cp multinode-136805:/home/docker/cp-test.txt multinode-136805-m02:/home/docker/cp-test_multinode-136805_multinode-136805-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-136805 ssh -n multinode-136805 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-136805 ssh -n multinode-136805-m02 "sudo cat /home/docker/cp-test_multinode-136805_multinode-136805-m02.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-136805 cp multinode-136805:/home/docker/cp-test.txt multinode-136805-m03:/home/docker/cp-test_multinode-136805_multinode-136805-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-136805 ssh -n multinode-136805 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-136805 ssh -n multinode-136805-m03 "sudo cat /home/docker/cp-test_multinode-136805_multinode-136805-m03.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-136805 cp testdata/cp-test.txt multinode-136805-m02:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-136805 ssh -n multinode-136805-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-136805 cp multinode-136805-m02:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile2947932283/001/cp-test_multinode-136805-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-136805 ssh -n multinode-136805-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-136805 cp multinode-136805-m02:/home/docker/cp-test.txt multinode-136805:/home/docker/cp-test_multinode-136805-m02_multinode-136805.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-136805 ssh -n multinode-136805-m02 "sudo cat /home/docker/cp-test.txt"
E0914 00:00:26.673704   12020 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19640-5233/.minikube/profiles/addons-794116/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-136805 ssh -n multinode-136805 "sudo cat /home/docker/cp-test_multinode-136805-m02_multinode-136805.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-136805 cp multinode-136805-m02:/home/docker/cp-test.txt multinode-136805-m03:/home/docker/cp-test_multinode-136805-m02_multinode-136805-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-136805 ssh -n multinode-136805-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-136805 ssh -n multinode-136805-m03 "sudo cat /home/docker/cp-test_multinode-136805-m02_multinode-136805-m03.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-136805 cp testdata/cp-test.txt multinode-136805-m03:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-136805 ssh -n multinode-136805-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-136805 cp multinode-136805-m03:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile2947932283/001/cp-test_multinode-136805-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-136805 ssh -n multinode-136805-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-136805 cp multinode-136805-m03:/home/docker/cp-test.txt multinode-136805:/home/docker/cp-test_multinode-136805-m03_multinode-136805.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-136805 ssh -n multinode-136805-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-136805 ssh -n multinode-136805 "sudo cat /home/docker/cp-test_multinode-136805-m03_multinode-136805.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-136805 cp multinode-136805-m03:/home/docker/cp-test.txt multinode-136805-m02:/home/docker/cp-test_multinode-136805-m03_multinode-136805-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-136805 ssh -n multinode-136805-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-136805 ssh -n multinode-136805-m02 "sudo cat /home/docker/cp-test_multinode-136805-m03_multinode-136805-m02.txt"
--- PASS: TestMultiNode/serial/CopyFile (9.75s)

TestMultiNode/serial/StopNode (2.2s)

=== RUN   TestMultiNode/serial/StopNode
multinode_test.go:248: (dbg) Run:  out/minikube-linux-amd64 -p multinode-136805 node stop m03
multinode_test.go:248: (dbg) Done: out/minikube-linux-amd64 -p multinode-136805 node stop m03: (1.199578521s)
multinode_test.go:254: (dbg) Run:  out/minikube-linux-amd64 -p multinode-136805 status
multinode_test.go:254: (dbg) Non-zero exit: out/minikube-linux-amd64 -p multinode-136805 status: exit status 7 (506.871334ms)

-- stdout --
	multinode-136805
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	multinode-136805-m02
	type: Worker
	host: Running
	kubelet: Running
	
	multinode-136805-m03
	type: Worker
	host: Stopped
	kubelet: Stopped
	

-- /stdout --
multinode_test.go:261: (dbg) Run:  out/minikube-linux-amd64 -p multinode-136805 status --alsologtostderr
multinode_test.go:261: (dbg) Non-zero exit: out/minikube-linux-amd64 -p multinode-136805 status --alsologtostderr: exit status 7 (494.172348ms)

-- stdout --
	multinode-136805
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	multinode-136805-m02
	type: Worker
	host: Running
	kubelet: Running
	
	multinode-136805-m03
	type: Worker
	host: Stopped
	kubelet: Stopped
	

-- /stdout --
** stderr ** 
	I0914 00:00:32.811551  213241 out.go:345] Setting OutFile to fd 1 ...
	I0914 00:00:32.811805  213241 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0914 00:00:32.811814  213241 out.go:358] Setting ErrFile to fd 2...
	I0914 00:00:32.811818  213241 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0914 00:00:32.812048  213241 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19640-5233/.minikube/bin
	I0914 00:00:32.812221  213241 out.go:352] Setting JSON to false
	I0914 00:00:32.812252  213241 mustload.go:65] Loading cluster: multinode-136805
	I0914 00:00:32.812327  213241 notify.go:220] Checking for updates...
	I0914 00:00:32.812739  213241 config.go:182] Loaded profile config "multinode-136805": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I0914 00:00:32.812760  213241 status.go:255] checking status of multinode-136805 ...
	I0914 00:00:32.813178  213241 cli_runner.go:164] Run: docker container inspect multinode-136805 --format={{.State.Status}}
	I0914 00:00:32.833707  213241 status.go:330] multinode-136805 host status = "Running" (err=<nil>)
	I0914 00:00:32.833760  213241 host.go:66] Checking if "multinode-136805" exists ...
	I0914 00:00:32.834059  213241 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-136805
	I0914 00:00:32.852625  213241 host.go:66] Checking if "multinode-136805" exists ...
	I0914 00:00:32.852999  213241 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0914 00:00:32.853055  213241 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-136805
	I0914 00:00:32.873720  213241 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32908 SSHKeyPath:/home/jenkins/minikube-integration/19640-5233/.minikube/machines/multinode-136805/id_rsa Username:docker}
	I0914 00:00:32.966862  213241 ssh_runner.go:195] Run: systemctl --version
	I0914 00:00:32.971166  213241 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0914 00:00:32.982368  213241 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0914 00:00:33.037147  213241 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:3 ContainersRunning:2 ContainersPaused:0 ContainersStopped:1 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:42 OomKillDisable:true NGoroutines:62 SystemTime:2024-09-14 00:00:33.027200729 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1068-gcp OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33647931392 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent Labels:[] ExperimentalBuild:false ServerVersion:27.2.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:7f7fdf5fed64eb6a7caf99b3e12efcf9d60e311c Expected:7f7fdf5fed64eb6a7caf99b3e12efcf9d60e311c} RuncCommit:{ID:v1.1.14-0-g2c9f560 Expected:v1.1.14-0-g2c9f560} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.16.2] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.29.2] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I0914 00:00:33.038073  213241 kubeconfig.go:125] found "multinode-136805" server: "https://192.168.67.2:8443"
	I0914 00:00:33.038121  213241 api_server.go:166] Checking apiserver status ...
	I0914 00:00:33.038173  213241 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 00:00:33.050022  213241 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/2298/cgroup
	I0914 00:00:33.060153  213241 api_server.go:182] apiserver freezer: "12:freezer:/docker/8562649114f5c9f8ac497c96bcb8fbe4cfaa28b14116dd867a28bb8f6d593279/kubepods/burstable/pod420d5012af4b7e574d1e69ad4e79253c/20cae37760a5ef3ddf72ef675303172d95d2f707d4c972cac8d9eeaa92af168b"
	I0914 00:00:33.060225  213241 ssh_runner.go:195] Run: sudo cat /sys/fs/cgroup/freezer/docker/8562649114f5c9f8ac497c96bcb8fbe4cfaa28b14116dd867a28bb8f6d593279/kubepods/burstable/pod420d5012af4b7e574d1e69ad4e79253c/20cae37760a5ef3ddf72ef675303172d95d2f707d4c972cac8d9eeaa92af168b/freezer.state
	I0914 00:00:33.068721  213241 api_server.go:204] freezer state: "THAWED"
	I0914 00:00:33.068756  213241 api_server.go:253] Checking apiserver healthz at https://192.168.67.2:8443/healthz ...
	I0914 00:00:33.073391  213241 api_server.go:279] https://192.168.67.2:8443/healthz returned 200:
	ok
	I0914 00:00:33.073419  213241 status.go:422] multinode-136805 apiserver status = Running (err=<nil>)
	I0914 00:00:33.073428  213241 status.go:257] multinode-136805 status: &{Name:multinode-136805 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0914 00:00:33.073443  213241 status.go:255] checking status of multinode-136805-m02 ...
	I0914 00:00:33.073755  213241 cli_runner.go:164] Run: docker container inspect multinode-136805-m02 --format={{.State.Status}}
	I0914 00:00:33.092473  213241 status.go:330] multinode-136805-m02 host status = "Running" (err=<nil>)
	I0914 00:00:33.092497  213241 host.go:66] Checking if "multinode-136805-m02" exists ...
	I0914 00:00:33.092799  213241 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-136805-m02
	I0914 00:00:33.110995  213241 host.go:66] Checking if "multinode-136805-m02" exists ...
	I0914 00:00:33.111287  213241 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0914 00:00:33.111331  213241 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-136805-m02
	I0914 00:00:33.132787  213241 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32913 SSHKeyPath:/home/jenkins/minikube-integration/19640-5233/.minikube/machines/multinode-136805-m02/id_rsa Username:docker}
	I0914 00:00:33.226776  213241 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0914 00:00:33.238463  213241 status.go:257] multinode-136805-m02 status: &{Name:multinode-136805-m02 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}
	I0914 00:00:33.238511  213241 status.go:255] checking status of multinode-136805-m03 ...
	I0914 00:00:33.238774  213241 cli_runner.go:164] Run: docker container inspect multinode-136805-m03 --format={{.State.Status}}
	I0914 00:00:33.258504  213241 status.go:330] multinode-136805-m03 host status = "Stopped" (err=<nil>)
	I0914 00:00:33.258527  213241 status.go:343] host is not running, skipping remaining checks
	I0914 00:00:33.258533  213241 status.go:257] multinode-136805-m03 status: &{Name:multinode-136805-m03 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
--- PASS: TestMultiNode/serial/StopNode (2.20s)

TestMultiNode/serial/StartAfterStop (10.57s)

=== RUN   TestMultiNode/serial/StartAfterStop
multinode_test.go:282: (dbg) Run:  out/minikube-linux-amd64 -p multinode-136805 node start m03 -v=7 --alsologtostderr
multinode_test.go:282: (dbg) Done: out/minikube-linux-amd64 -p multinode-136805 node start m03 -v=7 --alsologtostderr: (9.872731265s)
multinode_test.go:290: (dbg) Run:  out/minikube-linux-amd64 -p multinode-136805 status -v=7 --alsologtostderr
multinode_test.go:306: (dbg) Run:  kubectl get nodes
--- PASS: TestMultiNode/serial/StartAfterStop (10.57s)

TestMultiNode/serial/RestartKeepsNodes (98.7s)

=== RUN   TestMultiNode/serial/RestartKeepsNodes
multinode_test.go:314: (dbg) Run:  out/minikube-linux-amd64 node list -p multinode-136805
multinode_test.go:321: (dbg) Run:  out/minikube-linux-amd64 stop -p multinode-136805
multinode_test.go:321: (dbg) Done: out/minikube-linux-amd64 stop -p multinode-136805: (22.53773729s)
multinode_test.go:326: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-136805 --wait=true -v=8 --alsologtostderr
E0914 00:01:49.741726   12020 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19640-5233/.minikube/profiles/addons-794116/client.crt: no such file or directory" logger="UnhandledError"
multinode_test.go:326: (dbg) Done: out/minikube-linux-amd64 start -p multinode-136805 --wait=true -v=8 --alsologtostderr: (1m16.069227188s)
multinode_test.go:331: (dbg) Run:  out/minikube-linux-amd64 node list -p multinode-136805
--- PASS: TestMultiNode/serial/RestartKeepsNodes (98.70s)

TestMultiNode/serial/DeleteNode (5.19s)

=== RUN   TestMultiNode/serial/DeleteNode
multinode_test.go:416: (dbg) Run:  out/minikube-linux-amd64 -p multinode-136805 node delete m03
multinode_test.go:416: (dbg) Done: out/minikube-linux-amd64 -p multinode-136805 node delete m03: (4.620432893s)
multinode_test.go:422: (dbg) Run:  out/minikube-linux-amd64 -p multinode-136805 status --alsologtostderr
multinode_test.go:436: (dbg) Run:  kubectl get nodes
multinode_test.go:444: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiNode/serial/DeleteNode (5.19s)

TestMultiNode/serial/StopMultiNode (21.55s)

=== RUN   TestMultiNode/serial/StopMultiNode
multinode_test.go:345: (dbg) Run:  out/minikube-linux-amd64 -p multinode-136805 stop
multinode_test.go:345: (dbg) Done: out/minikube-linux-amd64 -p multinode-136805 stop: (21.381415227s)
multinode_test.go:351: (dbg) Run:  out/minikube-linux-amd64 -p multinode-136805 status
multinode_test.go:351: (dbg) Non-zero exit: out/minikube-linux-amd64 -p multinode-136805 status: exit status 7 (89.62013ms)

-- stdout --
	multinode-136805
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	multinode-136805-m02
	type: Worker
	host: Stopped
	kubelet: Stopped
	

-- /stdout --
multinode_test.go:358: (dbg) Run:  out/minikube-linux-amd64 -p multinode-136805 status --alsologtostderr
multinode_test.go:358: (dbg) Non-zero exit: out/minikube-linux-amd64 -p multinode-136805 status --alsologtostderr: exit status 7 (81.174805ms)

-- stdout --
	multinode-136805
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	multinode-136805-m02
	type: Worker
	host: Stopped
	kubelet: Stopped
	

-- /stdout --
** stderr ** 
	I0914 00:02:49.228762  228939 out.go:345] Setting OutFile to fd 1 ...
	I0914 00:02:49.228880  228939 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0914 00:02:49.228892  228939 out.go:358] Setting ErrFile to fd 2...
	I0914 00:02:49.228896  228939 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0914 00:02:49.229101  228939 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19640-5233/.minikube/bin
	I0914 00:02:49.229261  228939 out.go:352] Setting JSON to false
	I0914 00:02:49.229290  228939 mustload.go:65] Loading cluster: multinode-136805
	I0914 00:02:49.229402  228939 notify.go:220] Checking for updates...
	I0914 00:02:49.229783  228939 config.go:182] Loaded profile config "multinode-136805": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I0914 00:02:49.229801  228939 status.go:255] checking status of multinode-136805 ...
	I0914 00:02:49.230287  228939 cli_runner.go:164] Run: docker container inspect multinode-136805 --format={{.State.Status}}
	I0914 00:02:49.248380  228939 status.go:330] multinode-136805 host status = "Stopped" (err=<nil>)
	I0914 00:02:49.248402  228939 status.go:343] host is not running, skipping remaining checks
	I0914 00:02:49.248408  228939 status.go:257] multinode-136805 status: &{Name:multinode-136805 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0914 00:02:49.248439  228939 status.go:255] checking status of multinode-136805-m02 ...
	I0914 00:02:49.248728  228939 cli_runner.go:164] Run: docker container inspect multinode-136805-m02 --format={{.State.Status}}
	I0914 00:02:49.266819  228939 status.go:330] multinode-136805-m02 host status = "Stopped" (err=<nil>)
	I0914 00:02:49.266845  228939 status.go:343] host is not running, skipping remaining checks
	I0914 00:02:49.266853  228939 status.go:257] multinode-136805-m02 status: &{Name:multinode-136805-m02 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
--- PASS: TestMultiNode/serial/StopMultiNode (21.55s)

TestMultiNode/serial/RestartMultiNode (54.12s)

=== RUN   TestMultiNode/serial/RestartMultiNode
multinode_test.go:376: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-136805 --wait=true -v=8 --alsologtostderr --driver=docker  --container-runtime=docker
E0914 00:03:05.347527   12020 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19640-5233/.minikube/profiles/functional-657132/client.crt: no such file or directory" logger="UnhandledError"
multinode_test.go:376: (dbg) Done: out/minikube-linux-amd64 start -p multinode-136805 --wait=true -v=8 --alsologtostderr --driver=docker  --container-runtime=docker: (53.571874308s)
multinode_test.go:382: (dbg) Run:  out/minikube-linux-amd64 -p multinode-136805 status --alsologtostderr
multinode_test.go:396: (dbg) Run:  kubectl get nodes
multinode_test.go:404: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiNode/serial/RestartMultiNode (54.12s)

TestMultiNode/serial/ValidateNameConflict (24.04s)

=== RUN   TestMultiNode/serial/ValidateNameConflict
multinode_test.go:455: (dbg) Run:  out/minikube-linux-amd64 node list -p multinode-136805
multinode_test.go:464: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-136805-m02 --driver=docker  --container-runtime=docker
multinode_test.go:464: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p multinode-136805-m02 --driver=docker  --container-runtime=docker: exit status 14 (63.579007ms)

-- stdout --
	* [multinode-136805-m02] minikube v1.34.0 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=19640
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/19640-5233/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/19640-5233/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	
	

-- /stdout --
** stderr ** 
	! Profile name 'multinode-136805-m02' is duplicated with machine name 'multinode-136805-m02' in profile 'multinode-136805'
	X Exiting due to MK_USAGE: Profile name should be unique

** /stderr **
multinode_test.go:472: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-136805-m03 --driver=docker  --container-runtime=docker
multinode_test.go:472: (dbg) Done: out/minikube-linux-amd64 start -p multinode-136805-m03 --driver=docker  --container-runtime=docker: (21.630262197s)
multinode_test.go:479: (dbg) Run:  out/minikube-linux-amd64 node add -p multinode-136805
multinode_test.go:479: (dbg) Non-zero exit: out/minikube-linux-amd64 node add -p multinode-136805: exit status 80 (271.066744ms)

-- stdout --
	* Adding node m03 to cluster multinode-136805 as [worker]
	
	

-- /stdout --
** stderr ** 
	X Exiting due to GUEST_NODE_ADD: failed to add node: Node multinode-136805-m03 already exists in multinode-136805-m03 profile
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_node_040ea7097fd6ed71e65be9a474587f81f0ccd21d_0.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
multinode_test.go:484: (dbg) Run:  out/minikube-linux-amd64 delete -p multinode-136805-m03
multinode_test.go:484: (dbg) Done: out/minikube-linux-amd64 delete -p multinode-136805-m03: (2.027122278s)
--- PASS: TestMultiNode/serial/ValidateNameConflict (24.04s)

TestPreload (138.16s)

=== RUN   TestPreload
preload_test.go:44: (dbg) Run:  out/minikube-linux-amd64 start -p test-preload-379744 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=docker --kubernetes-version=v1.24.4
E0914 00:04:28.409796   12020 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19640-5233/.minikube/profiles/functional-657132/client.crt: no such file or directory" logger="UnhandledError"
E0914 00:05:26.674400   12020 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19640-5233/.minikube/profiles/addons-794116/client.crt: no such file or directory" logger="UnhandledError"
preload_test.go:44: (dbg) Done: out/minikube-linux-amd64 start -p test-preload-379744 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=docker --kubernetes-version=v1.24.4: (1m30.033330553s)
preload_test.go:52: (dbg) Run:  out/minikube-linux-amd64 -p test-preload-379744 image pull gcr.io/k8s-minikube/busybox
preload_test.go:52: (dbg) Done: out/minikube-linux-amd64 -p test-preload-379744 image pull gcr.io/k8s-minikube/busybox: (2.300856935s)
preload_test.go:58: (dbg) Run:  out/minikube-linux-amd64 stop -p test-preload-379744
preload_test.go:58: (dbg) Done: out/minikube-linux-amd64 stop -p test-preload-379744: (10.745473692s)
preload_test.go:66: (dbg) Run:  out/minikube-linux-amd64 start -p test-preload-379744 --memory=2200 --alsologtostderr -v=1 --wait=true --driver=docker  --container-runtime=docker
preload_test.go:66: (dbg) Done: out/minikube-linux-amd64 start -p test-preload-379744 --memory=2200 --alsologtostderr -v=1 --wait=true --driver=docker  --container-runtime=docker: (32.707392105s)
preload_test.go:71: (dbg) Run:  out/minikube-linux-amd64 -p test-preload-379744 image list
helpers_test.go:175: Cleaning up "test-preload-379744" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p test-preload-379744
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p test-preload-379744: (2.16322853s)
--- PASS: TestPreload (138.16s)

TestScheduledStopUnix (94.84s)

=== RUN   TestScheduledStopUnix
scheduled_stop_test.go:128: (dbg) Run:  out/minikube-linux-amd64 start -p scheduled-stop-589294 --memory=2048 --driver=docker  --container-runtime=docker
scheduled_stop_test.go:128: (dbg) Done: out/minikube-linux-amd64 start -p scheduled-stop-589294 --memory=2048 --driver=docker  --container-runtime=docker: (21.938058466s)
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-amd64 stop -p scheduled-stop-589294 --schedule 5m
scheduled_stop_test.go:191: (dbg) Run:  out/minikube-linux-amd64 status --format={{.TimeToStop}} -p scheduled-stop-589294 -n scheduled-stop-589294
scheduled_stop_test.go:169: signal error was:  <nil>
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-amd64 stop -p scheduled-stop-589294 --schedule 15s
scheduled_stop_test.go:169: signal error was:  os: process already finished
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-amd64 stop -p scheduled-stop-589294 --cancel-scheduled
scheduled_stop_test.go:176: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p scheduled-stop-589294 -n scheduled-stop-589294
scheduled_stop_test.go:205: (dbg) Run:  out/minikube-linux-amd64 status -p scheduled-stop-589294
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-amd64 stop -p scheduled-stop-589294 --schedule 15s
scheduled_stop_test.go:169: signal error was:  os: process already finished
scheduled_stop_test.go:205: (dbg) Run:  out/minikube-linux-amd64 status -p scheduled-stop-589294
scheduled_stop_test.go:205: (dbg) Non-zero exit: out/minikube-linux-amd64 status -p scheduled-stop-589294: exit status 7 (66.866729ms)

-- stdout --
	scheduled-stop-589294
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

-- /stdout --
scheduled_stop_test.go:176: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p scheduled-stop-589294 -n scheduled-stop-589294
scheduled_stop_test.go:176: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p scheduled-stop-589294 -n scheduled-stop-589294: exit status 7 (62.789088ms)

-- stdout --
	Stopped

-- /stdout --
scheduled_stop_test.go:176: status error: exit status 7 (may be ok)
helpers_test.go:175: Cleaning up "scheduled-stop-589294" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p scheduled-stop-589294
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p scheduled-stop-589294: (1.648749389s)
--- PASS: TestScheduledStopUnix (94.84s)

TestSkaffold (105.67s)

=== RUN   TestSkaffold
E0914 00:08:05.347437   12020 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19640-5233/.minikube/profiles/functional-657132/client.crt: no such file or directory" logger="UnhandledError"
skaffold_test.go:59: (dbg) Run:  /tmp/skaffold.exe2345654487 version
skaffold_test.go:63: skaffold version: v2.13.2
skaffold_test.go:66: (dbg) Run:  out/minikube-linux-amd64 start -p skaffold-673987 --memory=2600 --driver=docker  --container-runtime=docker
skaffold_test.go:66: (dbg) Done: out/minikube-linux-amd64 start -p skaffold-673987 --memory=2600 --driver=docker  --container-runtime=docker: (21.341970738s)
skaffold_test.go:86: copying out/minikube-linux-amd64 to /home/jenkins/workspace/Docker_Linux_integration/out/minikube
skaffold_test.go:105: (dbg) Run:  /tmp/skaffold.exe2345654487 run --minikube-profile skaffold-673987 --kube-context skaffold-673987 --status-check=true --port-forward=false --interactive=false
skaffold_test.go:105: (dbg) Done: /tmp/skaffold.exe2345654487 run --minikube-profile skaffold-673987 --kube-context skaffold-673987 --status-check=true --port-forward=false --interactive=false: (1m7.138291295s)
skaffold_test.go:111: (dbg) TestSkaffold: waiting 1m0s for pods matching "app=leeroy-app" in namespace "default" ...
helpers_test.go:344: "leeroy-app-798b596b5-5qhp8" [5192bd3e-e837-4597-8085-c6f7b9ac5f03] Running
skaffold_test.go:111: (dbg) TestSkaffold: app=leeroy-app healthy within 6.003412155s
skaffold_test.go:114: (dbg) TestSkaffold: waiting 1m0s for pods matching "app=leeroy-web" in namespace "default" ...
helpers_test.go:344: "leeroy-web-99d8ffd5c-7tt2v" [8e794c5b-e7fb-4b18-b9d6-5a8b14ee7d23] Running
skaffold_test.go:114: (dbg) TestSkaffold: app=leeroy-web healthy within 5.00472121s
helpers_test.go:175: Cleaning up "skaffold-673987" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p skaffold-673987
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p skaffold-673987: (2.823882814s)
--- PASS: TestSkaffold (105.67s)

TestInsufficientStorage (10.14s)

=== RUN   TestInsufficientStorage
status_test.go:50: (dbg) Run:  out/minikube-linux-amd64 start -p insufficient-storage-589111 --memory=2048 --output=json --wait=true --driver=docker  --container-runtime=docker
status_test.go:50: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p insufficient-storage-589111 --memory=2048 --output=json --wait=true --driver=docker  --container-runtime=docker: exit status 26 (7.947705872s)

-- stdout --
	{"specversion":"1.0","id":"bc243b79-24d0-4fe7-abe4-abe34d3be03e","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"0","message":"[insufficient-storage-589111] minikube v1.34.0 on Ubuntu 20.04 (kvm/amd64)","name":"Initial Minikube Setup","totalsteps":"19"}}
	{"specversion":"1.0","id":"9161d37b-d519-4673-85e8-d647de4ba58e","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_LOCATION=19640"}}
	{"specversion":"1.0","id":"5af02f43-b014-4606-8f3b-417b4bbc6c01","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true"}}
	{"specversion":"1.0","id":"2b352ce8-5dc8-4b02-9843-8ff4ca926d4c","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"KUBECONFIG=/home/jenkins/minikube-integration/19640-5233/kubeconfig"}}
	{"specversion":"1.0","id":"9805ecb8-a719-4582-8fec-76e0cc9279da","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_HOME=/home/jenkins/minikube-integration/19640-5233/.minikube"}}
	{"specversion":"1.0","id":"13d3d655-ffed-4b42-a7a0-7741ceba71af","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_BIN=out/minikube-linux-amd64"}}
	{"specversion":"1.0","id":"80e1f483-942e-4505-a6bb-37f16dab8b9b","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_FORCE_SYSTEMD="}}
	{"specversion":"1.0","id":"02138ab7-601c-45f7-8363-e8ba1e61297e","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_TEST_STORAGE_CAPACITY=100"}}
	{"specversion":"1.0","id":"424b1b80-b762-4663-9d16-f50fdbf613df","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_TEST_AVAILABLE_STORAGE=19"}}
	{"specversion":"1.0","id":"d435e620-7d6a-45b2-bc72-fff92e1bf5a8","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"1","message":"Using the docker driver based on user configuration","name":"Selecting Driver","totalsteps":"19"}}
	{"specversion":"1.0","id":"e844881c-58c0-4827-ac86-022a144cfcbf","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"Using Docker driver with root privileges"}}
	{"specversion":"1.0","id":"aa955662-6e34-4f40-8846-382ccf6df94b","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"3","message":"Starting \"insufficient-storage-589111\" primary control-plane node in \"insufficient-storage-589111\" cluster","name":"Starting Node","totalsteps":"19"}}
	{"specversion":"1.0","id":"2628a099-e1be-4fde-a5bf-c00fac2396b3","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"5","message":"Pulling base image v0.0.45-1726243947-19640 ...","name":"Pulling Base Image","totalsteps":"19"}}
	{"specversion":"1.0","id":"c6749973-c0a1-449d-824d-a4c1173c6215","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"8","message":"Creating docker container (CPUs=2, Memory=2048MB) ...","name":"Creating Container","totalsteps":"19"}}
	{"specversion":"1.0","id":"6e35c574-0fa7-417c-9e6d-ac7602fb094d","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"advice":"Try one or more of the following to free up space on the device:\n\t\n\t\t\t1. Run \"docker system prune\" to remove unused Docker data (optionally with \"-a\")\n\t\t\t2. Increase the storage allocated to Docker for Desktop by clicking on:\n\t\t\t\tDocker icon \u003e Preferences \u003e Resources \u003e Disk Image Size\n\t\t\t3. Run \"minikube ssh -- docker system prune\" if using the Docker container runtime","exitcode":"26","issues":"https://github.com/kubernetes/minikube/issues/9024","message":"Docker is out of disk space! (/var is at 100% of capacity). You can pass '--force' to skip this check.","name":"RSRC_DOCKER_STORAGE","url":""}}

-- /stdout --
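With `--output=json`, minikube emits one CloudEvents JSON object per line, as shown in the stdout block above; the event `type` suffix distinguishes setup steps from the final `RSRC_DOCKER_STORAGE` error. A minimal consumer sketch (illustrative only; the variable names are invented here, and the second sample line is abbreviated from the log above):

```python
import json

# Two event lines copied (the error one shortened) from the stdout above.
log_lines = [
    '{"specversion":"1.0","id":"d435e620-7d6a-45b2-bc72-fff92e1bf5a8","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"1","message":"Using the docker driver based on user configuration","name":"Selecting Driver","totalsteps":"19"}}',
    '{"specversion":"1.0","id":"6e35c574-0fa7-417c-9e6d-ac7602fb094d","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"exitcode":"26","message":"Docker is out of disk space! (/var is at 100% of capacity)","name":"RSRC_DOCKER_STORAGE"}}',
]

steps, errors = [], []
for line in log_lines:
    event = json.loads(line)
    kind = event["type"].rsplit(".", 1)[-1]  # "step", "info", or "error"
    if kind == "step":
        steps.append(event["data"]["message"])
    elif kind == "error":
        errors.append((event["data"]["name"], event["data"]["exitcode"]))

print(errors)  # [('RSRC_DOCKER_STORAGE', '26')]
```

The `exitcode` field in the error event matches the process exit status 26 recorded by the test.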
status_test.go:76: (dbg) Run:  out/minikube-linux-amd64 status -p insufficient-storage-589111 --output=json --layout=cluster
status_test.go:76: (dbg) Non-zero exit: out/minikube-linux-amd64 status -p insufficient-storage-589111 --output=json --layout=cluster: exit status 7 (264.464752ms)

-- stdout --
	{"Name":"insufficient-storage-589111","StatusCode":507,"StatusName":"InsufficientStorage","StatusDetail":"/var is almost out of disk space","Step":"Creating Container","StepDetail":"Creating docker container (CPUs=2, Memory=2048MB) ...","BinaryVersion":"v1.34.0","Components":{"kubeconfig":{"Name":"kubeconfig","StatusCode":500,"StatusName":"Error"}},"Nodes":[{"Name":"insufficient-storage-589111","StatusCode":507,"StatusName":"InsufficientStorage","Components":{"apiserver":{"Name":"apiserver","StatusCode":405,"StatusName":"Stopped"},"kubelet":{"Name":"kubelet","StatusCode":405,"StatusName":"Stopped"}}}]}

-- /stdout --
** stderr ** 
	E0914 00:09:58.187108  269335 status.go:417] kubeconfig endpoint: get endpoint: "insufficient-storage-589111" does not appear in /home/jenkins/minikube-integration/19640-5233/kubeconfig

** /stderr **
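The `status --output=json --layout=cluster` payload above reports health as HTTP-style status codes (507 InsufficientStorage, 405 Stopped, 500 Error, as named in the JSON itself). A sketch of walking that structure to collect everything non-healthy; this is illustrative only, and the `unhealthy` helper is invented here, not part of minikube:

```python
import json

# Trimmed copy of the cluster-layout payload from the stdout above.
payload = json.loads('''
{"Name":"insufficient-storage-589111","StatusCode":507,
 "StatusName":"InsufficientStorage",
 "Components":{"kubeconfig":{"Name":"kubeconfig","StatusCode":500,"StatusName":"Error"}},
 "Nodes":[{"Name":"insufficient-storage-589111","StatusCode":507,
   "StatusName":"InsufficientStorage",
   "Components":{"apiserver":{"Name":"apiserver","StatusCode":405,"StatusName":"Stopped"},
                 "kubelet":{"Name":"kubelet","StatusCode":405,"StatusName":"Stopped"}}}]}
''')

def unhealthy(cluster):
    # Collect (name, status) pairs for every entry whose StatusCode is not 2xx,
    # recursing into each node, which shares the same shape as the cluster.
    bad = []
    if not 200 <= cluster["StatusCode"] < 300:
        bad.append((cluster["Name"], cluster["StatusName"]))
    for comp in cluster.get("Components", {}).values():
        if not 200 <= comp["StatusCode"] < 300:
            bad.append((comp["Name"], comp["StatusName"]))
    for node in cluster.get("Nodes", []):
        bad.extend(unhealthy(node))
    return bad

print(unhealthy(payload))
```

Against the payload above this flags the cluster, the kubeconfig component, the node, and its stopped apiserver and kubelet, consistent with exit status 7 from the command.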
status_test.go:76: (dbg) Run:  out/minikube-linux-amd64 status -p insufficient-storage-589111 --output=json --layout=cluster
status_test.go:76: (dbg) Non-zero exit: out/minikube-linux-amd64 status -p insufficient-storage-589111 --output=json --layout=cluster: exit status 7 (265.676638ms)

-- stdout --
	{"Name":"insufficient-storage-589111","StatusCode":507,"StatusName":"InsufficientStorage","StatusDetail":"/var is almost out of disk space","BinaryVersion":"v1.34.0","Components":{"kubeconfig":{"Name":"kubeconfig","StatusCode":500,"StatusName":"Error"}},"Nodes":[{"Name":"insufficient-storage-589111","StatusCode":507,"StatusName":"InsufficientStorage","Components":{"apiserver":{"Name":"apiserver","StatusCode":405,"StatusName":"Stopped"},"kubelet":{"Name":"kubelet","StatusCode":405,"StatusName":"Stopped"}}}]}

-- /stdout --
** stderr ** 
	E0914 00:09:58.453170  269432 status.go:417] kubeconfig endpoint: get endpoint: "insufficient-storage-589111" does not appear in /home/jenkins/minikube-integration/19640-5233/kubeconfig
	E0914 00:09:58.463401  269432 status.go:560] unable to read event log: stat: stat /home/jenkins/minikube-integration/19640-5233/.minikube/profiles/insufficient-storage-589111/events.json: no such file or directory

** /stderr **
helpers_test.go:175: Cleaning up "insufficient-storage-589111" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p insufficient-storage-589111
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p insufficient-storage-589111: (1.664300212s)
--- PASS: TestInsufficientStorage (10.14s)

TestRunningBinaryUpgrade (60.18s)

=== RUN   TestRunningBinaryUpgrade
=== PAUSE TestRunningBinaryUpgrade

=== CONT  TestRunningBinaryUpgrade
version_upgrade_test.go:120: (dbg) Run:  /tmp/minikube-v1.26.0.1231252193 start -p running-upgrade-836375 --memory=2200 --vm-driver=docker  --container-runtime=docker
version_upgrade_test.go:120: (dbg) Done: /tmp/minikube-v1.26.0.1231252193 start -p running-upgrade-836375 --memory=2200 --vm-driver=docker  --container-runtime=docker: (28.759935409s)
version_upgrade_test.go:130: (dbg) Run:  out/minikube-linux-amd64 start -p running-upgrade-836375 --memory=2200 --alsologtostderr -v=1 --driver=docker  --container-runtime=docker
E0914 00:14:36.144471   12020 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19640-5233/.minikube/profiles/skaffold-673987/client.crt: no such file or directory" logger="UnhandledError"
E0914 00:14:36.150912   12020 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19640-5233/.minikube/profiles/skaffold-673987/client.crt: no such file or directory" logger="UnhandledError"
E0914 00:14:36.162514   12020 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19640-5233/.minikube/profiles/skaffold-673987/client.crt: no such file or directory" logger="UnhandledError"
E0914 00:14:36.184625   12020 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19640-5233/.minikube/profiles/skaffold-673987/client.crt: no such file or directory" logger="UnhandledError"
E0914 00:14:36.226741   12020 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19640-5233/.minikube/profiles/skaffold-673987/client.crt: no such file or directory" logger="UnhandledError"
E0914 00:14:36.308848   12020 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19640-5233/.minikube/profiles/skaffold-673987/client.crt: no such file or directory" logger="UnhandledError"
E0914 00:14:36.470731   12020 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19640-5233/.minikube/profiles/skaffold-673987/client.crt: no such file or directory" logger="UnhandledError"
E0914 00:14:36.792202   12020 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19640-5233/.minikube/profiles/skaffold-673987/client.crt: no such file or directory" logger="UnhandledError"
E0914 00:14:37.433511   12020 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19640-5233/.minikube/profiles/skaffold-673987/client.crt: no such file or directory" logger="UnhandledError"
E0914 00:14:38.715559   12020 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19640-5233/.minikube/profiles/skaffold-673987/client.crt: no such file or directory" logger="UnhandledError"
E0914 00:14:41.277080   12020 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19640-5233/.minikube/profiles/skaffold-673987/client.crt: no such file or directory" logger="UnhandledError"
E0914 00:14:46.398752   12020 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19640-5233/.minikube/profiles/skaffold-673987/client.crt: no such file or directory" logger="UnhandledError"
E0914 00:14:56.640292   12020 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19640-5233/.minikube/profiles/skaffold-673987/client.crt: no such file or directory" logger="UnhandledError"
version_upgrade_test.go:130: (dbg) Done: out/minikube-linux-amd64 start -p running-upgrade-836375 --memory=2200 --alsologtostderr -v=1 --driver=docker  --container-runtime=docker: (26.612706659s)
helpers_test.go:175: Cleaning up "running-upgrade-836375" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p running-upgrade-836375
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p running-upgrade-836375: (2.246067469s)
--- PASS: TestRunningBinaryUpgrade (60.18s)

TestKubernetesUpgrade (338.78s)

=== RUN   TestKubernetesUpgrade
=== PAUSE TestKubernetesUpgrade

=== CONT  TestKubernetesUpgrade
version_upgrade_test.go:222: (dbg) Run:  out/minikube-linux-amd64 start -p kubernetes-upgrade-319117 --memory=2200 --kubernetes-version=v1.20.0 --alsologtostderr -v=1 --driver=docker  --container-runtime=docker
version_upgrade_test.go:222: (dbg) Done: out/minikube-linux-amd64 start -p kubernetes-upgrade-319117 --memory=2200 --kubernetes-version=v1.20.0 --alsologtostderr -v=1 --driver=docker  --container-runtime=docker: (36.756047636s)
version_upgrade_test.go:227: (dbg) Run:  out/minikube-linux-amd64 stop -p kubernetes-upgrade-319117
version_upgrade_test.go:227: (dbg) Done: out/minikube-linux-amd64 stop -p kubernetes-upgrade-319117: (1.1969923s)
version_upgrade_test.go:232: (dbg) Run:  out/minikube-linux-amd64 -p kubernetes-upgrade-319117 status --format={{.Host}}
version_upgrade_test.go:232: (dbg) Non-zero exit: out/minikube-linux-amd64 -p kubernetes-upgrade-319117 status --format={{.Host}}: exit status 7 (67.329209ms)
                                                
-- stdout --
	Stopped

-- /stdout --
version_upgrade_test.go:234: status error: exit status 7 (may be ok)
version_upgrade_test.go:243: (dbg) Run:  out/minikube-linux-amd64 start -p kubernetes-upgrade-319117 --memory=2200 --kubernetes-version=v1.31.1 --alsologtostderr -v=1 --driver=docker  --container-runtime=docker
E0914 00:13:05.347691   12020 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19640-5233/.minikube/profiles/functional-657132/client.crt: no such file or directory" logger="UnhandledError"
version_upgrade_test.go:243: (dbg) Done: out/minikube-linux-amd64 start -p kubernetes-upgrade-319117 --memory=2200 --kubernetes-version=v1.31.1 --alsologtostderr -v=1 --driver=docker  --container-runtime=docker: (4m37.131012639s)
version_upgrade_test.go:248: (dbg) Run:  kubectl --context kubernetes-upgrade-319117 version --output=json
version_upgrade_test.go:267: Attempting to downgrade Kubernetes (should fail)
version_upgrade_test.go:269: (dbg) Run:  out/minikube-linux-amd64 start -p kubernetes-upgrade-319117 --memory=2200 --kubernetes-version=v1.20.0 --driver=docker  --container-runtime=docker
version_upgrade_test.go:269: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p kubernetes-upgrade-319117 --memory=2200 --kubernetes-version=v1.20.0 --driver=docker  --container-runtime=docker: exit status 106 (74.550768ms)
-- stdout --
	* [kubernetes-upgrade-319117] minikube v1.34.0 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=19640
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/19640-5233/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/19640-5233/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
-- /stdout --
** stderr ** 
	X Exiting due to K8S_DOWNGRADE_UNSUPPORTED: Unable to safely downgrade existing Kubernetes v1.31.1 cluster to v1.20.0
	* Suggestion: 
	
	    1) Recreate the cluster with Kubernetes 1.20.0, by running:
	    
	    minikube delete -p kubernetes-upgrade-319117
	    minikube start -p kubernetes-upgrade-319117 --kubernetes-version=v1.20.0
	    
	    2) Create a second cluster with Kubernetes 1.20.0, by running:
	    
	    minikube start -p kubernetes-upgrade-3191172 --kubernetes-version=v1.20.0
	    
	    3) Use the existing cluster at version Kubernetes 1.31.1, by running:
	    
	    minikube start -p kubernetes-upgrade-319117 --kubernetes-version=v1.31.1
	    
** /stderr **
version_upgrade_test.go:273: Attempting restart after unsuccessful downgrade
version_upgrade_test.go:275: (dbg) Run:  out/minikube-linux-amd64 start -p kubernetes-upgrade-319117 --memory=2200 --kubernetes-version=v1.31.1 --alsologtostderr -v=1 --driver=docker  --container-runtime=docker
version_upgrade_test.go:275: (dbg) Done: out/minikube-linux-amd64 start -p kubernetes-upgrade-319117 --memory=2200 --kubernetes-version=v1.31.1 --alsologtostderr -v=1 --driver=docker  --container-runtime=docker: (20.981030202s)
helpers_test.go:175: Cleaning up "kubernetes-upgrade-319117" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p kubernetes-upgrade-319117
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p kubernetes-upgrade-319117: (2.487472115s)
--- PASS: TestKubernetesUpgrade (338.78s)

TestMissingContainerUpgrade (177.73s)

=== RUN   TestMissingContainerUpgrade
=== PAUSE TestMissingContainerUpgrade
=== CONT  TestMissingContainerUpgrade
version_upgrade_test.go:309: (dbg) Run:  /tmp/minikube-v1.26.0.3251321803 start -p missing-upgrade-035288 --memory=2200 --driver=docker  --container-runtime=docker
version_upgrade_test.go:309: (dbg) Done: /tmp/minikube-v1.26.0.3251321803 start -p missing-upgrade-035288 --memory=2200 --driver=docker  --container-runtime=docker: (1m46.941000855s)
version_upgrade_test.go:318: (dbg) Run:  docker stop missing-upgrade-035288
version_upgrade_test.go:318: (dbg) Done: docker stop missing-upgrade-035288: (10.366456506s)
version_upgrade_test.go:323: (dbg) Run:  docker rm missing-upgrade-035288
version_upgrade_test.go:329: (dbg) Run:  out/minikube-linux-amd64 start -p missing-upgrade-035288 --memory=2200 --alsologtostderr -v=1 --driver=docker  --container-runtime=docker
version_upgrade_test.go:329: (dbg) Done: out/minikube-linux-amd64 start -p missing-upgrade-035288 --memory=2200 --alsologtostderr -v=1 --driver=docker  --container-runtime=docker: (55.704273172s)
helpers_test.go:175: Cleaning up "missing-upgrade-035288" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p missing-upgrade-035288
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p missing-upgrade-035288: (2.206678251s)
--- PASS: TestMissingContainerUpgrade (177.73s)

TestNoKubernetes/serial/StartNoK8sWithVersion (0.09s)

=== RUN   TestNoKubernetes/serial/StartNoK8sWithVersion
no_kubernetes_test.go:83: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-955284 --no-kubernetes --kubernetes-version=1.20 --driver=docker  --container-runtime=docker
no_kubernetes_test.go:83: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p NoKubernetes-955284 --no-kubernetes --kubernetes-version=1.20 --driver=docker  --container-runtime=docker: exit status 14 (85.470737ms)
-- stdout --
	* [NoKubernetes-955284] minikube v1.34.0 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=19640
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/19640-5233/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/19640-5233/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
-- /stdout --
** stderr ** 
	X Exiting due to MK_USAGE: cannot specify --kubernetes-version with --no-kubernetes,
	to unset a global config run:
	
	$ minikube config unset kubernetes-version
** /stderr **
--- PASS: TestNoKubernetes/serial/StartNoK8sWithVersion (0.09s)

TestNoKubernetes/serial/StartWithK8s (37.17s)

=== RUN   TestNoKubernetes/serial/StartWithK8s
no_kubernetes_test.go:95: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-955284 --driver=docker  --container-runtime=docker
E0914 00:10:26.674488   12020 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19640-5233/.minikube/profiles/addons-794116/client.crt: no such file or directory" logger="UnhandledError"
no_kubernetes_test.go:95: (dbg) Done: out/minikube-linux-amd64 start -p NoKubernetes-955284 --driver=docker  --container-runtime=docker: (36.776887038s)
no_kubernetes_test.go:200: (dbg) Run:  out/minikube-linux-amd64 -p NoKubernetes-955284 status -o json
--- PASS: TestNoKubernetes/serial/StartWithK8s (37.17s)

TestNoKubernetes/serial/StartWithStopK8s (17.45s)

=== RUN   TestNoKubernetes/serial/StartWithStopK8s
no_kubernetes_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-955284 --no-kubernetes --driver=docker  --container-runtime=docker
no_kubernetes_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p NoKubernetes-955284 --no-kubernetes --driver=docker  --container-runtime=docker: (15.403986913s)
no_kubernetes_test.go:200: (dbg) Run:  out/minikube-linux-amd64 -p NoKubernetes-955284 status -o json
no_kubernetes_test.go:200: (dbg) Non-zero exit: out/minikube-linux-amd64 -p NoKubernetes-955284 status -o json: exit status 2 (301.276824ms)
-- stdout --
	{"Name":"NoKubernetes-955284","Host":"Running","Kubelet":"Stopped","APIServer":"Stopped","Kubeconfig":"Configured","Worker":false}
-- /stdout --
no_kubernetes_test.go:124: (dbg) Run:  out/minikube-linux-amd64 delete -p NoKubernetes-955284
no_kubernetes_test.go:124: (dbg) Done: out/minikube-linux-amd64 delete -p NoKubernetes-955284: (1.743319451s)
--- PASS: TestNoKubernetes/serial/StartWithStopK8s (17.45s)

TestNoKubernetes/serial/Start (10.11s)

=== RUN   TestNoKubernetes/serial/Start
no_kubernetes_test.go:136: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-955284 --no-kubernetes --driver=docker  --container-runtime=docker
no_kubernetes_test.go:136: (dbg) Done: out/minikube-linux-amd64 start -p NoKubernetes-955284 --no-kubernetes --driver=docker  --container-runtime=docker: (10.112527527s)
--- PASS: TestNoKubernetes/serial/Start (10.11s)

TestNoKubernetes/serial/VerifyK8sNotRunning (0.29s)

=== RUN   TestNoKubernetes/serial/VerifyK8sNotRunning
no_kubernetes_test.go:147: (dbg) Run:  out/minikube-linux-amd64 ssh -p NoKubernetes-955284 "sudo systemctl is-active --quiet service kubelet"
no_kubernetes_test.go:147: (dbg) Non-zero exit: out/minikube-linux-amd64 ssh -p NoKubernetes-955284 "sudo systemctl is-active --quiet service kubelet": exit status 1 (292.05441ms)
** stderr ** 
	ssh: Process exited with status 3
** /stderr **
--- PASS: TestNoKubernetes/serial/VerifyK8sNotRunning (0.29s)

TestNoKubernetes/serial/ProfileList (1.31s)

=== RUN   TestNoKubernetes/serial/ProfileList
no_kubernetes_test.go:169: (dbg) Run:  out/minikube-linux-amd64 profile list
no_kubernetes_test.go:179: (dbg) Run:  out/minikube-linux-amd64 profile list --output=json
--- PASS: TestNoKubernetes/serial/ProfileList (1.31s)

TestNoKubernetes/serial/Stop (1.2s)

=== RUN   TestNoKubernetes/serial/Stop
no_kubernetes_test.go:158: (dbg) Run:  out/minikube-linux-amd64 stop -p NoKubernetes-955284
no_kubernetes_test.go:158: (dbg) Done: out/minikube-linux-amd64 stop -p NoKubernetes-955284: (1.197118309s)
--- PASS: TestNoKubernetes/serial/Stop (1.20s)

TestNoKubernetes/serial/StartNoArgs (8.61s)

=== RUN   TestNoKubernetes/serial/StartNoArgs
no_kubernetes_test.go:191: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-955284 --driver=docker  --container-runtime=docker
no_kubernetes_test.go:191: (dbg) Done: out/minikube-linux-amd64 start -p NoKubernetes-955284 --driver=docker  --container-runtime=docker: (8.607559949s)
--- PASS: TestNoKubernetes/serial/StartNoArgs (8.61s)

TestStoppedBinaryUpgrade/Setup (2.49s)

=== RUN   TestStoppedBinaryUpgrade/Setup
--- PASS: TestStoppedBinaryUpgrade/Setup (2.49s)

TestNoKubernetes/serial/VerifyK8sNotRunningSecond (0.27s)

=== RUN   TestNoKubernetes/serial/VerifyK8sNotRunningSecond
no_kubernetes_test.go:147: (dbg) Run:  out/minikube-linux-amd64 ssh -p NoKubernetes-955284 "sudo systemctl is-active --quiet service kubelet"
no_kubernetes_test.go:147: (dbg) Non-zero exit: out/minikube-linux-amd64 ssh -p NoKubernetes-955284 "sudo systemctl is-active --quiet service kubelet": exit status 1 (272.700085ms)
** stderr ** 
	ssh: Process exited with status 3
** /stderr **
--- PASS: TestNoKubernetes/serial/VerifyK8sNotRunningSecond (0.27s)

TestStoppedBinaryUpgrade/Upgrade (156.46s)

=== RUN   TestStoppedBinaryUpgrade/Upgrade
version_upgrade_test.go:183: (dbg) Run:  /tmp/minikube-v1.26.0.1896565681 start -p stopped-upgrade-811775 --memory=2200 --vm-driver=docker  --container-runtime=docker
version_upgrade_test.go:183: (dbg) Done: /tmp/minikube-v1.26.0.1896565681 start -p stopped-upgrade-811775 --memory=2200 --vm-driver=docker  --container-runtime=docker: (1m57.093788418s)
version_upgrade_test.go:192: (dbg) Run:  /tmp/minikube-v1.26.0.1896565681 -p stopped-upgrade-811775 stop
version_upgrade_test.go:192: (dbg) Done: /tmp/minikube-v1.26.0.1896565681 -p stopped-upgrade-811775 stop: (10.798757753s)
version_upgrade_test.go:198: (dbg) Run:  out/minikube-linux-amd64 start -p stopped-upgrade-811775 --memory=2200 --alsologtostderr -v=1 --driver=docker  --container-runtime=docker
version_upgrade_test.go:198: (dbg) Done: out/minikube-linux-amd64 start -p stopped-upgrade-811775 --memory=2200 --alsologtostderr -v=1 --driver=docker  --container-runtime=docker: (28.570024288s)
--- PASS: TestStoppedBinaryUpgrade/Upgrade (156.46s)

TestStoppedBinaryUpgrade/MinikubeLogs (1.53s)

=== RUN   TestStoppedBinaryUpgrade/MinikubeLogs
version_upgrade_test.go:206: (dbg) Run:  out/minikube-linux-amd64 logs -p stopped-upgrade-811775
version_upgrade_test.go:206: (dbg) Done: out/minikube-linux-amd64 logs -p stopped-upgrade-811775: (1.529406707s)
--- PASS: TestStoppedBinaryUpgrade/MinikubeLogs (1.53s)

TestPause/serial/Start (70.37s)

=== RUN   TestPause/serial/Start
pause_test.go:80: (dbg) Run:  out/minikube-linux-amd64 start -p pause-776051 --memory=2048 --install-addons=false --wait=all --driver=docker  --container-runtime=docker
pause_test.go:80: (dbg) Done: out/minikube-linux-amd64 start -p pause-776051 --memory=2048 --install-addons=false --wait=all --driver=docker  --container-runtime=docker: (1m10.366379164s)
--- PASS: TestPause/serial/Start (70.37s)

TestNetworkPlugins/group/auto/Start (69.18s)

=== RUN   TestNetworkPlugins/group/auto/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p auto-250366 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --driver=docker  --container-runtime=docker
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p auto-250366 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --driver=docker  --container-runtime=docker: (1m9.178299964s)
--- PASS: TestNetworkPlugins/group/auto/Start (69.18s)

TestNetworkPlugins/group/flannel/Start (44.59s)

=== RUN   TestNetworkPlugins/group/flannel/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p flannel-250366 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=flannel --driver=docker  --container-runtime=docker
E0914 00:15:17.121795   12020 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19640-5233/.minikube/profiles/skaffold-673987/client.crt: no such file or directory" logger="UnhandledError"
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p flannel-250366 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=flannel --driver=docker  --container-runtime=docker: (44.585289278s)
--- PASS: TestNetworkPlugins/group/flannel/Start (44.59s)

TestPause/serial/SecondStartNoReconfiguration (35.46s)

=== RUN   TestPause/serial/SecondStartNoReconfiguration
pause_test.go:92: (dbg) Run:  out/minikube-linux-amd64 start -p pause-776051 --alsologtostderr -v=1 --driver=docker  --container-runtime=docker
E0914 00:15:26.674360   12020 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19640-5233/.minikube/profiles/addons-794116/client.crt: no such file or directory" logger="UnhandledError"
pause_test.go:92: (dbg) Done: out/minikube-linux-amd64 start -p pause-776051 --alsologtostderr -v=1 --driver=docker  --container-runtime=docker: (35.449748356s)
--- PASS: TestPause/serial/SecondStartNoReconfiguration (35.46s)

TestNetworkPlugins/group/flannel/ControllerPod (6.01s)

=== RUN   TestNetworkPlugins/group/flannel/ControllerPod
net_test.go:120: (dbg) TestNetworkPlugins/group/flannel/ControllerPod: waiting 10m0s for pods matching "app=flannel" in namespace "kube-flannel" ...
helpers_test.go:344: "kube-flannel-ds-hrd6k" [1422a5a1-ded1-4d0a-8c4f-1bc1f2b9bcc4] Running
net_test.go:120: (dbg) TestNetworkPlugins/group/flannel/ControllerPod: app=flannel healthy within 6.004454405s
--- PASS: TestNetworkPlugins/group/flannel/ControllerPod (6.01s)

TestNetworkPlugins/group/flannel/KubeletFlags (0.28s)

=== RUN   TestNetworkPlugins/group/flannel/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p flannel-250366 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/flannel/KubeletFlags (0.28s)

TestNetworkPlugins/group/flannel/NetCatPod (10.21s)

=== RUN   TestNetworkPlugins/group/flannel/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context flannel-250366 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/flannel/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-6fc964789b-fkp7m" [d592aa70-9766-44b7-a71f-4853f2cd52cc] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-6fc964789b-fkp7m" [d592aa70-9766-44b7-a71f-4853f2cd52cc] Running
E0914 00:15:58.083899   12020 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19640-5233/.minikube/profiles/skaffold-673987/client.crt: no such file or directory" logger="UnhandledError"
net_test.go:163: (dbg) TestNetworkPlugins/group/flannel/NetCatPod: app=netcat healthy within 10.005241895s
--- PASS: TestNetworkPlugins/group/flannel/NetCatPod (10.21s)

TestPause/serial/Pause (0.55s)

=== RUN   TestPause/serial/Pause
pause_test.go:110: (dbg) Run:  out/minikube-linux-amd64 pause -p pause-776051 --alsologtostderr -v=5
--- PASS: TestPause/serial/Pause (0.55s)

TestPause/serial/VerifyStatus (0.32s)

=== RUN   TestPause/serial/VerifyStatus
status_test.go:76: (dbg) Run:  out/minikube-linux-amd64 status -p pause-776051 --output=json --layout=cluster
status_test.go:76: (dbg) Non-zero exit: out/minikube-linux-amd64 status -p pause-776051 --output=json --layout=cluster: exit status 2 (316.330872ms)
-- stdout --
	{"Name":"pause-776051","StatusCode":418,"StatusName":"Paused","Step":"Done","StepDetail":"* Paused 12 containers in: kube-system, kubernetes-dashboard, storage-gluster, istio-operator","BinaryVersion":"v1.34.0","Components":{"kubeconfig":{"Name":"kubeconfig","StatusCode":200,"StatusName":"OK"}},"Nodes":[{"Name":"pause-776051","StatusCode":200,"StatusName":"OK","Components":{"apiserver":{"Name":"apiserver","StatusCode":418,"StatusName":"Paused"},"kubelet":{"Name":"kubelet","StatusCode":405,"StatusName":"Stopped"}}}]}
-- /stdout --
--- PASS: TestPause/serial/VerifyStatus (0.32s)

TestPause/serial/Unpause (0.52s)

=== RUN   TestPause/serial/Unpause
pause_test.go:121: (dbg) Run:  out/minikube-linux-amd64 unpause -p pause-776051 --alsologtostderr -v=5
--- PASS: TestPause/serial/Unpause (0.52s)

TestPause/serial/PauseAgain (0.67s)

=== RUN   TestPause/serial/PauseAgain
pause_test.go:110: (dbg) Run:  out/minikube-linux-amd64 pause -p pause-776051 --alsologtostderr -v=5
--- PASS: TestPause/serial/PauseAgain (0.67s)

TestPause/serial/DeletePaused (2.19s)

=== RUN   TestPause/serial/DeletePaused
pause_test.go:132: (dbg) Run:  out/minikube-linux-amd64 delete -p pause-776051 --alsologtostderr -v=5
pause_test.go:132: (dbg) Done: out/minikube-linux-amd64 delete -p pause-776051 --alsologtostderr -v=5: (2.191995462s)
--- PASS: TestPause/serial/DeletePaused (2.19s)

TestNetworkPlugins/group/flannel/DNS (0.18s)

=== RUN   TestNetworkPlugins/group/flannel/DNS
net_test.go:175: (dbg) Run:  kubectl --context flannel-250366 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/flannel/DNS (0.18s)

TestNetworkPlugins/group/flannel/Localhost (0.16s)

=== RUN   TestNetworkPlugins/group/flannel/Localhost
net_test.go:194: (dbg) Run:  kubectl --context flannel-250366 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/flannel/Localhost (0.16s)

TestNetworkPlugins/group/flannel/HairPin (0.12s)

=== RUN   TestNetworkPlugins/group/flannel/HairPin
net_test.go:264: (dbg) Run:  kubectl --context flannel-250366 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/flannel/HairPin (0.12s)

TestPause/serial/VerifyDeletedResources (0.78s)

=== RUN   TestPause/serial/VerifyDeletedResources
pause_test.go:142: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
pause_test.go:168: (dbg) Run:  docker ps -a
pause_test.go:173: (dbg) Run:  docker volume inspect pause-776051
pause_test.go:173: (dbg) Non-zero exit: docker volume inspect pause-776051: exit status 1 (19.773838ms)
-- stdout --
	[]
-- /stdout --
** stderr ** 
	Error response from daemon: get pause-776051: no such volume
** /stderr **
pause_test.go:178: (dbg) Run:  docker network ls
--- PASS: TestPause/serial/VerifyDeletedResources (0.78s)

TestNetworkPlugins/group/enable-default-cni/Start (41.64s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p enable-default-cni-250366 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --enable-default-cni=true --driver=docker  --container-runtime=docker
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p enable-default-cni-250366 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --enable-default-cni=true --driver=docker  --container-runtime=docker: (41.636949977s)
--- PASS: TestNetworkPlugins/group/enable-default-cni/Start (41.64s)

TestNetworkPlugins/group/auto/KubeletFlags (0.31s)

=== RUN   TestNetworkPlugins/group/auto/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p auto-250366 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/auto/KubeletFlags (0.31s)

TestNetworkPlugins/group/auto/NetCatPod (10.54s)

=== RUN   TestNetworkPlugins/group/auto/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context auto-250366 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/auto/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-6fc964789b-pr86w" [7408ab18-a6d9-443b-95b4-613cee0c013b] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-6fc964789b-pr86w" [7408ab18-a6d9-443b-95b4-613cee0c013b] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/auto/NetCatPod: app=netcat healthy within 10.004213447s
--- PASS: TestNetworkPlugins/group/auto/NetCatPod (10.54s)

TestNetworkPlugins/group/auto/DNS (0.2s)

=== RUN   TestNetworkPlugins/group/auto/DNS
net_test.go:175: (dbg) Run:  kubectl --context auto-250366 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/auto/DNS (0.20s)

TestNetworkPlugins/group/auto/Localhost (0.13s)

=== RUN   TestNetworkPlugins/group/auto/Localhost
net_test.go:194: (dbg) Run:  kubectl --context auto-250366 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/auto/Localhost (0.13s)

TestNetworkPlugins/group/auto/HairPin (0.14s)

=== RUN   TestNetworkPlugins/group/auto/HairPin
net_test.go:264: (dbg) Run:  kubectl --context auto-250366 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/auto/HairPin (0.14s)

TestNetworkPlugins/group/bridge/Start (39.72s)

=== RUN   TestNetworkPlugins/group/bridge/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p bridge-250366 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=bridge --driver=docker  --container-runtime=docker
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p bridge-250366 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=bridge --driver=docker  --container-runtime=docker: (39.717326042s)
--- PASS: TestNetworkPlugins/group/bridge/Start (39.72s)

TestNetworkPlugins/group/kubenet/Start (68.54s)

=== RUN   TestNetworkPlugins/group/kubenet/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p kubenet-250366 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --network-plugin=kubenet --driver=docker  --container-runtime=docker
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p kubenet-250366 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --network-plugin=kubenet --driver=docker  --container-runtime=docker: (1m8.543809087s)
--- PASS: TestNetworkPlugins/group/kubenet/Start (68.54s)

TestNetworkPlugins/group/enable-default-cni/KubeletFlags (0.37s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p enable-default-cni-250366 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/enable-default-cni/KubeletFlags (0.37s)

TestNetworkPlugins/group/enable-default-cni/NetCatPod (10.36s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context enable-default-cni-250366 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/enable-default-cni/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-6fc964789b-v7b2q" [794f1246-69ef-43a8-b86d-d69ac8e0dd6e] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-6fc964789b-v7b2q" [794f1246-69ef-43a8-b86d-d69ac8e0dd6e] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/enable-default-cni/NetCatPod: app=netcat healthy within 10.003858931s
--- PASS: TestNetworkPlugins/group/enable-default-cni/NetCatPod (10.36s)

TestNetworkPlugins/group/calico/Start (68.4s)

=== RUN   TestNetworkPlugins/group/calico/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p calico-250366 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=calico --driver=docker  --container-runtime=docker
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p calico-250366 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=calico --driver=docker  --container-runtime=docker: (1m8.399476702s)
--- PASS: TestNetworkPlugins/group/calico/Start (68.40s)

TestNetworkPlugins/group/enable-default-cni/DNS (0.17s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/DNS
net_test.go:175: (dbg) Run:  kubectl --context enable-default-cni-250366 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/enable-default-cni/DNS (0.17s)

TestNetworkPlugins/group/enable-default-cni/Localhost (0.16s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/Localhost
net_test.go:194: (dbg) Run:  kubectl --context enable-default-cni-250366 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/enable-default-cni/Localhost (0.16s)

TestNetworkPlugins/group/enable-default-cni/HairPin (0.13s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/HairPin
net_test.go:264: (dbg) Run:  kubectl --context enable-default-cni-250366 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/enable-default-cni/HairPin (0.13s)

TestNetworkPlugins/group/bridge/KubeletFlags (0.35s)

=== RUN   TestNetworkPlugins/group/bridge/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p bridge-250366 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/bridge/KubeletFlags (0.35s)

TestNetworkPlugins/group/bridge/NetCatPod (10.21s)

=== RUN   TestNetworkPlugins/group/bridge/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context bridge-250366 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/bridge/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-6fc964789b-gb4tp" [51de473d-9dea-47a2-8805-e49151cc432b] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-6fc964789b-gb4tp" [51de473d-9dea-47a2-8805-e49151cc432b] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/bridge/NetCatPod: app=netcat healthy within 10.003742427s
--- PASS: TestNetworkPlugins/group/bridge/NetCatPod (10.21s)

TestNetworkPlugins/group/bridge/DNS (0.2s)

=== RUN   TestNetworkPlugins/group/bridge/DNS
net_test.go:175: (dbg) Run:  kubectl --context bridge-250366 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/bridge/DNS (0.20s)

TestNetworkPlugins/group/bridge/Localhost (0.15s)

=== RUN   TestNetworkPlugins/group/bridge/Localhost
net_test.go:194: (dbg) Run:  kubectl --context bridge-250366 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/bridge/Localhost (0.15s)

TestNetworkPlugins/group/bridge/HairPin (0.19s)

=== RUN   TestNetworkPlugins/group/bridge/HairPin
net_test.go:264: (dbg) Run:  kubectl --context bridge-250366 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/bridge/HairPin (0.19s)

TestNetworkPlugins/group/kindnet/Start (63.22s)

=== RUN   TestNetworkPlugins/group/kindnet/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p kindnet-250366 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=kindnet --driver=docker  --container-runtime=docker
E0914 00:17:20.005265   12020 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19640-5233/.minikube/profiles/skaffold-673987/client.crt: no such file or directory" logger="UnhandledError"
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p kindnet-250366 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=kindnet --driver=docker  --container-runtime=docker: (1m3.219652112s)
--- PASS: TestNetworkPlugins/group/kindnet/Start (63.22s)

TestNetworkPlugins/group/custom-flannel/Start (48.79s)

=== RUN   TestNetworkPlugins/group/custom-flannel/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p custom-flannel-250366 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=testdata/kube-flannel.yaml --driver=docker  --container-runtime=docker
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p custom-flannel-250366 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=testdata/kube-flannel.yaml --driver=docker  --container-runtime=docker: (48.78772971s)
--- PASS: TestNetworkPlugins/group/custom-flannel/Start (48.79s)

TestNetworkPlugins/group/kubenet/KubeletFlags (0.34s)

=== RUN   TestNetworkPlugins/group/kubenet/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p kubenet-250366 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/kubenet/KubeletFlags (0.34s)

TestNetworkPlugins/group/kubenet/NetCatPod (10.28s)

=== RUN   TestNetworkPlugins/group/kubenet/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context kubenet-250366 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/kubenet/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-6fc964789b-t9vfk" [af271a72-41ca-48f2-9880-edbc67a166d5] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-6fc964789b-t9vfk" [af271a72-41ca-48f2-9880-edbc67a166d5] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/kubenet/NetCatPod: app=netcat healthy within 10.003916399s
--- PASS: TestNetworkPlugins/group/kubenet/NetCatPod (10.28s)

TestNetworkPlugins/group/kubenet/DNS (0.18s)

=== RUN   TestNetworkPlugins/group/kubenet/DNS
net_test.go:175: (dbg) Run:  kubectl --context kubenet-250366 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/kubenet/DNS (0.18s)

TestNetworkPlugins/group/kubenet/Localhost (0.14s)

=== RUN   TestNetworkPlugins/group/kubenet/Localhost
net_test.go:194: (dbg) Run:  kubectl --context kubenet-250366 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/kubenet/Localhost (0.14s)

TestNetworkPlugins/group/kubenet/HairPin (0.14s)

=== RUN   TestNetworkPlugins/group/kubenet/HairPin
net_test.go:264: (dbg) Run:  kubectl --context kubenet-250366 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/kubenet/HairPin (0.14s)

TestNetworkPlugins/group/calico/ControllerPod (6.01s)

=== RUN   TestNetworkPlugins/group/calico/ControllerPod
net_test.go:120: (dbg) TestNetworkPlugins/group/calico/ControllerPod: waiting 10m0s for pods matching "k8s-app=calico-node" in namespace "kube-system" ...
helpers_test.go:344: "calico-node-rwxfg" [fe995bc6-b618-450c-96a5-8ff5af4d7e01] Running
net_test.go:120: (dbg) TestNetworkPlugins/group/calico/ControllerPod: k8s-app=calico-node healthy within 6.012867287s
--- PASS: TestNetworkPlugins/group/calico/ControllerPod (6.01s)

TestNetworkPlugins/group/calico/KubeletFlags (0.34s)

=== RUN   TestNetworkPlugins/group/calico/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p calico-250366 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/calico/KubeletFlags (0.34s)

TestNetworkPlugins/group/calico/NetCatPod (11.25s)

=== RUN   TestNetworkPlugins/group/calico/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context calico-250366 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/calico/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-6fc964789b-nfmrd" [351ad5e0-fda7-4ed5-8145-b0e42e1df958] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-6fc964789b-nfmrd" [351ad5e0-fda7-4ed5-8145-b0e42e1df958] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/calico/NetCatPod: app=netcat healthy within 11.014537566s
--- PASS: TestNetworkPlugins/group/calico/NetCatPod (11.25s)

TestNetworkPlugins/group/kindnet/ControllerPod (6.01s)

=== RUN   TestNetworkPlugins/group/kindnet/ControllerPod
net_test.go:120: (dbg) TestNetworkPlugins/group/kindnet/ControllerPod: waiting 10m0s for pods matching "app=kindnet" in namespace "kube-system" ...
helpers_test.go:344: "kindnet-ttwsx" [cd6c9b31-e488-4215-87ae-507896c4296c] Running
net_test.go:120: (dbg) TestNetworkPlugins/group/kindnet/ControllerPod: app=kindnet healthy within 6.004069253s
--- PASS: TestNetworkPlugins/group/kindnet/ControllerPod (6.01s)

TestNetworkPlugins/group/false/Start (62.93s)

=== RUN   TestNetworkPlugins/group/false/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p false-250366 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=false --driver=docker  --container-runtime=docker
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p false-250366 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=false --driver=docker  --container-runtime=docker: (1m2.926638582s)
--- PASS: TestNetworkPlugins/group/false/Start (62.93s)

TestNetworkPlugins/group/calico/DNS (0.15s)

=== RUN   TestNetworkPlugins/group/calico/DNS
net_test.go:175: (dbg) Run:  kubectl --context calico-250366 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/calico/DNS (0.15s)

TestNetworkPlugins/group/calico/Localhost (0.13s)

=== RUN   TestNetworkPlugins/group/calico/Localhost
net_test.go:194: (dbg) Run:  kubectl --context calico-250366 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/calico/Localhost (0.13s)

TestNetworkPlugins/group/calico/HairPin (0.13s)

=== RUN   TestNetworkPlugins/group/calico/HairPin
net_test.go:264: (dbg) Run:  kubectl --context calico-250366 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/calico/HairPin (0.13s)

TestNetworkPlugins/group/custom-flannel/KubeletFlags (0.3s)

=== RUN   TestNetworkPlugins/group/custom-flannel/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p custom-flannel-250366 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/custom-flannel/KubeletFlags (0.30s)

TestNetworkPlugins/group/custom-flannel/NetCatPod (12.2s)

=== RUN   TestNetworkPlugins/group/custom-flannel/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context custom-flannel-250366 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/custom-flannel/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-6fc964789b-nfkfh" [ad794f5c-ff7b-40fb-9465-1fb4665010e3] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-6fc964789b-nfkfh" [ad794f5c-ff7b-40fb-9465-1fb4665010e3] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/custom-flannel/NetCatPod: app=netcat healthy within 12.004455216s
--- PASS: TestNetworkPlugins/group/custom-flannel/NetCatPod (12.20s)

TestNetworkPlugins/group/kindnet/KubeletFlags (0.49s)

=== RUN   TestNetworkPlugins/group/kindnet/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p kindnet-250366 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/kindnet/KubeletFlags (0.49s)

TestNetworkPlugins/group/kindnet/NetCatPod (9.65s)

=== RUN   TestNetworkPlugins/group/kindnet/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context kindnet-250366 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/kindnet/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-6fc964789b-2gl9j" [e56c2165-3a9b-4330-8fd9-2e545abb9a2d] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
E0914 00:18:29.743662   12020 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19640-5233/.minikube/profiles/addons-794116/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:344: "netcat-6fc964789b-2gl9j" [e56c2165-3a9b-4330-8fd9-2e545abb9a2d] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/kindnet/NetCatPod: app=netcat healthy within 9.004400759s
--- PASS: TestNetworkPlugins/group/kindnet/NetCatPod (9.65s)

TestNetworkPlugins/group/custom-flannel/DNS (0.15s)

=== RUN   TestNetworkPlugins/group/custom-flannel/DNS
net_test.go:175: (dbg) Run:  kubectl --context custom-flannel-250366 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/custom-flannel/DNS (0.15s)

TestNetworkPlugins/group/custom-flannel/Localhost (0.17s)

=== RUN   TestNetworkPlugins/group/custom-flannel/Localhost
net_test.go:194: (dbg) Run:  kubectl --context custom-flannel-250366 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/custom-flannel/Localhost (0.17s)

TestNetworkPlugins/group/custom-flannel/HairPin (0.12s)

=== RUN   TestNetworkPlugins/group/custom-flannel/HairPin
net_test.go:264: (dbg) Run:  kubectl --context custom-flannel-250366 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/custom-flannel/HairPin (0.12s)

TestNetworkPlugins/group/kindnet/DNS (0.15s)

=== RUN   TestNetworkPlugins/group/kindnet/DNS
net_test.go:175: (dbg) Run:  kubectl --context kindnet-250366 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/kindnet/DNS (0.15s)

TestNetworkPlugins/group/kindnet/Localhost (0.13s)

=== RUN   TestNetworkPlugins/group/kindnet/Localhost
net_test.go:194: (dbg) Run:  kubectl --context kindnet-250366 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/kindnet/Localhost (0.13s)

TestNetworkPlugins/group/kindnet/HairPin (0.12s)

=== RUN   TestNetworkPlugins/group/kindnet/HairPin
net_test.go:264: (dbg) Run:  kubectl --context kindnet-250366 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/kindnet/HairPin (0.12s)

TestStartStop/group/old-k8s-version/serial/FirstStart (134.42s)

=== RUN   TestStartStop/group/old-k8s-version/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-linux-amd64 start -p old-k8s-version-112477 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=docker --kubernetes-version=v1.20.0
start_stop_delete_test.go:186: (dbg) Done: out/minikube-linux-amd64 start -p old-k8s-version-112477 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=docker --kubernetes-version=v1.20.0: (2m14.418271915s)
--- PASS: TestStartStop/group/old-k8s-version/serial/FirstStart (134.42s)

TestStartStop/group/no-preload/serial/FirstStart (78.66s)

=== RUN   TestStartStop/group/no-preload/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-linux-amd64 start -p no-preload-083726 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=docker --kubernetes-version=v1.31.1
start_stop_delete_test.go:186: (dbg) Done: out/minikube-linux-amd64 start -p no-preload-083726 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=docker --kubernetes-version=v1.31.1: (1m18.66213847s)
--- PASS: TestStartStop/group/no-preload/serial/FirstStart (78.66s)

TestStartStop/group/embed-certs/serial/FirstStart (71.81s)

=== RUN   TestStartStop/group/embed-certs/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-linux-amd64 start -p embed-certs-117540 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=docker --kubernetes-version=v1.31.1
start_stop_delete_test.go:186: (dbg) Done: out/minikube-linux-amd64 start -p embed-certs-117540 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=docker --kubernetes-version=v1.31.1: (1m11.812293113s)
--- PASS: TestStartStop/group/embed-certs/serial/FirstStart (71.81s)

TestNetworkPlugins/group/false/KubeletFlags (0.35s)

=== RUN   TestNetworkPlugins/group/false/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p false-250366 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/false/KubeletFlags (0.35s)

TestNetworkPlugins/group/false/NetCatPod (9.22s)

=== RUN   TestNetworkPlugins/group/false/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context false-250366 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/false/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-6fc964789b-h57t6" [d31478cf-d5a5-4287-b57c-f3739e343b6c] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-6fc964789b-h57t6" [d31478cf-d5a5-4287-b57c-f3739e343b6c] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/false/NetCatPod: app=netcat healthy within 9.004256542s
--- PASS: TestNetworkPlugins/group/false/NetCatPod (9.22s)

TestNetworkPlugins/group/false/DNS (0.17s)

=== RUN   TestNetworkPlugins/group/false/DNS
net_test.go:175: (dbg) Run:  kubectl --context false-250366 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/false/DNS (0.17s)

TestNetworkPlugins/group/false/Localhost (0.17s)

=== RUN   TestNetworkPlugins/group/false/Localhost
net_test.go:194: (dbg) Run:  kubectl --context false-250366 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/false/Localhost (0.17s)

TestNetworkPlugins/group/false/HairPin (0.16s)

=== RUN   TestNetworkPlugins/group/false/HairPin
net_test.go:264: (dbg) Run:  kubectl --context false-250366 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/false/HairPin (0.16s)
E0914 00:24:46.186443   12020 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19640-5233/.minikube/profiles/custom-flannel-250366/client.crt: no such file or directory" logger="UnhandledError"
E0914 00:24:46.820231   12020 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19640-5233/.minikube/profiles/false-250366/client.crt: no such file or directory" logger="UnhandledError"
E0914 00:24:48.050778   12020 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19640-5233/.minikube/profiles/bridge-250366/client.crt: no such file or directory" logger="UnhandledError"

TestStartStop/group/default-k8s-diff-port/serial/FirstStart (68.27s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-linux-amd64 start -p default-k8s-diff-port-842652 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=docker --kubernetes-version=v1.31.1
E0914 00:20:03.847225   12020 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19640-5233/.minikube/profiles/skaffold-673987/client.crt: no such file or directory" logger="UnhandledError"
start_stop_delete_test.go:186: (dbg) Done: out/minikube-linux-amd64 start -p default-k8s-diff-port-842652 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=docker --kubernetes-version=v1.31.1: (1m8.265212003s)
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/FirstStart (68.27s)

TestStartStop/group/embed-certs/serial/DeployApp (9.26s)

=== RUN   TestStartStop/group/embed-certs/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context embed-certs-117540 create -f testdata/busybox.yaml
start_stop_delete_test.go:196: (dbg) TestStartStop/group/embed-certs/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:344: "busybox" [5be374a5-feb5-4909-93fe-e826d6fb753f] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:344: "busybox" [5be374a5-feb5-4909-93fe-e826d6fb753f] Running
start_stop_delete_test.go:196: (dbg) TestStartStop/group/embed-certs/serial/DeployApp: integration-test=busybox healthy within 9.00413038s
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context embed-certs-117540 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/embed-certs/serial/DeployApp (9.26s)

TestStartStop/group/no-preload/serial/DeployApp (10.24s)

=== RUN   TestStartStop/group/no-preload/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context no-preload-083726 create -f testdata/busybox.yaml
start_stop_delete_test.go:196: (dbg) TestStartStop/group/no-preload/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:344: "busybox" [c44eb455-fcea-4083-91b1-72fde8517fc5] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:344: "busybox" [c44eb455-fcea-4083-91b1-72fde8517fc5] Running
start_stop_delete_test.go:196: (dbg) TestStartStop/group/no-preload/serial/DeployApp: integration-test=busybox healthy within 10.003161174s
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context no-preload-083726 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/no-preload/serial/DeployApp (10.24s)

TestStartStop/group/embed-certs/serial/EnableAddonWhileActive (0.92s)

=== RUN   TestStartStop/group/embed-certs/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p embed-certs-117540 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context embed-certs-117540 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/embed-certs/serial/EnableAddonWhileActive (0.92s)

TestStartStop/group/embed-certs/serial/Stop (10.68s)

=== RUN   TestStartStop/group/embed-certs/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-linux-amd64 stop -p embed-certs-117540 --alsologtostderr -v=3
E0914 00:20:26.674244   12020 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19640-5233/.minikube/profiles/addons-794116/client.crt: no such file or directory" logger="UnhandledError"
start_stop_delete_test.go:228: (dbg) Done: out/minikube-linux-amd64 stop -p embed-certs-117540 --alsologtostderr -v=3: (10.682368106s)
--- PASS: TestStartStop/group/embed-certs/serial/Stop (10.68s)

TestStartStop/group/no-preload/serial/EnableAddonWhileActive (0.96s)

=== RUN   TestStartStop/group/no-preload/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p no-preload-083726 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context no-preload-083726 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/no-preload/serial/EnableAddonWhileActive (0.96s)

TestStartStop/group/no-preload/serial/Stop (10.67s)

=== RUN   TestStartStop/group/no-preload/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-linux-amd64 stop -p no-preload-083726 --alsologtostderr -v=3
start_stop_delete_test.go:228: (dbg) Done: out/minikube-linux-amd64 stop -p no-preload-083726 --alsologtostderr -v=3: (10.674477319s)
--- PASS: TestStartStop/group/no-preload/serial/Stop (10.67s)

TestStartStop/group/embed-certs/serial/EnableAddonAfterStop (0.18s)

=== RUN   TestStartStop/group/embed-certs/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-117540 -n embed-certs-117540
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-117540 -n embed-certs-117540: exit status 7 (71.453998ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p embed-certs-117540 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/embed-certs/serial/EnableAddonAfterStop (0.18s)

TestStartStop/group/embed-certs/serial/SecondStart (263.06s)

=== RUN   TestStartStop/group/embed-certs/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-linux-amd64 start -p embed-certs-117540 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=docker --kubernetes-version=v1.31.1
start_stop_delete_test.go:256: (dbg) Done: out/minikube-linux-amd64 start -p embed-certs-117540 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=docker --kubernetes-version=v1.31.1: (4m22.768222628s)
start_stop_delete_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-117540 -n embed-certs-117540
--- PASS: TestStartStop/group/embed-certs/serial/SecondStart (263.06s)

TestStartStop/group/no-preload/serial/EnableAddonAfterStop (0.27s)

=== RUN   TestStartStop/group/no-preload/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-083726 -n no-preload-083726
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-083726 -n no-preload-083726: exit status 7 (147.863673ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p no-preload-083726 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/no-preload/serial/EnableAddonAfterStop (0.27s)

TestStartStop/group/no-preload/serial/SecondStart (270.68s)

=== RUN   TestStartStop/group/no-preload/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-linux-amd64 start -p no-preload-083726 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=docker --kubernetes-version=v1.31.1
E0914 00:20:45.648609   12020 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19640-5233/.minikube/profiles/flannel-250366/client.crt: no such file or directory" logger="UnhandledError"
E0914 00:20:45.655018   12020 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19640-5233/.minikube/profiles/flannel-250366/client.crt: no such file or directory" logger="UnhandledError"
E0914 00:20:45.666651   12020 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19640-5233/.minikube/profiles/flannel-250366/client.crt: no such file or directory" logger="UnhandledError"
E0914 00:20:45.688661   12020 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19640-5233/.minikube/profiles/flannel-250366/client.crt: no such file or directory" logger="UnhandledError"
E0914 00:20:45.731007   12020 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19640-5233/.minikube/profiles/flannel-250366/client.crt: no such file or directory" logger="UnhandledError"
E0914 00:20:45.812628   12020 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19640-5233/.minikube/profiles/flannel-250366/client.crt: no such file or directory" logger="UnhandledError"
E0914 00:20:45.974386   12020 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19640-5233/.minikube/profiles/flannel-250366/client.crt: no such file or directory" logger="UnhandledError"
E0914 00:20:46.296153   12020 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19640-5233/.minikube/profiles/flannel-250366/client.crt: no such file or directory" logger="UnhandledError"
E0914 00:20:46.938219   12020 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19640-5233/.minikube/profiles/flannel-250366/client.crt: no such file or directory" logger="UnhandledError"
E0914 00:20:48.219760   12020 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19640-5233/.minikube/profiles/flannel-250366/client.crt: no such file or directory" logger="UnhandledError"
E0914 00:20:50.781726   12020 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19640-5233/.minikube/profiles/flannel-250366/client.crt: no such file or directory" logger="UnhandledError"
E0914 00:20:55.903420   12020 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19640-5233/.minikube/profiles/flannel-250366/client.crt: no such file or directory" logger="UnhandledError"
start_stop_delete_test.go:256: (dbg) Done: out/minikube-linux-amd64 start -p no-preload-083726 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=docker --kubernetes-version=v1.31.1: (4m30.313466372s)
start_stop_delete_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-083726 -n no-preload-083726
--- PASS: TestStartStop/group/no-preload/serial/SecondStart (270.68s)

TestStartStop/group/old-k8s-version/serial/DeployApp (9.4s)

=== RUN   TestStartStop/group/old-k8s-version/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context old-k8s-version-112477 create -f testdata/busybox.yaml
start_stop_delete_test.go:196: (dbg) TestStartStop/group/old-k8s-version/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:344: "busybox" [8d9749e1-a440-44d8-994b-c6e3d3c8a2cf] Pending
helpers_test.go:344: "busybox" [8d9749e1-a440-44d8-994b-c6e3d3c8a2cf] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:344: "busybox" [8d9749e1-a440-44d8-994b-c6e3d3c8a2cf] Running
start_stop_delete_test.go:196: (dbg) TestStartStop/group/old-k8s-version/serial/DeployApp: integration-test=busybox healthy within 9.004202266s
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context old-k8s-version-112477 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/old-k8s-version/serial/DeployApp (9.40s)

TestStartStop/group/default-k8s-diff-port/serial/DeployApp (9.31s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context default-k8s-diff-port-842652 create -f testdata/busybox.yaml
start_stop_delete_test.go:196: (dbg) TestStartStop/group/default-k8s-diff-port/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:344: "busybox" [b4594683-4e0a-4a70-97bf-5800946edd02] Pending
helpers_test.go:344: "busybox" [b4594683-4e0a-4a70-97bf-5800946edd02] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
E0914 00:21:06.145469   12020 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19640-5233/.minikube/profiles/flannel-250366/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:344: "busybox" [b4594683-4e0a-4a70-97bf-5800946edd02] Running
start_stop_delete_test.go:196: (dbg) TestStartStop/group/default-k8s-diff-port/serial/DeployApp: integration-test=busybox healthy within 9.004432645s
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context default-k8s-diff-port-842652 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/DeployApp (9.31s)

TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive (0.83s)

=== RUN   TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p old-k8s-version-112477 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
E0914 00:21:08.411887   12020 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19640-5233/.minikube/profiles/functional-657132/client.crt: no such file or directory" logger="UnhandledError"
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context old-k8s-version-112477 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive (0.83s)

TestStartStop/group/old-k8s-version/serial/Stop (10.77s)

=== RUN   TestStartStop/group/old-k8s-version/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-linux-amd64 stop -p old-k8s-version-112477 --alsologtostderr -v=3
E0914 00:21:10.238934   12020 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19640-5233/.minikube/profiles/auto-250366/client.crt: no such file or directory" logger="UnhandledError"
E0914 00:21:10.245316   12020 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19640-5233/.minikube/profiles/auto-250366/client.crt: no such file or directory" logger="UnhandledError"
E0914 00:21:10.256735   12020 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19640-5233/.minikube/profiles/auto-250366/client.crt: no such file or directory" logger="UnhandledError"
E0914 00:21:10.278202   12020 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19640-5233/.minikube/profiles/auto-250366/client.crt: no such file or directory" logger="UnhandledError"
E0914 00:21:10.319598   12020 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19640-5233/.minikube/profiles/auto-250366/client.crt: no such file or directory" logger="UnhandledError"
E0914 00:21:10.401065   12020 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19640-5233/.minikube/profiles/auto-250366/client.crt: no such file or directory" logger="UnhandledError"
E0914 00:21:10.562595   12020 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19640-5233/.minikube/profiles/auto-250366/client.crt: no such file or directory" logger="UnhandledError"
E0914 00:21:10.884229   12020 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19640-5233/.minikube/profiles/auto-250366/client.crt: no such file or directory" logger="UnhandledError"
E0914 00:21:11.526384   12020 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19640-5233/.minikube/profiles/auto-250366/client.crt: no such file or directory" logger="UnhandledError"
start_stop_delete_test.go:228: (dbg) Done: out/minikube-linux-amd64 stop -p old-k8s-version-112477 --alsologtostderr -v=3: (10.774206087s)
--- PASS: TestStartStop/group/old-k8s-version/serial/Stop (10.77s)

TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive (0.87s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p default-k8s-diff-port-842652 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
E0914 00:21:12.808636   12020 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19640-5233/.minikube/profiles/auto-250366/client.crt: no such file or directory" logger="UnhandledError"
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context default-k8s-diff-port-842652 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive (0.87s)

TestStartStop/group/default-k8s-diff-port/serial/Stop (10.7s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-linux-amd64 stop -p default-k8s-diff-port-842652 --alsologtostderr -v=3
E0914 00:21:15.370767   12020 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19640-5233/.minikube/profiles/auto-250366/client.crt: no such file or directory" logger="UnhandledError"
start_stop_delete_test.go:228: (dbg) Done: out/minikube-linux-amd64 stop -p default-k8s-diff-port-842652 --alsologtostderr -v=3: (10.700478245s)
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/Stop (10.70s)

TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop (0.18s)

=== RUN   TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-112477 -n old-k8s-version-112477
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-112477 -n old-k8s-version-112477: exit status 7 (68.674044ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p old-k8s-version-112477 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop (0.18s)

TestStartStop/group/old-k8s-version/serial/SecondStart (135.48s)

=== RUN   TestStartStop/group/old-k8s-version/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-linux-amd64 start -p old-k8s-version-112477 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=docker --kubernetes-version=v1.20.0
E0914 00:21:20.492711   12020 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19640-5233/.minikube/profiles/auto-250366/client.crt: no such file or directory" logger="UnhandledError"
start_stop_delete_test.go:256: (dbg) Done: out/minikube-linux-amd64 start -p old-k8s-version-112477 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=docker --kubernetes-version=v1.20.0: (2m15.159889825s)
start_stop_delete_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-112477 -n old-k8s-version-112477
--- PASS: TestStartStop/group/old-k8s-version/serial/SecondStart (135.48s)

TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop (0.26s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-842652 -n default-k8s-diff-port-842652
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-842652 -n default-k8s-diff-port-842652: exit status 7 (140.46352ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p default-k8s-diff-port-842652 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop (0.26s)

TestStartStop/group/default-k8s-diff-port/serial/SecondStart (265.64s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-linux-amd64 start -p default-k8s-diff-port-842652 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=docker --kubernetes-version=v1.31.1
E0914 00:21:26.626853   12020 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19640-5233/.minikube/profiles/flannel-250366/client.crt: no such file or directory" logger="UnhandledError"
E0914 00:21:30.734650   12020 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19640-5233/.minikube/profiles/auto-250366/client.crt: no such file or directory" logger="UnhandledError"
E0914 00:21:47.334721   12020 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19640-5233/.minikube/profiles/enable-default-cni-250366/client.crt: no such file or directory" logger="UnhandledError"
E0914 00:21:47.341139   12020 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19640-5233/.minikube/profiles/enable-default-cni-250366/client.crt: no such file or directory" logger="UnhandledError"
E0914 00:21:47.352553   12020 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19640-5233/.minikube/profiles/enable-default-cni-250366/client.crt: no such file or directory" logger="UnhandledError"
E0914 00:21:47.373980   12020 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19640-5233/.minikube/profiles/enable-default-cni-250366/client.crt: no such file or directory" logger="UnhandledError"
E0914 00:21:47.415420   12020 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19640-5233/.minikube/profiles/enable-default-cni-250366/client.crt: no such file or directory" logger="UnhandledError"
E0914 00:21:47.496884   12020 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19640-5233/.minikube/profiles/enable-default-cni-250366/client.crt: no such file or directory" logger="UnhandledError"
E0914 00:21:47.658417   12020 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19640-5233/.minikube/profiles/enable-default-cni-250366/client.crt: no such file or directory" logger="UnhandledError"
E0914 00:21:47.980223   12020 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19640-5233/.minikube/profiles/enable-default-cni-250366/client.crt: no such file or directory" logger="UnhandledError"
E0914 00:21:48.622253   12020 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19640-5233/.minikube/profiles/enable-default-cni-250366/client.crt: no such file or directory" logger="UnhandledError"
E0914 00:21:49.903661   12020 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19640-5233/.minikube/profiles/enable-default-cni-250366/client.crt: no such file or directory" logger="UnhandledError"
E0914 00:21:51.217012   12020 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19640-5233/.minikube/profiles/auto-250366/client.crt: no such file or directory" logger="UnhandledError"
E0914 00:21:52.465333   12020 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19640-5233/.minikube/profiles/enable-default-cni-250366/client.crt: no such file or directory" logger="UnhandledError"
E0914 00:21:57.587219   12020 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19640-5233/.minikube/profiles/enable-default-cni-250366/client.crt: no such file or directory" logger="UnhandledError"
E0914 00:22:04.190106   12020 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19640-5233/.minikube/profiles/bridge-250366/client.crt: no such file or directory" logger="UnhandledError"
E0914 00:22:04.196554   12020 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19640-5233/.minikube/profiles/bridge-250366/client.crt: no such file or directory" logger="UnhandledError"
E0914 00:22:04.207993   12020 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19640-5233/.minikube/profiles/bridge-250366/client.crt: no such file or directory" logger="UnhandledError"
E0914 00:22:04.229430   12020 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19640-5233/.minikube/profiles/bridge-250366/client.crt: no such file or directory" logger="UnhandledError"
E0914 00:22:04.271006   12020 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19640-5233/.minikube/profiles/bridge-250366/client.crt: no such file or directory" logger="UnhandledError"
E0914 00:22:04.352463   12020 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19640-5233/.minikube/profiles/bridge-250366/client.crt: no such file or directory" logger="UnhandledError"
E0914 00:22:04.514219   12020 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19640-5233/.minikube/profiles/bridge-250366/client.crt: no such file or directory" logger="UnhandledError"
E0914 00:22:04.835773   12020 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19640-5233/.minikube/profiles/bridge-250366/client.crt: no such file or directory" logger="UnhandledError"
E0914 00:22:05.477451   12020 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19640-5233/.minikube/profiles/bridge-250366/client.crt: no such file or directory" logger="UnhandledError"
E0914 00:22:06.759500   12020 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19640-5233/.minikube/profiles/bridge-250366/client.crt: no such file or directory" logger="UnhandledError"
E0914 00:22:07.588790   12020 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19640-5233/.minikube/profiles/flannel-250366/client.crt: no such file or directory" logger="UnhandledError"
E0914 00:22:07.829450   12020 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19640-5233/.minikube/profiles/enable-default-cni-250366/client.crt: no such file or directory" logger="UnhandledError"
E0914 00:22:09.321387   12020 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19640-5233/.minikube/profiles/bridge-250366/client.crt: no such file or directory" logger="UnhandledError"
E0914 00:22:14.443307   12020 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19640-5233/.minikube/profiles/bridge-250366/client.crt: no such file or directory" logger="UnhandledError"
E0914 00:22:24.685325   12020 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19640-5233/.minikube/profiles/bridge-250366/client.crt: no such file or directory" logger="UnhandledError"
E0914 00:22:28.311460   12020 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19640-5233/.minikube/profiles/enable-default-cni-250366/client.crt: no such file or directory" logger="UnhandledError"
E0914 00:22:32.178852   12020 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19640-5233/.minikube/profiles/auto-250366/client.crt: no such file or directory" logger="UnhandledError"
E0914 00:22:45.167173   12020 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19640-5233/.minikube/profiles/bridge-250366/client.crt: no such file or directory" logger="UnhandledError"
E0914 00:22:51.214851   12020 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19640-5233/.minikube/profiles/kubenet-250366/client.crt: no such file or directory" logger="UnhandledError"
E0914 00:22:51.221329   12020 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19640-5233/.minikube/profiles/kubenet-250366/client.crt: no such file or directory" logger="UnhandledError"
E0914 00:22:51.232876   12020 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19640-5233/.minikube/profiles/kubenet-250366/client.crt: no such file or directory" logger="UnhandledError"
E0914 00:22:51.254378   12020 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19640-5233/.minikube/profiles/kubenet-250366/client.crt: no such file or directory" logger="UnhandledError"
E0914 00:22:51.295855   12020 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19640-5233/.minikube/profiles/kubenet-250366/client.crt: no such file or directory" logger="UnhandledError"
E0914 00:22:51.377342   12020 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19640-5233/.minikube/profiles/kubenet-250366/client.crt: no such file or directory" logger="UnhandledError"
E0914 00:22:51.539058   12020 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19640-5233/.minikube/profiles/kubenet-250366/client.crt: no such file or directory" logger="UnhandledError"
E0914 00:22:51.860764   12020 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19640-5233/.minikube/profiles/kubenet-250366/client.crt: no such file or directory" logger="UnhandledError"
E0914 00:22:52.503132   12020 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19640-5233/.minikube/profiles/kubenet-250366/client.crt: no such file or directory" logger="UnhandledError"
E0914 00:22:53.785504   12020 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19640-5233/.minikube/profiles/kubenet-250366/client.crt: no such file or directory" logger="UnhandledError"
E0914 00:22:56.347643   12020 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19640-5233/.minikube/profiles/kubenet-250366/client.crt: no such file or directory" logger="UnhandledError"
E0914 00:23:01.469140   12020 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19640-5233/.minikube/profiles/kubenet-250366/client.crt: no such file or directory" logger="UnhandledError"
E0914 00:23:05.347360   12020 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19640-5233/.minikube/profiles/functional-657132/client.crt: no such file or directory" logger="UnhandledError"
E0914 00:23:05.358069   12020 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19640-5233/.minikube/profiles/calico-250366/client.crt: no such file or directory" logger="UnhandledError"
E0914 00:23:05.364510   12020 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19640-5233/.minikube/profiles/calico-250366/client.crt: no such file or directory" logger="UnhandledError"
E0914 00:23:05.375951   12020 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19640-5233/.minikube/profiles/calico-250366/client.crt: no such file or directory" logger="UnhandledError"
E0914 00:23:05.397276   12020 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19640-5233/.minikube/profiles/calico-250366/client.crt: no such file or directory" logger="UnhandledError"
E0914 00:23:05.438775   12020 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19640-5233/.minikube/profiles/calico-250366/client.crt: no such file or directory" logger="UnhandledError"
E0914 00:23:05.520209   12020 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19640-5233/.minikube/profiles/calico-250366/client.crt: no such file or directory" logger="UnhandledError"
E0914 00:23:05.681874   12020 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19640-5233/.minikube/profiles/calico-250366/client.crt: no such file or directory" logger="UnhandledError"
E0914 00:23:06.003380   12020 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19640-5233/.minikube/profiles/calico-250366/client.crt: no such file or directory" logger="UnhandledError"
E0914 00:23:06.645419   12020 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19640-5233/.minikube/profiles/calico-250366/client.crt: no such file or directory" logger="UnhandledError"
E0914 00:23:07.926865   12020 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19640-5233/.minikube/profiles/calico-250366/client.crt: no such file or directory" logger="UnhandledError"
E0914 00:23:09.273254   12020 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19640-5233/.minikube/profiles/enable-default-cni-250366/client.crt: no such file or directory" logger="UnhandledError"
E0914 00:23:10.488673   12020 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19640-5233/.minikube/profiles/calico-250366/client.crt: no such file or directory" logger="UnhandledError"
E0914 00:23:11.711229   12020 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19640-5233/.minikube/profiles/kubenet-250366/client.crt: no such file or directory" logger="UnhandledError"
E0914 00:23:15.610162   12020 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19640-5233/.minikube/profiles/calico-250366/client.crt: no such file or directory" logger="UnhandledError"
E0914 00:23:21.382116   12020 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19640-5233/.minikube/profiles/kindnet-250366/client.crt: no such file or directory" logger="UnhandledError"
E0914 00:23:21.388547   12020 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19640-5233/.minikube/profiles/kindnet-250366/client.crt: no such file or directory" logger="UnhandledError"
E0914 00:23:21.399919   12020 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19640-5233/.minikube/profiles/kindnet-250366/client.crt: no such file or directory" logger="UnhandledError"
E0914 00:23:21.421332   12020 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19640-5233/.minikube/profiles/kindnet-250366/client.crt: no such file or directory" logger="UnhandledError"
E0914 00:23:21.462736   12020 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19640-5233/.minikube/profiles/kindnet-250366/client.crt: no such file or directory" logger="UnhandledError"
E0914 00:23:21.544194   12020 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19640-5233/.minikube/profiles/kindnet-250366/client.crt: no such file or directory" logger="UnhandledError"
E0914 00:23:21.705657   12020 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19640-5233/.minikube/profiles/kindnet-250366/client.crt: no such file or directory" logger="UnhandledError"
E0914 00:23:22.027390   12020 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19640-5233/.minikube/profiles/kindnet-250366/client.crt: no such file or directory" logger="UnhandledError"
E0914 00:23:22.669222   12020 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19640-5233/.minikube/profiles/kindnet-250366/client.crt: no such file or directory" logger="UnhandledError"
E0914 00:23:23.951545   12020 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19640-5233/.minikube/profiles/kindnet-250366/client.crt: no such file or directory" logger="UnhandledError"
E0914 00:23:24.246825   12020 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19640-5233/.minikube/profiles/custom-flannel-250366/client.crt: no such file or directory" logger="UnhandledError"
E0914 00:23:24.253237   12020 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19640-5233/.minikube/profiles/custom-flannel-250366/client.crt: no such file or directory" logger="UnhandledError"
E0914 00:23:24.264725   12020 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19640-5233/.minikube/profiles/custom-flannel-250366/client.crt: no such file or directory" logger="UnhandledError"
E0914 00:23:24.286418   12020 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19640-5233/.minikube/profiles/custom-flannel-250366/client.crt: no such file or directory" logger="UnhandledError"
E0914 00:23:24.328618   12020 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19640-5233/.minikube/profiles/custom-flannel-250366/client.crt: no such file or directory" logger="UnhandledError"
E0914 00:23:24.410893   12020 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19640-5233/.minikube/profiles/custom-flannel-250366/client.crt: no such file or directory" logger="UnhandledError"
E0914 00:23:24.573115   12020 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19640-5233/.minikube/profiles/custom-flannel-250366/client.crt: no such file or directory" logger="UnhandledError"
E0914 00:23:24.894731   12020 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19640-5233/.minikube/profiles/custom-flannel-250366/client.crt: no such file or directory" logger="UnhandledError"
E0914 00:23:25.536000   12020 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19640-5233/.minikube/profiles/custom-flannel-250366/client.crt: no such file or directory" logger="UnhandledError"
E0914 00:23:25.851759   12020 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19640-5233/.minikube/profiles/calico-250366/client.crt: no such file or directory" logger="UnhandledError"
E0914 00:23:26.129236   12020 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19640-5233/.minikube/profiles/bridge-250366/client.crt: no such file or directory" logger="UnhandledError"
E0914 00:23:26.513369   12020 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19640-5233/.minikube/profiles/kindnet-250366/client.crt: no such file or directory" logger="UnhandledError"
E0914 00:23:26.817277   12020 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19640-5233/.minikube/profiles/custom-flannel-250366/client.crt: no such file or directory" logger="UnhandledError"
E0914 00:23:29.379186   12020 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19640-5233/.minikube/profiles/custom-flannel-250366/client.crt: no such file or directory" logger="UnhandledError"
E0914 00:23:29.510696   12020 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19640-5233/.minikube/profiles/flannel-250366/client.crt: no such file or directory" logger="UnhandledError"
E0914 00:23:31.635293   12020 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19640-5233/.minikube/profiles/kindnet-250366/client.crt: no such file or directory" logger="UnhandledError"
E0914 00:23:32.192963   12020 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19640-5233/.minikube/profiles/kubenet-250366/client.crt: no such file or directory" logger="UnhandledError"
E0914 00:23:34.500974   12020 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19640-5233/.minikube/profiles/custom-flannel-250366/client.crt: no such file or directory" logger="UnhandledError"
start_stop_delete_test.go:256: (dbg) Done: out/minikube-linux-amd64 start -p default-k8s-diff-port-842652 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=docker --kubernetes-version=v1.31.1: (4m25.351259559s)
start_stop_delete_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-842652 -n default-k8s-diff-port-842652
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/SecondStart (265.64s)

TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop (6.01s)
=== RUN   TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop
start_stop_delete_test.go:274: (dbg) TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-cd95d586-p62x6" [14c547e4-754d-4150-80e6-cc1c82a9e189] Running
start_stop_delete_test.go:274: (dbg) TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 6.003541657s
--- PASS: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop (6.01s)

TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop (5.07s)
=== RUN   TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop
start_stop_delete_test.go:287: (dbg) TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-cd95d586-p62x6" [14c547e4-754d-4150-80e6-cc1c82a9e189] Running
E0914 00:23:41.876857   12020 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19640-5233/.minikube/profiles/kindnet-250366/client.crt: no such file or directory" logger="UnhandledError"
E0914 00:23:44.742530   12020 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19640-5233/.minikube/profiles/custom-flannel-250366/client.crt: no such file or directory" logger="UnhandledError"
start_stop_delete_test.go:287: (dbg) TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.003644718s
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context old-k8s-version-112477 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop (5.07s)

TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages (0.22s)
=== RUN   TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-linux-amd64 -p old-k8s-version-112477 image list --format=json
E0914 00:23:46.333756   12020 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19640-5233/.minikube/profiles/calico-250366/client.crt: no such file or directory" logger="UnhandledError"
start_stop_delete_test.go:304: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
--- PASS: TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages (0.22s)

TestStartStop/group/old-k8s-version/serial/Pause (2.43s)
=== RUN   TestStartStop/group/old-k8s-version/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 pause -p old-k8s-version-112477 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-112477 -n old-k8s-version-112477
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-112477 -n old-k8s-version-112477: exit status 2 (294.572222ms)

-- stdout --
	Paused

-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p old-k8s-version-112477 -n old-k8s-version-112477
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Kubelet}} -p old-k8s-version-112477 -n old-k8s-version-112477: exit status 2 (291.488994ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 unpause -p old-k8s-version-112477 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-112477 -n old-k8s-version-112477
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p old-k8s-version-112477 -n old-k8s-version-112477
--- PASS: TestStartStop/group/old-k8s-version/serial/Pause (2.43s)

TestStartStop/group/newest-cni/serial/FirstStart (27.62s)
=== RUN   TestStartStop/group/newest-cni/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-linux-amd64 start -p newest-cni-715819 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=docker --kubernetes-version=v1.31.1
E0914 00:23:54.100603   12020 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19640-5233/.minikube/profiles/auto-250366/client.crt: no such file or directory" logger="UnhandledError"
E0914 00:24:02.359055   12020 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19640-5233/.minikube/profiles/kindnet-250366/client.crt: no such file or directory" logger="UnhandledError"
E0914 00:24:05.224512   12020 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19640-5233/.minikube/profiles/custom-flannel-250366/client.crt: no such file or directory" logger="UnhandledError"
E0914 00:24:13.154320   12020 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19640-5233/.minikube/profiles/kubenet-250366/client.crt: no such file or directory" logger="UnhandledError"
start_stop_delete_test.go:186: (dbg) Done: out/minikube-linux-amd64 start -p newest-cni-715819 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=docker --kubernetes-version=v1.31.1: (27.623888303s)
--- PASS: TestStartStop/group/newest-cni/serial/FirstStart (27.62s)

TestStartStop/group/newest-cni/serial/DeployApp (0s)
=== RUN   TestStartStop/group/newest-cni/serial/DeployApp
--- PASS: TestStartStop/group/newest-cni/serial/DeployApp (0.00s)

TestStartStop/group/newest-cni/serial/EnableAddonWhileActive (0.86s)
=== RUN   TestStartStop/group/newest-cni/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p newest-cni-715819 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:211: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/EnableAddonWhileActive (0.86s)

TestStartStop/group/newest-cni/serial/Stop (5.72s)
=== RUN   TestStartStop/group/newest-cni/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-linux-amd64 stop -p newest-cni-715819 --alsologtostderr -v=3
start_stop_delete_test.go:228: (dbg) Done: out/minikube-linux-amd64 stop -p newest-cni-715819 --alsologtostderr -v=3: (5.717886307s)
--- PASS: TestStartStop/group/newest-cni/serial/Stop (5.72s)

TestStartStop/group/newest-cni/serial/EnableAddonAfterStop (0.17s)
=== RUN   TestStartStop/group/newest-cni/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p newest-cni-715819 -n newest-cni-715819
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p newest-cni-715819 -n newest-cni-715819: exit status 7 (64.425294ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p newest-cni-715819 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/newest-cni/serial/EnableAddonAfterStop (0.17s)

TestStartStop/group/newest-cni/serial/SecondStart (14.21s)
=== RUN   TestStartStop/group/newest-cni/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-linux-amd64 start -p newest-cni-715819 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=docker --kubernetes-version=v1.31.1
E0914 00:24:26.324512   12020 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19640-5233/.minikube/profiles/false-250366/client.crt: no such file or directory" logger="UnhandledError"
E0914 00:24:26.331665   12020 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19640-5233/.minikube/profiles/false-250366/client.crt: no such file or directory" logger="UnhandledError"
E0914 00:24:26.343035   12020 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19640-5233/.minikube/profiles/false-250366/client.crt: no such file or directory" logger="UnhandledError"
E0914 00:24:26.364439   12020 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19640-5233/.minikube/profiles/false-250366/client.crt: no such file or directory" logger="UnhandledError"
E0914 00:24:26.405919   12020 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19640-5233/.minikube/profiles/false-250366/client.crt: no such file or directory" logger="UnhandledError"
E0914 00:24:26.487377   12020 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19640-5233/.minikube/profiles/false-250366/client.crt: no such file or directory" logger="UnhandledError"
E0914 00:24:26.648849   12020 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19640-5233/.minikube/profiles/false-250366/client.crt: no such file or directory" logger="UnhandledError"
E0914 00:24:26.970749   12020 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19640-5233/.minikube/profiles/false-250366/client.crt: no such file or directory" logger="UnhandledError"
E0914 00:24:27.295472   12020 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19640-5233/.minikube/profiles/calico-250366/client.crt: no such file or directory" logger="UnhandledError"
E0914 00:24:27.612525   12020 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19640-5233/.minikube/profiles/false-250366/client.crt: no such file or directory" logger="UnhandledError"
E0914 00:24:28.894474   12020 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19640-5233/.minikube/profiles/false-250366/client.crt: no such file or directory" logger="UnhandledError"
E0914 00:24:31.195386   12020 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19640-5233/.minikube/profiles/enable-default-cni-250366/client.crt: no such file or directory" logger="UnhandledError"
E0914 00:24:31.456653   12020 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19640-5233/.minikube/profiles/false-250366/client.crt: no such file or directory" logger="UnhandledError"
E0914 00:24:36.143876   12020 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19640-5233/.minikube/profiles/skaffold-673987/client.crt: no such file or directory" logger="UnhandledError"
E0914 00:24:36.578463   12020 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19640-5233/.minikube/profiles/false-250366/client.crt: no such file or directory" logger="UnhandledError"
start_stop_delete_test.go:256: (dbg) Done: out/minikube-linux-amd64 start -p newest-cni-715819 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=docker --kubernetes-version=v1.31.1: (13.853651091s)
start_stop_delete_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p newest-cni-715819 -n newest-cni-715819
--- PASS: TestStartStop/group/newest-cni/serial/SecondStart (14.21s)

TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop (0s)
=== RUN   TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop
start_stop_delete_test.go:273: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop (0.00s)

TestStartStop/group/newest-cni/serial/AddonExistsAfterStop (0s)
=== RUN   TestStartStop/group/newest-cni/serial/AddonExistsAfterStop
start_stop_delete_test.go:284: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/AddonExistsAfterStop (0.00s)

TestStartStop/group/newest-cni/serial/VerifyKubernetesImages (0.23s)
=== RUN   TestStartStop/group/newest-cni/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-linux-amd64 -p newest-cni-715819 image list --format=json
--- PASS: TestStartStop/group/newest-cni/serial/VerifyKubernetesImages (0.23s)

TestStartStop/group/newest-cni/serial/Pause (2.63s)
=== RUN   TestStartStop/group/newest-cni/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 pause -p newest-cni-715819 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p newest-cni-715819 -n newest-cni-715819
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p newest-cni-715819 -n newest-cni-715819: exit status 2 (287.2204ms)

-- stdout --
	Paused

-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p newest-cni-715819 -n newest-cni-715819
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Kubelet}} -p newest-cni-715819 -n newest-cni-715819: exit status 2 (296.883363ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 unpause -p newest-cni-715819 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p newest-cni-715819 -n newest-cni-715819
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p newest-cni-715819 -n newest-cni-715819
--- PASS: TestStartStop/group/newest-cni/serial/Pause (2.63s)

                                                
                                    
TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop (6s)

=== RUN   TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop
start_stop_delete_test.go:274: (dbg) TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-695b96c756-hl7sh" [37fc0a91-f432-45db-831e-7df77d8136dc] Running
start_stop_delete_test.go:274: (dbg) TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 6.003400236s
--- PASS: TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop (6.00s)

                                                
                                    
TestStartStop/group/embed-certs/serial/AddonExistsAfterStop (5.07s)

=== RUN   TestStartStop/group/embed-certs/serial/AddonExistsAfterStop
start_stop_delete_test.go:287: (dbg) TestStartStop/group/embed-certs/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-695b96c756-hl7sh" [37fc0a91-f432-45db-831e-7df77d8136dc] Running
E0914 00:25:07.302638   12020 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19640-5233/.minikube/profiles/false-250366/client.crt: no such file or directory" logger="UnhandledError"
start_stop_delete_test.go:287: (dbg) TestStartStop/group/embed-certs/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.00326923s
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context embed-certs-117540 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/embed-certs/serial/AddonExistsAfterStop (5.07s)

                                                
                                    
TestStartStop/group/embed-certs/serial/VerifyKubernetesImages (0.22s)

=== RUN   TestStartStop/group/embed-certs/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-linux-amd64 -p embed-certs-117540 image list --format=json
start_stop_delete_test.go:304: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
--- PASS: TestStartStop/group/embed-certs/serial/VerifyKubernetesImages (0.22s)

                                                
                                    
TestStartStop/group/embed-certs/serial/Pause (2.38s)

=== RUN   TestStartStop/group/embed-certs/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 pause -p embed-certs-117540 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p embed-certs-117540 -n embed-certs-117540
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p embed-certs-117540 -n embed-certs-117540: exit status 2 (290.489299ms)

-- stdout --
	Paused
-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p embed-certs-117540 -n embed-certs-117540
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Kubelet}} -p embed-certs-117540 -n embed-certs-117540: exit status 2 (297.611607ms)

-- stdout --
	Stopped
-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 unpause -p embed-certs-117540 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p embed-certs-117540 -n embed-certs-117540
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p embed-certs-117540 -n embed-certs-117540
--- PASS: TestStartStop/group/embed-certs/serial/Pause (2.38s)

                                                
                                    
TestStartStop/group/no-preload/serial/UserAppExistsAfterStop (6.01s)

=== RUN   TestStartStop/group/no-preload/serial/UserAppExistsAfterStop
start_stop_delete_test.go:274: (dbg) TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-695b96c756-ld6jb" [3fee182e-608b-4837-8ab9-c4ff2d8a57ab] Running
start_stop_delete_test.go:274: (dbg) TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 6.00356148s
--- PASS: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop (6.01s)

                                                
                                    
TestStartStop/group/no-preload/serial/AddonExistsAfterStop (5.07s)

=== RUN   TestStartStop/group/no-preload/serial/AddonExistsAfterStop
start_stop_delete_test.go:287: (dbg) TestStartStop/group/no-preload/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-695b96c756-ld6jb" [3fee182e-608b-4837-8ab9-c4ff2d8a57ab] Running
start_stop_delete_test.go:287: (dbg) TestStartStop/group/no-preload/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.003860871s
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context no-preload-083726 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/no-preload/serial/AddonExistsAfterStop (5.07s)

                                                
                                    
TestStartStop/group/no-preload/serial/VerifyKubernetesImages (0.21s)

=== RUN   TestStartStop/group/no-preload/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-linux-amd64 -p no-preload-083726 image list --format=json
start_stop_delete_test.go:304: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
--- PASS: TestStartStop/group/no-preload/serial/VerifyKubernetesImages (0.21s)

                                                
                                    
TestStartStop/group/no-preload/serial/Pause (2.34s)

=== RUN   TestStartStop/group/no-preload/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 pause -p no-preload-083726 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p no-preload-083726 -n no-preload-083726
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p no-preload-083726 -n no-preload-083726: exit status 2 (278.154522ms)

-- stdout --
	Paused
-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p no-preload-083726 -n no-preload-083726
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Kubelet}} -p no-preload-083726 -n no-preload-083726: exit status 2 (284.146021ms)

-- stdout --
	Stopped
-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 unpause -p no-preload-083726 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p no-preload-083726 -n no-preload-083726
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p no-preload-083726 -n no-preload-083726
--- PASS: TestStartStop/group/no-preload/serial/Pause (2.34s)

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop (6s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop
start_stop_delete_test.go:274: (dbg) TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-695b96c756-67bwl" [7df50a76-4d0a-4d58-9a08-3b82c67fd4dc] Running
start_stop_delete_test.go:274: (dbg) TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 6.00411189s
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop (6.00s)

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop (5.07s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop
start_stop_delete_test.go:287: (dbg) TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-695b96c756-67bwl" [7df50a76-4d0a-4d58-9a08-3b82c67fd4dc] Running
E0914 00:25:58.710853   12020 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19640-5233/.minikube/profiles/old-k8s-version-112477/client.crt: no such file or directory" logger="UnhandledError"
E0914 00:25:58.717216   12020 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19640-5233/.minikube/profiles/old-k8s-version-112477/client.crt: no such file or directory" logger="UnhandledError"
E0914 00:25:58.728592   12020 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19640-5233/.minikube/profiles/old-k8s-version-112477/client.crt: no such file or directory" logger="UnhandledError"
E0914 00:25:58.749998   12020 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19640-5233/.minikube/profiles/old-k8s-version-112477/client.crt: no such file or directory" logger="UnhandledError"
E0914 00:25:58.792020   12020 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19640-5233/.minikube/profiles/old-k8s-version-112477/client.crt: no such file or directory" logger="UnhandledError"
E0914 00:25:58.873392   12020 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19640-5233/.minikube/profiles/old-k8s-version-112477/client.crt: no such file or directory" logger="UnhandledError"
E0914 00:25:59.034884   12020 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19640-5233/.minikube/profiles/old-k8s-version-112477/client.crt: no such file or directory" logger="UnhandledError"
E0914 00:25:59.356559   12020 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19640-5233/.minikube/profiles/old-k8s-version-112477/client.crt: no such file or directory" logger="UnhandledError"
E0914 00:25:59.998868   12020 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19640-5233/.minikube/profiles/old-k8s-version-112477/client.crt: no such file or directory" logger="UnhandledError"
start_stop_delete_test.go:287: (dbg) TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.004263515s
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context default-k8s-diff-port-842652 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop (5.07s)

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages (0.22s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-linux-amd64 -p default-k8s-diff-port-842652 image list --format=json
start_stop_delete_test.go:304: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages (0.22s)

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/Pause (2.38s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 pause -p default-k8s-diff-port-842652 --alsologtostderr -v=1
E0914 00:26:01.280599   12020 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19640-5233/.minikube/profiles/old-k8s-version-112477/client.crt: no such file or directory" logger="UnhandledError"
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p default-k8s-diff-port-842652 -n default-k8s-diff-port-842652
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p default-k8s-diff-port-842652 -n default-k8s-diff-port-842652: exit status 2 (285.092572ms)

-- stdout --
	Paused
-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p default-k8s-diff-port-842652 -n default-k8s-diff-port-842652
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Kubelet}} -p default-k8s-diff-port-842652 -n default-k8s-diff-port-842652: exit status 2 (283.403508ms)

-- stdout --
	Stopped
-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 unpause -p default-k8s-diff-port-842652 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p default-k8s-diff-port-842652 -n default-k8s-diff-port-842652
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p default-k8s-diff-port-842652 -n default-k8s-diff-port-842652
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/Pause (2.38s)

                                                
                                    

Test skip (20/343)

TestDownloadOnly/v1.20.0/cached-images (0s)

=== RUN   TestDownloadOnly/v1.20.0/cached-images
aaa_download_only_test.go:129: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.20.0/cached-images (0.00s)

                                                
                                    
TestDownloadOnly/v1.20.0/binaries (0s)

=== RUN   TestDownloadOnly/v1.20.0/binaries
aaa_download_only_test.go:151: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.20.0/binaries (0.00s)

                                                
                                    
TestDownloadOnly/v1.20.0/kubectl (0s)

=== RUN   TestDownloadOnly/v1.20.0/kubectl
aaa_download_only_test.go:167: Test for darwin and windows
--- SKIP: TestDownloadOnly/v1.20.0/kubectl (0.00s)

                                                
                                    
TestDownloadOnly/v1.31.1/cached-images (0s)

=== RUN   TestDownloadOnly/v1.31.1/cached-images
aaa_download_only_test.go:129: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.31.1/cached-images (0.00s)

                                                
                                    
TestDownloadOnly/v1.31.1/binaries (0s)

=== RUN   TestDownloadOnly/v1.31.1/binaries
aaa_download_only_test.go:151: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.31.1/binaries (0.00s)

                                                
                                    
TestDownloadOnly/v1.31.1/kubectl (0s)

=== RUN   TestDownloadOnly/v1.31.1/kubectl
aaa_download_only_test.go:167: Test for darwin and windows
--- SKIP: TestDownloadOnly/v1.31.1/kubectl (0.00s)

                                                
                                    
TestAddons/parallel/Olm (0s)

=== RUN   TestAddons/parallel/Olm
=== PAUSE TestAddons/parallel/Olm
=== CONT  TestAddons/parallel/Olm
addons_test.go:500: Skipping OLM addon test until https://github.com/operator-framework/operator-lifecycle-manager/issues/2534 is resolved
--- SKIP: TestAddons/parallel/Olm (0.00s)

                                                
                                    
TestDockerEnvContainerd (0s)

=== RUN   TestDockerEnvContainerd
docker_test.go:170: running with docker true linux amd64
docker_test.go:172: skipping: TestDockerEnvContainerd can only be run with the containerd runtime on Docker driver
--- SKIP: TestDockerEnvContainerd (0.00s)

                                                
                                    
TestHyperKitDriverInstallOrUpdate (0s)

=== RUN   TestHyperKitDriverInstallOrUpdate
driver_install_or_update_test.go:105: Skip if not darwin.
--- SKIP: TestHyperKitDriverInstallOrUpdate (0.00s)

                                                
                                    
TestHyperkitDriverSkipUpgrade (0s)

=== RUN   TestHyperkitDriverSkipUpgrade
driver_install_or_update_test.go:169: Skip if not darwin.
--- SKIP: TestHyperkitDriverSkipUpgrade (0.00s)

                                                
                                    
TestFunctional/parallel/PodmanEnv (0s)

=== RUN   TestFunctional/parallel/PodmanEnv
=== PAUSE TestFunctional/parallel/PodmanEnv
=== CONT  TestFunctional/parallel/PodmanEnv
functional_test.go:550: only validate podman env with docker container runtime, currently testing docker
--- SKIP: TestFunctional/parallel/PodmanEnv (0.00s)

                                                
                                    
TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig (0s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig
functional_test_tunnel_test.go:99: DNS forwarding is only supported for Hyperkit on Darwin, skipping test DNS forwarding
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig (0.00s)

                                                
                                    
TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil
functional_test_tunnel_test.go:99: DNS forwarding is only supported for Hyperkit on Darwin, skipping test DNS forwarding
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0.00s)

                                                
                                    
TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS (0s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS
functional_test_tunnel_test.go:99: DNS forwarding is only supported for Hyperkit on Darwin, skipping test DNS forwarding
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS (0.00s)

                                                
                                    
TestGvisorAddon (0s)

=== RUN   TestGvisorAddon
gvisor_addon_test.go:34: skipping test because --gvisor=false
--- SKIP: TestGvisorAddon (0.00s)

                                                
                                    
TestImageBuild/serial/validateImageBuildWithBuildEnv (0s)

=== RUN   TestImageBuild/serial/validateImageBuildWithBuildEnv
image_test.go:114: skipping due to https://github.com/kubernetes/minikube/issues/12431
--- SKIP: TestImageBuild/serial/validateImageBuildWithBuildEnv (0.00s)

                                                
                                    
TestChangeNoneUser (0s)

=== RUN   TestChangeNoneUser
none_test.go:38: Test requires none driver and SUDO_USER env to not be empty
--- SKIP: TestChangeNoneUser (0.00s)

                                                
                                    
TestScheduledStopWindows (0s)

=== RUN   TestScheduledStopWindows
scheduled_stop_test.go:42: test only runs on windows
--- SKIP: TestScheduledStopWindows (0.00s)

                                                
                                    
TestNetworkPlugins/group/cilium (3.45s)

=== RUN   TestNetworkPlugins/group/cilium
net_test.go:102: Skipping the test as it's interfering with other tests and is outdated
panic.go:629: 
----------------------- debugLogs start: cilium-250366 [pass: true] --------------------------------
>>> netcat: nslookup kubernetes.default:
Error in configuration: context was not found for specified context: cilium-250366

                                                
                                                

                                                
                                                
>>> netcat: nslookup debug kubernetes.default a-records:
Error in configuration: context was not found for specified context: cilium-250366

                                                
                                                

                                                
                                                
>>> netcat: dig search kubernetes.default:
Error in configuration: context was not found for specified context: cilium-250366

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local udp/53:
Error in configuration: context was not found for specified context: cilium-250366

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local tcp/53:
Error in configuration: context was not found for specified context: cilium-250366

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 udp/53:
Error in configuration: context was not found for specified context: cilium-250366

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 tcp/53:
Error in configuration: context was not found for specified context: cilium-250366

                                                
                                                

                                                
                                                
>>> netcat: /etc/nsswitch.conf:
Error in configuration: context was not found for specified context: cilium-250366

                                                
                                                

                                                
                                                
>>> netcat: /etc/hosts:
Error in configuration: context was not found for specified context: cilium-250366

                                                
                                                

                                                
                                                
>>> netcat: /etc/resolv.conf:
Error in configuration: context was not found for specified context: cilium-250366

                                                
                                                

                                                
                                                
>>> host: /etc/nsswitch.conf:
* Profile "cilium-250366" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-250366"

                                                
                                                

                                                
                                                
>>> host: /etc/hosts:
* Profile "cilium-250366" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-250366"

                                                
                                                

                                                
                                                
>>> host: /etc/resolv.conf:
* Profile "cilium-250366" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-250366"

                                                
                                                

                                                
                                                
>>> k8s: nodes, services, endpoints, daemon sets, deployments and pods, :
Error in configuration: context was not found for specified context: cilium-250366

                                                
                                                

                                                
                                                
>>> host: crictl pods:
* Profile "cilium-250366" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-250366"

                                                
                                                

                                                
                                                
>>> host: crictl containers:
* Profile "cilium-250366" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-250366"

                                                
                                                

                                                
                                                
>>> k8s: describe netcat deployment:
error: context "cilium-250366" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe netcat pod(s):
error: context "cilium-250366" does not exist

                                                
                                                

                                                
                                                
>>> k8s: netcat logs:
error: context "cilium-250366" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe coredns deployment:
error: context "cilium-250366" does not exist

>>> k8s: describe coredns pods:
error: context "cilium-250366" does not exist

>>> k8s: coredns logs:
error: context "cilium-250366" does not exist

>>> k8s: describe api server pod(s):
error: context "cilium-250366" does not exist

>>> k8s: api server logs:
error: context "cilium-250366" does not exist

>>> host: /etc/cni:
* Profile "cilium-250366" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-250366"

>>> host: ip a s:
* Profile "cilium-250366" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-250366"

>>> host: ip r s:
* Profile "cilium-250366" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-250366"

>>> host: iptables-save:
* Profile "cilium-250366" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-250366"

>>> host: iptables table nat:
* Profile "cilium-250366" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-250366"

>>> k8s: describe cilium daemon set:
Error in configuration: context was not found for specified context: cilium-250366

>>> k8s: describe cilium daemon set pod(s):
Error in configuration: context was not found for specified context: cilium-250366

>>> k8s: cilium daemon set container(s) logs (current):
error: context "cilium-250366" does not exist

>>> k8s: cilium daemon set container(s) logs (previous):
error: context "cilium-250366" does not exist

>>> k8s: describe cilium deployment:
Error in configuration: context was not found for specified context: cilium-250366

>>> k8s: describe cilium deployment pod(s):
Error in configuration: context was not found for specified context: cilium-250366

>>> k8s: cilium deployment container(s) logs (current):
error: context "cilium-250366" does not exist

>>> k8s: cilium deployment container(s) logs (previous):
error: context "cilium-250366" does not exist

>>> k8s: describe kube-proxy daemon set:
error: context "cilium-250366" does not exist

>>> k8s: describe kube-proxy pod(s):
error: context "cilium-250366" does not exist

>>> k8s: kube-proxy logs:
error: context "cilium-250366" does not exist

>>> host: kubelet daemon status:
* Profile "cilium-250366" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-250366"

>>> host: kubelet daemon config:
* Profile "cilium-250366" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-250366"

>>> k8s: kubelet logs:
* Profile "cilium-250366" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-250366"

>>> host: /etc/kubernetes/kubelet.conf:
* Profile "cilium-250366" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-250366"

>>> host: /var/lib/kubelet/config.yaml:
* Profile "cilium-250366" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-250366"

>>> k8s: kubectl config:
apiVersion: v1
clusters:
- cluster:
    certificate-authority: /home/jenkins/minikube-integration/19640-5233/.minikube/ca.crt
    extensions:
    - extension:
        last-update: Sat, 14 Sep 2024 00:10:31 UTC
        provider: minikube.sigs.k8s.io
        version: v1.34.0
      name: cluster_info
    server: https://192.168.76.2:8443
  name: cert-expiration-047925
- cluster:
    certificate-authority: /home/jenkins/minikube-integration/19640-5233/.minikube/ca.crt
    extensions:
    - extension:
        last-update: Sat, 14 Sep 2024 00:10:35 UTC
        provider: minikube.sigs.k8s.io
        version: v1.34.0
      name: cluster_info
    server: https://192.168.94.2:8443
  name: offline-docker-924434
contexts:
- context:
    cluster: cert-expiration-047925
    extensions:
    - extension:
        last-update: Sat, 14 Sep 2024 00:10:31 UTC
        provider: minikube.sigs.k8s.io
        version: v1.34.0
      name: context_info
    namespace: default
    user: cert-expiration-047925
  name: cert-expiration-047925
- context:
    cluster: offline-docker-924434
    extensions:
    - extension:
        last-update: Sat, 14 Sep 2024 00:10:35 UTC
        provider: minikube.sigs.k8s.io
        version: v1.34.0
      name: context_info
    namespace: default
    user: offline-docker-924434
  name: offline-docker-924434
current-context: ""
kind: Config
preferences: {}
users:
- name: cert-expiration-047925
  user:
    client-certificate: /home/jenkins/minikube-integration/19640-5233/.minikube/profiles/cert-expiration-047925/client.crt
    client-key: /home/jenkins/minikube-integration/19640-5233/.minikube/profiles/cert-expiration-047925/client.key
- name: offline-docker-924434
  user:
    client-certificate: /home/jenkins/minikube-integration/19640-5233/.minikube/profiles/offline-docker-924434/client.crt
    client-key: /home/jenkins/minikube-integration/19640-5233/.minikube/profiles/offline-docker-924434/client.key

>>> k8s: cms:
Error in configuration: context was not found for specified context: cilium-250366

>>> host: docker daemon status:
* Profile "cilium-250366" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-250366"

>>> host: docker daemon config:
* Profile "cilium-250366" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-250366"

>>> host: /etc/docker/daemon.json:
* Profile "cilium-250366" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-250366"

>>> host: docker system info:
* Profile "cilium-250366" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-250366"

>>> host: cri-docker daemon status:
* Profile "cilium-250366" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-250366"

>>> host: cri-docker daemon config:
* Profile "cilium-250366" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-250366"

>>> host: /etc/systemd/system/cri-docker.service.d/10-cni.conf:
* Profile "cilium-250366" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-250366"

>>> host: /usr/lib/systemd/system/cri-docker.service:
* Profile "cilium-250366" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-250366"

>>> host: cri-dockerd version:
* Profile "cilium-250366" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-250366"

>>> host: containerd daemon status:
* Profile "cilium-250366" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-250366"

>>> host: containerd daemon config:
* Profile "cilium-250366" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-250366"

>>> host: /lib/systemd/system/containerd.service:
* Profile "cilium-250366" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-250366"

>>> host: /etc/containerd/config.toml:
* Profile "cilium-250366" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-250366"

>>> host: containerd config dump:
* Profile "cilium-250366" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-250366"

>>> host: crio daemon status:
* Profile "cilium-250366" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-250366"

>>> host: crio daemon config:
* Profile "cilium-250366" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-250366"

>>> host: /etc/crio:
* Profile "cilium-250366" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-250366"

>>> host: crio config:
* Profile "cilium-250366" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-250366"

----------------------- debugLogs end: cilium-250366 [took: 3.277855976s] --------------------------------
helpers_test.go:175: Cleaning up "cilium-250366" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p cilium-250366
--- SKIP: TestNetworkPlugins/group/cilium (3.45s)

TestStartStop/group/disable-driver-mounts (0.17s)

=== RUN   TestStartStop/group/disable-driver-mounts
=== PAUSE TestStartStop/group/disable-driver-mounts

=== CONT  TestStartStop/group/disable-driver-mounts
start_stop_delete_test.go:103: skipping TestStartStop/group/disable-driver-mounts - only runs on virtualbox
helpers_test.go:175: Cleaning up "disable-driver-mounts-775820" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p disable-driver-mounts-775820
--- SKIP: TestStartStop/group/disable-driver-mounts (0.17s)
